BMC Public Health

Protocol for a national monthly survey of alcohol use in England with 6-month follow-up: 'The Alcohol Toolkit Study'

Emma Beard1,2, Jamie Brown1,2, Robert West2, Crispin Acton3, Alan Brennan4, Colin Drummond5, Matthew Hickman6, John Holmes4, Eileen Kaner7, Karen Lock8, Matthew Walmsley9 & Susan Michie1

BMC Public Health volume 15, Article number: 230 (2015)

Timely tracking of national patterns of alcohol consumption is needed to inform and evaluate strategies and policies aimed at reducing alcohol-related harm. From 2014 until at least 2017, the Alcohol Toolkit Study (ATS) will provide such tracking data and link these with policy changes and campaigns. By virtue of its connection with the 'Smoking Toolkit Study' (STS), links will also be examined between alcohol- and smoking-related behaviour. The ATS consists of cross-sectional household, computer-assisted interviews of representative samples of adults in England aged 16+. Each month a new sample of approximately 1,800 adults completes the survey (~n = 21,600 per year). All respondents who consent to be followed up are asked to complete a telephone survey 6 months later. The ATS has been funded to collect at least 36 waves of baseline and 6-month follow-up data across a period of 3 years. Questions cover alcohol consumption and related harm (AUDIT), socio-demographic characteristics, attempts to reduce or cease consumption and factors associated with this, and exposure to health professional advice on alcohol. The ATS complements the STS, which has been tracking key performance indicators relating to smoking since 2006. As both the ATS and STS involve the same respondents, it is possible to assess interactions between changes in alcohol and tobacco use. Data analysis will involve: 1) descriptive and exploratory analyses undertaken according to a pre-defined set of principles, while allowing scope for pursuing lines of enquiry that arise from prior analyses; and 2) hypothesis testing according to pre-specified, published analysis plans. Descriptive data on important trends will be published monthly on a dedicated website: www.alcoholinengland.info. The Alcohol Toolkit Study will improve understanding of population-level factors influencing alcohol consumption and be an important resource for policy evaluation and planning.

The UK has among the highest per capita alcohol consumption of any country in the world [1,2], with 9.1 million adults drinking at levels above recommended limits [3,4]. The costs to society in terms of the health, social and criminal implications of excessive alcohol consumption are estimated to be more than £21 billion per annum [5]. In 2010, 3.5% of all deaths in England were wholly attributable to excessive alcohol consumption [6,7]. A further 1.1% of deaths were partially attributable [6]. The rates of harmful drinking in a given country are a function of many factors, including financial cost, cultural norms [8-11] and beliefs about alcohol-related harm [3]. In 2012 the English Government's alcohol strategy proposed a range of policies to tackle alcohol-related harm: 1) helping individuals to change their drinking behaviour; 2) taking action locally; 3) improving treatment for alcohol dependence; 4) sharing responsibility with the alcohol industry; 5) imposing a minimum unit price; and 6) extending restrictions on advertising to teenagers and children [5].
Since then the Government has withdrawn some policies (e.g., minimum unit pricing) but has gone ahead with some others: a ban on sales of alcohol 'below cost'; both reductions and increases in alcohol duties; introducing screening and brief intervention for risk drinking as part of NHS Health Checks; and voluntary agreements with industry to reduce availability of high-strength canned beverages, lower the strength of existing beverages and promote low-strength alternatives, and increase the number of product labels providing alcohol content information [12-14]. Alcohol charities have also promoted 'Dry January' (www.dryjanuary.org.uk) to encourage drinkers to abstain for one month, while more than 70 local authorities have voluntary agreements with retailers to withdraw high-strength, low-cost beers and ciders from sale [15]. Timely and detailed surveillance data could help to inform and evaluate national and local alcohol policies.

Several large-scale surveys collect data on alcohol use in the United Kingdom on an annual or less frequent basis [16] (e.g. the Health Survey for England, the General Lifestyle Survey, and the Adult Psychiatric Morbidity Survey; see Table 1). Table 1 Topics addressed by surveys in England regarding alcohol use. The ATS will fill an important gap by gathering and publishing monthly data on representative samples, including detailed information on alcohol use, attempts to reduce or cease alcohol consumption, and exposure to health professional advice on alcohol. It includes a 6-month telephone follow-up of each monthly sample, which will provide data on within-individual trends and consistency in alcohol-related measures. The ATS is modelled on, and involves the same respondents as, the Smoking Toolkit Study (STS) [26], which was set up in 2006 and has already collected data on approximately 175,000 individuals. The STS has demonstrated the value of having monthly figures on key performance indicators, and that monthly data have the granularity to detect temporal trends that would otherwise be missed [27-31]. The linkage of these two surveys will provide a unique opportunity to compare trends in smoking and alcohol use behaviour at an individual, regional and national level, and to examine the interdependencies between these two behaviours.

The ATS aims to provide timely tracking of alcohol-related behaviours on a monthly basis to inform and evaluate national alcohol control policies in England. It will also permit analysis of trends as a function of major socio-demographic variables. Monthly, quarterly and annual trends will be published (overall and stratified by age group, social grade, region, gender and smoking status) on:
- prevalence of hazardous drinking (measured by the Alcohol Use Disorders Identification Test (AUDIT));
- prevalence of hazardous drinkers who report attempting to reduce their alcohol consumption;
- prevalence of different methods used by hazardous drinkers attempting to reduce their consumption;
- prevalence among hazardous drinkers of receipt of advice to reduce alcohol consumption from a health professional in the past year.

Changes in these variables associated with events, such as the introduction of pricing policies, will also be assessed. The ATS involves monthly cross-sectional household computer-assisted interviews, conducted by Ipsos MORI, of approximately 1,800 adults aged 16 and over in England. All participants who agree to be re-contacted are followed up at 6 months by a telephone survey.
Baseline data were first collected in March 2014, to be followed up in September 2014. The baseline survey uses a type of random location sampling, which is a hybrid between random probability and simple quota sampling. England is first split into 171,356 'Output Areas', each comprising approximately 300 households. These areas are then stratified according to ACORN characteristics and geographic region. ACORN is a socio-economic profiling tool developed by CACI (http://www.caci.co.uk/acorn/), and works by segmenting UK postcodes into 5 categories (wealthy achievers, urban prosperity, comfortably off, moderate means and hard-pressed). These categories are further divided into 17 groups and 56 types using government and consumer research data (e.g., census data and lifestyle survey records). The areas are then randomly allocated to interviewers, who travel to their selected areas and conduct the electronic interviews with one member of the household. Interviews are conducted until quotas based upon factors influencing the probability of being at home (i.e. working status, age and gender) are fulfilled. Morning interviews are avoided to maximise participant availability. This method of sampling is often seen as superior to conventional quota sampling because the choice of properties approached is significantly reduced by randomly allocating small output areas to the interviewers. However, as interviewers still choose the houses within these particular areas, a response rate cannot be calculated. This is because there is no definite gross sample, with units fulfilling the criteria of the quota being interchangeable.

Ethical approval for the STS was granted by the UCL Ethics Committee (ID 0498/001). Further ethical approval for the ATS was not deemed necessary by the Committee, since asking individuals about their alcohol consumption in addition to smoking does not place them at additional risk of harm. The study is initially funded for a three-year period between March 2014 and March 2017. It is anticipated that data on around 21,600 individuals will be collected each year, giving a proposed 3-year sample size of 64,800. However, as with the STS, which has been collecting data for eight years, it is envisaged that the ATS will be extended beyond this initial period.

Questions were developed by an expert panel and policy-makers. The key domains addressed at baseline are: 1) prevalence and frequency of harmful alcohol consumption (AUDIT) [32]; 2) current attempts and motivation to reduce alcohol consumption below harmful levels; 3) health-care professional advice about alcohol consumption; 4) types of drinks consumed and amount spent; 5) urges to drink; 6) recent serious attempts to cut down; and 7) help sought and factors contributing to recent attempts to reduce intake. Additional questions, as with the STS, can be added if new hypotheses are generated. The STS and ATS were set up as 'toolkits' for researchers in alcohol and tobacco control, affording the ability to add specific questions (e.g. on mental health). Thus researchers are provided with all the contextual measurements listed below, negating the requirement for multiple surveys.
Baseline measures

Socio-demographic characteristics. Data are collected on gender, ethnicity, socio-economic status, marital status, number of residents and children in the household, sexuality, age, disability, religion, internet access and use, and government office region (London, South East, South West, East Anglia, East Midlands, West Midlands, Yorkshire/Humberside, North West and North East). Socio-economic status is assessed through car and home ownership, employment status, educational achievements, income and social grade. Social grade is measured using the National Readership Survey social-grade system: A: higher managerial, administrative or professional; B: intermediate managerial, administrative or professional; C1: supervisory or clerical and junior managerial, administrative or professional; C2: skilled manual workers; D: semi-skilled and unskilled manual workers; and E: casual or lowest-grade workers, pensioners and others who depend on the welfare state for their income [33].

Smoking-related questions. All participants taking part in the ATS are also asked questions regarding their use of tobacco products and smoking behaviour. Full details of these questions and of the methodology are available from www.smokinginengland.info and Fidler et al. [26].

Prevalence and frequency of hazardous drinking. The AUDIT (see Table 2: Question 1: a-k) is used to screen for alcohol use [32]. The AUDIT identifies people who could be classed as dependent, harmful or hazardous drinkers, and has proven validity, high internal consistency and good test-retest reliability across gender, age and cultures [34-38]. The full AUDIT consists of 10 questions: questions 1–3 deal with alcohol consumption, 4–6 with alcohol dependence and 7–10 with alcohol-related problems. A score of 8 or more in men (sometimes 7 in women) indicates hazardous or harmful alcohol consumption, whilst a score greater than 20 is suggestive of alcohol dependence. The ATS uses an extended-item version of the AUDIT questionnaire adopted in previous 'Screening and Brief Alcohol Intervention in Primary Care (SIPS)' trials, to allow exploration of higher levels of alcohol consumption in addition to the standard AUDIT scoring system [39,40]. The 2009 Adult Psychiatric Morbidity Survey reported that 24.2% of participants scored 8 or above on the AUDIT, which included 3.8% of adults whose drinking could be classified as harmful. Among males, the highest prevalence of hazardous and harmful drinking was in the 25–34 age group, whereas in females it was in those aged 16–24 [41]. Table 2 Main baseline questionnaire for the ATS. Those who score 8 or more on the AUDIT (i.e. indicating hazardous and/or harmful alcohol consumption and possible dependence) or 5 or more on the AUDIT-C (i.e. indicating high-risk consumption) at baseline are then asked the following questions.

Current attempts and motivation to reduce alcohol consumption. One of the aims of the Department of Health's 'Reducing Harmful Drinking' policy is to encourage adults to reduce their alcohol intake to recommended levels [3]. Thus participants are asked if they are currently attempting to reduce their alcohol intake (see Table 2: Question 2) and whether they are motivated to do so (see Table 2: Question 5). Question 5 was derived from the Motivation To Stop Scale (MTSS) used in the STS, which has been shown to be a reliable predictor of attempts to quit smoking [42].
The reason for using this scale is twofold: first, insofar as it proves valid and reliable, it is useful to have a single-item measure of motivation; and secondly, the scale will allow a meaningful comparison between the relative levels of motivation for changing the two behaviours.

Receipt of health professional advice on drinking. Participants are asked whether they received any advice about their alcohol consumption from their GP, and about the form of this advice, i.e. whether they were given help within the surgery or referred to a specialist service (see Table 2: Question 3). The National Institute for Health and Care Excellence (NICE) recommends that GPs provide adults with brief advice on their alcohol consumption, and offer self-help materials or a referral to a specialist clinic [43]. GP brief advice also forms part of the Department of Health's 'Reducing Harmful Drinking' policy [3]. Previous studies have shown that GP advice is successful in reducing alcohol intake and encouraging treatment for alcohol dependence [44].

Types of drinks consumed. In the 2009 Omnibus Survey, 'Drinking: Adults' behaviour and knowledge', women were proportionately less likely to drink beer and more likely to drink wine, fortified wine, spirits and alcopops than men. There were also significant age differences, with spirits being the most popular drink among women aged 16 to 24, compared with wine among women aged 45–56. To determine whether relationships exist between these and other socio-demographic variables, the ATS also assesses the types of drinks consumed by participants (see Table 2: Question 4). This is measured using a similar categorisation system to the ONS Omnibus Survey, that is, which types of alcoholic drink the respondents consume most often [45]. Previous surveys have estimated that UK households spend around £15 billion a year on alcohol, although this is likely to be an underestimate given that the Government received £18.2 billion in alcohol duty and tax in 2013/2014 [46]. Expenditure on alcohol thus accounts for around 18% of total expenditure on all food and drink [47]. Households with individuals aged 60–64 spend more on alcohol, tobacco and narcotics than those with individuals under 30 (£16.40 each week versus £10.50) [48]. It is important to track expenditure over time and its association with socio-demographic characteristics, both to help inform policy and to assess the impact of alcohol-related interventions. Thus, the ATS also assesses the amount spent on alcohol per week (see Table 2: Question 6).

Urges to drink. The urges to drink question (see Table 2: Question 7) was derived from the urges to smoke question used in the STS. The latter has been shown to be a valid measure of the severity of cigarette dependence in terms of predicting relapse following a quit attempt [49]. Several studies using a variety of methods have also shown that urges and cravings to drink predict relapse following treatment for alcohol dependency [50]. Although other items exist for assessing alcohol dependence, this single-item measure will allow comparisons with tobacco dependence.

Serious attempts to reduce intake. How many serious attempts participants have made to cut down is an important measure of response to interventions (e.g. [51]) (see Table 2: Question 8). The number of attempts to reduce alcohol consumption was included because the majority of individuals will most likely opt for safer levels of drinking as opposed to complete abstinence [52].
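Before turning to the remaining measures, here is a small illustration of applying the screening thresholds described above: an AUDIT score of 8 or more, or an AUDIT-C score of 5 or more, flags a respondent for the follow-up questions, and a score above 20 suggests possible dependence. This is an assumption about how analysis code might look, not the study's own scoring implementation.

```python
# A hypothetical helper applying the AUDIT thresholds described in the text.
# Thresholds: AUDIT >= 8 (or AUDIT-C >= 5) -> hazardous/harmful or high-risk;
# AUDIT > 20 -> suggestive of dependence. Names and structure are illustrative.

def classify_drinker(audit_total: int, audit_c: int) -> str:
    """Classify a respondent from the full AUDIT (0-40) and AUDIT-C (0-12) scores."""
    if audit_total > 20:
        return "possible dependence"
    if audit_total >= 8 or audit_c >= 5:
        return "hazardous/harmful or high-risk"  # eligible for the follow-up questions
    return "lower risk"

print(classify_drinker(audit_total=9, audit_c=6))  # hazardous/harmful or high-risk
print(classify_drinker(audit_total=3, audit_c=2))  # lower risk
```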
Help sought and motives for recent attempts to reduce intake. Questions relating to the most recent attempt to cut down on drinking include: 1) what help was sought (e.g. medication, group counselling or a helpline); and 2) reasons for trying to cut down (e.g. advice from doctors, an advert or health problems) (see Table 2: Questions 9 and 10). The most recent English policy on alcohol use stated that it intended to improve treatment of alcohol dependence [5]. Current treatments recommended by NICE include medication (e.g. acamprosate, disulfiram and naltrexone), which helps to prevent relapse and/or to limit the amount of alcohol consumed, and counselling in the form of self-help groups, 12-step facilitation therapy, cognitive behavioural therapy and family therapy [53]. Evidence suggests that drugs such as acamprosate significantly reduce the risk of any drinking and increase the rate of abstinence, while there appears to be some evidence for the effectiveness of psychosocial interventions, including cognitive-behavioural coping skills training and motivational interviewing [54,55].

Six-month follow-up. At the 6-month follow-up, participants who scored 8 or more on the AUDIT (i.e. indicating hazardous or harmful alcohol consumption and possible dependence) or 5 or more on the AUDIT-C (i.e. indicating high-risk consumption) at baseline are asked to complete the AUDIT again and to report: 1) their current attempts to reduce intake and motivation to cut down; 2) whether they received GP advice on alcohol consumption; 3) the types of alcohol typically consumed; 4) the amount spent on alcohol; 5) strength of urges to drink; 6) how many attempts to cut down they have made in the previous 6 months; 7) how long ago their most recent serious quit attempt started and how long it lasted before they went back to drinking as before or more heavily; 8) what they used to help them cut down and what contributed to their most recent attempt; and 9) whether their attempt was planned or unplanned. A key motivation for the follow-up is to assess prospective associations between changes in alcohol consumption and exposure to real-world events and treatments [28,56]. The reason for limiting the follow-up to those who are classified at baseline as at least hazardous/harmful or high-risk drinkers is that these types of analyses are less relevant among abstinent and moderate drinkers.

Data analysis and dissemination. Descriptive statistics will be used to describe the basic features of the data, including the socio-demographic profile of participants and the occurrence of alcohol-related behaviours. These will include parameter estimates (e.g. means and percentages), numbers of participants, and measures of spread (i.e. confidence intervals and standard deviations). When reporting prevalence, the data will first be weighted using a rim (marginal) weighting technique. This involves an iterative sequence of weighting adjustments: separate nationally representative target profiles are set (for gender, working status, children in the household, age, social grade and region), the sample is weighted to match each target in turn, and the process is repeated until all variables match the specified targets, as sketched below.
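The following is a minimal sketch of that iterative reweighting (iterative proportional fitting). It is an illustration only, not the survey's production weighting code, and the respondents and target margins shown are invented.

```python
# Illustrative rim (marginal) weighting: iteratively scale respondent weights
# until the weighted distribution of each variable matches its target profile.
# The sample and the national targets below are invented for demonstration.

import pandas as pd

def rim_weight(df, targets, n_iter=50):
    """Return weights whose marginal distributions match the target profiles."""
    w = pd.Series(1.0, index=df.index)
    for _ in range(n_iter):
        for var, target in targets.items():
            current = w.groupby(df[var]).sum() / w.sum()           # weighted share per category
            w = w * df[var].map(lambda v: target[v] / current[v])  # rescale towards the target
    return w / w.mean()  # normalise so the average weight is 1

sample = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "m", "f"],
    "age":    ["16-34", "35-54", "16-34", "55+", "35-54", "55+"],
})
targets = {
    "gender": {"f": 0.51, "m": 0.49},
    "age":    {"16-34": 0.30, "35-54": 0.34, "55+": 0.36},
}
weights = rim_weight(sample, targets)
print(weights.round(3))
```

In practice the weighting is carried out within the survey's own processing pipeline; the point here is only to make the iterative logic concrete.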
To assess differences among groups where the outcome is normally distributed, t-tests for two groups (e.g., men and women) and ANOVA (or related methods, e.g. ANCOVA) for comparisons of more than two groups (e.g. social grades AB, C1, C2, D and E) will be used. If parametric assumptions are violated, transformations will be performed or appropriate non-parametric tests employed (e.g., Kruskal–Wallis and Mann–Whitney U). For dichotomous outcomes, log-linear regression or chi-squared tests will be implemented. When adjustment for confounding variables is required, or to examine the association between a quantitative independent variable and a quantitative outcome, multivariable linear regression will be used. For the association between continuous predictors and dichotomous outcomes, Generalised Linear Models will be adopted, specifically the binomial family for odds ratios or the log-binomial family for relative risks. To assess trends and changes over time as a function of policies and population-based interventions, interrupted time series analysis will be used, as this allows adjustment for autocorrelation and consideration of underlying trends in the time series [57]. Mediation analyses, segmented regression and non-linear regression analyses will also be used where appropriate (e.g., segmented regression when assumptions of time-series analysis are violated and there is no evidence of autocorrelation). Mediation analysis allows researchers to assess which factors 'mediate' the relationship between independent and dependent variables [58]; segmented (or piecewise) regression analysis allows the independent variable to be partitioned into separate intervals where its relationship with the dependent variable changes at a 'break point' [59]; while non-linear (polynomial) regression can be used when relationships follow a curve [60].

Bayes Factors will also be calculated where appropriate. These indicate the relative likelihood of a hypothesised difference/association (hypothesis 1) versus no difference/association (hypothesis 2), and allow one to distinguish between two interpretations of a null result: either there is evidence for the null hypothesis, or the data are too insensitive to distinguish an effect. The latter can be rejected if the study is adequately powered; however, there are problems with this approach in practice. First, calculating power requires the specification of a minimally interesting effect size. This is often difficult, particularly when similar studies have not been conducted. Second, power does not use the data themselves to determine how sensitively those data can distinguish the null and alternative hypotheses [61]. The Bayes factor is given by

$$ BF = \frac{P(\mathrm{Data} \mid H_1)}{P(\mathrm{Data} \mid H_2)} $$

which is the probability of the data given hypothesis 1 (H1) over the probability of the data given hypothesis 2 (H2), and is thus simply the ratio of the likelihoods of the data under the two hypotheses. Bayes factors vary from 0 to ∞, where 1 indicates that the data do not favour either hypothesis; values greater than 1 indicate increasing evidence for the alternative hypothesis over the null; and values less than 1 indicate increasing evidence for the null over the alternative. Jeffreys [62] suggested that 'substantial evidence' be reflected by a factor less than 1/3 or greater than 3, with any value between these being only 'anecdotal' evidence.
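As a concrete illustration of the Bayes factor calculation, the sketch below follows the approach described by Dienes [61] for a normally distributed effect estimate. The observed effect, its standard error and the half-normal prior scale are invented values for demonstration, not part of the ATS analysis plan.

```python
# Hypothetical Bayes factor calculation for a normally distributed estimate,
# in the spirit of Dienes (2014) [61]. Hypothesis 1: an effect in the predicted
# direction, of plausible size (half-normal prior). Hypothesis 2: no effect.
# All numerical inputs below are illustrative assumptions.

import numpy as np
from scipy import integrate, stats

def bayes_factor(obs_effect, se, prior_sd):
    """BF = P(data | H1) / P(data | H2), with H2 the null hypothesis of no effect."""
    # Likelihood of the observed estimate if the true effect is zero (H2).
    p_data_h2 = stats.norm.pdf(obs_effect, loc=0.0, scale=se)

    # Likelihood under H1: average the likelihood over a half-normal prior on the effect.
    def integrand(theta):
        prior = 2.0 * stats.norm.pdf(theta, loc=0.0, scale=prior_sd)  # density for theta >= 0
        return stats.norm.pdf(obs_effect, loc=theta, scale=se) * prior

    p_data_h1, _ = integrate.quad(integrand, 0.0, np.inf)
    return p_data_h1 / p_data_h2

# Example: an observed difference of 0.10 (standard error 0.05), with a
# predicted effect of roughly 0.10.
bf = bayes_factor(obs_effect=0.10, se=0.05, prior_sd=0.10)
print(f"Bayes factor = {bf:.2f}")
```

With these illustrative numbers the Bayes factor comes out at roughly 4, which would count as 'substantial' evidence for hypothesis 1 on Jeffreys' scale.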
Results will be disseminated using a website (www.alcoholinengland.info) and regular updates to key English stakeholders, including Public Health England (PHE) and the Department of Health policy and communications teams. All publications will be discussed in advance with the ATS Study Advisory Group, which was appointed to oversee and input strategically on the overall progress, priorities and planned analyses of the ATS.

The ATS has several important strengths, including the ability to examine changes in the prevalence of harmful drinking and other key performance indicators, such as attempts and motivation to cut down, in a timely manner. Tracking monthly changes permits a much more sensitive test of the possible effects of interventions than can be achieved by annual national surveys. It will also provide information on methods of reduction and how these relate to success rates and contextual variables, such as socio-demographic characteristics. The large sample size and follow-up will also allow relationships between government policies/alcohol initiatives and reductions in consumption to be accurately estimated and tested prospectively. The data will provide quick and direct estimates of policy impacts on consumption and also inform the estimation of longer-term health benefits and cost savings by providing evidence to incorporate into existing policy appraisal models [63]. The addition of the ATS to the STS will also allow comparisons to be made between tobacco and alcohol use. Although the ATS is restricted to data from England and cannot document the situation for the rest of the UK or other countries, it provides a framework on which other national surveys can be modelled. Moreover, given that alcohol policies and treatment in England are similar to those in other countries, findings from the ATS can, to some extent, be used to inform international policy [64]. The main limitation, as with all survey data, is the likelihood of under-reporting of alcohol intake. This is often due to a combination of poor recall, participants being untruthful, inadequacies in measurement instruments and sampling bias [65,66]. The extent of under-reporting is somewhat mitigated by the use of computer-assisted interviews, since the presence of a computer enhances participants' perceptions of privacy and thus increases responses to sensitive questions [67]. Recall bias is also limited in the ATS by assessing typical alcohol consumption and by including response categories (which act as triggers). The ATS also affords the ability to cross-validate responses due to the inclusion of questions measuring similar underlying constructs. For example, answers to the AUDIT are likely to be analogous to measures of urges to drink, while motivation to quit is likely to be analogous to attempts to reduce intake.

Smith L, Foxcroft D. Drinking in the UK. An exploration of trends. York: Joseph Rowntree Foundation; 2009. Horton R. GBD 2010: understanding disease, injury and risk. Lancet. 2012;380:2053–4. Ellison J. Reducing harmful drinking. Department of Health. 2013. [https://www.gov.uk/government/policies/reducing-harmful-drinking] Milton A. The evidence base for alcohol guidelines: Supplementary written evidence submitted by the Department of Health. 2012. [http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/writev/1536/ag00a.htm] HM Government. The government's alcohol strategy. 2012. [https://www.gov.uk/government/publications/alcohol-strategy] Jones L, Bellis MA. Updating England-Specific Alcohol-Attributable Fractions. 2013. [http://www.cph.org.uk/wp-content/uploads/2014/03/24892-ALCOHOL-FRACTIONS-REPORT-A4-singles-24.3.14.pdf] National Health Service (NHS) Information Centre. Statistics on alcohol, England; 2009. [http://www.ic.nhs.uk/pubs/alcohol09] Bryden A, Roberts B, Petticrew M, McKee M.
A systematic review of the influence of community level social factors on alcohol use. Health Place. 2014;21:70–855. Delk EW, Meilman PW. Alcohol use among college students in Scotland compared with norms from the United States. J Amer Coll Hlth. 1996;44:274–81. McAlaney J, McMahon J. Normative beliefs, misperceptions, and heavy episodic drinking in a British student sample. J Stud Alcohol Drugs. 2007;68:385–92. The Academy of Medical Sciences. Brain science, addiction and drugs: An Academy of Medical Sciences working group report chaired by Professor Sir Gabriel Horn FRS FRCP; 2008. [Available from: http://www.acmedsci.ac.uk/viewFile/524414fc8746a.pdf] Home Office. Guidance on banning the sale of alcohol below the cost of duty plus VAT for suppliers of alcohol and enforcement authorities in England and Wales. 2014. [https://www.gov.uk/government/uploads/…/Guidance_on_BBCS_3.pdf] HM Treasury. Budget 2014. 2014. [https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/293759/37630_Budget_2014_Web_Accessible.pdf] Home Office. Next steps following the consultation on delivering the Government's alcohol strategy. 2013. [https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/223773/Alcohol_consultation_response_report_v3.pdf] Davenport, R. OLN challenges reducing the strength in Brussels. 2014. [http://www.offlicencenews.co.uk/news/fullstory.php/aid/14519/OLN_challenges_Reducing_the_Strength_in_Brussels.html] ONS. An overview of 40 years of data (General Lifestyle Survey Overview - a report on the 2011 General Lifestyle Survey. 2013. [http://www.ons.gov.uk/ons/rel/ghs/general-lifestyle-survey/2011/rpt-40-years.html] DOH & Food Standards Agency. National Diet and Nutrition Survey. Headline results from Years 1 and 2 (combined) of the rolling programme (2008/2009-2009/2010). 2011. [https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/216484/dh_128550.pdf] Gregory J, Foster K, Tyler H, Wiseman H. The dietary and nutritional survey of British adults. London: HMSO; 1990. McFall S. Understanding Society: The UK Household Longitudinal Study waves 1–3 User Manual. 2013. [https://www.understandingsociety.ac.uk/documentation/mainstage] ONS. Opinions and lifestyle (opinions) survey: information for clients 2014–2015. 2014. [http://www.ons.gov.uk/ons/about-ons/products-and-services/opn/index.html] ONS. The future of the General Lifestyle Survey. 2011. [http://www.ons.gov.uk/ons/about-ons/get-involved/consultations/archived-consultations/2011/the-future-of-the-glf-survey/index.html] Bridges S, Doyle M, Fuller E, Knott C, Mindell J, Moody A, Ryley A, Scholes S, Seabury C, Wardle H, Whalley R. Health Survey for England 2012. 2012; 2. Methods doc, [http://www.hscic.gov.uk/catalogue/PUB13218/HSE2012-Methods-and-docs.pdf] ONS. Drinking habits amongst adults, 2012. 2013. [http://www.ons.gov.uk/ons/dcp171778_338863.pdf] National Centre for Social Research. Adult Psychiatric Morbidity Survey 2007: User guide. 2007. [http://doc.ukdataservice.ac.uk/doc/6379/mrdoc/pdf/6379datadocs.pdf] Casswell S, Meier P, MacKintosh AM, Brown A, Hastings G, Thamarangsi T, et al. Q: The International Alcohol Control Study (IAC) — evaluating the impact of alcohol policies. Alcohol Clin Exp Res. 2012;36(8):1462–7. Fidler J, Shahab L, West O, Jarvis M, McEwen A, Stapleton J, et al. 'The smoking toolkit study': a national study of smoking and smoking cessation in England. BMC Public Health. 2011;11:479. Fidler JA, West R. 
Enjoyment of smoking and urges to smoke as predictors of attempts and success of attempts to stop smoking: a longitudinal study. Drug Alcohol Depend. 2010;115(1–2):30–4. Beard E, McNeill A, Aveyard P, Fidler J, Michie S, West R. Use of nicotine replacement therapy for smoking reduction and during enforced temporary abstinence: a national survey of English smokers. Addiction. 2011;106(1):197–204. Kotz D, Brown J, West R. Prospective cohort study of the effectiveness of smoking cessation treatment used in the "real world". Mayo Clin Proc. 2014;89(10):1360–7. Brown J, Beard E, Kotz D, Michie S, West R. Real-world effectiveness of e-cigarettes when used to aid smoking cessation: A cross-sectional population study. Addiction 2014, 109: doi:10.1111/add.12623. Brown J, West R, Fidler J, Bish A, McEwen A. Brief physician advice and smoking cessation: Cross-sectional findings from an English household survey. Psychol Health. 2012;27:14. Babor TF, Higgins-Biddle JC, Saunders JB, Monterior MG. AUDIT: The Alcohol Use Disorders Identification Test. Guidelines for use in Primary Care. World Health Organisation; 2001. [http://www.talkingalcohol.com/files/pdfs/WHO_audit.pdf] Ipsos MediaCT. Social Grade: A classification Tool. 2009. [http://www.ipsos-mori.com/DownloadPublication/1285_MediaCT_thoughtpiece_Social_Grade_July09_V3_WEB.pdf] Saunders JB, Aasland OG, Amundsen A, Grant M. Alcohol consumption and related problems among primary health care patients: WHO collaborative project on early detection of persons with harmful alcohol consumption I. Addiction. 1993;88:349–62. Saunders JB, Aasland OG, Babor TF, de la Fuente JR, Grant M. Development of the Alcohol Use Disorders Identification Test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption. Addiction. 1993;88:791–804. Allen JP, Litten RZ, Fertig JB, Babor T. A review of research on the Alcohol Use Disorders Identification Test (AUDIT). Alcohol Clin Exp Res. 1997;21(4):613–9. Hays RD, Merz JF, Nicholas R. Response burden, reliability, and validity of the CAGE, short MAST, and AUDIT alcohol screening measures. Behav Res Methods, Instrum Comput. 1995;27:277–80. Reinert DF, Aellen JP. The alcohol use disorders identification test (AUDIT): a review of recent research. Alcohol Clin Exp Res. 2002;26(2):272–9. Kaner E, Bland M, Cassidy P, Coulton S, Dale V, Deluca P, Gilvarry E, Godfrey C, Heather N, Myles J, Newbury-Birch D, Oyefeso A, Parrott S, Perryman K, Phillips T, Shepherd J, Drummond C. Effectiveness of screening and brief alcohol intervention in primary care (SIPS trial): a pragmatic cluster randomised controlled trial. Br Med J. 2013;346. doi:10.1136/bmj.e8501 Drummond C, Deluca P, Coulton S, Bland M, Cassidy P, Crawford M, et al. The effectiveness of alcohol screening and brief intervention in emergency departments: a multicentre pragmatic randomised controlled trial. PLoS One. 2014;9(6):e99463. McManus S, Meltzer H, Brugha T, Bebbington P, Jenkins R. Adult psychiatric morbidity in England, 2007: results of a household survey. London: National Centre for Social Research; 2009. Kotz D, Brown J, West R. Predictive validity of the Motivation To Stop Scale (MTSS): a single-item measure of motivation to stop smoking. Drug Alcohol Depend. 2013;128(1–2):15–9. National Institute for Health & Care Excellence (NICE). Training plan: Alcohol-use disorders: preventing harmful drinking. 2010. 
[http://guidance.nice.org.uk/nicemedia/live/13001/49023/49023.pdf] Kaner E, Beyer F, Dickinson H, Pienaar E, Campbell F, Schlesinger C, Heather N, Saunders J, Burnand B. Effectiveness of brief alcohol interventions in primary care populations. Cochrane Database Syst Rev 2007, CD004148. ONS. Drinking: Adults' Behaviour and knowledge in 2009. 2009. [http://www.ons.gov.uk/ons/rel/lifestyles/drinking--adult-s-behaviour-and-knowledge/2009-report/index.html] Beer B, Association P. Statistical Handbook 2014. London: Brewing Publications Ltd; 2014. p. 2014. ONS. Consumer Trends 2010. 2010. [http://www.ons.gov.uk/ons/rel/consumer-trends/consumer-trends/q2-2010/index.html] ONS. UK households spent an average of £474 a week in 2010. 2011. [http://www.ons.gov.uk/ons/dcp171780_245235.pdf] Fidler J, Shahab L, West R. Strength of urges to smoke as a measure of severity of cigarette dependence: comparison with the Fagerström Test for Nicotine Dependence and its components. Addiction. 2010;106(3):631–8. Rohsenow D, Monti PM. Does urge to drink predict relapse after treatment? Alcohol Res Health. 1999;23(3):225–32. Kaner EF, Heather N, McAvoy BR, Lock CA, Gulvarry E. Intervention for excessive alcohol consumption in primary health care: attitudes and practices of English general practitioners. Alcohol Alcohol. 1999;34:559–66. Rutherford L, Sharp C, Bromley C. The Scottish Health Survey. 2012; 1. Adults, [http://www.bhfactive.org.uk/userfiles/Documents/AdultsSHeS2011.pdf] National Institute for Health and Care Excellence (NICE). Alcohol-use disorders: diagnosis, assessment and management of harmful drinking and alcohol dependence. 2011. [http://www.nice.org.uk/nicemedia/live/13337/53191/53191.pdf] Klimas J, Field CA, Cullen W, O'Gorman CS, Glynn LG, Keenan E, et al. Psychosocial interventions to reduce alcohol consumption in concurrent problem alcohol and illicit drug users. Cochrane Database Syst Rev. 2012;11:CD009269. Raistrick D, Heather N, Godfey C. Review of the effectiveness of treatment for alcohol problems. 2006. [http://www.nta.nhs.uk/uploads/nta_review_of_the_effectiveness_of_treatment_for_alcohol_problems_fullreport_2006_alcohol2.pdf] Ferguson SG, Brown J, Frandsen M, West R. Associations between use of pharmacological aids in a smoking cessation attempt and subsequent quitting activity: a population study. Addiction. 2014, doi:10.1111/add.12795. Gottman JM. Time-series analysis: a comprehensive introduction for social scientists. Cambridge: Cambridge University Press. Government of Scotland: Scottish Index of Multiple Deprivation; 1981. MacKinnon DP, Fairchild AJ, Fritz MS. Mediation analysis. Ann Rev Psychol. 2007;58:593. Pastor R, Guallar E. Use of two-segmented logistic regression to estimate change-points in epidemiologic studies. Am J Epidemiol. 1998;148:631–42. Royston P, Altman DG. Regression using fractional polynomials of continuous covariates: parsimonious parametric modelling. J R Stat Soc Series C (Appl Stat). 1994;43(3):429–67. Dienes Z. Using Bayes to get the most out of non-significant results. Front Psychol. 2014. doi:10.3389/fpsyg.2014.00781. Jeffreys H. The theory of probability. Oxford, England: Oxford University Press; 1961. Brennan A, Meng Y, Holmes J, Hill-McManus D, Mier PS. Potential benefits of minimum unit pricing for alcohol versus a ban on below cost selling in England 2014: modelling study. BMJ. 2014;349:g5452. Holmes J, Guo Y, Maheswaran R, Nicholls J, Meier PS, Brennan A. 
The impact of spatial and temporal availability of alcohol on its consumption and related harms: a critical review in the context of UK licensing policies. Drug Alcohol Rev. 2014;33(5):515–25. Meier PS, Meng Y, Holmes J, Baumberg B, Purshouse R, Hill-McManus D, et al. Adjusting for unrecorded consumption in survey and per capita sales data: quantification of impact on gender- and Age-specific alcohol-attributable fractions for oral and pharyngeal cancers in Great Britain. Alcohol Alcohol. 2013;48(2):241–9. Gmel G, Rehm J. Measuring alcohol consumption. Contemp Drug Probl. 2004;31(3):467–540. Boca FKD, Darkes J. The validity of self-reports of alcohol consumption: state of the science and challenges for research. Addiction. 2003;98:1–12. This study is funded by the Society for Study of Addiction and the National Institute for Health Research (NIHR)'s School for Public Health Research (SPHR). The views are those of the authors(s) and not necessarily those of the NHS, the NIHR or the Department of Health. SPHR is a partnership between the Universities of Sheffield; Bristol; Cambridge; Exeter; UCL; The London School for Hygiene and Tropical Medicine; the LiLaC collaboration between the Universities of Liverpool and Lancaster and Fuse; The Centre for Translational Research in Public Health, a collaboration between Newcastle, Durham, Northumbria, Sunderland and Teesside Universities. Research Department of Clinical, Educational and Health Psychology, University College London, London, UK Emma Beard, Jamie Brown & Susan Michie Department of Epidemiology and Public Health, University College London, London, UK Emma Beard, Jamie Brown & Robert West Alcohol Misuse, Department of Health, London, UK Crispin Acton ScHARR, The University of Sheffield, Sheffield, UK Alan Brennan & John Holmes Institute of Psychiatry, Kings College London, London, UK Colin Drummond School of Social and Community Medicine, University of Bristol, Bristol, UK Matthew Hickman Institute of Health & Society, Newcastle University, Newcastle, UK Eileen Kaner Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, London, UK Karen Lock Public Health England, London, UK Matthew Walmsley Emma Beard Robert West Alan Brennan Susan Michie Correspondence to Emma Beard. EB and JB have both received unrestricted research funding from Pfizer for the STS. JB's post is funded by the Society for Study of Addiction and EB's post by the National Institute for Health Research (NIHR) School for Public Health Research (SPHR). EB and JB are also funded by Cancer Research UK (CRUK). RW has received travel funds and hospitality from, and undertaken research and consultancy for, pharmaceutical companies that manufacture and/or research products aimed at helping smokers to stop. AB has conducted economic evaluations of pharmaceuticals and interventions for both government bodies and the pharmaceutical industry. CD is part-funded by the NIHR Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King's College London and the NIHR South London Collaborations for Leadership in Health Research and Care. EB, JB, SM and RW produced the first draft of the manuscript. All other authors commented on this draft and contributed to the final version. All authors read and approved the final manuscript. Beard, E., Brown, J., West, R. et al. Protocol for a national monthly survey of alcohol use in England with 6-month follow-up: 'The Alcohol Toolkit Study'. BMC Public Health 15, 230 (2015). 
https://doi.org/10.1186/s12889-015-1542-7 Alcohol toolkit study Smoking toolkit study
Braille in Modern World

You may have seen braille in elevators and on ATMs and door signs, but brushed it off as something that's for blind people and not for you. As a student, you may have seen braille in a math problem involving patterns and binary choices. As a puzzle enthusiast, you may have seen braille in a decryption challenge. But is that all there is to braille? For an upcoming Toastmasters speech, I decided to get to know braille, by researching and interviewing locals who professionally work with people who are blind and visually impaired. Surprisingly, the more I looked into braille, the more I realized its diminishing role in the modern world. I want to address the problem today.

1. How to read braille

First, not every blind person can read and write in braille. An alarming statistic comes from the National Federation of the Blind (NFB) in their 2009 report. Less than 10% of blind people in the U.S. are literate in braille, and the rate is similar among blind children, which points to a dismal future. The NFB concluded with a hopeful plan to double the literacy rate by 2015, but has yet to indicate any change. If we are to suggest using braille to people who are blind and visually impaired, we had better understand how it works.

Braille is a 6-dot system: a set of 6 dots is called a cell, and the dots are named 1-6, as shown above. Each dot in a cell can be raised or flat, so there can be 64 different cells. That does not seem like a whole lot to work with. But braille, in fact, represents many things, including multiple languages, math, music, and even programming. The trick is to allow a cell to take on multiple roles and to allow a group of cells to be read as one entity. It's ingenious, but at the same time, you can imagine how trying to represent everything with just 6 dots can cause problems. With this in mind, let us look at the English language and math in braille.

a. English braille

English braille consists of two levels: Grade 1 and Grade 2. In Grade 1 braille, we turn each letter, number, and punctuation mark in a sentence into a cell, in a one-to-one fashion. Hence, if you already understand how words are spelled and sentences are constructed in English, you can write in Grade 1 braille. Let us first examine the English letters in braille: Figure 1. Letters in braille. At a glance, this looks like a lot to remember. However, there are a couple of patterns. First, the bottom three lines of cells are copies of the top line, with one or both of dots 3 and 6 raised. Second, the "corner pieces" are assigned to the 4th, 6th, 8th, and 10th letters, and show a counterclockwise rotation when taken as a sequence. The letter w appears as an afterthought, because it is not used natively in French. (Braille started in France in the 1820s.) We can also "lower" the dots on the a-j line to write ten more cells. Many of these are used to show punctuation marks. Figure 2. Punctuation marks in braille. And here are the remaining fourteen cells, which involve dots 3-6: Figure 3. The remaining cells. Three of these merit a special mention. Dots-6 (called the capital sign) capitalizes the letter that follows. You can place two of these to write a word in "all caps." Placing dots-3456 (called the number sign) before one of the letters a-j creates a number between 0 and 9. The numbers are arranged in "keyboard" order. In other words, 1 and a share the same cell, 2 and b, and so on, with 0 and j sharing the last cell.
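Since each dot is either raised or flat, a cell is effectively a 6-bit pattern, which is where the 64 combinations come from; Unicode even reserves a block for exactly these patterns. The snippet below is my own illustration (the letter-to-dots table is deliberately partial, and this is not code from any braille standard or library):

```python
from itertools import combinations

# Dot k (k = 1..6) corresponds to bit k-1; Unicode's braille block starts at U+2800,
# so adding the bit pattern to 0x2800 gives the matching braille character.
def cell(dots):
    """Return the Unicode braille character for a set of raised dots (1-6)."""
    code = 0x2800
    for d in dots:
        code |= 1 << (d - 1)
    return chr(code)

# A deliberately partial Grade 1 letter-to-dots table, just for illustration.
letters = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "l": {1, 2, 3}, "m": {1, 3, 4}}

print(cell({1, 3, 4}))                              # ⠍  (dots 1-3-4: the letter m)
print("".join(cell(letters[ch]) for ch in "call"))  # ⠉⠁⠇⠇
# Every subset of the six dots is a distinct cell, including the blank one: 64 in total.
print(len({cell(c) for r in range(7) for c in combinations(range(1, 7), r)}))  # 64
```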
The cell with no raised dots indicates a space. Now, you can imagine that long words would be cumbersome to write. We also know from experience that certain letters tend to appear together (e.g. as a prefix or suffix). Lastly, there may be words that are more useful to the blind; these words should be easier to read and write. Figure 4. A sentence written in Grade 1 braille. Grade 2 braille addresses these problems by introducing contractions. Contractions occur in two ways: a cell can represent a group of letters, sometimes an entire word, or a group of cells can form the abbreviation of a word (e.g. bl for blind, brl for braille). Almost every cell takes on double duty to accomplish these two goals. Figure 5. The same sentence in Grade 2 braille. Unfortunately, the contractions also create problems. There are many carefully laid-down rules for when to use contractions (based on spelling, phonetics, or optimality) and which contraction comes first. Imagine that you are a software engineer whose job is to parse a word into cells while accounting for all these rules. It is not easy to take the plunge and write code that can translate English to braille and vice versa.

Since 1992, efforts have been put into modernizing and standardizing English braille. The result is Unified English Braille (UEB). UEB removes and simplifies some of the contractions and punctuation marks in Grade 2 braille, in order to better reflect ideas that are relevant in the modern world and to pave a lasting future for braille. (The punctuation marks above remained the same.) In particular, standardization allows books and materials in braille to be more easily shared among the countries that use English. The schools for the blind, book lending programs, and braille certification programs in the U.S. are transitioning to UEB now.

b. Nemeth braille

Nemeth braille (pronounced ne-meth, not nee-muth) uses the six dots to represent ideas and notations that are common in math. Using the number sign (dots-3456), letter sign (dots-56), and punctuation sign (dots-456), we can write math along with English in a sentence, much like I do on this blog. Figure 6. Special signs in Nemeth braille. Again, there is a long list of rules for representing math. We consider a small handful below. First, the cells for numbers are not the ones used in Grade 2 braille and UEB. Instead, we use the "lowered" cells that we had previously used for punctuation marks: Figure 7. Numbers in Nemeth braille. The number sign and punctuation sign allow us to tell whether we are looking at a number or a punctuation mark. Next, let us consider operators. Note that two cells are needed to create the equal sign: dots-46, followed by dots-13. Figure 8. Basic operators in Nemeth braille.

If you are familiar with LaTeX, you will feel more at ease when you read and write an expression in Nemeth braille. Even if you aren't, you can get there with practice by studying examples. The key is to think about how you would describe an expression to a blind person or a computer, who cannot see the expression in print. Consider writing a fraction in LaTeX. No matter how complicated the expressions in the numerator and denominator may be, we would write \frac{numerator}{denominator}. This line of code captures the essence of the fraction. We are telling LaTeX that there is a fraction ahead with the frac command, the numerator looks like numerator, and the denominator looks like denominator.
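For example, a fraction such as (x + 1)/(2y), an expression chosen purely for illustration rather than one taken from the figures in this post, would appear in LaTeX source as:

```latex
% \frac names the structure first, then gives its two parts in braces.
\[
  \frac{x + 1}{2y}
\]
```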
Furthermore, we can write additional LaTeX code in numerator and denominator, so that we can describe how their expressions look in print more precisely. We write a fraction in Nemeth braille in a similar manner: Figure 9. An equation that involves a fraction. The fraction signs (dots-1456 to open, dots-3456 to close) indicate that we are writing a fraction, and the slash sign (dots-34) separates the expressions for the numerator and denominator. The numerator and denominator may hold additional braille code. At the Texas School for the Blind and Visually Impaired, the math teachers and braillists create and distribute homework written in Nemeth braille. In return, the students write their answers in braille using a typewriter such as the Perkins Brailler. Figure 10. A homework problem in Nemeth braille. As you can see above, Nemeth braille works well. Word problems and multiple-choice questions can be given easily. Tables of information can be included as well, although they may require more space due to formatting. One major obstacle is conveying visual information, such as drawings of geometric shapes, graphs of functions in 2D and 3D, and colors, shadows, and transparencies to highlight certain ideas. We may try to approximate the contour with dotted cells or explain what is shown in words. However, we must wonder how much information gets lost in doing so.

2. The decline of braille

Earlier I mentioned that the braille literacy rate among the blind is estimated to be under 10%. This is a significant drop when you consider the rate in the 1960s, which passed 50%. What happened? According to Ava Smith, the Director of the Talking Book Program, audiobooks became popular around the 1970s and a disastrous decision took place: blind children would no longer need to be taught braille, since they could listen to audiobooks to learn English. As it turned out, listening by itself did not help them develop literacy skills. Audiobooks provide one form of learning but not a holistic one. (source)

When we read a sentence by sight, we observe how to spell, how to follow grammar, how to format a text, how to use punctuation marks, etc. Oftentimes, particularly in poems, the author uses these in a very deliberate manner to highlight his or her ideas. You can't take in these ideas and learn to create your own when you only listen to the sentence and never see it written. There is much information in writing that we can take for granted. In addition, when you listen to someone speak the sentence, you must depend on that person's interpretation of the sentence. The pronunciation, the inflection, the emotion, the pace—they are theirs, not yours. How will you give that sentence your own voice if you never learn to read?

In 1975, the Individuals with Disabilities Education Act (IDEA) allowed students with disabilities to attend a public school. Unfortunately, most teachers in public schools did not know braille, and there were simply not enough outside resources—braille books, braillists, and Teachers of the Visually Impaired (TVIs)—to help the blind students use braille to learn as well as sighted students. Furthermore, alternatives to braille began to appear. People who had some sight could choose to read large print books. Compared to large print and audio, braille books are more costly to produce, bulkier in weight and number of volumes, and more crippling in the case of damage or loss. Refreshable braille displays—electronic braille—would certainly eliminate a lot of these problems.
However, they are expensive (Humanware and Freedom Scientific sell their mid-range, 40-cell displays for about $3,000), can show only one line at a time, and are prone to failure. An electronic braille display in use. (source) If we are to advocate using braille daily, we need a display that is cheap (Transforming Braille Group is aiming for $320 for a 20-cell display), can show a full page of braille (without raising the price significantly), and is reliable. Computers and cell phones are almost universal now. By default, they include accessibility options like high contrast, a magnifier, and a screen reader, as well as personal apps, for people who are blind and visually impaired. Braille is simply lagging behind in technology.

3. What can we do?

As sighted people, how can we help further braille? For the most part, awareness is key. Knowing what braille is—braille allows blind people to learn various ideas and share their own—is good, but knowing how to read and write in braille is even better. (It's easier than learning a foreign language, in my opinion.) We can start out small. The Talking Book Program does community outreach and teaches kids how to write secret messages in braille to their friends. Puzzled Pint loves to make adults read braille (albeit Grade 1) by hiding the solution to a puzzle in braille. Ask yourself: what can you do to get yourself and others interested in learning braille?

If you work at a restaurant or a company, offer braille copies of your restaurant menu or company brochure. Blind people are like everybody else. They eat, they drink, and they conduct business. There are many braille production groups that can help you with creating braille copies. You can get creative with braille menus and brochures. (source) We should also advocate for the inclusion of braille in mail and currency. Blind people get mail like everybody else, but they cannot see what they just received. The U.S. is the only country whose paper bills are all of the same size, shape, color, and feel. There is no way a blind person can tell the denomination, unless the person has systematically placed the bills in a wallet or gets help from a money reader (which takes time).

There are a few additional things that we can do to help people who are blind and visually impaired. If you work in design—websites, games, electronics, and mobile apps—make sure that they can use your products with ease. Knowbility provides training for creating websites that are accessible, and the World Wide Web Consortium (W3C) offers a list of links for mobile apps. Please spread the word about programs like Bookshare and the National Library Service (NLS). NLS suggests that 1.4% of the population in any state may be eligible for their program. However, the Talking Book Program serves fewer than 20,000 people in Texas, out of the possible 378,000 or so according to the formula given by NLS. With limited funds, the Talking Book Program cannot advertise itself. The only way to be known and heard by people with disabilities is word of mouth. Lastly, treat people with blindness (and any other disability) with respect and kindness as you would any other person, and don't be afraid to say words related to sight to them. If you are not sure whether you should help a blind person, just ask. Every one of us knows what help we want.

a. Helping the blind and visually impaired
Accessibility for mobile apps (World Wide Web Consortium)
Accessibility for websites (Knowbility)
Assistive technology funding
Braille production groups
National Library Service
Talking Book Program

b. Learning braille
Braille certification programs
Rules of English braille (comprehensive, full)
Rules of UEB (comprehensive, full)
Rules of Nemeth braille (comprehensive, full)
UEB chart
Nemeth braille chart

I want to thank Gloria Bennett of the Texas School for the Blind and Visually Impaired, and Ava Smith and Dina Abramson of the Talking Book Program. They were integral to my understanding of what is happening to braille nationwide and in the state of Texas, and were more than happy to give me a tour and introduce me to various equipment for printing braille and recording digital audio. There is a lot of information that I did not cover here. If you are interested in learning more about what they do, please see the interview transcripts below.

Ava Smith and Dina Abramson, Talking Book Program, Interview.
Charles Petzold, Code: The Hidden Language of Computer Hardware and Software.
Gloria Bennett, Texas School for the Blind and Visually Impaired, Interview.
National Federation of the Blind, Blindness Statistics.
National Federation of the Blind, The Braille Literacy Crisis in America.
Perkins School for the Blind, A Low Cost Revolution in Refreshable Braille.
TIME, Blind People Tell Money Bills Apart.
QUBIC - The Q&U Bolometric Interferometer for Cosmology - A novel way to look at the polarized Cosmic Microwave Background (1801.03730) A. Mennella, P.A.R. Ade, J. Aumont, S. Banfi, P. Battaglia, E.S. Battistelli, A. Baù, B. Bélier, D.Bennett, L. Bergé, J.Ph. Bernard, M. Bersanelli, M.A. Bigot-Sazy, N. Bleurvacq, G. Bordier, J. Brossard, E.F. Bunn, D.P. Burke, D. Buzi, A. Buzzelli, D. Cammilleri, F. Cavaliere, P. Chanial, C. Chapron, F. Columbro, G. Coppi, A. Coppolecchia, F. Couchot, R. D'Agostino, G. D'Alessandro, P. de Bernardis, G. De Gasperis, M. De Leo, M. De Petris, T. Decourcelle, F. Del Torto, L. Dumoulin, A. Etchegoyen, C. Franceschet, B. Garcia, A. Gault, D. Gayer, M. Gervasi, A. Ghribi, M. Giard, Y. Giraud-Héraud, M. Gradziel, L. Grandsire, J.Ch. Hamilton, D. Harari, V. Haynes, S. Henrot-Versillé, N. Holtzer, F. Incardona, J. Kaplan, A. Korotkov, N. Krachmalnicoff, L. Lamagna, J. Lande, S. Loucatos, A. Lowitz, V. Lukovic, B. Maffei, S. Marnieros, J. Martino, S. Masi, A. May, M. McCulloch, M.C. Medina, L. Mele, S. Melhuish, L. Montier, A. Murphy, D. Néel, M.W. Ng, C. O'Sullivan, A. Paiella, F. Pajot, A. Passerini, A.Pelosi, C. Perbost, O. Perdereau, F. Piacentini, M. Piat, L. Piccirillo, G. Pisano, D. Préle, R. Puddu, D. Rambaud, O. Rigaut, G.E. Romero, M. Salatino, A. Schillaci, S. Scully, M. Stolpovskiy, F. Suarez, A. Tartari, P. Timbie, S. Torchinsky, M. Tristram, C. Tucker, G. Tucker, D. Viganò, N. Vittorio, F. Voisin, B. Watson, M. Zannoni, A. Zullo Jan. 11, 2018 astro-ph.IM In this paper we describe QUBIC, an experiment that takes up the challenge posed by the detection of primordial gravitational waves with a novel approach, that combines the sensitivity of state-of-the art bolometric detectors with the systematic effects control typical of interferometers. The so-called "self-calibration" is a technique deeply rooted in the interferometric nature of the instrument and allows us to clean the measured data from instrumental effects. The first module of QUBIC is a dual band instrument (150 GHz and 220 GHz) that will be deployed in Argentina during the Fall 2018. QUBIC Technical Design Report (1609.04372) J. Aumont, S. Banfi, P. Battaglia, E.S. Battistelli, A. Baù, B. Bélier, D.Bennett, L. Bergé, J.Ph. Bernard, M. Bersanelli, M.A. Bigot-Sazy, N. Bleurvacq, G. Bordier, J. Brossard, E.F. Bunn, D. Buzi, A. Buzzelli, D. Cammilleri, F. Cavaliere, P. Chanial, C. Chapron, G. Coppi, A. Coppolecchia, F. Couchot, R. D'Agostino, G. D'Alessandro, P. de Bernardis, G. De Gasperis, M. De Petris, T. Decourcelle, F. Del Torto, L. Dumoulin, A. Etchegoyen, C. Franceschet, B. Garcia, A. Gault, D. Gayer, M. Gervasi, A. Ghribi, M. Giard, Y. Giraud-Héraud, M. Gradziel, L. Grandsire, J.Ch. Hamilton, D. Harari, V. Haynes, S. Henrot-Versillé, N. Holtzer, J. Kaplan, A. Korotkov, L. Lamagna, J. Lande, S. Loucatos, A. Lowitz, V. Lukovic, B. Maffei, S. Marnieros, J. Martino, S. Masi, A. May, M. McCulloch, M.C. Medina, S. Melhuish, A. Mennella, L. Montier, A. Murphy, D. Néel, M.W. Ng, C. O'Sullivan, A. Paiella, F. Pajot, A. Passerini, A.Pelosi, C. Perbost, O. Perdereau, F. Piacentini, M. Piat, L. Piccirillo, G. Pisano, D. Prêle, R. Puddu, D. Rambaud, O. Rigaut, G.E. Romero, M. Salatino, A. Schillaci, S. Scully, M. Stolpovskiy, F. Suarez, A. Tartari, P. Timbie, M. Tristram, G. Tucker, D. Viganò, N. Vittori, F. Voisin, B. Watson, M. Zannoni, A. 
Zullo May 11, 2017 astro-ph.CO, astro-ph.IM QUBIC is an instrument aiming at measuring the B mode polarisation anisotropies at medium scales angular scales (30-200 multipoles). The search for the primordial CMB B-mode polarization signal is challenging, because of many difficulties: smallness of the expected signal, instrumental systematics that could possibly induce polarization leakage from the large E signal into B, brighter than anticipated polarized foregrounds (dust) reducing to zero the initial hope of finding sky regions clean enough to have a direct primordial B-modes observation. The QUBIC instrument is designed to address all aspects of this challenge with a novel kind of instrument, a Bolometric Interferometer, combining the background-limited sensitivity of Transition-Edge-Sensors and the control of systematics allowed by the observation of interference fringe patterns, while operating at two frequencies to disentangle polarized foregrounds from primordial B mode polarization. Its characteristics are described in details in this Technological Design Report. Simulated magnetic field expulsion in neutron star cores (1512.07151) J.G. Elfritz, J.A. Pons, N. Rea, K. Glampedakis, D. Viganò Dec. 22, 2015 astro-ph.SR, astro-ph.HE The study of long-term evolution of neutron star (NS) magnetic fields is key to understanding the rich diversity of NS observations, and to unifying their nature despite the different emission mechanisms and observed properties. Such studies in principle permit a deeper understanding of the most important parameters driving their apparent variety, e.g. radio pulsars, magnetars, x-ray dim isolated neutron stars, gamma-ray pulsars. We describe, for the first time, the results from self-consistent magneto-thermal simulations considering not only the effects of the Hall-driven field dissipation in the crust, but adding a complete set of proposed driving forces in a superconducting core. We emphasize how each of these core-field processes drive magnetic evolution and affect observables, and show that when all forces are considered together in vectorial form, the net expulsion of core magnetic flux is negligible, and will have no observable effect in the crust (consequently in the observed surface emission) on megayear time-scales. Our new simulations suggest that strong magnetic fields in NS cores (and the signatures on the NS surface) will persist long after the crustal magnetic field has evolved and decayed, due to the weak combined effects of dissipation and expulsion in the stellar core. Population Synthesis of Isolated Neutron Stars with magneto-rotational evolution II: from radio-pulsars to magnetars (1507.05452) M. Gullón, J.A. Pons, J.A. Miralles, D. Viganò, N. Rea, R. Perna July 20, 2015 astro-ph.HE Population synthesis studies constitute a powerful method to reconstruct the birth distribution of periods and magnetic fields of the pulsar population. When this method is applied to populations in different wavelengths, it can break the degeneracy in the inferred properties of initial distributions that arises from single-band studies. In this context, we extend previous works to include $X$-ray thermal emitting pulsars within the same evolutionary model as radio-pulsars. We find that the cumulative distribution of the number of X-ray pulsars can be well reproduced by several models that, simultaneously, reproduce the characteristics of the radio-pulsar distribution. 
However, even considering the most favourable magneto-thermal evolution models with fast field decay, log-normal distributions of the initial magnetic field over-predict the number of visible sources with periods longer than 12 s. We then show that the problem can be solved with different distributions of magnetic field, such as a truncated log-normal distribution, or a binormal distribution with two distinct populations. We use the observational lack of isolated NSs with spin periods P>12 s to establish an upper limit to the fraction of magnetars born with B > 10^{15} G (less than 1\%). As future detections keep increasing the magnetar and high-B pulsar statistics, our approach can be used to establish a severe constraint on the maximum magnetic field at birth of NSs. The X-ray outburst of the Galactic Centre magnetar SGR J1745-2900 during the first 1.5 year (1503.01307) F. Coti Zelati, N. Rea, A. Papitto, D. Viganò, J. A. Pons, R. Turolla, P. Esposito, D. Haggard, F. K. Baganoff, G. Ponti, G. L. Israel, S. Campana, D. F. Torres, A. Tiengo, S. Mereghetti, R. Perna, S. Zane, R. P. Mignani, A. Possenti, L. Stella March 4, 2015 astro-ph.HE In 2013 April a new magnetar, SGR 1745-2900, was discovered as it entered an outburst, at only 2.4 arcsec angular distance from the supermassive black hole at the Centre of the Milky Way, Sagittarius A*. SGR 1745-2900 has a surface dipolar magnetic field of ~ 2x10^{14} G, and it is the neutron star closest to a black hole ever observed. The new source was detected both in the radio and X-ray bands, with a peak X-ray luminosity L_X ~ 5x10^{35} erg s^{-1}. Here we report on the long-term Chandra (25 observations) and XMM-Newton (8 observations) X-ray monitoring campaign of SGR 1745-2900, from the onset of the outburst in April 2013 until September 2014. This unprecedented dataset allows us to refine the timing properties of the source, as well as to study the outburst spectral evolution as a function of time and rotational phase. Our timing analysis confirms the increase in the spin period derivative by a factor of ~2 around June 2013, and reveals that a further increase occurred between 2013 Oct 30 and 2014 Feb 21. We find that the period derivative changed from 6.6x10^{-12} s s^{-1} to 3.3x10^{-11} s s^{-1} in 1.5 yr. On the other hand, this magnetar shows a slow flux decay compared to other magnetars and a rather inefficient surface cooling. In particular, starquake-induced crustal cooling models alone have difficulty in explaining the high luminosity of the source for the first ~200 days of its outburst, and additional heating of the star surface from currents flowing in a twisted magnetic bundle is probably playing an important role in the outburst evolution. Searching for small-scale diffuse emission around SGR 1806-20 (1411.0394) D. Viganò, N. Rea, P. Esposito, S. Mereghetti, G.L. Israel, A. Tiengo, R. Turolla, S. Zane, L. Stella Nov. 3, 2014 astro-ph.HE Diffuse radio emission was detected around the soft gamma-ray repeater SGR 1806-20, after its 2004 powerful giant flare. We study the possible extended X-ray emission at small scales around SGR 1806-20, in two observations by the High Resolution Camera Spectrometer (HRC-S) on board of the Chandra X-ray Observatory: in 2005, 115 days after the giant flare, and in 2013, during quiescence. We compare the radial profiles extracted from data images and PSF simulations, carefully considering various issues related with the uncertain calibration of the HRC PSF at sub-arcsecond scales. 
We do not see statistically significant excesses pointing to an extended emission on scales of arcseconds. As a consequence, SGR 1806-20 is compatible with being point-like in X-rays, months after the giant flare, as well as in quiescence. Creation of magnetic spots at the neutron star surface (1408.3833) U. Geppert, D. Viganò Aug. 17, 2014 astro-ph.SR, astro-ph.HE According to the partially screened gap scenario, an efficient electron-positron pair creation, a general precondition of radio-pulsar activity, relies on the existence of magnetic spots, i.e., local concentrations of strong and small scale magnetic field structures at the surface of neutron stars. They have a strong impact on the surface temperature, which is potentially observable. Here we reinforce the idea that such magnetic spots can be formed by extracting magnetic energy from the toroidal field that resides in deep crustal layers, via Hall drift. We study and discuss the magneto-thermal evolution of qualitatively different neutron star models and initial magnetic field configurations that lead to the creation of magnetic spots. We find that magnetic spots can be created on a timescale of $10^4$ years with magnetic field strengths $\gtrsim 5\times 10^{13}$ G, provided almost the whole magnetic energy is stored in its toroidal component, and that the conductivity in the inner crust is not too large. The lifetime of the magnetic spots is at least $\sim$one million of years, being longer if the initial field permeates both core and crust. The outburst decay of the low magnetic field magnetar SGR 0418+5729 (1303.5579) N. Rea, G. L. Israel, J. A. Pons, R. Turolla, D. Vigano, S. Zane, P. Esposito, R. Perna, A. Papitto, G. Terreran, A. Tiengo, D. Salvetti, J. M. Girart, Aina Palau, A. Possenti, M. Burgay, E. Gogus, A. Caliandro, C. Kouveliotou, D. Gotz, R. P. Mignani, E. Ratti, L. Stella April 17, 2013 astro-ph.GA, astro-ph.HE We report on the long term X-ray monitoring of the outburst decay of the low magnetic field magnetar SGR 0418+5729, using all the available X-ray data obtained with RXTE, SWIFT, Chandra, and XMM-Newton observations, from the discovery of the source in June 2009, up to August 2012. The timing analysis allowed us to obtain the first measurement of the period derivative of SGR 0418+5729: \dot{P}=4(1)x10^{-15} s/s, significant at ~3.5 sigma confidence level. This leads to a surface dipolar magnetic field of B_dip ~6x 10^{12} G. This measurement confirms SGR 0418+5729 as the lowest magnetic field magnetar. Following the flux and spectral evolution from the beginning of the outburst up to ~1200 days, we observe a gradual cooling of the tiny hot spot responsible for the X-ray emission, from a temperature of ~0.9 to 0.3 keV. Simultaneously, the X-ray flux decreased by about 3 orders of magnitude: from about 1.4x10^{-11} to 1.2x10^{-14} erg/s/cm^2 . Deep radio, millimeter, optical and gamma-ray observations did not detect the source counterpart, implying stringent limits on its multi-band emission, as well as constraints on the presence of a fossil disk. By modeling the magneto-thermal secular evolution of SGR 0418+5729, we infer a realistic age of ~550 kyr, and a dipolar magnetic field at birth of ~10^{14} G. The outburst characteristics suggest the presence of a thin twisted bundle with a small heated spot at its base. The bundle untwisted in the first few months following the outburst, while the hot spot decreases in temperature and size. 
We estimate the outburst rate of low magnetic field magnetars to be about one per year per galaxy, and we briefly discuss the consequences of such result in several other astrophysical contexts. Hall drift in the crust of neutron stars - necessary for radio pulsar activity? (1206.1790) U. Geppert, J. Gil, G. Melikidze, J. A. Pons, D. Vigano June 8, 2012 astro-ph.SR, astro-ph.HE The radio pulsar models based on the existence of an inner accelerating gap located above the polar cap rely on the existence of a small scale, strong surface magnetic field $B_s$. This field exceeds the dipolar field $B_d$, responsible for the braking of the pulsar rotation, by at least one order of magnitude. Neither magnetospheric currents nor small scale field components generated during neutron star's birth can provide such field structures in old pulsars. While the former are too weak to create $B_s \gtrsim 5\times 10^{13}$G$\;\gg B_d$, the ohmic decay time of the latter is much shorter than $10^6$ years. We suggest that a large amount of magnetic energy is stored in a toroidal field component that is confined in deeper layers of the crust, where the ohmic decay time exceeds $10^7$ years. This toroidal field may be created by various processes acting early in a neutron star's life. The Hall drift is a non-linear mechanism that, due to the coupling between different components and scales, may be able to create the demanded strong, small scale, magnetic spots. Taking into account both realistic crustal microphysics and a minimal cooling scenario, we show that, in axial symmetry, these field structures are created on a Hall time scale of $10^3$-$10^4$ years. These magnetic spots can be long-lived, thereby fulfilling the pre-conditions for the appearance of the radio pulsar activity. Such magnetic structures created by the Hall drift are not static, and dynamical variations on the Hall time scale are expected in the polar cap region. The influence of magnetic field geometry on magnetars X-ray spectra (1111.4158) D. Viganò, J. A. Pons University of Liverpool, Nov. 17, 2011 astro-ph.HE Nowadays, the analysis of the X-ray spectra of magnetically powered neutron stars or magnetars is one of the most valuable tools to gain insight into the physical processes occurring in their interiors and magnetospheres. In particular, the magnetospheric plasma leaves a strong imprint on the observed X-ray spectrum by means of Compton up-scattering of the thermal radiation coming from the star surface. Motivated by the increased quality of the observational data, much theoretical work has been devoted to develop Monte Carlo (MC) codes that incorporate the effects of resonant Compton scattering in the modeling of radiative transfer of photons through the magnetosphere. The two key ingredients in this simulations are the kinetic plasma properties and the magnetic field (MF) configuration. The MF geometry is expected to be complex, but up to now only mathematically simple solutions (self-similar solutions) have been employed. In this work, we discuss the effects of new, more realistic, MF geometries on synthetic spectra. We use new force-free solutions in a previously developed MC code to assess the influence of MF geometry on the emerging spectra. Our main result is that the shape of the final spectrum is mostly sensitive to uncertain parameters of the magnetospheric plasma, but the MF geometry plays an important role on the angle-dependence of the spectra.
Identification and Functional Outcome of mRNAs Associated with RNA-Binding Protein TIA-1

Isabel López de Silanes, Stefanie Galbán, Jennifer L. Martindale, Xiaoling Yang, Krystyna Mazan-Mamczarz, Fred E. Indig, Geppino Falco, Ming Zhan, Myriam Gorospe

Laboratory of Cellular and Molecular Biology, Research Resources Branch, and Laboratory of Genetics, National Institute on Aging-Intramural Research Program, National Institutes of Health, Baltimore, Maryland 21224

For correspondence: [email protected] [email protected]

The RNA-binding protein TIA-1 (T-cell intracellular antigen 1) functions as a posttranscriptional regulator of gene expression and aggregates to form stress granules following cellular damage. TIA-1 was previously shown to bind mRNAs encoding tumor necrosis factor alpha (TNF-α) and cyclooxygenase 2 (COX-2), but TIA-1 target mRNAs have not been systematically identified. Here, immunoprecipitation (IP) of TIA-1-RNA complexes, followed by microarray-based identification and computational analysis of bound transcripts, was used to elucidate a common motif present among TIA-1 target mRNAs. The predicted TIA-1 motif was a U-rich, 30- to 37-nucleotide (nt)-long bipartite element forming loops of variable size and a bent stem. The TIA-1 motif was found in the TNF-α and COX-2 mRNAs and in 3,019 additional UniGene transcripts (∼3% of the UniGene database), localizing preferentially to the 3′ untranslated region. The interactions between TIA-1 and target transcripts were validated by IP of endogenous mRNAs, followed by reverse transcription and PCR-mediated detection, and by pulldown of biotinylated RNAs, followed by Western blotting. Further studies using RNA interference revealed that TIA-1 repressed the translation of bound mRNAs. In summary, we report a signature motif present in mRNAs that associate with TIA-1 and provide support to the notion that TIA-1 represses the translation of target transcripts.

Posttranscriptional mechanisms controlling pre-mRNA splicing and maturation, as well as mRNA transport, turnover, and translation, critically influence gene expression programs in mammalian cells. Central to the posttranscriptional regulatory events is the interaction of RNAs with RNA-binding proteins (RBPs) that influence their splicing, localization, stability, and association with the translation machinery (13, 16, 23, 30, 41). Many ribonucleoprotein (RNP) complexes that govern mRNA stability and translation in response to various stimuli (e.g., developmental, stress-inducing, immune, and proliferative) are comprised of transcripts that bear uridine- or adenine/uridine-rich elements (collectively termed AREs) and the proteins that bind the AREs (ARE-RBPs) (5, 42). ARE-bearing mRNAs have received a great deal of attention, since many of them encode proteins that regulate the cell division cycle, apoptosis, proliferation, immune response, oncogenesis, and inflammation (9). Likewise, many ARE-RBPs have been described that modulate the stability of target mRNAs, their translation, or sometimes both processes: AU-binding factor 1 (AUF1), tristetraprolin (TTP), K homology splicing-regulatory protein (KSRP), butyrate response factor 1 (BRF1), the Hu proteins (HuR, HuB, HuC, and HuD), T-cell-restricted intracellular antigen 1 (TIA-1), and the TIA-1-related protein TIAR (4, 6, 7, 22, 31, 34, 36, 43).
TIA-1 has been reported to participate in the regulation of alternative pre-mRNA splicing of bound mRNAs (18, 19). However, TIA-1 has been best characterized as a suppressor of translation, as shown for the target ARE-bearing mRNAs encoding tumor necrosis factor alpha (TNF-α) and cyclooxygenase 2 (COX-2) (15, 35). Following stimulation with bacterial lipopolysaccharide, macrophages derived from either wild-type or TIA-1−/− mice expressed the same levels of TNF-α mRNA, but TIA-1−/− cells expressed much more TNF-α protein than cells expressing TIA-1. In TIA-1−/− macrophages, the levels of TNF-α mRNA found in polysomes were significantly higher, lending further support to the notion that TIA-1 functions as a translational silencer (35). Similarly, the steady-state levels of COX-2 mRNA were the same in TIA-1-expressing and -deficient fibroblasts, but cells lacking TIA-1 had significantly higher levels of COX-2 mRNA in polysomes and expressed elevated levels of COX-2 protein (15).

The mechanisms whereby TIA-1 represses translation have been investigated most extensively in cells responding to environmental stress agents. Stress-triggered translational inhibition is characterized by the activation of one or more protein kinases (PKR, PERK, GCN2, and HRI) that phosphorylate the α subunit of eukaryotic initiation factor 2 (eIF-2α), a constituent of the ternary complex (eIF-2-GTP-tRNAiMet) that loads initiator tRNAiMet onto the small ribosomal subunit to initiate protein translation (14, 27). Phosphorylated eIF-2α inhibits translation by reducing the availability of the active ternary complex; under these conditions, TIA-1 has been proposed to interact with the translational machinery on the 5′ region of the mRNA and to promote the assembly of noncanonical, translationally incompetent initiation complexes (3). In situations of stress, when many transcripts are simultaneously subject to such translational silencing, the self-aggregating properties of TIA-1 promote the formation of cytoplasmic foci known as stress granules (SGs), which are generally believed to represent sites of translational inhibition (26). Nonetheless, the underlying translational control mechanisms mediated by TIA-1 are likely to be similar in stressed and unstressed cells (1, 2). In the case of mRNAs bearing 3′ untranslated region (3′UTR) AREs that form complexes with TIA-1 (TNF-α and COX-2), the likelihood will be greater that translationally silent preinitiation complexes assemble on the 5′ region of the transcript, and therefore ARE-bearing mRNAs would be preferentially subject to translational repression (3).

Given that TIA-1 is implicated in critical cellular events, including the response to stress agents, apoptotic stimuli, and inflammatory factors, we thought it would be highly valuable to systematically identify the collection of TIA-1 target mRNAs. Here, we describe efforts undertaken to elucidate such TIA-1-associated transcripts in human colorectal cancer cells using en masse methodologies. The analysis was carried out by immunoprecipitating TIA-1-RNA complexes from stressed cells and identifying the bound transcripts using microarrays. Computational analysis of the target transcripts led to the elucidation of a shared, U-rich motif present in TIA-1 target mRNAs.
The data revealed that novel TIA-1 target mRNAs can be successfully identified using this motif and that mRNAs associating with TIA-1 are translationally repressed.

Cell culture, treatments, and siRNA transfections. Human colorectal carcinoma RKO cells were cultured in minimum essential medium (Invitrogen), supplemented with 10% fetal bovine serum and antibiotics. Cells were subjected to heat shock (HS) by incubating at 45°C for 1 h. Sodium arsenite and carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone (FCCP) were from Sigma. Four small interfering RNA (siRNA) molecules targeting TIA-1 were assessed: CTGGGCTAACAGAACAACTAA (T1, employed in subsequent experiments), AACGATTTGGGAGGTAGTGAA (T2), CACAACAAATTGGCCAGTATA (T3), and CGGAAGATAATGGGTAAGGAA (T4); the control siRNA was AATTCTCCGAACGTGTCACGT. Small interfering RNAs (100 nM, QIAGEN) were transfected with Oligofectamine (Invitrogen), and cells were harvested 2 to 4 days after transfection, as indicated.

IP assays. Immunoprecipitation (IP) of TIA-1-mRNA complexes from RKO cell lysates was used to assess the association of endogenous TIA-1 with endogenous target mRNAs. The IP assay was performed essentially as described previously (32, 37), except 100 million cells were used as starting material and lysate supernatants were precleared for 30 min at 4°C using 15 μg of immunoglobulin G (IgG) (Santa Cruz Biotechnology) and 50 μl of protein A-Sepharose beads (Sigma) that had been previously swollen in NT2 buffer (50 mM Tris [pH 7.4], 150 mM NaCl, 1 mM MgCl2, and 0.05% Nonidet P-40 [NP-40]) supplemented with 5% bovine serum albumin. Beads (100 μl) were incubated (16 h, 4°C) with 30 μg of antibody (either goat IgG [Santa Cruz Biotechnology] or goat anti-TIA-1 [Santa Cruz Biotechnology]) and then for 1 h at 4°C with 3 mg of cell lysate. After extensive washes and digestion of proteins in the IP material (37), the RNA was extracted and used either for hybridization of cDNA arrays (below) or for verification of TIA-1 target transcripts. For the latter analysis, RNA in the IP material was used to perform reverse transcription-PCR (RT-PCR) to detect the presence of specific target mRNAs using gene-specific primer pairs (available upon request). PCR products were visualized after electrophoresis in 1% agarose gels stained with ethidium bromide. To assess the proteins present in the IP material, the above procedure was followed, except proteins were not digested and were instead extracted from the beads using Laemmli buffer and detected by Western blot analysis. Where indicated, purified recombinant proteins (either glutathione S-transferase [GST] or GST-TIA-1 at a concentration of 500 nM) were incubated with the precleared cell lysates for an additional 30 min at 45°C, before adding beads that had been precoated with anti-GST antibody. All subsequent steps were as described above, including IP, washes, RT, and PCR amplification.

cDNA array analysis. RNA in the material obtained after IP reactions using either an anti-TIA-1 antibody or IgG was reverse transcribed in the presence of [α-33P]dCTP (MP Biomedicals), and the radiolabeled product was used to hybridize cDNA arrays (http://www.grc.nia.nih.gov/branches/rrb/dna/index/dnapubs.htm#2; MGC arrays containing 9,600 genes), employing previously reported methodologies (32, 37, 38). All of the data were analyzed using the Array Pro software (Media Cybernetics, Inc.), then normalized by Z-score transformation (8) and used to calculate differences in signal intensities. Significant values were tested using a two-tailed Z-test and a P of ≤0.01. The data were calculated from three independent experiments. The complete cDNA array data are available from the authors.
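As a rough illustration of the normalization and IP-versus-IgG comparison just described, the following sketch Z-transforms the spot intensities of each array and applies a two-tailed Z-test to the per-spot differences. It is a minimal stand-in, not the Array Pro/reference (8) pipeline; the input arrays, the log2 scaling, the Z > 1.00 cutoff, and all variable names are illustrative assumptions.

```python
# Minimal sketch of Z-score normalization and a TIA-1 IP vs. IgG IP comparison.
# Not the Array Pro / reference (8) pipeline; inputs, scaling, and cutoffs are
# illustrative assumptions based on the text.
import numpy as np
from scipy import stats

def z_transform(intensities):
    """Z-score transform one array's spot intensities (log2 scaling assumed)."""
    logged = np.log2(np.asarray(intensities, dtype=float) + 1.0)
    return (logged - logged.mean()) / logged.std(ddof=1)

def enriched_spots(tia1_ip, igg_ip, z_cutoff=1.00, alpha=0.01):
    """Indices of spots enriched in the TIA-1 IP relative to the IgG IP."""
    z_diff = z_transform(tia1_ip) - z_transform(igg_ip)
    # The difference of two unit-variance Z scores has variance 2
    z_stat = z_diff / np.sqrt(2.0)
    p_values = 2.0 * stats.norm.sf(np.abs(z_stat))   # two-tailed Z-test
    return np.where((z_diff > z_cutoff) & (p_values <= alpha))[0]

# Random data standing in for the 9,600-spot arrays
rng = np.random.default_rng(0)
tia1 = rng.lognormal(mean=5.0, sigma=1.0, size=9600)
igg = rng.lognormal(mean=5.0, sigma=1.0, size=9600)
print(len(enriched_spots(tia1, igg)), "putative TIA-1-associated spots")
```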
Computational analysis. Human UniGene records were first identified from the most strongly enriched TIA-1 targets derived from the array analysis; the top 185 transcripts from which the 3′UTRs were available (available upon request) served as the experimental data set for the identification of the TIA-1 motif. The 185 3′UTR sequences were first scanned with RepeatMasker (www.repeatmasker.org) to remove repetitive sequences. The remaining sequences were divided into 100-base-long subsequences with 50-base overlap between consecutive sequences and were organized into 50 data sets. Common RNA motifs were elucidated from each of the 50 random data sets. The top 10 candidate motifs from each random data set were selected and were used to build the stochastic context-free grammar (SCFG) model. The SCFG model of each candidate motif was used to search against the experimental 3′UTR data set as well as the entire human UniGene 3′UTR data set to obtain the number of hits for each motif. The motif with the highest enrichment in the experimental data set over the entire UniGene data set was considered to be the best TIA-1 candidate motif. The enrichment was examined by Fisher's exact test. The identification of the RNA motif in unaligned sequences was conducted using FOLDALIGN software (21), and the identified motif was modeled by the SCFG algorithm and searched against the transcript data set using the COVE and COVELS software packages (17). The motif logo was constructed using WebLogo (http://weblogo.berkeley.edu/). RNAplot was used to depict the secondary structure of the representative RNA motifs. The computation was performed using the NIH Biowulf computer farm. Both UniGene and Refseq data sets were downloaded from NCBI.

Synthesis of biotinylated transcripts and analysis of TIA-1 bound to biotinylated RNA. For in vitro synthesis of biotinylated transcripts, reverse-transcribed total RNA was used as the template for PCRs using 5′ oligonucleotides that contained the T7 RNA polymerase promoter sequence. All oligonucleotide pairs used to synthesize DNA templates for the production of biotinylated transcripts are available upon request. The following genes (with the amplified regions indicated in parentheses) were assayed for biotin pulldown: ACTG1 (1201 to 1904), PFN1 (556 to 733), ACTB (1219 to 1724), APH-1A (864 to 1910), DEK (1312 to 2077), MTA1 (2018 to 2609), GAPDH (981 to 1283), CALM2 (522 to 1071), SNRPF (345 to 445), CDK9 (1229 to 1732), APEX1 (1314 to 1541), and PTMA (546 to 1202). The PCR-amplified products were resolved on agarose gels, and transcripts were purified and used as templates for the synthesis of the corresponding biotinylated RNAs using T7 RNA polymerase and biotin-CTP (39). Biotin pull-down assays (39) were carried out by incubating 500 nM of either recombinant GST or GST-TIA-1 proteins (prepared in Escherichia coli using an expression vector kindly provided by J. Varcarcel and P. Anderson [18]) with 0.2 μg of biotinylated transcripts for 30 min at 45°C. Complexes were isolated using streptavidin-conjugated Dynabeads (Dynal), and bound proteins in the pull-down material were analyzed by Western blotting using antibodies recognizing GST (below).
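Returning to the computational analysis above, the sketch below illustrates two of its simpler steps: splitting 3′UTR sequences into 100-base windows with 50-base overlap, and testing motif enrichment in the experimental set against a background set with Fisher's exact test (it also computes the hits-per-kilobase normalization used later for Table 3). It is not FOLDALIGN or COVE/COVELS; `has_motif_hit` stands in for the SCFG-based search, and all names and numbers are illustrative assumptions.

```python
# Simplified sketch of window splitting, hits-per-kb, and Fisher's exact test
# for motif enrichment. The SCFG/COVE search is replaced by a pluggable
# `has_motif_hit` predicate; everything here is an illustrative assumption.
from scipy.stats import fisher_exact

def split_windows(seq, size=100, step=50):
    """Yield overlapping subsequences (100 bases long, 50-base overlap)."""
    for start in range(0, max(len(seq) - size + 1, 1), step):
        yield seq[start:start + size]

def hits_per_kb(sequences, has_motif_hit):
    """Motif hits per kilobase across a set of sequences."""
    total_hits = sum(sum(1 for w in split_windows(s) if has_motif_hit(w)) for s in sequences)
    total_kb = sum(len(s) for s in sequences) / 1000.0
    return total_hits / total_kb if total_kb else 0.0

def enrichment(experimental, background, has_motif_hit):
    """Fisher's exact test: motif-bearing vs. non-bearing transcripts in each set."""
    exp_pos = sum(any(has_motif_hit(w) for w in split_windows(s)) for s in experimental)
    bg_pos = sum(any(has_motif_hit(w) for w in split_windows(s)) for s in background)
    table = [[exp_pos, len(experimental) - exp_pos],
             [bg_pos, len(background) - bg_pos]]
    return fisher_exact(table, alternative="greater")

# Toy matcher: call a window a "hit" if it is at least 60% U (T in cDNA space)
u_rich = lambda w: bool(w) and w.upper().count("T") / len(w) >= 0.6
odds, p = enrichment(["TTTTTTATTTTTAGTTTC" * 6], ["ACGTACGT" * 20] * 10, u_rich)
print(odds, p)
```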
Immunofluorescence.RKO cells cultured on dishes containing coverslips were fixed in 4% formaldehyde (15 min) and permeabilized in cold 0.2% Triton X-100 in phosphate-buffered saline (PBS) (15 min). After incubation in blocking buffer (5% horse serum in PBS) for 1 h at 37°C, coverslips were incubated with either goat anti-TIA-1 or goat anti-TIAR (Santa Cruz Biotechnology) in blocking buffer (1 h at 37°C, 1:200 dilution), washed with PBS containing 0.1% Tween 20, and further incubated with Alexa Fluor 568-labeled donkey anti-goat IgG (heavy plus light chains) (Molecular Probes; 1 h at 37°C, 1:500 dilution). After washes with PBS containing 0.1% Tween 20, coverslips were mounted in Vectashield (Vector Laboratories) and visualized with a Zeiss LSM410 confocal microscope. Representative photographs from three independent experiments are shown. Negative-control incubations were performed without primary antibody. Analysis of newly translated protein.Newly translated CALM2, SNRPF, CASP8, and (control) glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were assessed by incubating 4 × 106 cells with 1.5 mCi l-[35S]methionine and l-[35S]cysteine (Easy Tag EXPRESS; NEN/Perkin Elmer) per 100-mm plate for 20 min, whereupon cells were lysed using TSD lysis buffer (50 mM Tris [pH 7.5], 1% sodium dodecyl sulfate [SDS], and 5 mM dithiothreitol). IP reactions were carried out as previously described (33) for 1 h at 4°C using appropriate antibodies and IgG as a control. Following extensive washes in TNN buffer (50 mM Tris [pH 7.5], 5 mM EDTA, 0.5% NP-40, 250 mM NaCl), the IP material was resolved by either 15% (for CALM2 and SNRPF) or 10% (for CASP8) SDS-polyacrylamide gel electrophoresis, transferred onto polyvinylidene difluoride filters, and visualized using a PhosphorImager (Molecular Dynamics). Western blot analysis.The preparation of whole-cell, cytoplasmic, and nuclear lysates was previously described (28, 40). Protein lysates (5 to 20 μg) were resolved by SDS-polyacrylamide gel electrophoresis and transferred onto nitrocellulose membranes. Antibodies were used to detect α-tubulin, hnRNP C1/C2, S6, GST, TIAR, or TIA-1 (Santa Cruz Biotechnology), CASP8 (BD Pharmingen), CALM2 (Upstate Cell Signaling Solutions), or SNRPF (a gift from D. Ingelfinger). Following secondary antibody incubations, signals were visualized by enhanced chemiluminescence. Expression levels, subcellular localization, and immunoprecipitation of TIA-1 after stress.The subcellular localization of TIA-1 was investigated in colorectal carcinoma RKO cells exposed to a variety of stress agents, including heat (45°C, 1 h) (HS), sodium arsenite (0.5 μM, 45 min), or the mitochondrial uncoupler FCCP (1 μM, 90 min) (Fig. 1A). Each of these treatments triggered the accumulation of cytoplasmic TIA-1 into SGs (Fig. 1A), in keeping with previous reports (24, 25); HS yielded the most robust effects and was chosen for further study. By Western blot analysis, the relative levels of TIA-1 in the nuclear and cytoplasmic compartments remained largely unchanged in response to HS (Fig. 1B), supporting the notion that SGs were formed by dynamic recruitment of TIA-1 (24) through mechanisms that did not significantly deplete the pool of nuclear TIA-1. In order to identify the collection of mRNAs that were TIA-1 targets, we first sought to perform immunoprecipitation assays under conditions that preserved the pools of mRNAs bound to TIA-1. 
As assessed by Western blot analysis, TIA-1 was undetectable in the IgG IP material but was specifically immunoprecipitated by the anti-TIA-1 antibody; in the latter IP group, TIA-1 levels were again found to remain unchanged in response to HS (Fig. 1C). Under these conditions, no significant TIAR was detected in the TIA-1 IP material, although after we used larger amounts of IP material and longer exposure times to detect Western blotting signals, we did observe low levels of TIAR in the TIA-1 IP samples (unpublished data), consistent with the ability of TIAR and TIA-1 to form protein-protein associations. Given that TIA-1 is an integral constituent of SGs, we tested whether SGs might have been immunoprecipitated along with TIA-1 by monitoring the presence of the small ribosomal subunit S6 in the IP material. As shown, S6 was undetectable in all of the IP lanes, suggesting that intact SGs indeed failed to immunoprecipitate by this procedure. The RNA associated with TIA-1 in the IP material was then extracted (Fig. 2A) and used to prepare reverse-transcribed products that were subsequently hybridized to human cDNA arrays (http://www.grc.nia.nih.gov/branches/rrb/dna/dna.htm#, MGC arrays, 9,600 genes). The array hybridization patterns obtained from untreated RKO cultures were similar to those obtained from HS cultures, but overall signal intensities were higher in the HS groups (not shown). The latter set of array data was therefore used for the identification of TIA-1 target mRNAs. The association between TIA-1 and target mRNAs was deemed specific on the basis of evidence that the anti-TIA-1 antibody exhibited no appreciable cross-reactivity with other cellular proteins (particularly TIAR, as indicated below). The specificity of the interactions was further ensured through the elimination of nontarget mRNAs associating with either the antibody or the beads (detected in the IgG arrays). Three hundred array spots (∼3% of the total spots on the array) had Z scores of >1.00 in a comparison of the signals in TIA-1 IP arrays with those in IgG IP arrays and were thus deemed to represent specific TIA-1-associated transcripts. Of the specific TIA-1-associated transcripts, the 185 transcripts for which full-length mRNAs were available (the experimental data set [available upon request]) were selected for further analysis.

Sequence, structure, and preferred 3′UTR localization of the predicted TIA-1 motif. The RNA sequences of the experimental data set were used in computational analyses to identify and characterize TIA-1 motifs on the basis of both primary RNA sequences and secondary structures. Of the 100 possible candidate motifs initially identified from the experimental data set, one motif comprising 30 to 37 nt was identified as showing the highest relative number of hits in the experimental data set over the entire UniGene data set. The sequence alignment and motif logo (graphic representation of the relative frequency of nucleotides at each position), as well as examples of the secondary structures of this putative TIA-1 motif, are shown (Fig. 2B and C). The motif was found to be highly U-rich in its 5′ segment and AU-rich in its 3′ segment (Fig. 2B); Fig. 2C depicts 10 examples of the TIA-1 motif, with the corresponding mRNAs indicated below. Each hit of this motif was assigned a score, a value that reflects the degree to which each particular motif matches the motif model (Fig. 2B).
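The per-hit scores described above came from the SCFG covariance model searched with COVE/COVELS. Purely as a rough illustration of the idea of scoring candidate hits against a motif model, the sketch below substitutes a simple log-odds position frequency matrix built from aligned motif instances; it is a deliberate simplification, not the covariance-model scoring used in the study, and every sequence and name in it is hypothetical.

```python
# Toy illustration of scoring candidate motif hits against a motif model.
# The study used an SCFG covariance model (COVE/COVELS); this sketch uses a
# simple log-odds position frequency matrix instead, so treat it as a stand-in.
import math
from collections import Counter

BASES = "ACGU"

def build_pfm(aligned_instances, pseudocount=1.0):
    """Column-wise base frequencies from equal-length aligned motif instances."""
    length = len(aligned_instances[0])
    pfm = []
    for i in range(length):
        counts = Counter(seq[i] for seq in aligned_instances)
        total = sum(counts[b] + pseudocount for b in BASES)
        pfm.append({b: (counts[b] + pseudocount) / total for b in BASES})
    return pfm

def score_window(window, pfm, background=0.25):
    """Log-odds score of a window of the same length as the motif model."""
    return sum(math.log2(col.get(b, 1e-9) / background) for b, col in zip(window, pfm))

# Hypothetical aligned motif instances (U-rich 5' segment, AU-rich 3' segment)
aligned = ["UUUUUAUUUUUAAUAU", "UUUCUUUUUUUAAUAA", "UUUUUUUCUUAAAUAU"]
pfm = build_pfm(aligned)
print(round(score_window("UUUUUUUUUUUAAUAU", pfm), 2))   # U-rich window: high score
print(round(score_window("GCGCGCGCGCGCGCGC", pfm), 2))   # GC-rich window: low score
```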
A partial list (after removal of hypothetical proteins) of the experimental data set mRNAs bearing the TIA-1 motif, together with the positions and scores of the individual motif hits, is shown in Table 1. Similar to what was observed for mRNAs bearing HuR motifs (32), there were multiple occurrences of the TIA-1 motif within many of the transcripts examined. It is important to note that only 42 putative targets (∼23% of the experimental data set) had hits for the TIA-1 motif. The fact that the remaining 143 putative TIA-1 target transcripts from the experimental data set did not appear to contain the 30- to 37-base motif suggested that additional TIA-1 motifs may also exist that were not identified in this analysis. In this regard, it should also be explained that other motifs that had a greater number of hits in the experimental data set were identified, but their enrichment relative to the UniGene database was lower, and they were thus considered to be weaker TIA-1 motif candidates. Importantly, the two mRNAs that have been shown to be targets of TIA-1 (those encoding TNF-α and COX-2 [10, 35]) were found to contain at least one motif hit (Table 2). Table 3 lists the relative frequency with which the motif was found in the UniGene database; sequences comprising 5′UTR, coding region (CR), and 3′UTR were assessed separately. The number of motif hits for each data set was calculated with respect to the relative sequence length of the data set (presented as hits per kb). Within the UniGene database, the frequency of the TIA-1 motif in the 3′UTR is 0.203 per kb, a markedly higher frequency than that seen for the 5′UTR (0.056 per kb) or the CR (0.029 per kb). Comparable differences were seen in the experimental data set (Table 3), wherein the TIA-1 motif hits were almost exclusively found in the 3′UTRs of target mRNAs. As anticipated, the frequency of motif occurrence (0.469 per kb in the 3′UTR) was higher than that calculated for the general UniGene transcript set. The complete list of human UniGene transcripts containing the TIA-1 motif is available from the authors. Taken together, the TIA-1 motif identified from the array-derived experimental data set displays features that were previously predicted, including its U-richness, its existence on published TIA-1 target mRNAs, and its preferential location on the 3′UTRs of target mRNAs (12, 29).

Assessment of the validity of the TIA-1 motif in the identification of novel TIA-1 target transcripts. To test the usefulness of the motif in predicting TIA-1 target mRNAs, we carried out two types of validation efforts. First, we monitored the ability of endogenous TIA-1 to immunoprecipitate TIA-1-containing mRNA-protein (mRNP) complexes and assessed the presence of bound target mRNAs of interest by RT-PCR (Fig. 3A). The mRNAs tested by this method included both array-identified mRNAs (array targets) as well as transcripts predicted to be TIA-1 targets after a search of the UniGene database for motif-bearing mRNAs (database targets). Following IP of mRNP complexes present in RKO cell lysates using an anti-TIA-1 antibody (or IgG in control IP reactions), RT-PCR assays were performed to detect the presence of the endogenous mRNAs of interest in the IP material (Fig. 3A).
Of note, nontarget GAPDH and SDHA mRNAs (encoding housekeeping genes) could also be amplified, albeit inefficiently and to the same extent in both IP groups; these findings revealed the presence of low levels of contaminating, unspecific mRNAs in all IP samples, verified the use of equal amounts of input material, and demonstrated that TIA-1 mRNA targets were amplified from TIA-1 IP material much more readily than from IgG IPs. Second, we assessed the interaction of TIA-1 with target transcripts in vitro. Biotinylated transcripts encompassing motif-containing 3′UTR sequences (details in Materials and Methods and unpublished data) from the mRNAs indicated (Fig. 3B and C) were incubated with recombinant TIA-1 protein (GST-TIA-1). The formation of complexes was then assessed by pulling down the biotinylated RNA using streptavidin-coated beads followed by Western blot analysis of TIA-1 levels in the pull-down material. Included in this validation were several TIA-1 target transcripts identified in the cDNA array: CALM2, ACTB, APH-1A, SNRPF, CDK9, and APEX1. As shown in Fig. 3B, all of the biotinylated transcripts were capable of pulling down TIA-1 but not a control protein (GST); in addition, one biotinylated transcript, corresponding to SLAMF1 3′UTR, failed to show binding (not shown). In further control incubations, a biotinylated GAPDH transcript that lacked a TIA-1 motif failed to show any binding to TIA-1 (Fig. 3B). The validation efforts were extended to UniGene database genes predicted to encode targets of TIA-1 on the basis of the detection of TIA-1 motif hits in the corresponding mRNAs. In all cases, the biotinylated transcripts tested (MTA1, ACTG1, PTMA, PFN1, and DEK) showed binding to TIA-1, while control incubations with nontarget RNA (GAPDH) or with GST did not (Fig. 3C). Moreover, in vitro supplementation of recombinant purified protein (either GST-TIA-1 or GST) to cell lysates that were prepared as described above (Fig. 3A) recapitulated the specific binding of TIA-1 to the target mRNAs tested (Fig. 3D), indicating that exogenously added TIA-1 retains the target specificity of the endogenous protein (Fig. 3A). To ascertain whether the presence of the TIA-1 motif was sufficient for TIA-1 to associate with a given RNA, we synthesized two sets of biotinylated transcripts (Fig. 4A, schematic). The first were three small RNAs comprising either the TIA-1 motif from the TNF-α mRNA (similar results were observed with the TIA-1 motif present in the CASP8 mRNA [data not shown]) plus 20-base flanking RNA from each side of the motif [TIA-1(+)], only the flanking regions [TIA-1(−)], or a mutated TIA-1 motif with flanking regions [TIA-1(mut)]. In the second set, we tested whether the presence of the TIA-1 motif would render a heterologous, nontarget transcript (the GAPDH 3′UTR) capable of forming complexes with TIA-1. As shown in Fig. 4B, only transcripts bearing the TIA-1(+) motif in each transcript set showed specific interaction with GST-TIA-1. No binding was seen when the TIA-1 motif was absent [TIA-1(−) transcripts] or when it was mutated [TIA-1(mut) transcripts], indicating that the presence of an intact TIA-1 motif was required for binding. No complexes were detected in control incubations with GST. Transcripts can be rendered TIA-1 targets by adding the TIA-1 motif. (A) Schematic of the transcripts prepared. 
(Top) Short RNAs comprising either the TIA-1 motif and flanking (20 bases each) 3′ and 5′ regions [TIA-1(+)], an RNA lacking the TIA-1 motif comprising only the flanking regions [TIA-1(−)], or an RNA comprising a mutated TIA-1 motif [TIA-1(mut)]. (Bottom) Chimeric RNAs comprising the 3′UTR of GAPDH (dotted line) and each of the RNAs mentioned above. The oligonucleotides and methodologies used to synthesize these biotinylated transcripts are described elsewhere (Materials and Methods and unpublished data); the TIA-1 motif and the flanking regions were taken from the TNF-α mRNA. (B) The indicated biotinylated transcripts were incubated with either GST-TIA-1 or GST (500 nM each), whereupon their association was assessed by pulldown of the RNA using streptavidin-conjugated beads, followed by protein analysis of the pull-down material by Western blotting using an anti-GST antibody. We also tested whether TIA-1 colocalized with target mRNAs by in situ hybridization using labeled antisense transcripts. While this approach posed major technical challenges, preliminary experiments showed that one abundant target mRNA colocalized with TIA-1 in cytoplasmic SGs after HS (unpublished data). Together, these approaches support the notion that TIA-1 associates with motif-bearing mRNAs. TIA-1 knockdown relieves the translation of target mRNAs.Additional insight into the association of TIA-1 with target mRNAs and the functional consequences of these interactions was sought by RNA interference (RNAi)-mediated reduction of TIA-1 levels. The effects of four different small interfering RNAs targeting TIA-1 are shown (Fig. 5A; T1 was used in subsequent experiments). Transfection with each of the siRNA molecules caused a dramatic reduction in TIA-1 levels (to less than 5% of the levels seen in control siRNA populations). It was important to assess the levels of TIAR in the TIA-1 siRNA populations, given the extensive sequence homology between the two proteins; as shown, TIAR levels remained unaltered (Fig. 5A). In keeping with this reduction in TIA-1 levels, RT-PCR amplification of putative target transcripts encoding CALM2, SNRPF, and CASP8 was markedly reduced when using IP material obtained from the TIA-1 siRNA population (Fig. 5B). Analysis of TIA-1 and TIAR by immunofluorescence confirmed the effects of RNAi on TIA-1 expression and further showed that SGs were readily detectable in the TIA-1 siRNA population (as seen by TIAR fluorescence), suggesting that TIAR alone may be sufficient for the assembly of SGs (Fig. 6). To investigate the influence of TIA-1 on the expression of target transcripts, the steady-state levels of mRNAs encoding CALM2, SNRPF, and CASP8 were first examined in cells transfected with either a control siRNA or an siRNA targeting TIA-1. As shown in Fig. 7A, the levels of these mRNAs remained unchanged in all of the treatment groups, as assessed by semiquantitative RT-PCR analysis. To test whether TIA-1 contributed to regulating the levels of CALM2, SNRPF, and CASP8 by influencing their translation, nascent protein synthesis was measured by performing a brief (20-min) incubation in the presence of 35S-labeled amino acids, immediately followed by IP using specific antibodies (20, 33). HS was found to considerably lower the translation of these target mRNAs (Fig. 7B). Importantly, their translation was enhanced in each TIA-1 siRNA group (Fig. 7B), in keeping with the role of TIA-1 as a translational suppressor. 
The levels of the encoded proteins changed less dramatically in each population, although HS reduced their levels in each case, and TIA-1 silencing caused increases in protein abundance, particularly in the HS groups (Fig. 7C). These observations support the notion that TIA-1 suppressed the levels of expression of the proteins encoded by target mRNAs in both unstimulated and heat-stressed cells (Fig. 7C). In summary, the TIA-1 motif identified using this approach appears to predict with a high degree of confidence if a novel mRNA will associate with TIA-1 and hence become subject to TIA-1-mediated translational repression. Here, we describe studies aimed at systematically identifying mRNA subsets associated with TIA-1, a pivotal regulator of the cellular response to inflammatory, proapoptotic, and stress-inducing agents. We employed IP assays to isolate TIA-1-containing mRNPs and then used the mRNA in the IP material for hybridization of cDNA arrays. Computational analysis of the target mRNAs led to the identification of a 30- to 37-nt motif present in 23% of the target mRNAs, suggesting that other, as-yet-unknown signature motifs might exist on TIA-1 target mRNAs. The 5′ segment of the 30- to 37-nt RNA motif was remarkably U-rich, in agreement with earlier analyses of TIA-1-bound RNAs (12, 29), while the 3′ segment was predominantly AU-rich. The TIA-1 motif was strikingly more abundant in the 3′UTRs of TIA-1-bound mRNAs on the array, as well as in putative TIA-1 target mRNAs in the UniGene database, and was predicted to adopt a distinct secondary structure consisting of a loop of variable size and a bent stem. Importantly, the motif was found in the TNF-α and COX-2 mRNAs, two reported TIA-1 target transcripts. To assess the usefulness of the TIA-1 motif for identifying bona fide target mRNAs, several validation approaches were undertaken. First, the mRNAs present in the TIA-1-associated IP material were tested by RT-PCR analysis using sequence-specific primers; among the particular target mRNAs examined were several transcripts that were identified during the initial cDNA array analysis and several that were predicted to be targets after the UniGene database was searched for transcripts containing the TIA-1 motif. By this methodology, all of the predicted targets were validated (Fig. 3A), except for two array transcripts which did not display appreciable enrichment in the TIA-1 IP material (not shown). In the second set of validation efforts, we sought to determine whether putative TIA-1 target transcripts, synthesized in the presence of biotinylated CTP, formed complexes with recombinant TIA-1. All of the predicted TIA-1-mRNA interactions were also validated by this in vitro assay (Fig. 3B); only biotinylated SLAMF1 3′UTR transcript did not show the expected binding to TIA-1, despite being enriched by IP plus RT-PCR analysis (not shown), suggesting that perhaps the recombinant transcript did not retain the proper native conformation of the SLAMF1 mRNA. Importantly, when the TIA-1 motif was artificially linked to a heterologous RNA (the nontarget GAPDH 3′UTR), the resulting chimeric transcript was rendered a TIA-1 target; binding was undetectable or at background levels when the motif was absent or mutated (Fig. 4). These experiments further support the validity of the TIA-1 motif elucidated in this study. An additional line of investigation used to substantiate the existence of these interactions assessed the subcellular colocalization of TIA-1 and target SNRPF transcript. 
The in situ hybridization signals for SNRPF RNA and the immunofluorescent TIA-1 signals (available upon request) overlapped at the sites of SGs. Taken together, the TIA-1 motif reported here successfully identified 23% of TIA-1 target RNAs. TIA-1 has been reported to participate in the regulation of bound transcripts in at least two distinct processes: pre-mRNA splicing and translational suppression. TIA-1 was previously shown to regulate the splicing of several mRNAs, including the TIA-1 pre-mRNA itself and those encoding fibroblast growth factor receptor 2 and the Fas receptor (11, 18, 29). In this regard, it should be noted that no TIA-1 motif hits were found on either the TIA-1 mRNA or the fibroblast growth factor receptor 2 mRNA (data not shown), although TIA-1 motifs were found on two Fas receptor isoforms (TNFRSF6 and ARTS-1). In this investigation, we have not directly tested whether the mRNAs bearing the motif described here are also targets of TIA-1-regulated splicing. Since the array-based identification of TIA-1 target transcripts was conducted selectively on poly(A)-containing RNA (Materials and Methods), we anticipate that only mature mRNA species were detected on the cDNA arrays. Accordingly, it remains to be directly tested whether a different motif signals the TIA-1-dependent regulation of pre-mRNA splicing.

However, TIA-1 has been characterized most extensively as a translational suppressor. The findings reported here indeed support this role for TIA-1, as the translation of the target mRNAs studied (CALM2, SNRPF, and CASP8) was enhanced when TIA-1 levels were knocked down by RNA interference (Fig. 7B and C). The TIA-1-mediated translational suppression likely relies on the ability of TIA-1 to promote the formation of noncanonical preinitiation complexes by usurping the position of the ternary complex (eIF-2-GTP-tRNAiMet) at the 5′UTR of an mRNA. Whereas the active ternary complex (featuring unphosphorylated eIF-2α) promotes the initiation of translation, TIA-1 instead triggers the aggregation of TIA-1-associated ribonucleoprotein complexes into translationally silent SGs (2).

In order to assess the influence of TIA-1 on the translation of target mRNAs, we have employed a methodology for assessing nascent protein biosynthesis which measures the pulse incorporation of 35S-labeled amino acids onto nascent polypeptide chains. This method uniquely provides a measure of new translation but has major limitations, as it can be used only to analyze abundant proteins for which highly sensitive and specific antibodies are available. Approximately one dozen additional antibodies were tested to assess as many additional proteins encoded by putative TIA-1 target mRNAs. Unfortunately, the IP signals in each case were well below the levels of detection of the assay (not shown), and therefore, the nascent translation of the corresponding proteins could not be studied by this approach. A comprehensive assessment of the translation of TIA-1's target mRNAs thus awaits the development of more sensitive methods. The changes in abundance of the CASP8, CALM2, and SNRPF proteins in cells expressing different levels of TIA-1 determined by Western blotting (Fig. 7C) mirrored those observed when measuring nascent translation (Fig. 7B), suggesting that the changes in protein abundance are indeed linked to the changes in protein biosynthesis influenced by TIA-1.
Interestingly, our findings support the notion that TIA-1 functions as a translational inhibitor even in the absence of stress, since the nascent translation of CASP8, CALM2, and SNRPF was elevated in TIA-1 knockdown cells that had been left without HS. These results are also in agreement with earlier observations that in unstimulated macrophages derived from TIA-1−/− mice, the levels of TNF-α mRNA found in polysomes were higher than those seen in macrophages from wild-type mice, and the TIA−/− cells expressed elevated levels of the cytokine (35). In our studies, HS did not seem to increase the levels of TIA-1 in the cytoplasm (Fig. 1B) or promote the binding of TIA-1 to target mRNAs (data not shown). Thus, the mechanism(s) whereby HS silences target mRNA translation in a TIA-1-dependent fashion, including possible TIA-1 posttranslational modification through phosphorylation or its association with other proteins, remains to be formally investigated. After the 1-hour HS treatment, however, it is unlikely that the pronounced decline in the levels of these three proteins is due solely to reduced translation rates (Fig. 7B); instead, it is likely to be influenced by altered proteolysis or other posttranslational events. In light of these observations, we propose that TIA-1 contributes to altering protein expression by influencing the biosynthesis of encoded proteins. This level of regulation likely functions in juxtaposition with additional processes controlling protein levels, such as subcellular protein transport, proteolysis, and/or protein secretion. It is noteworthy that in populations in which TIA-1 was knocked down (Fig. 5), SGs were still detected (Fig. 6); similarly, silencing of TIAR failed to block SG formation (K. Mazan-Mamczarz and M. Gorospe, unpublished data). Whereas TIA-1 and TIAR appear to have interchangeable roles regarding SG formation, their relative affinities for target mRNAs have not been compared systematically. En masse efforts to identify TIAR and TIA-1 target mRNAs under way in our laboratory have indicated the existence of both shared and specific target ARE-containing transcripts (I. López de Silanes, K. Mazan-Mamczarz, and M. Gorospe, unpublished data), in keeping with the notion that TIAR and TIA-1 are functionally distinct (42), although they both appear to bind to several common targets, such as the COX-2 and TNF-α mRNAs. In summary, we have systematically identified many TIA-1 target mRNAs and describe a common RNA motif among them. Using a variety of approaches, the association of TIA-1 with mRNAs that were either detected as microarray targets or identified on the basis of the presence of the TIA-1 motif in the UniGene database transcripts was validated using several approaches. Importantly, TIA-1 was found to repress the translation of target mRNAs. These discoveries provide comprehensive and valuable insight into the ribonucleoprotein complexes that govern gene expression at the posttranscriptional level. Expression levels and subcellular localization of TIA-1 after stress. (A) RKO cells were either left untreated (control) or were treated with heat (45°C, 1 h), sodium arsenite (0.5μ M, 45 min), or FCCP (1 μM, 90 min). TIA-1 levels were assessed by immunofluorescence; nuclei (discontinuous line) and SGs (arrowheads) are indicated. Cells in each field were visualized by phase-contrast microscopy. (B) Western blot analysis of whole-cell (total), nuclear (Nuc.), and cytoplasmic (Cytopl.) 
levels of TIA-1 in cells that were either left untreated (control [C]) or treated with HS as explained above; assessment of the levels ofα -tubulin and hnRNP C1/C2 served to monitor the equal loading of samples and the quality of the cytoplasmic and nuclear preparations, respectively. (C) Immunoprecipitation assays using lysates (Lys.) from cells that were either left untreated (control [C]) or exposed to HS. The precipitates were then used for Western blot analysis (details in Materials and Methods) to monitor the levels of TIA-1, TIAR, and S6 (a protein present in the small ribosomal subunit). Sequence and structure of the predicted TIA-1 motif, elucidated from TIA-1-bound transcripts. (A) Schematic of the experimental approach. RKO whole-cell lysates (from control or HS populations) were used for IP assays by employing either IgG or anti-TIA-1 antibodies. RNA was subsequently extracted from the ribonucleoprotein complexes present in the IP material and was reverse transcribed; the resulting radiolabeled molecules were used to hybridize a cDNA array (details in Materials and Methods). (B) Probability matrix (graphic logo) indicating the relative frequency of finding each residue at each position within the motif. The relative frequencies were elucidated from the array-derived experimental data set. (C) Secondary structures of 10 representative examples of the TIA-1 motif in specific mRNAs; the corresponding gene names are shown. Sample TIA-1 hits 9 and 10 are found at two different locations on the same mRNA. Validation of TIA-1 target transcripts. (A) The association of endogenous TIA-1 with endogenous target mRNAs in RKO cells was tested by IP, followed by detection of the target transcripts by RT and PCR amplification (30 to 38 cycles) of the IP material. PCR products were visualized by electrophoresis in ethidium bromide-stained 1% agarose gels. Several target transcripts were tested among the array targets and database targets; control nontargets (housekeeping mRNAs) showed no enrichment between the IgG and TIA-1 IP groups and served to monitor the equal input of IP materials. Biotin pull-down assays were conducted to assess the ability of recombinant TIA-1 (GST-TIA-1) to form complexes with biotinylated transcripts of interest. The indicated biotinylated transcripts among the array targets (B) or the database targets (C) were incubated with either GST-TIA-1 or GST (500 nM each), whereupon their association was assessed by pulldown of the RNA using streptavidin-conjugated beads, followed by the analysis of proteins bound to the pull-down material by Western blotting using an anti-GST antibody. Input GST-TIA-1 and GST (250 ng each) are shown. (D) Lysates that were prepared as described above for panel A were incubated in the presence of either GST-TIA-1 or GST (500 nM each) as described in Materials and Methods; the formation of complexes between these proteins and the indicated mRNAs was tested by IP using an anti-GST antibody, followed by RT-PCR analysis. GAPDH amplification was included as a control, since the GAPDH mRNA bound the IP material at low levels, in a nonspecific manner. RNA interference-mediated reduction of TIA-1 expression. (A) TIA-1 expression levels were monitored 48 h after transfection of RKO cells with either a control siRNA (Ctrl. siRNA) or with one of four independent siRNAs targeting TIA-1 (T1 to T4). TIAR levels were examined to assess the specificity of the RNAi intervention, and α-tubulin levels served to monitor the equal loading of samples. 
(B) IP and RT-PCR amplification assays were performed to further assess the specificity of the association of TIA-1 with target transcripts in each transfection group. IgG, control (Ctrl.) IP reactions to detect background association of mRNAs of interest. Immunofluorescent detection of TIA-1 and TIAR after RNAi. Forty-eight hours after transfection of RKO cells with either control (Ctrl.) siRNA or siRNA (T1) targeting TIA-1, cells were left untreated (Untr.) or treated with HS, and TIA-1 and TIAR were detected by immunofluorescence. Several SGs are indicated (arrowheads). Effect of TIA-1 knockdown on the expression of target mRNAs. Cells were transfected and treated as described in the legend to Fig. 6, whereupon RNA and protein extracts were prepared. (A) Total RNA was extracted, and the reverse-transcribed product (100, 10, and 1 ng for CALM2, SNRNPF, and CASP8; 10, 1, and 0.1 ng for GAPDH) was used for PCR amplification mixtures. Untr., untreated; Ctrl., control; −, no RT product added to the PCR mixture. (B) Assessment of nascent translation of CALM2, SNRNPF, CASP8, and (control) GAPDH after HS in cells expressing either wild-type TIA-1 levels (Ctrl. siRNA) or knocked-down TIA-1 levels (TIA-1 siRNA); signals indicate the relative intensities of the corresponding 35S-labeled proteins after incubating the cells (that had been left without treatment or immediately after they were subjected to HS) with [35S]methionine and l-[35S]cysteine for 20 min, preparing lysates, and carrying out IP with the corresponding antibodies (a representative control IgG IP assay is indicated). Representative radiolabeled signals from two independent experiments are shown. (C) Representative Western blot analyses of the levels of CALM2 (in 10μ g lysate), SNRPF (in 5 μg lysate), and CASP8 (in 20μ g lysate) in each of the treatment groups; representativeα -tubulin levels (in 20 μg lysate) were assessed to monitor loading differences. Following densitometry scanning of signals from two independent experiments, the intensities of CALM2, SNRPF, CASP8, and α-tubulin within the linear range of the signals were calculated; intensities are represented as a percentage of the signal intensity relative to the signal in untreated (Untr.) cells or control (Ctrl.) siRNA-transfected cells at either 2 or 4 days below the blots. TIA-1 motif-bearing targets on cDNA arraysa Motif location and score in reported TIA-1 targetsa Relative presence of the TIA-1 motif in the 5′UTR, CR, and 3′UTR of human genesa We thank K. G. Becker and the NIA Array Facility for providing cDNA arrays for analysis and A. Lal and T. Kawai for valuable discussions. This research was supported by the Intramural Research Program of the NIA, NIH. Received 15 May 2005. Returned for modification 8 June 2005. Accepted 8 August 2005. Anderson, P., and N. Kedersha. 2002. Visibly stressed: the role of eIF2, TIA-1, and stress granules in protein translation. Cell Stress Chaperones 7:213-221. Anderson, P., and N. Kedersha. 2002. Stressful initiations.J. Cell Sci. 115:3227-3234. Anderson, P., K. Phillips, G. Stoecklin, and N. Kedersha. 2004. Post-transcriptional regulation of proinflammatory proteins.J. Leukoc. Biol. 76: 42-47. Antic, D., and J. D. Keene. 1997. Embryonic lethal abnormal visual RNA-binding proteins involved in growth, differentiation, and posttranscriptional gene expression.Am. J. Hum. Genet. 61:273-278. Bevilacqua, A., M. C. Ceriani, S. Capaccioli, and A. Nicolin.2003 . 
Post-transcriptional regulation of gene expression by degradation of messenger RNAs. J. Cell. Physiol. 195:356-372. Brennan, C. M., and J. A. Steitz. 2001. HuR and mRNA stability. Cell. Mol. Life Sci. 58:266-277. Carballo, E., W. S. Lai, and P. J. Blackshear.1998 . Feedback inhibition of macrophage tumor necrosis factor-α production by tristetraprolin. Science 281:1001-1005. Cheadle, C., M. P. Vawter, W. J. Freed, and K. G. Becker. 2003. Analysis of microarray data using Z score transformation. J. Mol. Diagn. 5:73-81. Chen, C. Y., and A.-B. Shyu. 1995. AU-rich elements: characterization and importance in mRNA degradation.Trends Biochem. Sci. 20:465-470. Cok, S. J., S. J. Acton, and A. R. Morrison. 2003. The proximal region of the 3′-untranslated region of cyclooxygenase-2 is recognized by a multimeric protein complex containing HuR, TIA-1, TIAR, and the heterogeneous nuclear ribonucleoprotein U. J. Biol. Chem. 278:36157-36162. Del Gatto-Konczak, F., C. F. Bourgeois, C. Le Guiner, L. Kister, M. C. Gesnel, J. Stevenin, and R. Breathnach.2000 . The RNA-binding protein TIA-1 is a novel mammalian splicing regulator acting through intron sequences adjacent to a 5′ splice site. Mol. Cell. Biol. 20:6287-6299. Dember, L. M., N. D. Kim, K. Q. Liu, and P. Anderson. 1996. Individual RNA recognition motifs of TIA-1 and TIAR have different RNA binding specificities.J. Biol. Chem. 271:2783-2788. Derrigo, M., A. Cestelli, G. Savettieri, and I. Di Liegro.2000 . RNA-protein interactions in the control of stability and localization of messenger RNA. Int. J. Mol. Med. 5:111-123. Dever, T. E. 2002. Gene-specific regulation by general translation factors. Cell 108:545-556. Dixon, D. A., G. C. Balch, N. Kedersha, P. Anderson, G. A. Zimmerman, R. D. Beauchamp, and S. M. Prescott. 2003. Regulation of cyclooxygenase-2 expression by the translational silencer TIA-1. J. Exp. Med. 198:475-481. Dreyfuss, G., V. N. Kim, and N. Kataoka. 2002. Messenger-RNA-binding proteins and the messages they carry. Nat. Rev. Mol. Cell. Biol. 3:195-205. Eddy, S. R., and R. Durbin. 1994. RNA sequence analysis using covariance models. Nucleic Acids Res. 22:2079-2088. Forch, P., O. Puig, N. Kedersha, C. Martinez, S. Granneman, B. Seraphin, P. Anderson, and J. Valcarcel. 2000. The apoptosis-promoting factor TIA-1 is a regulator of alternative pre-mRNA splicing. Mol. Cell 6:1089-1098. Forch, P., and J. Valcarcel. 2001. Molecular mechanisms of gene expression regulation by the apoptosis-promoting protein TIA-1.Apoptosis 6:463-468. Galbán, S., J. L. Martindale, K. Mazan-Mamczarz, I. López de Silanes, J. Fan, W. Wang, J. Decker, and M. Gorospe.2003 . Influence of the RNA-binding protein HuR in pVHL-regulated p53 expression in renal carcinoma cells. Mol. Cell. Biol. 23:7083-7095. Gorodkin, J., L. J. Heyer, and G. D. Stormo.1997 . Finding the most significant common sequence and structure motifs in a set of RNA sequences. Nucleic Acids Res. 25:3724-3732. Gueydan, C., L. Droogmans, P. Chalon, G. Huez, D. Caput, and V. Kruys.1999 . Identification of TIAR as a protein binding to the translational regulatory AU-rich element of tumor necrosis factorα mRNA. J. Biol. Chem. 274:2322-2326. Hollams, E. M., K. M. Giles, A. M. Thomson, and P. J. Leedman. 2002. mRNA stability and the control of gene expression: implications for human disease.Neurochem. Res. 27:957-980. Kedersha, N., M. R. Cho, W. Li, P. W. Yacono, S. Chen, N. Gilks, D. E. Golan, and P. Anderson. 2000. 
Dynamic shuttling of TIA-1 accompanies the recruitment of mRNA to mammalian stress granules. J. Cell Biol. 151: 1257-1268. Kedersha, N. L., M. Gupta, W. Li, I. Miller, and P. Anderson.1999 . RNA-binding proteins TIA-1 and TIAR link the phosphorylation of eIF-2α to the assembly of mammalian stress granules. J. Cell Biol. 147:1431-1442. Kedersha, N. L., and P. Anderson. 2002. Stress granules: sites of mRNA triage that regulate mRNA stability and translatability. Biochem. Soc. Trans. 30:963-969. Kimball, S. R. 2001. Regulation of translation initiation by amino acids in eukaryotic cells. Prog. Mol. Subcell. Biol. 26:155-184. Lal, A., K. Mazan-Mamczarz, T. Kawai, X. Yang, J. L. Martindale, and M. Gorospe. 2004. Concurrent versus individual binding of TIA-1 and AUF1 to common labile target mRNAs. EMBO J. 23:3092-3102. Le Guiner, C., F. Lejeune, D. Galiana, L. Kister, R. Breathnach, J. Stevenin, and F. Del Gatto-Konczak. 2001. TIA-1 and TIAR activate splicing of alternative exons with weak 5′ splice sites followed by a U-rich stretch on their own pre-mRNAs.J. Biol. Chem. 276:40638-40646. Le Hir, H., D. Gatfield, E. Izaurralde, and M. J. Moore.2001 . The exon-exon junction complex provides a binding platform for factors involved in mRNA export and nonsense-mediated mRNA decay. EMBO J. 20:4987-4997. Loflin, P., C. Y. Chen, and A.-B. Shyu. 1999. Unraveling a cytoplasmic role for hnRNP D in the in vivo mRNA destabilization directed by the AU-rich element. Genes Dev. 13:1884-1897. López de Silanes, I., M. Zhan, A. Lal, X. Yang, and M. Gorospe.2004 . Identification of a target RNA motif for RNA-binding protein HuR. Proc. Natl. Acad. Sci. USA 101:2987-2992. Mazan-Mamczarz, K., S. Galbán, I. López de Silanes, J. L. Martindale, U. Atasoy, J. D. Keene, and M. Gorospe.2003 . RNA-binding protein HuR enhances p53 translation in response to ultraviolet light irradiation. Proc. Natl. Acad. Sci. USA 100:8354-8359. Min, H., C. W. Turck, J. M. Nikolic, and D. L. Black. 1997. A new regulatory protein, KSRP, mediates exon inclusion through an intronic splicing enhancer. Genes Dev. 11:1023-1036. Piecyk, M., S. Wax, A. R. Beck, N. Kedersha, M. Gupta, B. Maritim, S. Chen, C. Gueydan, V. Kruys, M. Streuli, and P. Anderson.2000 . TIA-1 is a translational silencer that selectively regulates the expression of TNF-α. EMBO J. 19:4154-4163. Stoecklin, G., M. Colombi, I. Raineri, S. Leuenberger, M. Mallaun, M. Schmidlin, B. Gross, M. Lu, T. Kitamura, and C. Moroni. 2002. Functional cloning of BRF1, a regulator of ARE-dependent mRNA turnover.EMBO J. 21:4709-4718. Tenenbaum, S. A., P. J. Lager, C. C. Carson, and J. D. Keene. 2002. Ribonomics: identifying mRNA subsets in mRNP complexes using antibodies to RNA-binding proteins and genomic arrays. Methods 26:191-198. Vawter, M. P., T. Barrett, C. Cheadle, B. P. Sokolov, W. H. Wood III, D. M. Donovan, M. Webster, W. J. Freed, and K. G. Becker.2001 . Application of cDNA microarrays to examine gene expression differences in schizophrenia. Brain Res. Bull. 55:641-650. Wang, W., M. C. Caldwell, S. Lin, H. Furneaux, and M. Gorospe.2000 . HuR regulates cyclin A and cyclin B1 mRNA stability during cell proliferation. EMBO J. 19:2340-2350. Wang, W., H. Furneaux, H. Cheng, M. C. Caldwell, D. Hutter, Y. Liu, N. J. Holbrook, and M. Gorospe. 2000. HuR regulates p21 mRNA stabilization by ultraviolet light. Mol. Cell. Biol. 20:760-769. Wilkie, G. S., K. S. Dickson, and N. K. Gray.2003 . Regulation of mRNA translation by 5′- and 3′-UTR-binding factors. Trends Biochem. Sci. 28: 182-188. 
Zhang, T., V. Kruys, G. Huez, and C. Gueydan. 2002. AU-rich element-mediated translational control: complexity and multiple activities of trans-activating factors. Biochem. Soc. Trans. 30:952-958. Zhang, W., B. J. Wagner, K. Ehrenman, A. W. Schaefer, C. T. DeMaria, D. Crater, K. DeHaven, L. Long, and G. Brewer. 1993. Purification, characterization, and cDNA cloning of an AU-rich element RNA-binding protein, AUF1. Mol. Cell. Biol. 13:7652-7665.
Identification and Functional Outcome of mRNAs Associated with RNA-Binding Protein TIA-1. Molecular and Cellular Biology Oct 2005, 25 (21) 9520-9531; DOI: 10.1128/MCB.25.21.9520-9531.2005
Targeted delivery of harmine to xenografted human pancreatic islets promotes robust cell proliferation
Swati Mishra1,2 & Philip R. Streeter2
Type 1 diabetes (T1D) occurs as a consequence of the autoimmune destruction of insulin-producing pancreatic beta (β) cells and commonly presents with insulin deficiency and unregulated glycemic control. Despite improvements in the medical management of T1D, life-threatening complications are still common. Beta-cell replication to replace lost cells may be achieved by using small-molecule mitogenic drugs, like harmine. However, the safe and effective delivery of such drugs to beta cells remains a challenge. This work aims to deploy an antibody conjugated nanocarrier platform to achieve cell-specific delivery of candidate therapeutic and imaging agents to pancreatic endocrine cells. We approached this goal by generating core–shell type micellar nanocarriers composed of the tri-block copolymer, Pluronic®F127 (PEO100–PPO65–PEO100). We decorated these nanocarriers with a pancreatic endocrine cell-selective monoclonal antibody (HPi1), with preference for beta cells, to achieve active targeting. The PPO-based hydrophobic core allows encapsulation of various hydrophobic cargoes, whereas the PEO-based hydrophilic shell curbs protein adhesion, hence prolonging the nanocarriers' systemic circulation time. The nanocarriers were loaded with quantum dots (QDots) that allowed nanocarrier detection both in-vitro and in-vivo. In-vitro studies revealed that HPi1 conjugated nanocarriers could target endocrine cells in dispersed islet cell preparations with a high degree of specificity, with beta cells exhibiting a fluorescent quantum dot signal that was approximately five orders of magnitude greater than the signal associated with alpha cells. In vivo endocrine cell targeting studies demonstrated that the HPi1 conjugated nanocarriers could significantly accumulate at the islet xenograft site. For drug delivery studies, the nanocarriers were loaded with harmine. We demonstrated that HPi1 conjugated nanocarriers successfully targeted and delivered harmine to human endocrine cells in a human islet xenograft model.
In this model, targeted harmine delivery yielded an ~41-fold increase in the number of BrdU positive cells in the human islet xenograft relative to untreated control mice. By contrast, non-targeted harmine yielded an ~9-fold increase in BrdU positive cells. We conclude that the nanocarrier platform enabled cell-selective targeting of xenografted human pancreatic endocrine cells and the selective delivery of the hydrophobic drug harmine to those cells. Further, the dramatic increase in proliferation with targeted harmine, a likely consequence of achieving higher local drug concentrations, supports the concept that targeted drug delivery may promote more potent biological responses to harmine and/or other drugs than non-targeting approaches. These results suggest that this targeted drug delivery platform may have application in drug screening, beta cell regenerative therapies, and/or diagnostic imaging in patients with type 1 diabetes.
Type 1 diabetes (T1D) is an autoimmune disorder characterized by T-cell mediated insulin-producing beta cell loss leading to insulin deficiency and unregulated blood glucose levels1. Currently, 1.9 million individuals are living with T1D in the United States, and it affects approximately 9 million people worldwide2,3. While recent advancements in insulin therapy and the latest blood glucose monitoring technologies allow patients to measure their blood glucose levels more accurately so they may achieve optimal glycemic control, T1D associated chronic complications like ketoacidosis, heart attack, stroke, nephropathy, retinopathy, neuropathy, and hypoglycemia are still common4. In addition, more than 40% of patients receiving insulin therapy eventually become unaware of hypoglycemia5. Since experiencing frequent and prolonged hypoglycemic episodes can be potentially fatal, allogeneic transplantation of healthy pancreatic Islets of Langerhans is used as a last resort treatment to temporarily restore endogenous insulin production in such patients6,7,8. Nonetheless, limited donor availability to produce enough islet cells to achieve normoglycemia and long-term dependence on anti-rejection immunosuppressive drugs limit the large-scale implementation of this approach9,10. Therefore, it is crucial to develop an alternative therapeutic intervention to restore insulin production in patients with T1D. In this context, the identification of small-molecule drugs capable of enhancing the proliferation of human pancreatic beta cells in vivo (e.g., serpinB1, harmine, and its derivatives) represents a breakthrough in potential therapeutic approaches11,12,13. However, currently, there is no effective method available to selectively deliver these highly potent mitogenic drugs to pancreatic endocrine cells or their subsets. The overarching goal here is to regenerate functional beta cell mass from residual beta cells in the type 1 diabetic pancreas. Thus, the study focuses on the cell-selective delivery of proliferation-inducing small molecules to pancreatic endocrine cells using antibody conjugated "actively targeted" nanocarriers. Focusing on human pancreatic islet and islet cell subset detection, Dorrell and colleagues developed monoclonal antibodies (mAbs) that react with cell surface molecules on discrete cell subsets in the human pancreas14. With the help of immunohistochemical staining, it has been demonstrated that these reagents efficiently target xenografted human endocrine cells in vivo (PRS, unpublished).
In recent years, mAbs have shown tremendous clinical success in cancer cell targeting and delivery of chemotherapeutic drugs specifically to solid tumors, which has led to the development of better diagnostic and therapeutic tools in the field of cancer research15,16,17,18. Similarly, diabetes researchers have been striving to precisely deliver therapeutic cargos to pancreatic islets and islet cell subsets to manipulate their properties and function. However, monoclonal antibodies have not yet been successfully used to target pancreatic islets for non-invasive imaging or therapeutic applications, mainly due to the unavailability of cell specific antibodies. Moreover, due to the limited volume of islet cells (< 2% of the total pancreatic volume) and low target expression levels, the targeting moiety must be specific for the pancreatic endocrine cells to target pancreatic islets successfully19,20. Within this frame of reference, Moore et al. demonstrated excellent beta cell-specific accumulation of IC2 antibody conjugated radioactive and exendin 4 conjugated iron oxide based imaging probes in ex vivo imaging of the excised murine pancreata. Monoclonal antibody IC2 and exendin 4 reacted with an unknown molecule present on beta cells and with glucagon-like peptide 1 (GLP-1) receptors, respectively20. Subsequently, Balhuizen et al. developed a high-affinity camelid single-domain antibody (nanobody) targeting human Dipeptidyl-Peptidase 6 (DPP6), which localizes only in beta and alpha cells within the pancreas. The radiolabelled nanobodies demonstrated successful non-invasive visualization of DPP6-expressing cells transplanted in immunodeficient mice by SPECT/CT21. In our studies, to achieve cell-type selective targeting, nanocarriers were conjugated to a pancreatic endocrine cell-selective monoclonal antibody (HPi1). HPi1 reacts preferentially with beta cells and to a lesser extent with alpha cells. As shown in the schematic (Fig. 1), the nanocarrier platform described here consists of 1) a cell-selective targeting component, the monoclonal antibody HPi1, to facilitate active targeting of pancreatic beta cells, 2) a Pluronic F127 based nanocarrier that can accommodate a variety of hydrophobic cargo, such as imaging (quantum dots) or therapeutic agents (harmine), and 3) cargo, allowing assessment of effective targeting, cargo delivery, and promotion of biological responses.
Schematic structure of cell type selective cargo encapsulating nanocarriers in aqueous solution. PEO polyethylene oxide, PPO polypropylene oxide.
Overall, this nanocarrier platform has allowed us to explore antibody-mediated targeting and delivery of the cell proliferation-inducing small molecule drug harmine through active targeting to human pancreatic endocrine cells transplanted in mice. This combination of features is anticipated to enable higher drug delivery efficiency, improve therapeutic efficacy, and minimize off-target side effects or toxicities. This nanocarrier platform may have application in the targeted delivery of various therapeutic and/or regulatory agents; such agents could be used to prevent beta cell apoptosis; promote beta cell regenerative processes; reduce the immunogenicity of beta cells or alter cellular phenotype and function.
Pluronic F127 (P305 500 GM) was purchased from Anatrace Products LLC (Maumee, OH, USA). The core–shell type CdSe/ZnS (CZ520-25) green fluorescent and CuInS/ZnS (CIS750-25) near-infrared quantum dots were purchased from NNCrystal US Corporation (Fayetteville, AR, USA).
The quantum dot formulations were obtained in chloroform from the manufacturer. Harmine and harmine hydrochloride salts were purchased from Sigma Aldrich (St. Louis, MO, USA). Benzene anhydrous (Fisher Chemicals Cat#B4121), nitrophenyl chloroformate (Cat#AC170800250), diethyl ether (Cat#AC176830010), and HPLC grade water (Cat#W5N-119) were used as received from Thermo Fisher Scientific, Inc. (Waltham, MA, USA). Phosphate-buffered saline solution without calcium or magnesium (Cat#SH30256.02), fetal bovine serum (FBS; Cat#SH30396.02HI), and trypsin (Cat#SH30236.01) were purchased from HyClone Laboratories Inc. (Logan, UT, USA). Corning® cellgro® CMRL media (Cat#15-110-CV) was used for islet cell preparations.
Human pancreatic islet cell preparation
The Integrated Islet Distribution Program (IIDP, City of Hope) (https://iidp.coh.org) provided human islet samples for in vitro (less than 80% pure) and in vivo (more than 95% pure) studies. For the preparation of single-cell suspensions, islets were washed with 1 × PBS before incubation in 3 mL of 0.05% trypsin–EDTA per 5000 islet equivalents (IEQ) in a 15 mL Falcon tube. Islets were digested for 10–12 min at 37 °C. Every 3 min the islets were gently dispersed using a p1000 micropipette. Subsequently, CMRL media supplemented with 10% FBS was slowly added to the enzyme dispersed cells to inhibit trypsin activity. Cells were then washed and resuspended in CMRL media supplemented with 10% FBS before incubation with nanocarriers. The human pancreatic endocrine cell reactive mouse mAb, HPi1 (Clone designation: HIC0-4F9), and isotype-matched negative control mouse mAb HPDAC1-6D2 (6D2; no reactivity with normal human pancreatic cells or with mouse cells) were used in this investigation. For conjugation to nanocarriers, these mAbs were isolated from culture supernatants using Pierce™ protein G immobilized beaded agarose resin (Thermo Fisher Scientific, Waltham, MA) as per the manufacturer's guidelines. Purified mAb was buffer exchanged into PBS, filter-sterilized, and stored at 4 °C until used. For the xenograft immunolabeling, tissue sections were stained with 200 μl primary antibody HPi2 (Clone designation: HIC1-2B4) supernatant, which was detected with FITC conjugated goat anti-mouse IgG (H + L) secondary (Jackson Immuno Research Lab) at 1:200 dilution or Alexa Fluor™ 488 goat anti-mouse IgG (A11001, Lifetechnologies) at 1:400 dilution. For proliferation studies, rat monoclonal antibody to Ki67 (SolA15; eBiosciences™, Invitrogen Cat#14–5698-80) was used at 1:100 concentration, and the primary was detected with goat anti-rat IgG (H + L) cy3 (AP136C; Millipore) secondary antibody at 1:200 concentration. The sections were incubated overnight with primary antibody at 4 °C in a humidified chamber. For BrdU staining, tissue sections were incubated in 2 N HCl at 37 °C for 30 min, neutralized in 100 mM sodium borate buffer pH 8.5 for 10 min at room temperature, washed with 1XPBS and blocked with goat serum blocking solution for 30 min. These sections were then incubated overnight at 4 °C with rat monoclonal antibody to BrdU [BU1/75(ICR1); abcam, Cat# ab6326] at 1:100 dilution. The primary antibody was detected with cy3 conjugated goat anti-rat IgG (H + L) (AP136C; Millipore) secondary antibody at 1:200 concentration.
The antibodies used in in-vitro targeting studies are as follows: monoclonal mouse anti-human pro-insulin supernatant (Developmental Studies Hybridoma Bank, Cat#GS-9A8) at 5 µg/mL concentration, polyclonal rabbit anti-human glucagon (Dako, Cat#A0565) at 1:200 dilution, polyclonal rabbit anti-human amylase (Sigma, Cat#A8273) at 1:100 dilution.
Targeted nanocarrier production and cargo encapsulation
End-group activation of Pluronic® F127
In order to conjugate the antibody, the terminal hydroxyl groups of the Pluronic chain were activated into amine-reactive functional groups. For end-group activation, the hydroxyl end groups of the Pluronic®F127 chain were converted into amine-reactive p-nitrophenyl carbonate derivatives22. Briefly, Pluronic®F127 (MW 12,600, 2 g) was completely dissolved in anhydrous benzene (6 mL). A solution of p-nitrophenyl chloroformate (p-NPC) in anhydrous benzene was gradually added to the stirred Pluronic® F127 solution, with p-NPC added in a 3:1 molar excess over the terminal –OH groups present in Pluronic® F127. The mixture was stirred at room temperature for 24 h. The product was precipitated in 250 mL ice-cold diethyl ether; the residue was gravity filtered through Whatman filter paper and re-dissolved in benzene. The precipitation procedure was repeated 3 times to completely remove unreacted p-NPC. After the third precipitation, the filtrate was thoroughly dried at room temperature in a vacuum desiccator to obtain the functional group activated Pluronic® F127-pNP (PF127- p-nitrophenyl ester). The degree of end group activation with p-nitrophenyl chloroformate was determined by UV analysis of the p-nitrophenol released from PF127-pNP after alkaline hydrolysis. The 1H NMR spectrum of the product was recorded at OHSU's NMR core facility, using a Bruker Avance 400 MHz high performance NMR spectrometer (Bruker Corporation, Billerica, MA) with deuterated water (D2O) used as a solvent to confirm successful end-group derivatization.
Encapsulation of Cargo
For proof-of-concept endocrine cell targeting studies, QDots were chosen as a model cargo and imaging agent, and these were encapsulated within the hydrophobic core of the nanocarriers. QDot-loaded end-group activated Pluronic® F127 nanocarriers [PF127(QD)-pNP] were prepared using a thin-film hydration method23. Briefly, 1.0 mg hydrophobic CdSe/ZnS (green fluorescent) or CuInS/ZnS (NIR) QDots (200 µL of 5 mg/mL QDot suspension in chloroform) were mixed with 20 mg PF127-pNP dissolved in 3 mL chloroform. The solution was stirred in the dark for 4 h at room temperature, then sonicated for 5 min in a bath sonicator in the presence of ice. The solution was then transferred into a round bottom flask, and the organic phase was evaporated using a rotary evaporator. The resulting thin film deposited at the wall of the round bottom flask was rehydrated with 3 mL HyPure Molecular biology grade (deionized, distilled) water (HyClone, Logan, UT). The round bottom flask was covered with aluminum foil and was kept on a shaker at 250 rpm for 2 h to obtain a nanocarrier solution, which was then centrifuged at 376 RCF for 15 min to remove any unencapsulated QDots. The supernatant was lyophilized to obtain a light-yellow colored (in the case of green fluorescent QDot encapsulation) or light-brown colored (in the case of near-infrared QDot encapsulation) powder and stored at 4 °C.
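As a rough, illustrative check of the stoichiometry above (a minimal sketch; the molar mass of p-NPC and the calculation itself are not given in the text and are assumptions for illustration only), the 3:1 molar excess of p-NPC over the terminal –OH groups and the QDot-to-polymer feed ratio work out approximately as follows:

```python
# Back-of-the-envelope stoichiometry for the end-group activation and QDot
# loading steps described above. The p-NPC molar mass is an assumed value;
# the PF127 molar mass, 2 g batch size, 3:1 excess, and 1 mg QDot / 20 mg
# polymer feed are taken from the text.

PF127_MW = 12_600          # g/mol, as stated for Pluronic F127
PNPC_MW = 201.6            # g/mol, assumed molar mass of p-nitrophenyl chloroformate
OH_PER_CHAIN = 2           # two terminal -OH groups per PF127 chain

pf127_mass_g = 2.0                                   # g of PF127 activated per batch
moles_oh = OH_PER_CHAIN * pf127_mass_g / PF127_MW    # mol of terminal -OH groups
moles_pnpc = 3 * moles_oh                            # 3:1 molar excess of p-NPC
pnpc_mass_mg = moles_pnpc * PNPC_MW * 1000           # ~192 mg of p-NPC required

qdot_mg, polymer_mg = 1.0, 20.0                      # QDot/polymer feed for encapsulation
feed_ratio_pct = 100 * qdot_mg / polymer_mg          # 5% w/w relative to polymer

print(f"p-NPC required: ~{pnpc_mass_mg:.0f} mg")
print(f"QDot feed ratio: {feed_ratio_pct:.0f}% w/w of polymer")
```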
For harmine encapsulation, Pluronic F127 and harmine (100 mg/5 mg, w/w) were completely dissolved in 6 mL of chloroform by vigorous stirring at 40 °C over a magnetic hotplate stirrer in a glass scintillation vial, and later transferred into a round bottom flask and dried through rotary evaporation to form a thin film. The organic solvent was then removed by drying under a vacuum. The film was rehydrated in HyPure Molecular biology grade (deionized, distilled) water. Un-encapsulated harmine precipitate was removed by centrifugation, and the harmine loaded end-group activated nanocarrier [PF127(HM)-pNP] solution was lyophilized to obtain a white colored powder.
Antibody Conjugation
Lyophilized, cargo-loaded, end-group activated Pluronic® F127 [PF127(cargo)-pNP] was weighed (10 mg per animal) in a 1.5 mL Eppendorf tube. HPi1 or the negative control antibody, 6D2 in 1XPBS, were added dropwise to the PF127(cargo)-pNP to achieve a final antibody-to-polymer ratio of 1/100 (wt/wt). The initial antibody concentrations were maintained at 2 mg/mL or more so that the volume of antibody solution added would not exceed 50 μL per 10 mg of polymer. The polymer was then incubated with the antibody at 4 °C for 30 min. Due to the sticky nature of the wet polymer, the solution of polymer and antibody was dissolved by vortexing and intermittent manual shaking, without using pipettes. The reaction volume was then increased to 100 μL by adding 0.1 M Sorensen's phosphate buffer at pH 8.0. The solution was then mixed using an end-over-end tube rotator overnight at room temperature24. The antibody conjugated PF127(cargo)-pNP was then centrifuged at 15,871 RCF for 15 min at room temperature to remove the unconjugated antibody. The supernatant was discarded, the pellet was washed three times with 1XPBS and then resuspended in 200 µL 1XPBS by vigorous pipetting and by ultrasonication to obtain a clear solution of PF127(cargo)-mAb nanocarriers. Control conjugates were prepared by conjugation to a negative control antibody that does not react with mouse tissue or normal human pancreatic cells. The final weight percent of antibody in the conjugates was determined using a BCA protein assay (Pierce Biotechnology, Rockford, IL), with BSA as a protein standard.
Characterization of cargo-loaded nanocarriers
The size and morphology of the nanocarriers were determined by transmission electron microscopy (TEM; FEI Tecnai™ T12 Spirit, Hillsboro, OR) at OHSU's Multiscale Microscopy Core. The nanocarrier dispersion was negatively stained with 1% uranyl acetate for TEM measurements. One 10 μL drop of an aqueous dispersion specimen was deposited on a carbon-coated TEM copper grid (PELCO® Grids, Ted Pella Inc., Redding, CA) with 300 mesh and allowed to dry in air for 2 min. The surface of the carbon film had previously been glow-discharged by exposure under plasma to render it hydrophilic. Immediately before TEM sample preparation, the nanocarriers were vortexed for 60 s, vigorously pipetted, ultrasonicated for 10 min in the presence of ice, and vortexed again for an additional 10 s, which helped break down nanocarrier aggregates.
Assessment of endocrine cell targeting in vitro
Specific targeting of pancreatic endocrine cells was investigated by treating enzymatically dispersed human pancreatic islets with antibody-targeted (PF127(QDot)-HPi1) and negative control antibody-conjugated nanocarriers (PF127(QDot)-6D2).
One thousand dispersed pancreatic islet equivalents were incubated with 2 mg of PF127(QDot)-HPi1 or negative control nanocarriers in 500 µL complete CMRL1066 growth media for 2 h at 37 °C. All the treatments were conducted in duplicate. Following incubation, the cells were centrifuged, and the cell pellet was washed with PBS three times to remove free nanocarriers. The cell pellet was dispersed in CMRL 1066 with 2% FBS, and the cells were stained with surface markers HPi2 (hybridoma clone HIC1-2B4) supernatant at 1:10 dilution, a mAb that reacts with all human pancreatic endocrine cells; HPa3 (hybridoma clone HIC3-2D12) supernatant at 1:10 dilution, a mAb that reacts with all non-beta endocrine cells; HPd3 (hybridoma clone DHIC5-4D9) supernatant at 1:10 dilution, a mAb that reacts with human pancreatic duct cells; and HPx1 (hybridoma clone HIC0-3B3) supernatant at 1:10 dilution, a mAb that reacts with human pancreatic exocrine cells. After surface staining, the cells were fixed and permeabilized using the Intracellular Fixation & Permeabilization Buffer Set (eBiosciences, San Diego, CA; Cat#88-8824-00) and stained for various intracellular hormone markers to differentiate alpha, beta, duct, and acinar cells within the islet cell population. A Zeiss LSM 780 confocal laser scanning microscope (Carl Zeiss Microscopy GmbH, Germany) or a Zeiss Axioskop 2 plus microscope was used for imaging. Based on marker expression, we counted the number of cells of each type that had associated QDots. A minimum of 100 surface-marker-positive cells was counted from the treated cell samples stained for total endocrine, duct, or acinar surface markers. In this dataset, we further calculated the number of HPi2/pro-insulin or HPa3/glucagon double positives, which represented the percentages of nanocarrier-internalizing beta and alpha cells, respectively, within the total counted endocrine cell population. The experiment was repeated using three different donor pancreatic islet preparations. Cells treated with the negative control nanocarriers were also stained and imaged to evaluate non-specific nanocarrier binding and/or uptake. The total fluorescence signal associated with nanocarriers internalized by endocrine cells was also assessed. A minimum of 10 alpha or beta cells were counted for this measurement. These observations and calculations were made using Bitplane Imaris Scientific 3D/4D Image Analysis Software at OHSU's Advanced Light Microscopy Core. For this, 3D composite fluorescent images captured using the Z-stacking method on the Zeiss LSM 780 confocal microscope were transferred to the 3D/4D image visualization and analysis software Imaris (Oxford Instruments, Abingdon, UK). Imaris software is integrated with distance-detection Matlab algorithms, which use fluorescence intensity data to detect objects in 3D space based on both intensity and size. First, surfaces were created using the "Surface Creation" module within the relevant fluorescent channel in the source image to segment the cells labeled for pro-insulin (beta cells) or glucagon (alpha cells). Utilizing fluorescence thresholding and masking tools in Imaris, these surfaces were used to mask the green channel to segment the QDot loaded nanocarriers within the labeled cells only.
More specifically, through an "intensity threshold" and a "number of voxels" filter, 3D objects were extracted from the original image file using "Spot" analysis, which allows for quantification of the QDot loaded nanocarriers internalized by the labeled cells. The objects defined as spots by the software are intracellular structures in which the QDot loaded nanocarriers localize. For quantification, statistics files were extracted and the pertinent information, i.e., the sum of average intensities within the green channel (nanocarriers) masked by the red channel (intracellular hormone pro-insulin or glucagon, as a measure of endocrine cell subset type), was recorded. Here the sum of average intensities associated with the green channel masked by the red cy3 channel represents the sum of the average intensities associated with the QDot loaded nanocarriers internalized by the labeled cells. This number correlates with the amount of QDot loaded nanocarriers internalized by the labeled cells. The extracted information was plotted using GraphPad Prism.
Assessment of endocrine cell targeting in vivo
Islet xenograft model
For the human islet xenograft model, 1,000 human IEQ from cadaveric donors were transplanted under the renal capsule of anesthetized, at least 8-week-old male NOD/LtSz-scid IL2R gamma null NSG Tg(RIP-HuDTR) mice. Human C-peptide levels were monitored on day 7, day 14, and day 21 post-transplantation using a Human C-peptide ELISA (ALPCO Diagnostics, Salem, NH) to ensure successful islet engraftment. The animal care, handling, and all animal studies were performed following ARRIVE guidelines, in compliance with, and with approval of, the Oregon Health & Science University's Institutional Animal Care and Use Committee (IACUC, protocol# IP00001290).
Cell targeting study
Mice were intravenously injected via the retro-orbital sinus with 10 mg of nanocarrier [PF127(cargo)-mAb] in 100 µL 1xPBS. Whole-body fluorescence images were recorded at designated timepoints after intravenous administration of the nanocarrier formulation using a Xenogen IVIS® 200 Series instrument (Caliper Life Sciences, Hopkinton, MA) equipped with a 150 W Quartz halogen lamp and a 1 mW power scanning laser. The QDots were excited at 640 nm, with their emission detected using a 760 nm band-pass filter. The fluorescence signal at the transplantation site was quantified using Living Image 4.0 software to calculate the flux radiating omnidirectionally from the region of interest (ROI) and graphed as radiant efficiency (photons/s/cm2/sr)/(μW/cm2). To yield a standardized ROI for measuring the fluorescence signal, the same area of capture was used for each mouse. Fluorescence from a null or background capture area was measured and subtracted from each reading. The animals were humanely euthanized, and organs were harvested 48 h post-injection. Tissues were immediately embedded in OCT medium, frozen over liquid nitrogen, and stored at − 80 °C. Cryo-sections (5–10 μm) were mounted on glass slides and fixed in cold acetone for 10 min. Xenografts were located by immunofluorescence microscopy using fluorophore-labeled antibodies. Sections were mounted with Hoechst mounting media, and images were captured by confocal microscopy. N = 2 for each of the targeted and negative control nanocarriers.
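The following is a minimal, illustrative sketch (in Python, not the actual Imaris or Living Image workflows) of the two quantification ideas described above: summing QDot (green-channel) signal only within marker-positive (red-channel) regions, and subtracting a background capture area from an ROI reading. The arrays, threshold, and numeric values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
red = rng.random((512, 512))     # stand-in for the pro-insulin/glucagon (marker) channel
green = rng.random((512, 512))   # stand-in for the QDot (nanocarrier) channel

# (1) Mask the green channel with a simple intensity threshold on the red channel,
#     then sum the QDot signal only within the segmented (marker-positive) area,
#     analogous to masking the green channel with Imaris surfaces.
cell_mask = red > 0.8                        # crude stand-in for surface creation
qdot_signal_in_cells = green[cell_mask].sum()
mean_qdot_per_voxel = green[cell_mask].mean()

# (2) Background-subtracted ROI signal, analogous to subtracting a null capture
#     area from each radiant-efficiency reading in the IVIS analysis.
roi_radiant_efficiency = 3.2e8               # hypothetical ROI reading
background_radiant_efficiency = 0.4e8        # hypothetical background reading
net_signal = roi_radiant_efficiency - background_radiant_efficiency

print(qdot_signal_in_cells, mean_qdot_per_voxel, net_signal)
```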
In vivo harmine delivery
Human pancreatic islet xenografted mice were randomly selected to receive one of four treatments: antibody-targeted harmine-loaded nanocarriers PF127(HM)-HPi1, negative antibody control harmine-loaded nanocarriers PF127(HM)-HPDAC-6D2, harmine-HCl in 1xPBS, or saline only. Harmine loaded nanocarriers were administered intravenously via the retro-orbital sinus once a day for 7 days (10 mg/kg/dose). Harmine-HCl was administered intraperitoneally every 12 h for 7 days (10 mg/kg/dose) as per Stewart et al.13. Animals were given BrdU in drinking water for 7 days. On the morning of day 8, animals were humanely euthanized and kidneys were harvested; OCT blocks were prepared and stored at − 80 °C. Later, the sections were cut using a cryostat and immediately fixed in acetone. The sections were immunostained with HPi2 antibody to locate the xenograft. The sections were further immunostained for cell-proliferation markers BrdU or Ki67 to quantify proliferating cells. The effect of nanocarriers on the viability of dispersed islet cells was evaluated using the CellTiter 96® Aqueous One Solution Cell Proliferation Assay (MTS, Promega, USA). This assay uses tetrazolium (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt), a compound that is bio-reduced by metabolically active cells to a soluble formazan product. The quantity of formazan produced indicates the number of viable cells in the culture and is assessed colorimetrically by measuring the change in absorbance at 490 nm (Multiskan MMC microplate reader; Thermo Fisher Scientific). Enzyme dispersed islets were placed in a 96-well plate at a cell density of 10,000 cells per well. Then 100 μL of fresh culture media containing nanocarriers at concentrations of 50, 100, and 500 μg/100 μL was slowly added to the cells. After 2 and 4 h of nanocarrier exposure, the cells were collected in Eppendorf tubes, centrifuged at 1000 rpm, and the culture medium was removed. The cells were gently rinsed three times with PBS to wash off any free nanocarriers sticking to the cell surface. The cells were resuspended in 100 µL fresh media, plated in a 96-well format, and 20 µL MTS reagent was added to the cells. After 2 h of incubation with the MTS reagent, the cells were collected in Eppendorf tubes, centrifuged at 1000 rpm, and the supernatant was transferred to a new 96-well plate for read-out. The viability of the dispersed islet cells was determined by MTS reduction. Each treatment was carried out in triplicate, and cell survival was calculated with respect to untreated control cells.
Synthesis and characterization of mAb-conjugated, cargo-loaded, Pluronic F127 nanocarriers
Pluronic F127 is a tri-block copolymer composed of a central hydrophobic polypropylene oxide (PPO) block flanked by two hydrophilic polyethylene oxide (PEO) blocks. Each PEO block is approximately 100 repeat units long, while the PPO block is approximately 60 repeat units long. The copolymer was activated with a 3:1 molar excess of 4-nitrophenyl chloroformate to synthesize end-group activated PF127 (PF127-pNP). The activated product was analyzed by 1H nuclear magnetic resonance. Figure 2 shows the NMR spectra of Pluronic F127 and end group activated Pluronic F127-pNP in D2O.
The peak intensities of various protons in the product are as follows: 1H-NMR (D2O): δ = 1.1 (m, 3H, PPO, –CH3), 3.5 (m, 1H, PPO, CH), 3.5 (m, 2H, PPO, –CH2), 3.66 (m, 2H, PEO, –CH2), 7.4 (d, 2H, nitrophenyl –CH) and 8.3 (d, 2H, nitrophenyl –CH) ppm. Peaks associated with aryl protons of the nitrophenyl groups (NO2–C6H4–, δ = 7.4–8.3 ppm) in the spectra of the product confirmed successful end group activation. The degree of activation was calculated by alkaline hydrolysis. For this quantification, activated Pluronic was hydrolyzed in 0.1 M NaOH for 1 h, the absorbance of the hydrolysis product at 400 nm was measured, and the released p-nitrophenol was quantified using its molar extinction coefficient in 0.1 M NaOH (18,100 M−1 cm−1). The degree of activation determined from the hydrolysis experiments was 57.3%. The peak intensity ratio of the aryl protons of the nitrophenyl groups (NO2–C6H4–, δ = 7.4–8.3 ppm) to the methyl protons of PF127 (–CH3, δ = 1.1 ppm) confirmed the degree of activation calculated from the hydrolysis data.
1H-NMR spectra of Pluronic-F127 (A) and p-nitrophenyl chloroformate activated Pluronic F127-pNP (B) in D2O. δ = 1.125 ppm (–CH3 protons in PPO blocks in Pluronic chain), δ = 3.5 ppm (–CH and –CH2 units of PPO block in Pluronic chain), δ = 3.66 ppm (–CH2CH2 units of PEO blocks in Pluronic chain), δ = 7.447/7.479 and 8.317/8.333 ppm (p-nitrophenyl doublet proton peaks).
Pluronic based micelles have been well studied for their potential to efficiently retain hydrophobic small molecules in their hydrophobic PPO core23. The solubilities of the nanocarrier-encapsulated hydrophobic drugs or cargo in an aqueous medium, therefore, are correlated with their encapsulation efficiency23,25. The encapsulation efficiencies for harmine and QDots were measured using the following formula and were found to be ~60% and ~75%, respectively. The BCA assay confirmed conjugation of ~67 µg antibody per 10 mg end-group activated Pluronic F127-pNP, i.e., 67% of the total antibody used for the conjugation reaction. $$Encapsulation\ Efficiency\ \left(\%\right)=\frac{Amount\ of\ cargo\ present\ in\ the\ nanocarrier}{Amount\ of\ cargo\ used\ for\ encapsulation}\times 100$$ It has been previously observed that polymer based nanocarriers with diameters in the range of 10–100 nm successfully avoided uptake by the reticuloendothelial system (RES), which significantly increased the circulation half-life of the encapsulated drug26. Therefore, it was critical to estimate the average diameter of our cargo loaded antibody conjugated nanocarriers. The antibody conjugated nanocarriers PF127(cargo)-mAb were characterized for their sizes by transmission electron microscopy (TEM). Figure 3 shows transmission electron micrographs of the harmine or QDot loaded HPi1 antibody conjugated nanocarriers formulated in 1xPBS. TEM images confirmed that most antibody conjugated cargo encapsulating nanocarriers had an average size of 73 ± 10 nm for QDot loading and 69 ± 8 nm for harmine loading. The QDot loaded nanocarriers appeared darker in contrast than harmine loaded nanoparticles because of the high electron scattering power of the quantum dots.
Antibody conjugated cargo loaded nanocarrier diameter was measured as 73 ± 10 nm for QDot loading and 69 ± 8 nm for HM loading. (A,B) TEM images of Pluronic F127(Qdot)-mAb. (C,D) TEM images of Pluronic F127(HM)-mAb. HM, harmine; TEM, transmission electron microscopy.
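For illustration, the quantities reported above can be reproduced with simple arithmetic. In the sketch below, the absorbance, path length, hydrolysate volume, and hydrolyzed sample mass are hypothetical inputs chosen only to give a degree of activation of the same order as the reported 57.3%; the extinction coefficient, the 1/100 (wt/wt) antibody feed, the ~75% QDot encapsulation, and the ~67 µg BCA result come from the text.

```python
# Worked arithmetic for three quantities reported above: degree of end-group
# activation (Beer-Lambert law on released p-nitrophenol), encapsulation
# efficiency, and antibody conjugation efficiency from the BCA result.

EPSILON_PNP = 18_100      # M^-1 cm^-1, p-nitrophenol in 0.1 M NaOH (from the text)
PATH_CM = 1.0             # cm, assumed cuvette path length
PF127_MW = 12_600         # g/mol
OH_PER_CHAIN = 2

A400 = 0.82               # hypothetical absorbance of the hydrolysate at 400 nm
hydrolysate_L = 0.010     # hypothetical 10 mL hydrolysis volume
sample_g = 0.005          # hypothetical 5 mg of PF127-pNP hydrolyzed

moles_pnp = (A400 / (EPSILON_PNP * PATH_CM)) * hydrolysate_L   # mol p-nitrophenol released
moles_oh = OH_PER_CHAIN * sample_g / PF127_MW                  # mol terminal -OH groups
degree_of_activation = 100 * moles_pnp / moles_oh              # ~57%, same order as reported

# Encapsulation efficiency (%) = cargo retained in nanocarriers / cargo used x 100
encapsulation_efficiency = 100 * 0.75 / 1.0     # e.g., 0.75 mg of 1.0 mg QDots retained

# Antibody conjugation: 10 mg polymer x 1/100 (wt/wt) = 100 ug offered; ~67 ug bound by BCA
antibody_conjugation_pct = 100 * 67 / 100

print(f"degree of activation ~ {degree_of_activation:.1f}%")
print(f"encapsulation efficiency ~ {encapsulation_efficiency:.0f}%")
print(f"antibody conjugated ~ {antibody_conjugation_pct:.0f}% of offered")
```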
In vitro targeting of endocrine cells by the nanocarriers
The ability of the HPi1 conjugated nanocarriers to bind specifically to the endocrine cells in human pancreatic islets was investigated in vitro using dispersed human pancreatic islet cells. The nanocarrier uptake by single cells was measured using fluorescence microscopy after incubating cells with nanocarriers for 2 h. Approximately 200,000 cells were incubated with the targeted (PF127(QDot)-HPi1) or negative control nanocarriers in CMRL growth media supplemented with 10% FBS. Following incubation, the cells were washed with 1XPBS to remove any nanocarriers not associated with cells. Further, 50,000 cells from each nanocarrier treated cell sample were immunostained for beta, alpha, acinar, and duct cell surface markers. Cells were also stained for the intracellular hormones pro-insulin, glucagon, and amylase. Figure 4A,B show representative images from the in-vitro targeting experiments. As can be seen in panel A, total endocrine cells stained by HPi2 (red) showed internalization of the QDot loaded nanocarriers, whereas non-endocrine (not stained; duct and acinar) cells did not internalize the nanocarriers. Similarly, pro-insulin positive cells (yellow) showed significantly higher uptake of the nanocarriers when compared to glucagon positive (magenta) cells, which confirmed preferential uptake of the nanocarriers by beta cells. In contrast, Fig. 4B revealed that the non-endocrine cell population (duct and acinar cells), represented by cells labeled by the duct cell marker DHIC5 4D9 (red) and the acinar cell marker amylase (yellow), did not internalize the nanocarriers. The quantitative analysis revealed (Fig. 4C) that the targeted nanocarriers were selectively internalized by endocrine cells; 88% of the beta cells and 36% of the alpha cells showed nanocarrier uptake. By contrast, only 16% of the acinar cells and 9% of the duct cells showed nanocarrier uptake. Nanocarriers conjugated with negative control antibody did not exhibit selective internalization by endocrine cells, and the percentages of nanocarrier internalizing total endocrine cells, as well as beta and alpha cells separately, were significantly lower than in the HPi1 conjugated nanocarrier treated cells. The percentages of negative control nanocarrier internalizing duct and acinar cells were not significantly different from those obtained with the HPi1 conjugated nanocarrier treatment group. Thus, we conclude that the antibody conjugated nanocarriers selectively target endocrine cells, with a greater preference for beta cells. The total number of nanocarriers internalized by a particular cell type was measured by calculating the total fluorescence associated with the QDots encapsulated in the internalized nanocarriers. Figure 4D shows a representative image demonstrating that the number of nanocarriers internalized by the alpha cells (magenta) was significantly lower than the number internalized by the beta cells. In Fig. 4D, all the cells stained with HPi2 represent total endocrine cells; magenta stained cells represent the alpha cell subset. Figure 4E shows the results of quantitative data analysis from 3 replicate experiments, which confirm that the number of nanocarriers internalized by the beta cells was significantly higher than that internalized by the alpha cells, differing by approximately 5 orders of magnitude (Fig. 4D,E).
Representative images of dispersed human islet cells treated with QDot loaded targeted nanocarriers and co-immunostained with various surface and intracellular markers to determine the percentage of each cell type (total endocrine, beta, alpha, acinar, and duct cells) internalizing targeted nanocarriers upon treatment. Cells were counterstained with DAPI (blue). (A) Pancreatic endocrine cells showed uptake of targeted QDot encapsulating nanocarriers. Endocrine cell markers: HPi2 (red), anti-pro-insulin (pseudo-colored yellow), anti-glucagon (pseudo-colored magenta) antibodies; targeted nanocarriers (green). (B) Duct cells and acinar cells showed minimal uptake of the targeted nanocarriers. Duct cell marker: DHIC5 4D9 mAb (red); acinar cell marker: anti-amylase antibody (pseudo-colored yellow). For each cell type evaluated, a minimum of 100 cells was counted. (C) The percentages of total endocrine cells, beta cells, alpha cells, acinar cells, and duct cells that internalized targeted nanocarriers. Results shown are the means ± standard deviation (n = 3). (D) HPi2 and anti-glucagon labeled cells; the cell indicated with an arrow is most likely a beta cell, which is HPi2 positive but anti-glucagon negative. (E) In accordance, the data illustrate a dramatically increased QDot signal associated with beta vs. alpha cells. *** indicates p-value < 0.001.
In vivo targeting of endocrine cells by the nanocarriers
To determine whether this targeting capability can also be achieved under physiological conditions, a single dose (10 mg Pluronic F127(QD)-pNP per animal) of targeted or negative control nanocarriers was administered intravenously via the retro-orbital sinus into NSG mice transplanted with human pancreatic islets under the kidney capsule. Figure 5C,D shows the data from whole-body imaging of mice obtained with the In Vivo Imaging System (IVIS). The results showed that the targeted nanocarriers effectively accumulated at the transplant site (kidney capsule), and the maximum signal associated with the encapsulated NIR QDots was observed at the 48 h post-injection timepoint. In contrast, negative control nanocarriers did not show any signal at the transplant site at this timepoint. The IVIS imaging helped identify the time of maximal nanocarrier uptake and thus the time of xenograft harvesting for ex vivo histological analysis. The ex vivo histological analysis performed on kidney sections revealed significant co-localization of targeted nanocarriers at human pancreatic islet xenografts. Figure 5A,B show the representative micrographs of the kidney tissue sections stained with HPi2 (red) to locate the xenograft. As shown in Fig. 5B, the images demonstrated significant nanocarrier accumulation at the xenograft location in the targeted nanocarrier group. By contrast, the QDot signal in the xenograft of mice injected with the negative control nanocarrier was negligible. Co-localization analysis was performed on the xenograft selected as a region of interest in the red fluorescence channel and QDot associated fluorescence in the green channel. The analysis revealed that the targeted nanocarriers were present in over 16% of the xenograft area, compared to the negative control nanocarriers, which were present in less than 2% of the xenograft area.
Representative images of human islet xenografts immunostained with HPi2 (red) antibody reactive to pancreatic endocrine cells to locate the graft in kidney sections of mice that received (A) QDot loaded negative control or (B) QDot loaded targeted nanocarriers (green).
The sections were counterstained with DAPI (blue). (C,D) Representative IVIS images from the Living Image 4.3.1 software of mice that received QDot loaded negative control (right) or targeted nanocarriers (left) under short isoflurane anesthesia. (C) Raw image before spectral unmixing; (D) composite image obtained after multispectral imaging and spectral unmixing. The IVIS images were obtained at 48 h post-injection.
In vivo targeted delivery of harmine
Results from in vivo targeted harmine delivery studies (Fig. 6) revealed that along with efficient targeting of the pancreatic endocrine cells, the targeted nanocarriers successfully delivered the payload to the targeted cells, eliciting harmine-induced cell proliferation. Figure 6A,B show the representative images of the tissue sections stained with the proliferation markers Ki67 and BrdU (red), respectively. The islet xenografts were identified using the total endocrine cell reactive antibody HPi2 (green), and the sections were stained with the nuclear stain DAPI (blue). Figure 6C summarizes the quantitative analysis of the degree of proliferation in the different treatment groups. The targeted nanocarriers loaded with harmine yielded Ki67 expression in approximately 12.47% of the pancreatic endocrine cells. By contrast, negative control nanocarriers did not exhibit selective targeting of endocrine cells and yielded 2.76% Ki67 positive cells. When harmine was injected in unencapsulated form (Fig. 6A, free harmine), 1.8% of the endocrine cells at the xenograft site showed proliferation measured by Ki67 expression. The degree of proliferation at the xenograft site was also confirmed using BrdU staining, and the results showed 16.27% BrdU labeled cells present at the xenograft in the targeted nanocarrier group. Harmine delivered in the negative control nanocarriers (Fig. 6B, negative control) or in unencapsulated form (Fig. 6B, free harmine) resulted in only 2.9% and 2.6% BrdU labeled cells in the xenograft, respectively. In the non-treatment control group, only 0.3% of the cells in the xenograft showed proliferation by Ki67 or BrdU staining.
Representative human islet xenograft sections co-immunostained with HPi2 (green) and antibodies against (A) Ki67 (red) or (B) BrdU (red) to measure proliferation in the transplanted islets upon treatment with harmine loaded targeted nanocarriers, harmine loaded negative control nanocarriers, free harmine, and no treatment. The sections were counterstained with DAPI (blue). (C) Proliferating human islet cells within the xenograft from individual mice (n = 3) in each treatment group. Results are the means ± standard deviation for the percent proliferating cells of total HPi2 positive cells for each animal. Data were confirmed to be normally distributed by the Shapiro–Wilk normality test and analyzed using one-way analysis of variance (ANOVA) followed by Dunnett's test for multiple comparisons at α = 0.05. * and *** denote p < 0.05 and p < 0.001, respectively.
In-vitro evaluation of nanocarrier associated cytotoxicity
To evaluate toxicity associated with the nanocarriers, we performed an MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium) cell proliferation assay on dispersed human pancreatic islets. The MTS assay is based on the reduction of a tetrazolium salt into a colored, soluble formazan product by the mitochondrial activity of viable cells. As shown in Fig.
7, treatment with the nanocarriers did not significantly affect cell viability in vitro at the 2 h or 4 h time points. For our in-vitro targeting experiments, we incubated dispersed islet cells with the nanocarriers for up to 4 h, and we were therefore interested in the nanocarriers' toxicity to the cells within this time frame. For this reason, the treatments were conducted for short periods, and the results represent the effect of nanocarriers on islet cell viability only for short incubation times. Effect of the PF127(QD)-mAb nanocarrier formulation on the metabolic activity of dispersed human pancreatic islets, measured by the MTS assay. Cells were treated with nanocarriers at 50, 100, and 500 μg/100 μl concentrations for 2 h and 4 h, in 10% FBS-containing cell culture media, and assayed 24 h post-treatment. Control cells were not treated with the nanocarriers; data represent mean ± s.d. (n = 3). Type 1 diabetes, also known as juvenile diabetes, is an autoimmune condition in which the body's immune system destroys the insulin-producing beta cells in the pancreas. In the absence of insulin, the body loses its capability to respond to elevated blood glucose levels and fails to maintain normoglycemia. The current standard of medical care for individuals with type 1 diabetes relies on insulin replacement therapy, which involves monitoring blood glucose levels multiple times a day and administering exogenous insulin via injections or continuous subcutaneous insulin infusion using insulin pumps27. Though insulin therapy in general effectively and safely maintains blood glucose levels close to normal in most individuals, its inability to closely mimic the endogenous pattern of insulin release leads to occasional hyperglycemia and related microvascular and cardiovascular complications28. Even if glycemic control is achieved, insulin administration increases the risk of severe and potentially fatal hypoglycemia in type 1 diabetics29. An alternate strategy to counter insulin deficiency in type 1 diabetes is beta cell replacement therapy, which involves replacing dysfunctional or damaged beta cells with healthy insulin-secreting beta cells. Allogeneic pancreatic islet transplantation is one such strategy30,31. However, this treatment option requires long-term use of immunosuppressive transplant rejection drugs. Therefore, islet transplantation is reserved as a last resort for high-risk patients who experience severe hypoglycemic episodes. Moreover, due to limited donor islet availability and poor transplant durability, large-scale implementation of this procedure is not achievable at this time30,32. To overcome the donor shortage, pancreatic islet xenotransplantation is currently being explored preclinically as an alternate beta cell replacement strategy33. Successful xenotransplantation has already been shown experimentally in various animal models34. However, the clinical application of islet xenotransplantation is far from being realized due to hurdles similar to those that make allogeneic transplantation inefficient, for example, immediate loss of islets due to the instant blood mediated immune reaction, islet immunogenicity, and the long-term need for immune suppression. In contrast, transplantation of beta cells derived from human embryonic stem cells (hESCs) or induced pluripotent stem cells (iPSCs) has been shown to overcome the challenges associated with graft rejection in allo- and xeno-transplantation, as these beta cells are generated from the patient's own cells35,36.
Clinical trials investigating the transplantation of hES cell-derived pancreatic progenitor cells, to be matured into functional endocrine cells, are currently underway in the US and Canada. Nevertheless, the study showed that the transplanted cells remain susceptible to autoimmune attack37. Another strategy to replenish the lost beta cell mass relies on the regeneration of endogenous beta cells from the existing beta cells in the pancreas of type 1 diabetics. Several signaling pathways and potential stimulants have been identified in this direction and shown to induce proliferation in pancreatic beta cells38. Recently, Stewart et al. reported a ground-breaking finding: by high-throughput screening of small-molecule libraries, the group identified a small molecule mitogen, harmine, a DYRK1A inhibitor. In vitro experiments demonstrated that harmine induced significant proliferation in human pancreatic beta cells by inhibiting DYRK1A signaling. Further, their work revealed that harmine increased islet cell mass and improved glycemic control in a diabetic marginal mass human islet transplantation model in mice13. It was further shown that, when used in combination with TGF beta inhibitors, harmine induced the highest rate of proliferation ever observed in adult human beta cells39. This discovery identifies a path to a novel beta cell replacement strategy, in which patients' existing beta cells could potentially regenerate new, healthy, insulin-producing beta cells. One of the next key challenges in developing chemically induced endogenous beta cell regeneration therapies is to selectively deliver such proliferation-promoting chemical compounds directly to pancreatic beta cells. Cell selective targeted delivery would offer the potential of limiting the exposure of non-target cells and tissues to these chemicals and may thereby reduce unwanted proliferation and drug-associated toxicities. The targeted nanocarriers discussed in this paper are self-assembled from Pluronic F127 amphiphilic block-copolymer, and the desired cargo was encapsulated within the hydrophobic core of the nanocarrier. To achieve endocrine cell specific targeting, the nanocarriers were further conjugated with the targeting mAb HPi1 (hybridoma HIC0 4F9), which selectively reacts with cell surface molecules on all human pancreatic endocrine cells with a preference for beta cells14. For proof of concept endocrine cell targeting studies, nanocarriers were loaded with a cargo of green fluorescent quantum dots (QDots) or a mixture of green and near infra-red (NIR) QDots. These cargos enabled the tracking of nanocarriers in vitro and in vivo, respectively. Results showed that approximately 75% of the total pancreatic endocrine cells and 88% of the beta cells internalized the HPi1 conjugated nanocarriers, whereas only 16% and 9% of acinar and duct cells, respectively, showed nanocarrier uptake. The percentage of exocrine cells internalizing targeted nanocarriers was not significantly different from that observed for negative control nanocarrier treated cells, implying that the uptake of targeted nanocarriers within the exocrine cells is mediated through non-specific internalization. In the targeted nanocarrier treated islet cell preparations, although approximately 36% of the total alpha cells showed internalization of the nanocarriers, the amount of nanocarriers internalized in the beta cells was five orders of magnitude higher than that internalized in the alpha cells.
This result is not unexpected because the antibody HPi1, used as a model targeting moiety in our experiments, has earlier been shown to label all endocrine cells, including but not limited to insulin-expressing β cells. However, the reactivity of HPi1 with beta cells was found by flow cytometry to be higher than with alpha cells (Dorrell & Streeter et al., unpublished). This attribute of the antibody was reflected in our experiments too, and accounts for the internalization of the HPi1 conjugated targeted nanocarriers in alpha cells. The higher reactivity of HPi1 with, and the higher level of internalization of the HPi1 conjugated nanocarriers by, the beta cells as compared to alpha cells indicate that antibody reactivity and the uptake of targeted nanocarriers within specific cell types might depend on the cell surface density of the targeted antigen and on overall cell morphology. However, this is a matter for further investigation. Similarly, in-vivo experiments on human islets transplanted in mice showed significantly higher accumulation of the targeted nanocarriers, measured by whole body imaging and histology, than that observed in the case of negative control nanocarriers. Further, we used harmine as a model cargo in the targeted drug delivery studies. As discussed above, harmine has already been shown to induce quantifiable biological responses in beta cells. In addition, being highly hydrophobic, harmine favors more compact packing within the hydrophobic micellar core of pluronic, enabling a high degree of encapsulation. In our studies, Pluronic F127 micelles encapsulated approximately 60% of the harmine. The results showed that harmine loaded targeted nanocarriers induced proliferation in up to 12% (by Ki67 staining) and 16% (by BrdU labeling) of cells in the islet xenografts. Our findings were consistent with previous investigations published by Moore et al., who demonstrated targeting of islet cells in the mouse pancreas by an intact mAb conjugated with a radionuclide for nuclear imaging20. The antibody used in that study was the β-cell–specific monoclonal antibody IC2. This report is particularly instructive in that the antibody used was an IgM (approximate molecular mass of 900 kDa). The Cryo-AFM and Cryo-TEM diameters of IgM, in the range of 35–50 nm40,41, suggest that the vasculature within islets supports the passage of large biomolecules. The presence of a functionally specialized fenestrated islet endothelium is thought to enable the transport of macromolecules out of the vasculature. Therefore, the antibody conjugated, cargo (QDots or harmine) encapsulating Pluronic F127 nanocarriers used in this work, having an average size of 73 ± 10 or 69 ± 8 nm, were well suited to achieving in vivo targeting of pancreatic islets. Further, in our in-vivo endocrine cell targeting studies, IVIS imaging at various time points after intravenous injection revealed that the Qdot loaded targeted nanocarriers showed maximum accumulation at the transplant site at the 48 h post-injection time point, observed as a bright signal associated with the encapsulated NIR qdot. At this time point, the negative control nanocarriers did not show any signal in IVIS images. Negative control nanocarriers presented a faint signal in IVIS images only after 72 h (data not shown). This implies that the favorable size of this pluronic based nanocarrier assembly prolonged the circulation time of the qdots in the blood.
Moreover, the presence of fenestrated endothelial cells allows for passive accumulation of the nanocarriers within the islet xenograft regardless of the targeting antibody. However, the targeting antibody facilitated active binding of the nanocarriers to xenografted pancreatic endocrine cells. In contrast, negative control nanocarriers circulated without actively binding to the xenograft and showed minimal non-specific retention over time due to passive accumulation. Ex-vivo histology further confirmed the observations from IVIS imaging. The kidney tissue sections revealed that, at the 48 h post-injection time point, the targeted nanocarriers significantly co-localized with the islets transplanted under the kidney capsule. In contrast, minimal co-localization was observed in the negative control nanocarrier treatment group at the 48 h time point. We further investigated whether the selective delivery of harmine is capable of inducing proliferation in the targeted cells in the same mouse model. This investigation assessed not only the cargo delivery capability of the nanocarriers but also the delivery of functionally active cargo capable of inducing a quantifiable biological response only at the target site, without compromising its mitogenicity. For the drug delivery studies, harmine was loaded into the nanocarriers by the thin-film evaporation method. Harmine delivery by targeted nanocarriers yielded approximately 12% Ki67 positive cells and 16% BrdU labeled cells within the islet xenografts. In contrast, when harmine was delivered in unencapsulated form, 1.85% and 2.65% of the total endocrine cells were found to be positive for Ki67 expression and BrdU labeling, respectively. This level of proliferation induced by unencapsulated harmine at a similar dosage is consistent with the studies reported earlier by Stewart et al. Proliferation induced by the negative control nanocarriers within the islet xenografts was found not to be statistically significantly different from that observed when harmine was given unencapsulated. This observation suggests that the proliferation induced by harmine when delivered via non-targeted nanocarriers is due to the minimal non-specific uptake of the non-targeted nanocarriers at the transplant site. We observed that the total number of proliferating cells measured by the BrdU staining method was consistently higher than the measurement made by the Ki67 staining method. This difference may be explained by the fact that, in our experiments, the mice had access to BrdU in their drinking water during the whole course of the treatment (1 week), allowing BrdU uptake by all cells that proliferated during the week. By contrast, Ki67 expression revealed only the cells that were proliferating at the end point. Compared to unencapsulated harmine or harmine given in negative control nanocarriers, harmine given in targeted nanocarriers induced a six- to seven-fold greater increase in proliferation of endocrine cells, measured by both Ki67 expression and BrdU labeling of xenografted islet cells. At the base level, 0.3–0.4% of the total endocrine cells in the xenograft were found to be proliferating in the untreated control mice. These data indicate that harmine given in the encapsulated, antibody targeted nanocarrier formulation is significantly more efficacious than when given in unencapsulated or encapsulated but untargeted formulations.
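For readers who want to reproduce this style of group comparison, the following is a minimal sketch in Python of the analysis described in the Figure 6 legend (Shapiro–Wilk normality check, one-way ANOVA, then Dunnett's test against the untreated control). The per-animal values and group names are illustrative placeholders rather than the study data, and scipy.stats.dunnett requires SciPy 1.11 or later.

```python
# Sketch of a Figure 6C-style group comparison: Shapiro-Wilk, one-way ANOVA,
# then Dunnett's test comparing each treatment group against the untreated control.
# Values below are illustrative placeholders, not the study data.
from scipy import stats

groups = {
    "targeted_nanocarrier": [11.9, 12.5, 13.0],   # % Ki67+ cells per animal (hypothetical)
    "negative_control_nc":  [2.5, 2.8, 3.0],
    "free_harmine":         [1.6, 1.8, 2.0],
    "untreated":            [0.3, 0.4, 0.3],
}

# 1) Check each group for approximate normality (n = 3 per group, so interpret cautiously).
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")

# 2) One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# 3) Dunnett's test: each treatment vs. the untreated control (alpha = 0.05).
treatments = [groups["targeted_nanocarrier"], groups["negative_control_nc"], groups["free_harmine"]]
res = stats.dunnett(*treatments, control=groups["untreated"])
print("Dunnett p-values (targeted, negative control, free harmine):", res.pvalue)
```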
The harmine dosage of 10 mg/kg used in our experiments was chosen based on a previously reported study, which demonstrated quantifiable proliferation levels in islet xenografts without causing any toxic side effects in the animals13. In our experiments, although we did not observe any drug-associated toxicity in animals at the given harmine dosage, we did observe prominent unwanted proliferation in cells located outside of, but closely associated with, the islet xenograft in the targeted nanocarrier treatment group. Bani et al. showed earlier that human pancreatic islets abundantly express P-glycoprotein and suggested that these P-glycoproteins play an essential role in regulating the secretory function of human pancreatic islets, thereby maintaining the composition of the extracellular fluids and their intracellular microenvironment42. We anticipate that the undesirable cell proliferation in the close vicinity of the xenograft could be due to P-glycoprotein mediated efflux of harmine onto the nearby cells. Harmine's highly hydrophobic nature facilitated a high encapsulation efficiency within the nanocarriers. Together with the longer circulation time of the nanocarriers and the active binding to endocrine cells mediated by the targeting antibody, this explains the high biological efficacy of harmine at the target site. On the other hand, encapsulation and targeting present the opportunity to reduce the daily dosage or frequency of administration of the drug loaded nanocarriers. We anticipate that reducing the daily dosage or dosing frequency of the drug loaded nanocarriers might help mitigate the unwanted proliferation. In practical applications, the effective therapeutic response from a drug depends on an optimum plasma concentration of the drug being obtained and maintained over time. Without achieving this ideal concentration, the drug will not be effective43. However, most drugs are prone to enzymatic degradation while in the circulation42, or they are cleared from systemic circulation before becoming effective at the targeted site. If a drug's circulation time is very short, it is given at a higher dosage and multiple times to achieve and maintain the ideal plasma concentration. This is particularly true for many anti-cancer drugs, and a higher systemic concentration of anti-cancer drugs is known to cause adverse effects, toxicities, and immune responses. Similarly, in the case of harmine and other mitogenic small molecules, it is crucial to deliver these molecules to the desired site to avoid unwanted proliferation in other cells and tissues. Therefore, the prospect of a significant dose reduction while achieving the desired therapeutic efficacy when the drug is given in a targeted nanocarrier formulation is highly valuable. Once such drugs are encapsulated within a nanocarrier, and the nanocarriers are guided to the target site via antibodies conjugated on their surface, they steer clear of other tissues and organs. Upon encapsulation, these drugs also escape enzymatic degradation, which increases the drug's circulation half-life. Altogether, drug delivery via targeted nanocarriers potentially reduces the drug dosage needed to elicit an equally effective therapeutic response while mitigating drug-associated toxic responses.
Based on our findings, we anticipate that the platform technology described here will have application in drug discovery, in the in vivo evaluation or validation of candidate drugs; in the evaluation of different endocrine cell targeting moieties (e.g., antibodies, antibody fragments, aptamers, etc.); and therapeutic drug delivery and/or diagnostic imaging in individuals with diabetes or prediabetes. The datasets used and/or analyzed during the current study available from the corresponding author on reasonable request. Chiarelli, F., Giannini, C. & Primavera, M. Prediction and prevention of type 1 diabetes in children. Clin. Pediatr. Endocrinol. 28, 43–57 (2019). Griggs, S., Redeker, N.S., Crawford, S.L. & Grey, M. Sleep, self-management, neurocognitive function, and glycemia in emerging adults with Type 1 diabetes mellitus: A research protocol. Res. Nurs. Health (2020). Gregory, G. A., Robinson, T., Linklater, S. E. et al. Global incidence, prevalence, and mortality of type 1 diabetes in 2021 with projection to 2040: a modelling study. Lancet Diabetes Endocrinol. 10(10), 741–760 (2022). Pambianco, G. et al. The 30-year natural history of type 1 diabetes complications: The Pittsburgh Epidemiology of Diabetes Complications Study experience. Diabetes 55, 1463–1469 (2006). Szadkowska, A. et al. Hypoglycaemia unawareness in patients with type 1 diabetes. Pediatr. Endocrinol. Diabetes Metab. 2018, 126–134 (2018). Harlan, D. M. Islet transplantation for hypoglycemia unawareness/severe hypoglycemia: Caveat emptor. Diabetes Care 39, 1072–1074 (2016). Harlan, D.M. Response to Comment on Harlan. Islet Transplantation for hypoglycemia unawareness/severe hypoglycemia: Caveat emptor. Diabetes Care 2016;39:1072–1074. Diabetes Care 40, e113–e114 (2017). Hering, B.J., Bridges, N.D., Eggerman, T.L., Ricordi, C. & Clinical Islet Transplantation, C. Comment on Harlan. Islet Transplantation for Hypoglycemia Unawareness/Severe Hypoglycemia: Caveat Emptor. Diabetes Care 2016;39, 1072–1074. Diabetes Care 40, e111–e112 (2017). Mathieu, C. Current limitations of islet transplantation. Transplant Proc. 33, 1707–1708 (2001). Chhabra, P. & Brayman, K. L. Overcoming barriers in clinical islet transplantation: current limitations and future prospects. Curr. Probl. Surg. 51, 49–86 (2014). Dirice, E. et al. Inhibition of DYRK1A stimulates human beta-cell proliferation. Diabetes 65, 1660–1671 (2016). Shirakawa, J. & Kulkarni, R. N. Novel factors modulating human beta-cell proliferation. Diabetes Obes. Metab. 18(Suppl 1), 71–77 (2016). Wang, P. et al. A high-throughput chemical screen reveals that harmine-mediated inhibition of DYRK1A increases human pancreatic beta cell replication. Nat. Med. 21, 383–388 (2015). Dorrell, C. et al. Isolation of major pancreatic cell types and long-term culture-initiating cells using novel human surface markers. Stem Cell Res. 1, 183–194 (2008). Sledge, G. W. Jr., Jotwani, A. C. & Mina, L. Targeted therapies in early-stage breast cancer: achievements and promises. Surg. Oncol. Clin. N. Am. 19, 669–679 (2010). Hasan, M., Alam, S. & Poddar, S. K. Antibody-drug conjugates: A review on the epitome of targeted anti-cancer therapy. Curr. Clin. Pharmacol. 13, 236–251 (2018). Weiner, G. J. Building better monoclonal antibody-based therapeutics. Nat. Rev. Cancer 15, 361–370 (2015). Dubowchik, G. M. & Walker, M. A. Receptor-mediated and enzyme-dependent targeting of cytotoxic anticancer drugs. Pharmacol. Ther. 83, 67–123 (1999). Wang, P. & Moore, A. 
Theranostic magnetic resonance imaging of type 1 diabetes and pancreatic islet transplantation. Quant. Imaging Med. Surg. 2, 151–162 (2012). Moore, A., Bonner-Weir, S. & Weissleder, R. Noninvasive in vivo measurement of beta-cell mass in mouse model of diabetes. Diabetes 50, 2231–2236 (2001). Balhuizen, A. et al. A nanobody-based tracer targeting DPP6 for non-invasive imaging of human pancreatic endocrine cells. Sci. Rep. 7, 15130 (2017). Li, J. T., Carlsson, J., Lin, J. N. & Caldwell, K. D. Chemical modification of surface active poly(ethylene oxide)-poly (propylene oxide) triblock copolymers. Bioconjug. Chem. 7, 592–599 (1996). Bodratti, A.M. & Alexandridis, P. Formulation of poloxamers for drug delivery. J. Funct. Biomater. 9 (2018). Heredia, K. L. & Maynard, H. D. Synthesis of protein-polymer conjugates. Org. Biomol. Chem. 5, 45–53 (2007). Adams, M. L., Lavasanifar, A. & Kwon, G. S. Amphiphilic block copolymers for drug delivery. J. Pharm. Sci. 92, 1343–1355 (2003). Hoshyar, N., Gray, S., Han, H. & Bao, G. The effect of nanoparticle size on in vivo pharmacokinetics and cellular interaction. Nanomedicine (Lond) 11, 673–692 (2016). Ferber, C., Mao, C. S. & Yee, J. K. Type 1 diabetes in youth and technology-based advances in management. Adv. Pediatr. 67, 73–91 (2020). Subramanian, S. & Hirsch, I. B. Intensive diabetes treatment and cardiovascular outcomes in type 1 diabetes mellitus: Implications of the diabetes control and complications trial/epidemiology of diabetes interventions and complications study 30-year follow-up. Endocrinol. Metab. Clin. N. Am 47, 65–79 (2018). Cryer, P. E. Hypoglycemia in type 1 diabetes mellitus. Endocrinol. Metab. Clin. N. Am. 39, 641–654 (2010). Ikemoto, T. et al. Islet cell transplantation for the treatment of type 1 diabetes in the USA. J. Hepatobiliary Pancreat. Surg. 16, 118–123 (2009). Kin, T., Shapiro, J., Ryan, E. A. & Lakey, J. R. Islet isolation and transplantation from an annular pancreas: A case report. JOP 6, 274–276 (2005). Matsumoto, S. Islet cell transplantation for Type 1 diabetes. J. Diabetes 2, 16–22 (2010). Gill, R. G. Pancreatic islet xenotransplantation. Autoimmunity 15(Suppl), 18–20 (1993). Marigliano, M., Bertera, S., Grupillo, M., Trucco, M. & Bottino, R. Pig-to-nonhuman primates pancreatic islet xenotransplantation: An overview. Curr. Diab. Rep. 11, 402–412 (2011). Pellegrini, S., Piemonti, L. & Sordi, V. Pluripotent stem cell replacement approaches to treat type 1 diabetes. Curr. Opin. Pharmacol. 43, 20–26 (2018). Memon, B. & Abdelalim, E.M. Stem cell therapy for diabetes: Beta cells versus pancreatic progenitors. Cells 9 (2020). Wallner, K. et al. Stem cells and beta cell replacement therapy: A prospective health technology assessment study. BMC Endocr Disord 18, 6 (2018). Zhong, F. & Jiang, Y. Endogenous pancreatic beta cell regeneration: A potential strategy for the recovery of beta cell deficiency in diabetes. Front. Endocrinol. (Lausanne) 10, 101 (2019). Wang, P. et al. Combined Inhibition of DYRK1A, SMAD, and Trithorax Pathways Synergizes to Induce Robust Replication in Adult Human Beta Cells. Cell Metab. 29, 638–652 e635 (2019). Czajkowsky, D. M. & Shao, Z. The human IgM pentamer is a mushroom-shaped molecule with a flexural bias. Proc. Natl. Acad. Sci. USA 106, 14960–14965 (2009). Chen, Y., Cai, J., Xu, Q. & Chen, Z. W. Atomic force bio-analytics of polymerization and aggregation of phycoerythrin-conjugated immunoglobulin G molecules. Mol. Immunol. 41, 1247–1252 (2004). Bani, D., Brandi, M. L., Axiotis, C. A. 
& Bani-Sacchi, T. Detection of P-glycoprotein on endothelial and endocrine cells of the human pancreatic islets by C 494 monoclonal antibody. Histochemistry 98, 207–209 (1992). Loftsson, T. Essential Pharmacokinetics: A Primer for Pharmaceutical Scientists. (Elsevier Science, 2015). The authors thank Dr. Craig Dorrell (member of the OHSU, Papé Family Pediatric Research Institute/Department of Pediatrics), Dr. Devorah Goldman (former, member of the OHSU, Papé Family Pediatric Research Institute/Department of Pediatrics) for expert technical assistance; Dr. Markus Grompe (member of the OHSU, Papé Family Pediatric Research Institute/Department of Pediatrics) for advice and members of the Streeter and Grompe laboratories for discussions. The authors acknowledge Yong-Ping Zhang (member of the OHSU, Papé Family Pediatric Research Institute and Oregon Stem Cell Center/Department of Pediatrics) and Claire Turina (former member of the OHSU, Papé Family Pediatric Research Institute and Oregon Stem Cell Center/Department of Pediatrics) for their support in antibody generation and screening. The authors also thank Oregon Health & Science University shared core facilities for their services: Multiscale Microscopy Core (Dr. Claudia Lopez), Advanced Light Microscopy Core (Aurelie Snyder, Dr. Stefanie Kaech Petrie), NMR Core (Dr. Tapasree Banerji), Small Animal Imaging Core (Aris Xie). This work was funded by Juvenile Diabetes Research Foundation (Grant#17-2013-327​), Collins Medical Trust, National Institute of Diabetes and Digestive and Kidney Diseases/Beta Cell Biology Consortium (Grant#UO1 DK089565)​. Brenden-Colson Center for Pancreatic Care, Oregon Health and Science University, Portland, OR, USA Swati Mishra Department of Pediatrics, Papé Family Pediatric Research Institute, Oregon Stem Cell Center, Oregon Health and Science University, Portland, OR, USA Swati Mishra & Philip R. Streeter Philip R. Streeter Designed the study: P.R.S. and S.M. Performed experiments: S.M. Data Analysis: S.M. and P.R.S. Wrote the paper: S.M. and P.R.S. Supervised and coordinated all aspects of the study: P.R.S. All authors edited and approved the manuscript. Correspondence to Swati Mishra. Mishra, S., Streeter, P.R. Targeted delivery of harmine to xenografted human pancreatic islets promotes robust cell proliferation. Sci Rep 12, 19127 (2022). https://doi.org/10.1038/s41598-022-19453-5
Restrictions on commutativity ratios in finite groups Document Type : Research Paper Robert Heffernan 1 Des MacHale 2 Aine Ni She 3 1 University of Connecticut 2 University College Cork 3 Cork Institute of Technology We consider two commutativity ratios $\Pr(G)$ and $f(G)$ in a finite group $G$ and examine the properties of $G$ when these ratios are 'large'. We show that if $\Pr(G) > \frac{7}{24}$, then $G$ is metabelian and we give threshold results in the cases where $G$ is insoluble and $G'$ is nilpotent. We also show that if $f(G) > \frac{1}{2}$, then $f(G) = \frac{n+1}{2n}$, for some natural number $n$. commutativity ratios commuting probability Finite groups Main Subjects 20D99 None of the above, but in this section 20D Group theory and generalizations: Abstract finite groups 20E45 Conjugacy classes
Volume 3, Issue 4 - Serial Number 4 Heffernan, R., MacHale, D., Ni She, A. (2014). Restrictions on commutativity ratios in finite groups. International Journal of Group Theory, 3(4), 1-12. doi: 10.22108/ijgt.2014.4570
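As a concrete illustration of the commuting probability $\Pr(G)$ defined in the abstract above, here is a minimal sketch in Python that computes $\Pr(G)$ by brute force for small groups. The use of SymPy's permutation groups is an assumption of convenience, unrelated to the paper's methods; for the symmetric group $S_3$ the value is $1/2$, which lies above the $7/24$ threshold mentioned in the abstract (and $S_3$ is indeed metabelian).

```python
# Brute-force computation of the commuting probability Pr(G):
# the fraction of ordered pairs (a, b) in G x G with ab = ba.
# Illustrative only; uses SymPy's permutation groups for convenience.
from fractions import Fraction
from sympy.combinatorics.named_groups import SymmetricGroup, DihedralGroup

def commuting_probability(G):
    elements = list(G.elements)          # all group elements
    n = len(elements)
    commuting = sum(1 for a in elements for b in elements if a * b == b * a)
    return Fraction(commuting, n * n)

print(commuting_probability(SymmetricGroup(3)))  # 1/2  (> 7/24, and S3 is metabelian)
print(commuting_probability(DihedralGroup(4)))   # 5/8  (dihedral group of order 8)
```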
GridPP networking infrastructure (DRI) - QMUL Grant Lead Research Organisation: Queen Mary, University of London Department Name: Physics Equipment only, agreed in relation to the network infrastructure for robust and resilient computing and storage services for the LHC Grant (DRI bid). Planned Impact Dec 11 - Mar 12 ST/J005460/1 Stephen Lloyd Particle physics - experiment (50%) Particle physics - theory (50%) Beyond the Standard Model (50%) The Standard Model (50%) Queen Mary, University of London, United Kingdom (Lead Research Organisation) Stephen Lloyd (Principal Investigator) http://orcid.org/0000-0002-5073-2264 The ATLAS Collaboration (2013) Search for a light charged Higgs boson in the decay channel [Formula: see text] in [Formula: see text] events using pp collisions at [Formula: see text] with the ATLAS detector. in The European physical journal. C, Particles and fields The ATLAS Collaboration (2012) Search for heavy neutrinos and right-handed W bosons in events with two leptons and jets in pp collisions at [Formula: see text] with the ATLAS detector. in The European physical journal. C, Particles and fields The ATLAS Collaboration (2012) Search for doubly charged Higgs bosons in like-sign dilepton final states at $\sqrt{s} = 7\ \mathrm{TeV}$ with the ATLAS detector in The European Physical Journal C The ATLAS Collaboration (2012) ATLAS search for a heavy gauge boson decaying to a charged lepton and a neutrino in pp collisions at $\sqrt{s} = 7\ \mathrm{TeV}$ in The European Physical Journal C Mehlhase S (2013) Searches for heavy long-lived sleptons and R-hadrons with the ATLAS detector in EPJ Web of Conferences Collaboration T (2014) Operation and performance of the ATLAS semiconductor tracker in Journal of Instrumentation Collaboration T (2014) Monitoring and data quality assessment of the ATLAS liquid argon calorimeter in Journal of Instrumentation Collaboration T (2014) A neural network clustering algorithm for the ATLAS silicon pixel detector in Journal of Instrumentation Britton D (2014) How to deal with petabytes of data: the LHC Grid project. in Reports on progress in physics. Physical Society (Great Britain) ATLAS Collaboration (2014) Muon reconstruction efficiency and momentum resolution of the ATLAS experiment in proton-proton collisions at [Formula: see text] TeV in 2010. in The European physical journal. C, Particles and fields ATLAS Collaboration (2012) Search for lepton flavour violation in the eµ continuum with the ATLAS detector in [Formula: see text]pp collisions at the LHC. in The European physical journal. C, Particles and fields ATLAS Collaboration (2014) Measurement of the centrality and pseudorapidity dependence of the integrated elliptic flow in lead-lead collisions at [Formula: see text] TeV with the ATLAS detector. in The European physical journal. C, Particles and fields ATLAS Collaboration (2013) Improved luminosity determination in pp collisions at [Formula: see text] using the ATLAS detector at the LHC. in The European physical journal. C, Particles and fields ATLAS Collaboration (2013) Jet energy resolution in proton-proton collisions at [Formula: see text] recorded in 2010 with the ATLAS detector. in The European physical journal. C, Particles and fields ATLAS Collaboration (2013) Measurement of the inclusive jet cross-section in pp collisions at [Formula: see text] and comparison to the inclusive jet cross-section at [Formula: see text] using the ATLAS detector.
in The European physical journal. C, Particles and fields ATLAS Collaboration (2013) Multi-channel search for squarks and gluinos in [Formula: see text]pp collisions with the ATLAS detector at the LHC. in The European physical journal. C, Particles and fields ATLAS Collaboration (2012) Measurement of the charge asymmetry in top quark pair production in pp collisions at [Formula: see text] using the ATLAS detector. in The European physical journal. C, Particles and fields ATLAS Collaboration (2013) Measurement of the [Formula: see text] production cross section in the tau + jets channel using the ATLAS detector. in The European physical journal. C, Particles and fields ATLAS Collaboration (2012) Measurement of [Formula: see text] production with a veto on additional central jet activity in pp collisions at [Formula: see text] TeV using the ATLAS detector. in The European physical journal. C, Particles and fields ATLAS Collaboration (2013) Measurement of kT splitting scales in W→ℓν events at [Formula: see text] with the ATLAS detector. in The European physical journal. C, Particles and fields ATLAS Collaboration (2012) A search for [Formula: see text] resonances with the ATLAS detector in 2.05 fb-1 of proton-proton collisions at [Formula: see text]. in The European physical journal. C, Particles and fields ATLAS Collaboration (2014) Electron reconstruction and identification efficiency measurements with the ATLAS detector using the 2011 LHC proton-proton collision data. in The European physical journal. C, Particles and fields ATLAS Collaboration (2014) Search for direct top squark pair production in events with a [Formula: see text] boson, [Formula: see text]-jets and missing transverse momentum in [Formula: see text] TeV [Formula: see text] collisions with the ATLAS detector. in The European physical journal. C, Particles and fields ATLAS Collaboration (2012) A particle consistent with the Higgs boson observed with the ATLAS detector at the Large Hadron Collider. in Science (New York, N.Y.) ATLAS Collaboration (2012) Measurement of the top quark mass with the template method in the [Formula: see text] channel using ATLAS data. in The European physical journal. C, Particles and fields ATLAS Collaboration (2013) Measurement of jet shapes in top-quark pair events at [Formula: see text] using the ATLAS detector. in The European physical journal. C, Particles and fields ATLAS Collaboration (2012) Measurement of τ polarization in W→τν decays with the ATLAS detector in pp collisions at [Formula: see text]. in The European physical journal. C, Particles and fields ATLAS Collaboration (2014) The differential production cross section of the [Formula: see text](1020) meson in [Formula: see text] = 7 TeV [Formula: see text] collisions measured with the ATLAS detector. in The European physical journal. C, Particles and fields Aad G (2012) Search for direct top squark pair production in final states with one isolated lepton, jets, and missing transverse momentum in sqrt[s] = 7 TeV pp collisions using 4.7 fb(-1) of ATLAS data. in Physical review letters Aad G (2013) Observation of associated near-side and away-side long-range correlations in sqrt[s(NN)]=5.02 TeV proton-lead collisions with the ATLAS detector. in Physical review letters Aad G (2014) Evidence for electroweak production of W±W±jj in pp collisions at sqrt[s] = 8 TeV with the ATLAS detector.
in Physical review letters Aad G (2013) Search for new phenomena in events with three charged leptons at √s = 7 TeV with the ATLAS detector in Physical Review D Aad G (2013) Search for pair production of heavy top-like quarks decaying to a high-[…] in Physics Letters B Aad G (2013) Measurement of Z boson production in Pb-Pb collisions at sqrt[s(NN)]=2.76 TeV with the ATLAS detector. in Physical review letters Aad G (2013) Search for single […] in Physics Letters B Aad G (2012) Search for magnetic monopoles in sqrt[s]=7 TeV pp collisions with the ATLAS detector. in Physical review letters Aad G (2012) Search for the decay […] in Physics Letters B Aad G (2013) Measurement of the flavour composition of dijet events in pp collisions at $\sqrt{s}=7\ \mbox{TeV}$ with the ATLAS detector in The European Physical Journal C Aad G (2014) Light-quark and gluon jet discrimination in [Formula: see text] collisions at [Formula: see text] with the ATLAS detector. in The European physical journal. C, Particles and fields Aad G (2012) Measurement of the production cross section of an isolated photon associated with jets in proton-proton collisions at √s = 7 TeV with the ATLAS detector in Physical Review D Aad G (2014) Search for new phenomena in photon+jet events collected in proton-proton collisions at […] in Physics Letters B Aad G (2012) Measurement of Wγ and Zγ
production cross sections in pp collisions at […] in Physics Letters B Aad G (2012) Underlying event characteristics and their dependence on jet size of charged-particle jet events in pp collisions at √s = 7 TeV with the ATLAS detector in Physical Review D Aad G (2013) Search for charged Higgs bosons through the violation of lepton universality in $ t\overline{t} $ events using pp collision data at $ \sqrt{s}=7 $ TeV with the ATLAS experiment in Journal of High Energy Physics Aad G (2014) Measurement of the Z/γ* boson transverse momentum distribution in pp collisions at $\sqrt{s}=7$ TeV with the ATLAS detector in Journal of High Energy Physics Aad G (2014) Electron and photon energy calibration with the ATLAS detector using LHC Run 1 data in The European Physical Journal C Impact Summary Description We have built a Grid to analyse data from the LHC at CERN and elsewhere. This enabled the discovery of the Higgs boson, the fundamental scalar boson associated with the mechanism that is predicted to give mass to the other elementary particles. Exploitation Route Other disciplines can use our facilities or work with us to develop their own. Sectors Digital/Communication/Information Technologies (including Software) URL http://www.gridpp.ac.uk Description Other disciplines have used our Grid for their own purposes. First Year Of Impact 2008 Sector Digital/Communication/Information Technologies (including Software), Education, Pharmaceuticals and Medical Biotechnology Impact Types Societal
DOI:10.1016/0375-9601(92)90077-Y Blockspin cluster algorithms for quantum spin systems @article{Wiese1992BlockspinCA, title={Blockspin cluster algorithms for quantum spin systems}, author={U. Wiese and Heping Ying}, journal={Physics Letters A}, year={1992}} U. Wiese, H. Ying Physics Letters A Abstract Cluster algorithms are developed for simulating quantum spin systems like the one- and two-dimensional Heisenberg ferro- and anti-ferromagnets. The corresponding two- and three-dimensional classical spin models with four-spin couplings are mapped to blockspin models with two-blockspin interactions. Clusters of blockspins are updated collectively. The efficiency of the method is investigated in detail for one-dimensional spin chains. Then in most cases the new algorithms solve the… Blockspin scheme and cluster algorithm for quantum spin systems Abstract We present a numerical study using a cluster algorithm for the 1-d S = 1/2 quantum Heisenberg models. The dynamical critical exponent for anti-ferromagnetic chains is z = 0.0(1) such that… The Efficiency of a Cluster Algorithm for the Quantum Heisenberg Model on a Kagome Lattice L. Marti Experiments with electron- and hole-doped antiferromagnetic materials show interesting properties at low temperatures, such as high temperature superconductivity (high Tc) and quantum… Loop-cluster algorithm: an application for the 2D quantum Heisenberg antiferromagnet H. Ying, U. Wiese, D. Ji Abstract A new type of cluster algorithm that strongly reduces the critical slowing down and frustration effects is developed to simulate the spin one-half quantum Heisenberg antiferromagnet. The… Cluster Monte Carlo Method for Quantum Systems N. Kawashima The general framework of the cluster Monte Carlo algorithms for quantum systems is reviewed and a new algorithm is applied to the one-dimensional t-J model and has proved to reduce the autocorrelation time by a few orders of magnitude. Quantum Monte Carlo Methods F. Assaad, M. Troyer We present a review of quantum Monte Carlo algorithms for the simulation of quantum magnets. A general introduction to Monte Carlo sampling is followed by an overview of local updates, cluster… Improvements in cluster algorithms for quantum spin systems B. B. Beard Loop cluster algorithms provide an efficient implementation of the Monte Carlo technique for evaluating path integrals. The work described here represents improvement in the state of the art in… Meron-cluster solution of fermion and other sign problems J. Cox, C. Gattringer, K. Holland, B. Scarlet, U. Wiese Abstract Numerical simulations of numerous quantum systems suffer from the notorious sign problem. Important examples include QCD and other field theories at non-zero chemical potential, at non-zero… AN UPDATING SCHEME FOR THE LOOP-CLUSTER ALGORITHM FOR THE ANISOTROPIC HEISENBERG ANTIFERROMAGNET H. Ying, F. Chen Abstract A loop-cluster algorithm is proposed to overcome the critical slowing down in simulations of the anisotropic Heisenberg antiferromagnet, the XYZ model. The primary features of the algorithm… Monte Carlo Simulations of the Quantum X-Y Model by a Loop-Cluster Algorithm Ji Da-ren, Ying He-ping The quantum X-Y model of interacting spins on square lattices is simulated by a loop-cluster algorithm. It is shown that the method can be used to simulate the systems efficiently at low temperatures… Generalization of the Fortuin-Kasteleyn transformation and its application to quantum spin simulations N. Kawashima, J.
Gubernatis We generalize the Fortuin-Kasteleyn (FK) cluster representation of the partition function of the Ising model to represent the partition function of quantum spin models with an arbitrary spin… Monte Carlo Simulation of Quantum Spin Systems. I Masuo Suzuki, S. Miyashita, Akira Kuroda A general explicit formulation of Monte Carlo simulation for quantum systems is given in this paper on the basis of the previous fundamental proposal by Suzuki. This paper also demonstrates… Relationship between d-Dimensional Quantal Spin Systems and (d+1)-Dimensional Ising Systems: Equivalence, Critical Exponents and Systematic Approximants of the Partition Function and Spin Correlations Masuo Suzuki The partition function of a quantal spin system is expressed by that of the Ising model, on the basis of the generalized Trotter formula. Thereby the ground state of the d-dimensional Ising model… Two-dimensional spin-1/2 Heisenberg antiferromagnet: A quantum Monte Carlo study. Makivic, Ding A fast and efficient multispin coding algorithm on a parallel supercomputer, based on the Suzuki-Trotter transformation, is developed, which shows that the correlation length and staggered susceptibility are quantitatively well described by the renormalized classical picture at the two-loop level of approximation. Collective Monte Carlo updating for spin systems. A Monte Carlo algorithm is presented that updates large clusters of spins simultaneously in systems at and near criticality. We demonstrate its efficiency in the two-dimensional $\mathrm{O}(n)$… THE 2D HEISENBERG ANTIFERROMAGNET IN HIGH-Tc SUPERCONDUCTIVITY: A Review of Numerical Techniques and Results T. Barnes In this article we review numerical studies of the quantum Heisenberg antiferromagnet on a square lattice, which is a model of the magnetic properties of the undoped "precursor insulators" of the… Monte Carlo studies of one-dimensional quantum Heisenberg and XY models J. Cullen, D. Landau The Suzuki-Trotter transformation has been used to transform $N$-spin $S=\frac{1}{2}$ Heisenberg and $\mathrm{XY}$ chains into two-dimensional ($N\times 2m$) classical… Nonuniversal critical dynamics in Monte Carlo simulations. Swendsen, Wang A new approach to Monte Carlo simulations is presented, giving a highly efficient method of simulation for large systems near criticality. The algorithm violates dynamic universality at second-order…
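The papers listed above revolve around collective cluster updates (Swendsen–Wang and its loop-cluster generalizations to quantum spin systems). For orientation only, here is a minimal Python sketch of the classical Swendsen–Wang update for the 2D Ising model; it is an illustrative reconstruction of the general idea, not code from any of these papers, and the quantum loop-cluster algorithms add further machinery (Suzuki–Trotter decomposition, loop-building rules).

```python
# Minimal Swendsen-Wang cluster update for the 2D ferromagnetic Ising model (J = 1).
# Illustrates the "update large clusters of spins simultaneously" idea only.
import numpy as np

def swendsen_wang_sweep(spins, beta, rng):
    """One Swendsen-Wang sweep on an L x L periodic lattice of +/-1 spins."""
    L = spins.shape[0]
    p_bond = 1.0 - np.exp(-2.0 * beta)       # bond activation probability

    parent = np.arange(L * L)                 # union-find over lattice sites
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]     # path halving
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Activate bonds between aligned nearest neighbours with probability p_bond,
    # merging the two sites into one cluster (right and down bonds, periodic).
    for i in range(L):
        for j in range(L):
            s = i * L + j
            for ni, nj in ((i, (j + 1) % L), ((i + 1) % L, j)):
                if spins[i, j] == spins[ni, nj] and rng.random() < p_bond:
                    union(s, ni * L + nj)

    # Flip every cluster independently with probability 1/2.
    roots = np.array([find(s) for s in range(L * L)])
    flip_root = rng.random(L * L) < 0.5       # one coin per potential cluster root
    flat = spins.reshape(-1)
    flat[flip_root[roots]] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice(np.array([-1, 1]), size=(16, 16))
for _ in range(200):
    swendsen_wang_sweep(spins, beta=0.44, rng=rng)  # beta near the 2D Ising critical point
print("magnetization per site:", spins.mean())
```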
Sandu Popescu's research while affiliated with University of Bristol and other places A Dynamical Quantum Cheshire Cat Effect and Implications for Counterfactual Communication Yakir Aharonov Eliahu Cohen Sandu Popescu Here we report a type of dynamic effect that is at the core of the so called "counterfactual computation" and especially "counterfactual communication" quantum effects that have generated a lot of interest recently. The basic feature of these counterfactual setups is the fact that particles seem to be affected by actions that take place in location... Reference Frames Which Separately Store Noncommuting Conserved Quantities Ana Belen Sainz Anthony J. Short Andreas Winter Even in the presence of conservation laws, one can perform arbitrary transformations on a system if given access to a suitable reference frame, since conserved quantities may be exchanged between the system and the frame. Here we explore whether these quantities can be separated into different parts of the reference frame, with each part acting as... Steering in no-signalling theories Emmanuel Zambrini Cruzeiro Nicolas Gisin Steering is usually described as a quantum phenomenon. In this article, we show that steering is not restricted to quantum theory, it is also present in more general, no-signalling theories. We present two main results: first, we show that quantum steering involves a collection of different aspects, which need to be separated when considering steer... Constraints on nonlocality in networks from no-signaling and independence Jean-Daniel Bancal Yu Cai Nicolas Brunner The possibility of Bell inequality violations in quantum theory had a profound impact on our understanding of the correlations that can be shared by distant parties. Generalizing the concept of Bell nonlocality to networks leads to novel forms of correlations, the characterization of which is, however, challenging. Here, we investigate constraints... Reference frames which separately store non-commuting conserved quantities Even in the presence of conservation laws, one can perform arbitrary transformations on a system given access to an appropriate reference frame. During this process, conserved quantities will generally be exchanged between the system and the reference frame. Here we explore whether these quantities can be separated into different parts of the refer... Generalising the concept of Bell nonlocality to networks leads to novel forms of correlations, the characterization of which is however challenging. Here we investigate constraints on correlations in networks under the two natural assumptions of no-signaling and independence of the sources. We consider the "triangle network", and derive strong co...
Quantum Reference Frames and Their Applications to Thermodynamics We construct a quantum reference frame, which can be used to approximately implement arbitrary unitary transformations on a system in the presence of any number of extensive conserved quantities, by absorbing any back action provided by the conservation laws. Thus, the reference frame at the same time acts as a battery for the conserved quantities.... Connecting processes with indefinite causal order and multi-time quantum states Ralph Silva Yelena Guryanova Anthony J Short Recently, the possible existence of quantum processes with indefinite causal order has been extensively discussed, in particular using the formalism of process matrices. Here we give a new perspective on this question, by establishing a direct connection to the theory of multi-time quantum states. Specifically, we show that process matrices are equ... Exploring the limits of no backward in time signalling We present an operational and model-independent framework to investigate the concept of no-backwards-in-time signalling. We define no-backwards-in-time signalling conditions, closely related to spatial no-signalling conditions. These allow for theoretical possibilities in which the future affects the past, nevertheless without signalling backwards... On Conservation Laws in Quantum Mechanics Daniel Rohrlich Significance Conservation laws are one of the most important aspects of nature. As such, they have been intensively studied and extensively applied, and are considered to be perfectly well established. We, however, raise a fundamental question about the very meaning of conservation laws in quantum mechanics. We argue that, although the standard way i... Paul Skrzypczyk Supplementary Figures 1-3, Supplementary Notes 1-4 and Supplementary References. Reply to Svensson: Quantum Violations of the Pigeonhole Principle Fabrizio Colombo Jeff Tollaksen "In our paper in PNAS (1), we describe a quantum violation of the pigeonhole principle. We describe a situation (involving pre- and postselection) in which we put three particles in two boxes and we never find two particles in the same box. We presented both a 'strong measurement' analysis and a 'weak measurement' one. In his comment, Svensson (2)... Thermodynamics of quantum systems with multiple conserved quantities We consider a generalisation of thermodynamics that deals with multiple conserved quantities at the level of individual quantum systems. Each conserved quantity, which, importantly, need not commute with the rest, can be extracted and stored in its own battery. Unlike in standard thermodynamics, where the second law places a constraint on how much... A Current of the Cheshire Cat's Smile: Dynamical Analysis of Weak Values Recently it was demonstrated, both theoretically and experimentally, how to separate a particle from its spin, or any other property, a phenomenon known as the "Quantum Cheshire Cat". We present two novel gedanken experiments, based on the quantum Zeno effect, suggesting a dynamical process through which this curious phenomenon occurs. We analyze,... Multiple Observers Can Share the Nonlocality of Half of an Entangled Pair by Using Optimal Weak Measurements We investigate the trade-off between information gain and disturbance for von Neumann measurements on spin-1/2 particles, and derive the measurement pointer state that saturates this trade-off, which turns out to be highly unusual.
We apply this result to the question of whether the nonlocality of a single particle from an entangled pair can be sha... Optimal Pointers for Weak von Neumann Measurements and Consecutive Violations of Bell's Inequality We investigate the trade-off between information gain and disturbance for a class of weak von Neumann measurements on spin-$\frac{1}{2}$ particles, and derive the unusual measurement pointer state that saturates this trade-off. We then consider the fundamental question of sharing the non-locality of a single particle of an entangled pair among mult... A perspective on possible manifestations of entanglement in biological systems Hans J. Briegel Introduction In this chapter, we will focus on the phenomenon of quantum entanglement. Entanglement is a special property that some states of two or more quantum particles may possess. When particles are in an entangled state they are correlated. But entanglement is far more than mere correlation. Entanglement is a type of correlation that macrosco... The quantum pigeonhole principle and the nature of quantum correlations The pigeonhole principle: "If you put three pigeons in two pigeonholes at least two of the pigeons end up in the same hole" is an obvious yet fundamental principle of Nature as it captures the very essence of counting. Here however we show that in quantum mechanics this is not true! We find instances when three quantum particles are put in two boxe... Work extraction and thermodynamics for individual quantum systems Thermodynamics is traditionally concerned with systems comprised of a large number of particles. Here we present a framework for extending thermodynamics to individual quantum systems, including explicitly a thermal bath and work-storage device (essentially a 'weight' that can be raised or lowered). We prove that the second law of thermodynamics ho... Nonlocality beyond quantum mechanics Nonlocality is the most characteristic feature of quantum mechanics, but recent research seems to suggest the possible existence of nonlocal correlations stronger than those predicted by theory. This raises the question of whether nature is in fact more nonlocal than expected from quantum theory or, alternatively, whether there could be an as yet u... Entanglement enhances cooling in microscopic quantum refrigerators Marcus Huber Noah Linden Small self-contained quantum thermal machines function without external source of work or control but using only incoherent interactions with thermal baths. Here we investigate the role of entanglement in a small self-contained quantum refrigerator. We first show that entanglement is detrimental as far as efficiency is concerned: fridges operating a... Pre- and postselected quantum states: Density matrices, tomography, and Kraus operators We present a general formalism for characterizing 2-time quantum states, describing pre- and postselected quantum systems. The most general 2-time state is characterized by a "density vector" that is independent of measurements performed between the preparation and postselection. We provide a method for performing tomography of an unknown 2-time de... Quantum Cheshire Cats In this paper we present a quantum Cheshire cat. In a pre- and post-selected experiment we find the cat in one place, and the smile in another. The cat is a photon, while the smile is its circular polarisation.
The Classical Limit Of Quantum Optics: Not What It Seems At First Sight Alonso Botero Shmuel Nussinov Lev Vaidman What is light and how to describe it has always been a central subject in physics. As our understanding has increased, so have our theories changed: Geometrical optics, wave optics and quantum optics are increasingly sophisticated descriptions, each referring to a larger class of phenomena than its predecessor. But how exactly are these theories re... Each Instant of Time a New Universe We present an alternative view of quantum evolution in which each moment of time is viewed as a new "universe" and time evolution is given by correlations between them. Oblivious transfer and quantum channels as communication resources Valerio Scarani Jürg Wullschleger We show that from a communication-complexity perspective, the primitive called oblivious transfer—which was introduced in a cryptographic context—can be seen as the classical analogue to a quantum channel in the same sense as non-local boxes are of maximally entangled qubits. More explicitly, one realization of non-cryptographic oblivious transfer... Extracting work from quantum systems We consider the task of extracting work from quantum systems in the resource theory perspective of thermodynamics, where free states are arbitrary thermal states, and allowed operations are energy conserving unitary transformations. Taking as our work storage system a 'weight' we prove the second law and then present simple protocols which extract... Peculiar Features Of Entangled States With Postselection We consider quantum systems in entangled states post-selected in non-entangled states. Such systems exhibit unusual behavior, in particular when weak measurements are performed at intermediate times. A Quantum Delayed-Choice Experiment Alberto Peruzzo Peter J. Shadbolt Jeremy O'Brien Delaying Quantum Choice Photons can display wavelike or particle-like behavior, depending on the experimental technique used to measure them. Understanding this duality lies at the heart of quantum mechanics. In two reports, Peruzzo et al. (p. 634 ) and Kaiser et al. (p. 637 ; see the Perspective on both papers by Lloyd ) perform an entangled versi... A critical view on transport and entanglement in models of photosynthesis Markus Tiersch Hans J Briegel We revisit critically the recent claims, inspired by quantum optics and quantum information, that there is entanglement in the biological pigment-protein complexes, and that it is responsible for the high transport efficiency. While unexpectedly long coherence times were experimentally demonstrated, the existence of entanglement is, at the moment,... Persistent dynamic entanglement from classical motion: How bio-molecular machines can generate nontrivial quantum states Gian Giacomo Guerreschi Jianming Cai Very recently (Cai et al 2010 Phys. Rev. E 82 021921), a simple mechanism was presented by which a molecule subjected to forced oscillations, out of thermal equilibrium, can maintain quantum entanglement between two of its quantum degrees of freedom. Crucially, entanglement can be maintained even in the presence of very intense noise, so intense th... The Complete Quantum Cheshire Cat We show that a physical property can be entirely separated from the object it belongs to, hence realizing a complete quantum Cheshire cat. Our setup makes use of a type of quantum state of particular interest, namely an entangled pre- and post-selected state, in which the pre- and post-selections are entangled with each other. 
Finally we propose a... Open Quantum System Approach to the Modeling of Spin Recombination Reactions Ulrich Steiner S Popescu H J Briegel In theories of spin-dependent radical pair reactions, the time evolution of the radical pair, including the effect of the chemical kinetics, is described by a master equation in the Liouville formalism. For the description of the chemical kinetics, a number of possible reaction operators have been formulated in the literature. In this work, we pres... Purification of Noisy Entanglement and Faithful Teleportation via Noisy Channels Charles H. Bennett Djabeur Mohamed Seifeddine Zekrifa William K. Wootters Two separated observers, by applying local operations to a supply of not-too-impure entangled states ({\em e.g.} singlets shared through a noisy channel), can prepare a smaller number of entangled pairs of arbitrarily high purity ({\em e.g.} near-perfect singlets). These can then be used to faithfully teleport unknown quantum states from one observ... Estimating preselected and postselected ensembles Serge Massar In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear... Can apparent superluminal neutrino speeds be explained as a quantum weak measurement? M V Berry N. Brunner S. Popescu Pragya Shukla Consistent treatments of quantum mechanics Scitation is the online home of leading journals and conference proceedings from AIP Publishing and AIP Member Societies Virtual qubits, virtual temperatures, and the foundations of thermodynamics We argue that thermal machines can be understood from the perspective of `virtual qubits' at `virtual temperatures': The relevant way to view the two heat baths which drive a thermal machine is as a composite system. Virtual qubits are two-level subsystems of this composite, and their virtual temperatures can take on any value, positive or negative... Estimating Pre- and Post-Selected Ensembles In analogy with the usual state estimation problem, we introduce the problem of state estimation for a pre- and post-selected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and post-selected ensembles are the most basic quantum ensembles. Two new features are shown to appear: 1) i... Time-symmetric quantum mechanics questioned and defended Michael Nauenberg Art Hobson Shaul Mukamel Dynamical Features Of Interference Phenomena In the Presence Of Entanglement Tirzah Kaufherr A "strongly" interacting, and entangling, heavy, non recoiling, external particle effects a significant change of the environment. Described locally, the corresponding entanglement event is a generalized electric Aharonov Bohm effect, that differs from the original one in a crucial way. We propose a gedanken interference experiment. The predicted s... Biased nonlocal quantum games Thomas Lawson We address the question of when quantum entanglement is a useful resource for information processing tasks by presenting a new class of nonlocal games that are simple, direct, generalizations of the Clauser Horne Shimony Holt game. For some ranges of the parameters that specify the games, quantum mechanics offers an advantage, while, surprisingly,... 
A Time-Symmetric Formulation Of Quantum Mechanics That quantum mechanics is a probabilistic theory was, by 1964, an old but still troubling story. The fact that identical measurements of identically prepared systems can yield different outcomes seems to challenge a basic tenet of science and philosophy. Frustration with the indeterminacy intrinsic to quantum mechanics was famously expressed in Alb... The smallest possible heat engines We construct the smallest possible self contained heat engines; one composed of only two qubits, the other of only a single qutrit. The engines are self-contained as they do not require external sources of work and/or control. They are able to produce work which is used to continuously lift a weight. Despite the dimension of the engine being small,... The smallest refrigerators can reach maximal efficiency We investigate whether size imposes a fundamental constraint on the efficiency of small thermal machines. We analyse in detail a model of a small self-contained refrigerator consisting of three qubits. We show analytically that this system can reach the Carnot efficiency, thus demonstrating that there exists no complementarity between size and effi... How Small Can Thermal Machines Be? The Smallest Possible Refrigerator We investigate the fundamental dimensional limits to thermodynamic machines. In particular, we show that it is possible to construct self-contained refrigerators (i.e., not requiring external sources of work) consisting of only a small number of qubits and/or qutrits. We present three different models, consisting of two qubits, a qubit and a qutrit... Quantum Phases: 50 years of the Aharonov-Bohm effect and 25 years of the Berry phase PREFACE Mark R Dennis This special issue celebrates the discovery of two of the most important aspects of quantum mechanics: the Aharonov–Bohm effect and the Berry phase. The issue includes work presented at two conferences, 50 Years of the Aharonov–Bohm Effect, 11–14 October 2009, Tel Aviv University, Israel, and the Aharonov–Bohm Effect and Berry Phase 50/25 Anniversa... Dynamic entanglement in oscillating molecules and potential biological implications We demonstrate that entanglement can persistently recur in an oscillating two-spin molecule that is coupled to a hot and noisy environment, in which no static entanglement can survive. The system represents a nonequilibrium quantum system which, driven through the oscillatory motion, is prevented from reaching its (separable) thermal equilibrium st... Physics within a quantum reference frame Renato M. Angelo We investigate the physics of quantum reference frames. Specifically, we study several simple scenarios involving a small number of quantum particles, whereby we promote one of these particles to the role of a quantum observer and ask what is the description of the rest of the system, as seen by this observer? We highlight the interesting aspects o... Motional effects on the efficiency of excitation transfer Ali Asadian Energy transfer plays a vital role in many natural and technological processes. In this work, we study the effects of mechanical motion on the excitation transfer through a chain of interacting molecules with application to biological scenarios of transfer processes. Our investigation demonstrates that, for various types of mechanical oscillations,... 
On the speed of fluctuations around thermodynamic equilibrium We study the speed of fluctuation of a quantum system around its thermodynamic equilibrium state, and show that the speed will be extremely small for almost all times in typical thermodynamic cases. The setting considered here is that of a quantum system couples to a bath, both jointly described as a closed system. This setting, is the same as the... Dynamical quantum non-locality During the 50 years since its discovery, the Aharonov–Bohm effect has had a significant impact on the development of physics. Its arguably deepest implication, however, has been virtually ignored. CALL FOR PAPERS: Special issue on Quantum Phases: 50 Years of the Aharonov-Bohm Effect and 25 Years of the Berry Phase Special issue on Quantum Phases: 50 Years of the Aharonov-Bohm Effect and 25 Years of the Berry Phase This is a call for contributions to a special issue of Journal of Physics A: Mathematical and Theoretical dedicated to the subject of quantum phases and highlighting the impact of the discovery of the Aharonov--Bohm effect and of the Berry phase across physics. Researchers working in the area are invited to submit papers of original research to thi... Intra-molecular refrigeration in enzymes We present a simple mechanism for intra-molecular refrigeration, where parts of a molecule are actively cooled below the environmental temperature. We discuss the potential role and applications of such a mechanism in biology, in particular in enzymatic reactions. Comment: 6 pages, 5 figures Closed sets of nonlocal correlations Jonathan Allcock Tamas Vertesi We present a fundamental concept—closed sets of correlations—for studying nonlocal correlations. We argue that sets of correlations corresponding to information-theoretic principles, or more generally to consistent physical theories, must be closed under a natural set of operations. Hence, studying the closure of sets of correlations gives insight... A Novel Phase Shift Acquired due to Virtual Forces T. Kaufherr S. Nussinov This paper has been withdrawn by the author. Quantum mechanical evolution towards thermal equilibrium The circumstances under which a system reaches thermal equilibrium, and how to derive this from basic dynamical laws, has been a major question from the very beginning of thermodynamics and statistical mechanics. Despite considerable progress, it remains an open problem. Motivated by this issue, we address the more general question of equilibration... Emergence of Quantum Correlations from Nonlocality Swapping By studying generalized nonsignaling theories, the hope is to find out what makes quantum mechanics so special. In the present Letter, we revisit the paradigmatic model of nonsignaling boxes and introduce the concept of a genuine box. This will allow us to present the first generalized nonsignaling model featuring quantumlike dynamics. In particula... Testing a Bell inequality in multipair scenarios Cyril Branciard To date, most efforts to demonstrate quantum nonlocality have concentrated on systems of two (or very few) particles. It is, however, difficult in many experiments to address individual particles, making it hard to highlight the presence of nonlocality. We show how a natural setup with no access to individual particles allows one to violate the Cla... Quantum Measurements and Non-locality We discuss the role of non-locality in the problem of determining the state of a quantum system, one of the most basic problems in quantum mechanics. 
Entanglement and Intra-molecular Cooling in Biological Systems? A Quantum Thermodynamic Perspective We discuss the possibility of existence of entanglement in biological systems. Our arguments centre on the fact that biological systems are thermodynamic open driven systems far from equilibrium. In such systems error correction can occur which may maintain entanglement despite high levels of de-coherence. We also discuss the possibility of cooling... Simulation of partial entanglement with nonsignaling resources With the goal of gaining a deeper understanding of quantum non-locality, we decompose quantum correlations into more elementary non-local correlations. We show that the correlations of all pure entangled states of two qubits can be simulated without communication, hence using only non-signaling resources. Our simulation model works in two steps. Fi... Knill-Laflamme-Milburn Linear Optics Quantum Computation as a Measurement-Based Computation We show that the Knill Lafllame Milburn method of quantum computation with linear optics gates can be interpreted as a one-way, measurement-based quantum computation of the type introduced by Briegel and Raussendorf. We also show that the permanent state of n n-dimensional systems is a universal state for quantum computation. Multiple-Time States and Multiple-Time Measurements In Quantum Mechanics We discuss experimental situations that consist of multiple preparation and measurement stages. This leads us to a new approach to quantum mechanics. In particular, we introduce the idea of multi-time quantum states which are the appropriate tools for describing these experimental situations. We also describe multi-time measurements and discuss the... Quantum Nonlocality and Beyond: Limits from Nonlocal Computation We address the problem of "nonlocal computation," in which separated parties must compute a function without any individual learning anything about the inputs. Surprisingly, entanglement provides no benefit over local classical strategies for such tasks, yet stronger nonlocal correlations allow perfect success. This provides intriguing insights int... Knill-Laflamme-Milburn Quantum Computation with Bosonic Atoms A Knill-Laflamme-Milburn (KLM) type quantum computation with bosonic neutral atoms or bosonic ions is suggested. Crucially, as opposite to other quantum computation schemes involving atoms (ions), no controlled interactions between atoms (ions) involving their internal levels are required. Versus photonic KLM computation, this scheme has the advant... Sequential weak measurement Graeme Mitchison Richard Jozsa The notion of weak measurement provides a formalism for extracting information from a quantum system in the limit of vanishing disturbance to its state. Here we extend this formalism to the measurement of sequences of observables. When these observables do not commute, we may obtain information about joint properties of a quantum system that would... Reducing Polarization Mode Dispersion With Controlled Polarization Rotations One of the fundamental limitations to high bit rate, long distance, telecommunication in optical fibers is Polarization Mode Dispersion (PMD). Here we introduce a conceptually new method to reduce PMD in optical fibers by carrying out controlled rotations of polarization at predetermined locations along the fiber. The distance between these control... 
Negative Kinetic Energy Between Past and Future State Vectors An analysis of errors in measurement yields new insight into classically forbidden quantum processes. In addition to "physical" values, a realistic measurement can yield "unphysical" values; we show that in {\it sequences} of measurements, the "unphysical" values can form a consistent pattern. An experiment to isolate a particle in a classically fo... Interplay of Aharonov‐Bohm and Berry Phases for a Quantum Cloud of Charge SIDNEY COLEMAN No quantum advantage for nonlocal computation We investigate the problem of "nonlocal" computation, in which separated parties must compute a function with nonlocally encoded inputs and output, such that each party individually learns nothing, yet together they compute the correct function output. We show that the best that can be done classically is a trivial linear approximation. Surprisingl... Entanglement and the foundations of statistical mechanics Statistical mechanics is one of the most successful areas of physics. Yet, almost 150 years since its inception, its foundations and basic postulates are still the subject of debate. Here we suggest that the main postulate of statistical mechanics, the equal a priori probability postulate, should be abandoned as misleading and unnecessary. We argue... Entanglement of Superpositions John A Smolin Given a bipartite quantum state (in arbitrary dimension) and a decomposition of it as a superposition of two others, we find bounds on the entanglement of the superposition state in terms of the entanglement of the states being superposed. In the case that the two states being superposed are biorthogonal, the answer is simple, and, for example, the... The Physics of No-Bit-Commitment: Generalized Quantum Non-Locality Versus Oblivious Transfer We show here that the recent work of Wolf and Wullschleger (quant-ph/0502030) on oblivious transfer apparently opens the possibility that non-local correlations which are stronger than those in quantum mechanics could be used for bit-commitment. This is surprising, because it is the very existence of non-local correlations which in quantum mechanic... Entanglement swapping for generalized nonlocal correlations AJ Short We consider an analog of entanglement-swapping for a set of black boxes with the most general nonlocal correlations consistent with relativity (including correlations which are stronger than any attainable in quantum theory). In an attempt to incorporate this phenomenon, we consider expanding the space of objects to include not only correlated boxe... The foundations of statistical mechanics from entanglement: Individual states vs. averages We consider an alternative approach to the foundations of statistical mechanics, in which subjective randomness, ensemble-averaging or time-averaging are not required. Instead, the universe (i.e. the system together with a sufficiently large environment) is in a quantum pure state subject to a global constraint, and thermalisation results from enta... Quantum information processing and communication P. Zoller Th. Beth Daniele Binosi A. Zeilinger We present an excerpt of the document "Quantum Information Processing and Communication: Strategic report on current status, visions and goals for research in Europe", which has been recently published in electronic form at the website of FET (the Future and Emerging Technologies Unit of the Directorate General Information Society of the Europ... 
Lower Bound on the Number of Toffoli Gates in a Classical Reversible Circuit through Quantum Information Concepts Berry Groisman The question of finding a lower bound on the number of Toffoli gates in a classical reversible circuit is addressed. A method based on quantum information concepts is proposed. The method involves solely concepts from quantum information--there is no need for an actual physical quantum computer. The method is illustrated in the example of classical... Entanglement concentration of three-partite states We investigate the concentration of multi-party entanglement by focusing on simple family of three-partite pure states, superpositions of Greenberger-Horne-Zeilinger states and singlets. Despite the simplicity of the states, we show that they cannot be reversibly concentrated by the standard entanglement concentration procedure, to which they seem... Quantum gloves: Quantum states that encode as much as possible chirality and nothing else D Collins Lajos Diósi Communicating a physical quantity cannot be done using information only\char22{}i.e., using abstract cbits and∕or qubits. Rather one needs appropriate physical realizations of cbits and∕or qubits. We illustrate this by considering the problem of communicating chirality. We discuss in detail the physical resources this necessitates and introduce the... Simulating Maximal Quantum Entanglement without Communication N. J. Cerf It is known that all causal correlations between two parties which output each 1 bit, a and b, when receiving each 1 bit, x and y, can be expressed as convex combinations of local correlations (i.e., correlations that can be simulated with local random variables) and nonlocal correlations of the form a+b=xy mod 2. We show that a single instance of... Nonlocal correlations as an information-theoretic resource Jonathan Barrett It is well known that measurements performed on spatially separated entangled quantum systems can give rise to correlations that are nonlocal, in the sense that a Bell inequality is violated. They cannot, however, be used for superluminal signaling. It is also known that it is possible to write down sets of "superquantum" correlations that are more... Optimal extraction of information from finite quantum ensembles S. Massar Given only a finite ensemble of identically prepared particles, how precisely can one determine their states? We describe optimal measurement procedures in the case of spin-1/2 particles. Furthermore, we prove that optimal measurement procedures must necessarily view the ensemble as a single composite system rather than as the sum of its components... Measurement of the total energy of an isolated system by an internal observer We consider the situation in which an observer internal to an isolated system wants to measure the total energy of the isolated system (this includes his own energy, that of the measuring device and clocks used, etc...). We show that he can do this in an arbitrarily short time, as measured by his own clock. This measurement is not subjected to a ti... Conditions for the confirmation of three-particle nonlocality Peter Mitchell The notion of genuine three-particle nonlocality introduced by Svetlichny Phys. Rev. D 35 10, 3066 (1987)] is discussed. Svetlichny's inequality, which can distinguish between genuine three-particle and three-particle nonlocality that is based on underlying two-particle nonlocality, is analyzed by reinterpreting it as a frustrated network of correl... 
Quantum, classical, and total amount of correlations in a quantum state We give an operational definition of the quantum, classical and total amount of correlations in a bipartite quantum state. We argue that these quantities can be defined via the amount of work (noise) that is required to erase (destroy) the correlations: for the total correlation, we have to erase completely, for the quantum correlation one has to e... Error Filtration and Entanglement Purification for Quantum Communication N. Linden The key realisation which lead to the emergence of the new field of quantum information processing is that quantum mechanics, the theory that describes microscopic particles, allows the processing of information in fundamentally new ways. But just as in classical information processing, errors occur in quantum information processing, and these have... Proposal for Energy-Time Entanglement of Quasiparticles in a Solid-State Device We present a proposal for the experimental observation of energy-time entanglement of quasiparticles in mesoscopic physics. This type of entanglement arises whenever correlated particles are produced at the same time and this time is uncertain in the sense of quantum uncertainty, as has been largely used in photonics. We discuss its feasibility for... Frames of Reference and the Intrinsic Directional Information of a Particle With Spin Daniel Collins "Information is physical", and here we consider the physical directional information of a particle with spin. We ask whether, in the presence of a classical frame of reference, such a particle contains any intrinsic directional information, ie. information above that which can be transmitted by a classical bit. We show that when sending a large num... Classical Analog to Topological Nonlocal Quantum Interference Effects Benni Reznik Ady Stern The two main features of the Aharonov-Bohm effect are the topological dependence of accumulated phase on the winding number around the magnetic fluxon, and nonlocality-local observations at any intermediate point along the trajectories are not affected by the fluxon. The latter property is usually regarded as exclusive to quantum mechanics. Here we... Energy-time entanglement of quasi-particles in solid-state devices Entanglement cost of generalised measurements Masato Koashi Bipartite entanglement is one of the fundamental quantifiable resources of quantum information theory. We propose a new application of this resource to the theory of quantum measurements. According to Naimark's theorem any rank 1 generalised measurement (POVM) M may be represented as a von Neumann measurement in an extended (tensor product) space o... Almost Every Pure State of Three Qubits Is Completely Determined by Its Two-Particle Reduced Density Matrices N Linden W K Wootters In a system of n quantum particles, we define a measure of the degree of irreducible n-way correlation, by which we mean the correlation that cannot be accounted for by looking at the states of n-1 particles. In the case of almost all pure states of three qubits, we show that there is no such correlation: almost every pure state of three qubits is... Quantum nonlocality, Bell inequalities, and the memory loophole Lucien Hardy In the analysis of experiments designed to reveal violation of Bell-type inequalities, it is usually assumed that any hidden variables associated with the nth particle pair would be independent of measurement choices and outcomes for the first (n-1) pairs. 
Models which violate this assumption exploit what we call the memory loophole. We focus on th... The power of reduced quantum states
CommonCrawl
Title, Summary, Keyword: Anogeissus latifolia (Search Result: 4)

Change in Community Composition and Soil Carbon Stock Along Transitional Boundary in a Sub-Tropical Forest of Garhwal Himalaya
Kumar, Munesh; Kumar, Manish; Saleem, Sajid; Prasad, Sunil; Rajwar, G.S. (Journal of Forest and Environmental Science)
The aim of the present study was to assess the effect of a transitional boundary on community composition and soil carbon stock. Five vegetation types were recognized horizontally along the transitional strip based on the dominance of tree species, i.e., pure Anogeissus latifolia forest (P.AL), mixed Pinus roxburghii and Lannea coromandelica forest (M.PR&LC), pure Pinus roxburghii forest (P.PR), mixed Pinus roxburghii and Lannea coromandelica forest (M.PR&LC) and pure Anogeissus latifolia forest (P.AL). The results revealed that Anogeissus latifolia was the dominant tree in the outer transitional boundaries of the forest, with its dominance decreasing towards the middle, where Pinus roxburghii was dominant. Soil carbon stock was higher in the Anogeissus latifolia dominated forest and decreased with the dominance of Pinus roxburghii in the middle site. Both species grow close to one another and compete for survival, but the aggressive nature of Anogeissus latifolia, particularly in this region, may alter the new growth of Pinus roxburghii and will enhance soil carbon stock. However, high anthropogenic pressure on Anogeissus latifolia could limit its chances of flourishing further. https://doi.org/10.7747/JFS.2013.29.3.194

Activity Guided Isolation of Antioxidant Tannoid Principles from Anogeissus latifolia
Govindarajan, Raghavan; Vijayakumar, Madhavan; Shirwaikar, Annie; Rawat, Ajay Kumar Singh; Mehrotra, Shanta; Pushpangadan, Palpu (Natural Product Sciences)
Oxidative stress is an important causative factor in several human chronic diseases, such as atherosclerosis, cardiovascular disorders, mutagenesis, cancer, several neurodegenerative disorders, and the aging process. Phenolics and tannins are reported to be good antioxidants. Anogeissus latifolia (Combretaceae) bark has been used in the Indian traditional systems of medicine for curing a variety of ailments, but scientific validation is not available to date. Hence the present study was undertaken to isolate antioxidant compounds by activity-guided isolation. Inhibition of diphenyl picryl hydrazyl (DPPH) and xanthine oxidase, along with a photochemiluminescence assay, were used as bioassays for antioxidant activity. Activity-guided isolation was carried out using a silica column and the compounds were quantified using HPLC.
The ethyl acetate and butanol fractions exhibited potent antioxidant activity. Bioassay-guided isolation led to the isolation of ellagic acid (1) and dimethyl ellagic acid (2) as the main active compounds, which, along with gallic acid, were quantified by HPLC. We therefore conclude that these three major tannoid principles present in A. latifolia are responsible for its antioxidant potential and possibly its therapeutic potential.

In Vitro Anticancer Activities of Anogeissus latifolia, Terminalia bellerica, Acacia catechu and Moringa oleiferna Indian Plants
Diab, Kawthar AE; Guru, Santosh Kumar; Bhushan, Shashi; Saxena, Ajit K (Asian Pacific Journal of Cancer Prevention)
The present study was designed to evaluate the in vitro anti-proliferative potential of extracts from four Indian medicinal plants, namely Anogeissus latifolia, Terminalia bellerica, Acacia catechu and Moringa oleiferna. Their cytotoxicity was tested in nine human cancer cell lines, including cancers of the lung (A549), prostate (PC-3), breast (T47D and MCF-7), colon (HCT-16 and Colo-205) and leukemia (THP-1, HL-60 and K562), using SRB and MTT assays. The findings showed that the selected plant extracts inhibited the proliferation of the nine human cancer cell lines in a concentration-dependent manner. The extracts inhibited the viability of leukemia HL-60 and K562 cells by blocking the G0/G1 phase of the cell cycle. Interestingly, A. catechu extract at 100 μg/mL induced G2/M arrest in K562 cells. DNA fragmentation analysis displayed the appearance of a smear pattern of cell necrosis upon agarose gel electrophoresis after incubation of HL-60 cells with these extracts for 24 h. https://doi.org/10.7314/APJCP.2015.16.15.6423

Seasonal Variations in Tannin Profile of Tree Leaves
Rana, K.K.; Wadhwa, M.; Bakshi, M.P.S. (Asian-Australasian Journal of Animal Sciences)
Forest tree leaves (12 different species) of the semi-hilly arid region of Punjab State were collected at 30-day intervals throughout the year to assess the seasonal variations in tannin profile. Tannins were extracted and fractionated from fat-free samples and the data were analyzed statistically by a 12×12 factorial design. The leaves of Anogeissus latifolia had the highest (p<0.05) concentration of total phenols (17.4%), net (15.9%) and hydrolysable (16.9%) tannins, followed by the leaves of Acacia nilotica. The majority of the tree leaves selected had moderate levels (2-5%) of net tannins. Leaves of Carrisa had the highest (p<0.05) concentration of condensed tannins (CT), whereas the leaves of Anogeissus had the lowest (p<0.05) concentration. The protein precipitable phenols (PPP) corresponded well with the net tannin content of the different tree leaves. Seasonal variation data revealed that in summer, net tannins and PPP declined in the leaves of Bauhinia and Zizyphus, whereas the net tannin content of Anogeissus and of Carrisa increased during summer. The CT and PPP content in the leaves of Phoenix, Leucaena, Zizyphus and Ougenia increased from winter until the spring season. Tree leaves generally had a higher concentration of HT during the summer months. It was concluded that the leaves of A. nilotica, A. latifolia and L. leucocephala could serve as excellent alternative feedstuffs for ruminants. However, leaves of Phoenix, Carrisa, Bauhinia and Dodonea should be avoided. https://doi.org/10.5713/ajas.2006.1134
CommonCrawl
Mathematical Modeling of Drying of Mint in a Forced Convective Dryer Based on Important Parameters

Foued Chabane* | Fares Adouane | Noureddine Moummi | Abdelhafid Brima | Djamel Bensahal

Department of Mechanical Engineering, Faculty of Technology, University of Biskra 07000, Algeria
Laboratoire de Génie Mécanique (LGM), Faculty of Technology, University of Biskra 07000, Algeria
Department of Mechanical Engineering, Faculty of Technology, University of Laghouat 03000, Algeria

Corresponding Author Email: [email protected]
https://doi.org/10.18280/ijht.370222

Thin-layer drying of mint was evaluated using a solar dryer at the University of Biskra. The experimental moisture ratios of the samples were fitted to eight drying models. The drying experiments were carried out on mint, studying the effect of the mass flow rate, which was varied from 0.018 to 0.034 kg.s-1. In this part of the study, the mint was cut into small pieces of roughly equal size. The mathematical models were tested against the drying behaviour of mint in the laboratory solar dryer, and the model coefficients were determined by the multiple regression method to find the most suitable moisture ratio model. We conclude that the mass flow rate influences the correlation coefficients, meaning that the mass flow rate is an important factor. Our model was found to be the best model based on the statistical parameters R², RMSE and χ², and it can be applied to predict the moisture content of mint during solar drying.

Keywords: drying room, R², RMSE, χ², moisture content, mass flow rate, mint, solar drying

1. Introduction

An improved technology for utilizing solar energy to dry agricultural products is the use of solar dryers, where the air is heated in a solar collector and then passed through the products. There are two basic types of solar dryer appropriate for agricultural use: natural convection dryers, where the air flow is induced by thermal gradients, and forced convection dryers, wherein air is forced through a solar collector.
This study concerns solar drying with forced convection. The forced convection solar dryer can be considered a conventional mechanical drying system in which air is forced through a holder containing the product to be dried, but the air is heated by a flat-plate solar collector rather than by more conventional means. Most developing countries are unable to solve their food problems for their entire population because of the rapidly increasing number of people in their respective territories. Some research effort has gone into designing and developing a forced convection solar dryer using an evacuated tube air collector, whose performance was compared with natural sun drying. The results of that study show that the proposed solar dryer has greater efficiency, and the moisture content of bitter gourd is reduced from 91 % to 6.25 % in 6 hours, as compared to 10 hours in natural sun drying [1]. Another experimental study investigated the performance of a solely solar drying system and a system equipped with an auxiliary heater as a supplement to the solar heat [2]. The performances of both were compared to that of natural drying. Beans and peas were dehydrated in a system consisting of two flat-plate collectors, a blower, and a drying chamber. Tests with four different airflow rates, namely 0.0383, 0.05104, 0.0638, and 0.07655 m3/s, were conducted. The efficiency of the mixed drying system was found to increase by 25 % to 40 % compared to the solely solar drying. A best fit to the experimental data of peas and beans was obtained by six exponential equations for the various systems, with correlation coefficients in the range 0.933 to 0.997. Solar drying can be an effective means of food preservation since the product is completely protected during drying against rain, dust, insects and animals [3]. There is a great diversity of designs and modes of operation: forced convection, Ahmad et al. [4]; indirect forced convection, Bahloul et al. [5]; direct cabinet and indirect cabinet solar dryers, Banout et al. [6]; solar-biomass hybrid dryer enhanced by the Co-Gen technique, Tadahmun and Hussain [7], Leon and Kumar [8]; greenhouse solar dryers, Abdullah [9], Bechoff et al. [10]; direct solar dryer, Hii et al. [11]; heat pumps, Fadhel et al. [12], Li et al. [13]; and indirect natural convection solar dryers with chimney, solar dryers with a greenhouse as collector, solar tunnel dryers (air collector) and hybrid solar dryers assisted by evacuated tube collectors, Jairaj et al. [14]. F. Chabane et al. [15-26] present studies of heat transfer in solar air heaters using new designs of solar collector. The collector efficiency of a single-pass solar air heater, with and without fins attached under the absorbing plate, was investigated experimentally; the maximum efficiencies obtained for mass flow rates of 0.012 and 0.016 kg/s were 40.02 % and 51.50 % with fins, and 34.92 % and 43.94 % without fins.

2. Experimental Study

In this study, we focused on agro-food drying using a solar collector and a drying chamber, which we built manually in the technological hall of the Department of Mechanical Engineering of the University of Biskra; the experiments were carried out in the period from February to May 2018.

Figure 1. Experimental setup (solar collector with drying chamber)

- Orifices: We drilled holes in order to distribute the air over the product and avoid burning it. In our case, we put holes 10 mm in diameter in a square board, 30 cm by 30 cm, as shown in Figure 2.

Figure 2. Orifices in the drying chamber
- The grate: a support on which the product is arranged, with holes for the removal of water, attached by four balanced rods that allow us to weigh the product without removing it (see Figure 3). This grid is characterized by its hardness and resistance to rust.

Figure 3. Support of the product

Table 1. Parameters of solar drying with m = 0.018 kg/s (the table reports photographs of the product before and after drying, solar irradiation of about 919 to 928.43 W/m², the wind speed, the ambient temperature of 26.86 °C, the chamber inlet, outlet and mean temperatures, and a chamber temperature difference of about -10.40 °C; the flattened values could not be fully reconstructed from the extracted text).

Tables 1, 2 and 3 show the variation of the solar drying parameters for three different mass flow rates. The chamber inlet temperature varies between approximately 56 and 60 °C and is roughly the same for the three tests, which means that the climatic conditions (ambient temperature, solar irradiation and wind speed, which vary for each day of testing) play an important role in this change. The tables also show pictures of the wet and dried mint for each experiment.

Preparation for the test:
• Experimental determination: the dimensions of the collector are:
- Dimensions (general): 1.53 m x 0.83 m
- Dimensions (specific): 1.50 m x 0.80 m
• Calculation of the collector surface:
- Total area: 1.27 m²
- Area (specific): 1.20 m²
• Dimensions of the drying chamber:
- Dimensions (general): 0.80 m x 0.50 m x 0.50 m
- Dimensions (specific): 0.73 m x 0.42 m x 0.39 m
• Calculation of the volume of the drying chamber:
- Total volume: 0.20 m³
• Characteristics of the drying process: the results of the drying process are summarized in Table 2, showing the state of the product before and after drying with characteristic average values.

Table 4. Mathematical models applied to the drying curves

Newton: MR = exp(-kt) (Ayensu, 1997) [27]
Page: MR = exp(-kt^n) (Page, 1949; Doymaz, 2004) [28]
Henderson and Pabis: MR = a exp(-kt) (Rahman et al., 1998) [29]
Logarithmic: MR = a exp(-kt) + c (Lahsasni et al., 2004) [30]
Two-term: MR = a exp(-kt) + c exp(-gt) (Dandamrongrak et al., 2002) [31]
Two-term exponential: MR = a exp(-kt) + (1-a) c exp(-kat) (Hayaloglu et al., 2007) [32]
Wang and Singh: MR = 1 + at + ct²
Midilli et al.: MR = a exp(-kt) + ct

3. Mathematical Modeling of Drying Curves

The moisture content was expressed in percentage wet basis (%, w.b.) and then converted to kilograms of water per kilogram of dry matter. The drying curves were fitted to eight different moisture ratio models to select a suitable model for describing the drying process of mint pieces (Table 4). The moisture ratio of the samples during drying was expressed by Eq. (1):

$M R=\frac{M_{t}-M_{e q}}{M_{0}-M_{e q}}$ (1)

However, the moisture ratio (MR) was simplified to $\frac{M_{t}}{M_{0}}$ instead of $\frac{M_{t}-M_{e q}}{M_{0}-M_{e q}}$ [33]. The reduced chi-square (χ²) and the root mean square error (RMSE) were used as the primary criteria to select the best equation to account for variation in the drying curves of the dried samples [34]. The lower the value of χ², the better the goodness of fit. The RMSE gives the deviation between the predicted and experimental values and should approach zero.
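To make this selection procedure concrete, the sketch below shows how one of the thin-layer models of Table 4 (the Page model) could be fitted by non-linear least squares and scored with R², RMSE and the reduced χ² defined formally in Equations (2) to (4) below. It is a minimal illustration only, assuming NumPy and SciPy are available; the drying times and moisture ratios are hypothetical placeholder data, not the measured values of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measured data: drying time (min) and moisture ratio MR = M_t / M_0
t = np.array([0, 30, 60, 90, 120, 150, 180, 210, 240], dtype=float)
MR_exp = np.array([1.00, 0.82, 0.65, 0.50, 0.38, 0.28, 0.20, 0.14, 0.10])

def page(t, k, n):
    """Page thin-layer drying model: MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# Fit the model coefficients by non-linear least squares
popt, _ = curve_fit(page, t, MR_exp, p0=[1e-3, 1.0], bounds=([0.0, 0.1], [1.0, 3.0]))
MR_mod = page(t, *popt)

# Goodness-of-fit statistics: N data points, m fitted coefficients
N, m = len(t), len(popt)
residuals = MR_exp - MR_mod
rmse = np.sqrt(np.mean(residuals**2))     # root mean square error
chi2 = np.sum(residuals**2) / (N - m)     # reduced chi-square
r2 = 1.0 - np.sum(residuals**2) / np.sum((MR_exp - MR_exp.mean())**2)

print(f"k = {popt[0]:.4e}, n = {popt[1]:.3f}")
print(f"R² = {r2:.4f}, RMSE = {rmse:.4f}, χ² = {chi2:.2e}")
```

The same fitting loop can be repeated over every model in Table 4, retaining the model with the highest R² and the lowest RMSE and χ².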
The statistical values were calculated by Equations (2)-(4):

$\chi^{2}=\frac{\sum_{i=1}^{N}\left(M_{\mathrm{exp}, i}-M_{\mathrm{mod}, i}\right)^{2}}{N-m}$ (2)

$R M S E=\left(\frac{1}{N} \sum_{i=1}^{N}\left(M_{\mathrm{exp}, i}-M_{\mathrm{mod}, i}\right)^{2}\right)^{\frac{1}{2}}$ (3)

$S S E=\frac{1}{N} \sum_{i=1}^{N}\left(M_{\mathrm{exp}, i}-M_{\mathrm{mod}, i}\right)^{2}$ (4)

The results of the statistical computations undertaken to assess the consistency of the nine drying models (the eight models of Table 4 plus the proposed model) are presented in Table 5 for the Wang and Singh model and our model. The Wang and Singh model yielded the highest values of R², while our model gave the lowest values of SSE and χ² together with high values of R², ranking third among all the models. To take into account the effect of the drying variables on our model, the constants a, c, k and g were regressed against the drying air temperatures and the mass flow rate using multiple regression analysis. Based on the multiple regression analysis, the accepted model was as follows:

$\frac{M_{t}-M_{e q}}{M_{0}-M_{e q}}=c+a \times \sin \left(\pi \frac{(t-k)}{g}\right)$ (5)

Calculation of effective diffusivities: it has been accepted that the drying characteristics of biological products in the falling rate period can be described using Fick's diffusion equation. The solution of Fick's law for a slab is given by Equation (6) [35]:

$\frac{M_{t}-M_{e q}}{M_{0}-M_{e q}}=\frac{8}{\pi^{2}} \sum_{n=0}^{\infty} \frac{1}{(2 n+1)^{2}} \exp \left(-(2 n+1)^{2} \pi^{2} \frac{D_{e f f} t}{4 H^{2}}\right)$ (6)

For a long drying period, Equation (6) can be simplified to only the first term of the series [36]. Thus, Equation (6) is written in logarithmic form according to Equation (7):

$\ln \frac{M_{t}-M_{e q}}{M_{0}-M_{e q}}=\ln \left(\frac{8}{\pi^{2}}\right)-\left(\pi^{2} \frac{D_{e f f}}{4 H^{2}}\right) t$ (7)

Diffusivities are typically determined by plotting the experimental drying data in terms of ln(MR) versus drying time t in Equation (7), because the plot gives a straight line with a slope given by Equation (8) [37]:

$k_{0}=\pi^{2} \frac{D_{e f f}}{4 H^{2}}$ (8)
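As a brief illustration of Equations (7) and (8), the sketch below estimates the effective diffusivity from the slope of ln(MR) versus time obtained by linear regression. It is only a sketch: the moisture-ratio data and the sample half-thickness H are hypothetical placeholders, not measurements from this study.

```python
import numpy as np

# Hypothetical drying data: time (s) and moisture ratio MR
t = np.array([0, 1800, 3600, 5400, 7200, 9000], dtype=float)
MR = np.array([1.00, 0.62, 0.40, 0.25, 0.16, 0.10])

H = 2e-3  # assumed half-thickness of the mint layer (m), as used in Eqs. (7)-(8)

# Linear regression of ln(MR) against t: slope = -k0 = -pi^2 * D_eff / (4 * H^2)
slope, intercept = np.polyfit(t, np.log(MR), 1)
k0 = -slope
D_eff = 4.0 * H**2 * k0 / np.pi**2

print(f"k0 = {k0:.3e} 1/s, D_eff = {D_eff:.3e} m^2/s")
```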
Table 5. Results of nonlinear regression analyses for drying curve fitting (R², RMSE and χ² for the fitted models, including the Wang and Singh and Chabane et al. rows; the flattened values could not be reliably reconstructed from the extracted text).

4. Results and Discussion

Figure 4 shows the variation of the inlet temperature of the drying chamber as a function of the time of day for different values of the mass flow rate. It can be seen that the inlet temperature reaches its maximum value at the lowest mass flow rate and its minimum at the highest mass flow rate, and that the curves follow the same evolution.

Figure 4. Inlet temperature of the drying chamber according to mass flow rate

The following curves show the variation of MR as a function of time for the different models, compared with the experimental data. It should be noted that, for the drying of this product, all the models have the same decreasing shape.

Figure 5. The moisture content according to the models of Newton, Page, Henderson and Pabis, Logarithmic and the experimental data with mass flow rate 0.018 kg/s

Figure 6. The moisture content according to the models of Two-term, Two-term exponential, Wang and Singh, Midilli et al., the prediction model and the experimental data with mass flow rate 0.018 kg/s

The following figure shows the variation of MR for a mass flow rate of 0.018 kg/s compared with the proposed model. It can be seen that the proposed model is consistent with the other models.

Figure 7. The moisture content according to the models of Newton, Page, Henderson and Pabis, Logarithmic and the experimental data with mass flow rate 0.029 kg/s

Figure 8 shows a comparison between the different models for a mass flow rate of 0.029 kg/s. It should be noted that for higher throughput the drying time is longer; however, the evolution of the curves is the same. For a higher throughput a spread between the models appears, and the proposed model is closer to the experimental curve. In Figure 9, for a mass flow rate of 0.034 kg/s, the experimental drying curve takes a different shape: phase 1 (warming up) appears, which is not the case for some models.

Figure 10. The moisture content according to the models of Two-term, Two-term exponential, Wang and Singh, Midilli et al., the prediction model and the experimental data with mass flow rate 0.034 kg/s

Here, we note that for this mass flow rate the shape of the MR-versus-time curve of the proposed model follows the evolution of the experiment. This shows the better performance of the proposed model in describing this drying phenomenon for this product.

Figure 11. Relative humidity at the outlet of the drying chamber

Figure 11 shows the progression of the relative humidity at the outlet of the drying chamber for the three days of testing. It can be noted that the outlet values are much higher than those at the solar collector inlet, which means that the product has released much of its moisture content, which is the desired result. On the second day the relative humidity reaches its maximum, starting at 59 %; the median value occurs on the first day, at 54 %; and the minimum, 47 %, is reached on the last day.

Figure 12. Temperature of the mint product in the drying chamber

The air heated by the solar collector supplies this temperature to the drying chamber, where the product is placed on the support; the orifices act as a distribution source to homogenize the heat throughout the chamber. Figure 12 shows the variation of the temperature of the mint according to the time of day. All the curves begin at a lower temperature, increase around the middle of the sunny period, and then stabilize at a roughly constant temperature. When the mass flow rate is higher, the product receives more heat from the air.

5. Conclusion

In conclusion, this experimental study has mainly been carried out to investigate the solar drying behaviour of agricultural products under forced convection, using an indirect solar dryer which was designed and constructed previously with local materials. The study considers thin-layer drying under various conditions, for three different mass flow rates and quantities of mint. The experimental data were fitted to different mathematical moisture ratio models to compare them with the extracted model. Moreover, the solar radiation intensity is the primary factor in accomplishing the drying process, since it heats the absorbing plate; the tests recorded good levels of solar radiation and average absorber temperature, with maximum values of 928.43 W/m² and 80 °C respectively. Finally, several mathematical models were used to describe the decreasing moisture ratio, and our model showed the best fit to the experimental data, with the highest average values of R² and the lowest average values of χ² and RMSE.
Our extracted model was in agreement with the other models and performed better at all times.

References

[1] Umayal Sundari, A.R., Neelamegam, P., Subramanian, C.V. (2013). An experimental study and analysis on solar drying of bitter gourd using an evacuated tube air collector in Thanjavur, Tamil Nadu, India. Conference Papers in Energy. http://dx.doi.org/10.1155/2013/125628 [2] Khalifa, A.J.N., Al-Dabagh, A.M., Al-Mehemdi, W.M. (2012). An experimental study of vegetable solar drying systems with and without auxiliary heat. ISRN Renewable Energy, 2012. http://dx.doi.org/10.5402/2012/789324 [3] Esper, A., Muhlbauer, W. (1998). Solar drying – an effective means of food preservation. Renew Energy 15(1-4): 95-100. https://doi.org/10.1016/S0960-1481(98)00143-8 [4] Ahmad, F., Kamaruzzaman, S., Mohammad, H.Y., Mohd, H.R., Mohamed, G., Hussein, A.K. (2014). Performance analysis of solar drying system for red chili. Sol. Energy, 99: 47–54. https://doi.org/10.1016/j.solener.2013.10.019 [5] Bahloul, N., Boudhrioua, N., Kouhila, M., Kechaou, B., (2009). Effect of convective solar drying on colour, total phenols and radical scavenging activity of olive leaves (Olea europaea L.). International Journal Food Science & Technology, 44(12): 2561-2567. https://doi.org/10.1111/j.1365-2621.2009.02084.x [6] Banout, J., Havlik, J., Kulik, M., Kloucek, P., Lojka, B. (2010). Effect of solar drying on the composition of essential oil of Sacha Culantro (Eryngium Foetidum L.) grown in the Peruvian Amazon. Journal of Food Process Engineering, 33(1): 83–103. https://doi.org/10.1111/j.1745-4530.2008.00261.x [7] Yassen, T.A., Al-Kayiem, H.H. (2016). Experimental investigation and evaluation of hybrid solar/thermal dryer combined with supplementary recovery dryer. Solar Energy, 134: 284–293. https://doi.org/10.1016/j.solener.2016.05.011 [8] Leon, M.A., Kumar, S. (2008). Design and performance evaluation of a solar assisted biomass drying system with thermal storage. Drying Technology, 26(7): 936–947. https://doi.org/10.1080/07373930802142812 [9] Abdullah, K. (1997). Drying of vanilla pods using a greenhouse effect solar dryer. Drying Technology, 15(2): 685-698. https://doi.org/10.1080/07373939708917254 [10] Bechoff, A., Dufour, D., Dhuique-Mayer, C., Marouzé, C., Reynes, M., Westby, A. (2009). Effect of hot air, solar and sun drying treatments on provitamin A retention in orange-fleshed sweet potato. Journal of Food Engineering, 92(2): 164–171. https://doi.org/10.1016/j.jfoodeng.2008.10.034 [11] Hii, C.L., Abdul Rahman, R., Jinap, S., Che Man, Y.B. (2006). Quality of cocoa beans dried using a direct solar dryer at different loadings. Journal of the Science Food and Agriculture, 86: 1237-1243. https://doi.org/10.1002/jsfa [12] Fadhel, M.I., Sopian, K., Daud, W.R.W., Alghoul, M.A. (2010). Performance analysis of solar-assisted chemical heat-pump dryer. Solar Energy 84(11): 1920–1928. https://doi.org/10.1016/j.solener.2010.07.001 [13] Li, Y., Li, H.F., Dai, Y.Y., Gao, S.F., Wei, L., Li, Z.L., Odinez, I.G., Wang, R.Z. (2011). Experimental investigation on a solar assisted heat pump in-store drying system. Applied Thermal Engineering, 31(10): 1718–1724. https://doi.org/10.1016/j.applthermaleng.2011.02.014 [14] Jairaj, K.S., Singh, S.P., Srikant, K. (2009). A review of solar dryers developed for grape drying. Solar Energy, 83(9): 1698–1712. https://doi.org/10.1016/j.solener.2009.06.008 [15] Chabane, F., Moummi, N., Benramache, S., Bensahal, D., Belahssen, O. (2013).
Moment of Inertia of a Rectangular Cross Section I have a question that has been annoying me for a long time. I know that I can calculate the moment of inertia of a rectangular cross section around a given axis located on its centroid by the following formulas: $$I_x=\frac{bh^3}{12}, \qquad I_y=\frac{hb^3}{12}$$ I also know that, more generically, the moment of inertia is given by the integral of an area times the square of the distance from its centroid to the axis. So let's say I have a rectangular section with a height of 200 mm and a width of 20 mm. If I use the formulas of the first method, in relation to an x axis parallel to the width: $$I_x=\frac{bh^3}{12}=\frac{20\cdot200^3}{12}=1333.33\text{ cm}^4$$ Using the second method, why do I get different results when calculating twice the area of half a section, multiplied by the square of the distance from its centroid to the x axis? $$I_x= 2A_{half\ section}d^2 = 2\cdot(200/2\cdot20)\cdot(200/4)^2= 1000\text{ cm}^4$$ mechanical-engineering structural-engineering geometry Wasabi♦ balth b You have misunderstood the parallel axis theorem. The moment of inertia of an object around an axis is equal to $$I = \iint\limits_R\rho^2\text{d}A$$ where $\rho$ is the distance from any given point to the axis. In the case of a rectangular section around its horizontal axis, this can be transformed into $$\begin{align} I_x &= \int\limits_{-b/2}^{b/2}\int\limits_{-h/2}^{h/2}y^2\text{d}y\text{d}x \\ I_x &= \int\limits_{-b/2}^{b/2}\left.\dfrac{1}{3}y^3\right\rvert_{-h/2}^{h/2}\text{d}x \\ I_x &= \int\limits_{-b/2}^{b/2}\dfrac{1}{3}\dfrac{h^3}{4}\text{d}x \\ I_x &= \left.\dfrac{1}{3}\dfrac{h^3}{4}x\right\rvert_{-b/2}^{b/2} \\ I_x &= \dfrac{bh^3}{12} \end{align}$$ Now, what if we wanted to get the inertia around some other axis at a distance $r$ from our centroid? In this case, all we have to do is: $$I = \iint\limits_R(\rho+r)^2\text{d}A$$ $$I = \iint\limits_R\left(\rho^2 + 2\rho r + r^2\right)\text{d}A$$ $$I = \iint\limits_R\rho^2\text{d}A + 2r\iint\limits_R\rho\text{d}A + r^2\iint\limits_R\text{d}A$$ The first component $\iint\limits_R\rho^2\text{d}A$ is simply equal to the original moment of inertia. The second component $2r\iint\limits_R\rho\text{d}A$ is equal to zero since we're integrating around the centroid (the integrand is odd in $y$, so integrating from $-h/2$ to $h/2$ gives zero). The third component is equal to $Ar^2$. So, in the end, we get: $$I' = I + Ar^2$$ So, if you want to calculate the moment of inertia of a rectangular section by considering each of its halves (half above the centroid, half below), you need to do: $$\begin{align} I_{half} &= \dfrac{b\left(\dfrac{h}{2}\right)^3}{12} \\ I'_{half} &= I_{half} + b\left(\dfrac{h}{2}\right)\left(\dfrac{h}{4}\right)^2 \\ &= \dfrac{bh^3}{96} + \dfrac{bh^3}{32} = \dfrac{bh^3}{24} \\ I_{full} &= 2I'_{half} = \dfrac{bh^3}{12} \end{align}$$ Which is the original value for the full section. QED. Wasabi♦ The following sentence is not correct: "the moment of inertia is given by the integral of an area times the square of the distance from its centroid to the axis". You have to add to that the moment of inertia of the area around its own centroid.
That is what the parallel axis theorem is all about: $$ I = I_o + A\cdot d^2 $$ where $I_o$ is the moment of inertia around the centroid, $I$ is the moment of inertia around any parallel axis, and $d$ is the distance between the two axes. So applying the above to your example, each half area (below and above the centroidal axis) should have a moment of inertia equal to: $$ I_{half} = \frac{b (h/2)^3}{12} + \frac{bh}{2}\cdot\left(\frac{h}{4}\right)^2 $$ $$ I_{half} = \frac{b h^3}{96} + \frac{b h^3}{32} $$ $$ I_{half} = \frac{b h^3}{24} $$ Therefore, for the whole section, due to symmetry: $$ I = 2 I_{half} = \frac{b h^3}{12} $$ Demonstrating the example with your numbers: $$ I = 2\left(\frac{20 (100)^3}{12} + 20\cdot 100\cdot\left(50\right)^2 \right)\,mm^4$$ $$ I = 2\left(1666666.7 + 5000000 \right) \,mm^4 $$ $$ I = 13333333.3 \,mm^4 = 1333.33 \,cm^4 $$ Usually in engineering cross sections the parallel axis term $Ad^2$ is much bigger than the centroidal term $I_o$. It is rather acceptable to ignore the centroidal term for the flange of an I/H section, for example, because $d$ is big and the flange thickness (the $h$ in the above formulas) is quite small. In other circumstances, however, this is not acceptable. Calculation of moment of inertia for parallel axes minas lemonis
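As an illustrative aside (not taken from either answer above), here is a minimal Python sketch that checks the numbers for the 20 mm × 200 mm example three ways: the closed-form formula, the two-halves calculation with the parallel axis term included, and a brute-force Riemann sum of $\iint y^2\,\text{d}A$. The variable names and the number of integration slices are just choices for this sketch.

```python
# Numeric check for the 20 mm x 200 mm rectangle: closed-form formula,
# two halves with the parallel axis term, and a brute-force Riemann sum.
b, h = 20.0, 200.0  # width and height in mm

I_formula = b * h**3 / 12                    # bh^3/12

I_half_centroid = b * (h / 2) ** 3 / 12      # I_o of one half about its own centroid
A_half = b * (h / 2)                         # area of one half
d = h / 4                                    # distance from the half's centroid to the x-axis
I_two_halves = 2 * (I_half_centroid + A_half * d**2)

n = 100_000                                  # Riemann sum of the integral of y^2 dA
dy = h / n
I_numeric = sum(b * (-h / 2 + (i + 0.5) * dy) ** 2 * dy for i in range(n))

# All three agree at ~13,333,333 mm^4 = 1333.33 cm^4; dropping the I_o term gives 1000 cm^4.
print(I_formula / 1e4, I_two_halves / 1e4, round(I_numeric / 1e4, 2))
```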
Published: 10 May 2019 Aminudin Afandhi1, Tita Widjayanti1, Ayu Apri Leli Emi1, Hagus Tarno1, Mufidah Afiyanti2 & Rose Novita Sari Handoko2 Chemical and Biological Technologies in Agriculture, volume 6, Article number: 11 (2019) Common bean (Phaseolus vulgaris L.) is a source of antioxidant-containing vegetable protein that is beneficial to human health. The intense cultivation of common bean may result in environmental degradation. Thus, environmentally friendly cultivation methods that use an endophyte to improve productivity are needed. An entomopathogenic fungus, Beauveria bassiana, can serve as an endophyte that stimulates the growth of Gossypium. Therefore, we isolated and identified B. bassiana and also examined its function as a beneficial endophyte that promotes the growth of common bean. An entomopathogenic fungus, B. bassiana, was collected and identified based on macroscopic and microscopic characteristics that were observed during morphological examination. B. bassiana was propagated and inoculated into common bean via seed soaking, soil wetting, and leaf spraying. The soil-wetting and leaf-spraying methods used to inoculate B. bassiana effectively enhanced the growth of common bean, which was observed at day 10 post-inoculation. However, no significant growth enhancement of common bean was observed when the seed-soaking inoculation method was used. These results suggest a positive correlation between the B. bassiana inoculation method and growth enhancement of common bean. This study showed the endophytic potency of the fungus that may be used in the development of environmentally friendly cultivation methods of common bean. Common bean (Phaseolus vulgaris L.) is a source of antioxidant-containing vegetable protein and is widely consumed in various countries. Beninger and Hosfield [5] showed that pure flavonoid compounds that are present in the seed coats of common bean, such as anthocyanins, quercetin glycosides, and proanthocyanidins (condensed tannins), have significant antioxidant activity relative to butylated hydroxytoluene (BHT), a commercial antioxidant used in foods. Consumption of common bean may increase levels of antioxidants, which is beneficial to human health. However, intense cultivation of common bean may result in environmental degradation, as has been observed in bean (Phaseolus vulgaris L.) cultivation in Northern Nicaragua [21]. Thus, environmentally friendly cultivation methods that improve productivity, such as using Beauveria bassiana as an endophyte, are needed. Biological agents can be a positive trigger of plant growth. Entomopathogenic B. bassiana is thought to have a broad host range and has the potential of being used as a biological control agent by serving as an endophyte capable of protecting plants. For example, colonization of the sweet pepper Capsicum annuum by B. bassiana can suppress aphids in controlled greenhouse conditions as part of an Integrated Pest Management (IPM) approach [8]. Moreover, B. bassiana may also be isolated from common bean on PDA media [16]. According to Lopez and Sword [12], the endophytic ability of B. bassiana can stimulate the growth of Gossypium hirsutum by suppressing the population of Helicoverpa zea and by helping to transfer nutrients from the soil into the roots of the plant. Endophytic fungi can be introduced onto plants by seed-soaking, leaf-spraying, and soil-wetting initiation methods using suspensions of B. bassiana.
The seed-soaking method has been demonstrated to effectively promote endophytic development of B. bassiana on bean plants [4]. Leaf-spraying and soil-wetting methods successfully introduced endophytic B. bassiana onto peanut plants [16]. In this study, B. bassiana inoculation methods and its function as a growth-promoting endophytic fungus in the common bean (Phaseolus vulgaris L.) were evaluated. Plants and endophytic fungi Common bean seeds of the variety Balitsa 2 were obtained from the Unit of Seed Sources of Vegetable Research Institute, Bandung. The plant grows optimally at 400–500 m above sea level, with a productivity yield of 20–23.8 tons per hectare. Isolates of B. bassiana were obtained from our collection in the Plant Protection Laboratory, Universitas Brawijaya, Malang. B. bassiana was grown in PDA media [16] and was identified according to genera of imperfect fungi [3]. Inoculation of B. bassiana onto common bean This study used a completely randomized design that consisted of six groups of treatments and four replications. Twenty-four plants were divided into six groups: three groups were inoculated with B. bassiana by seed-soaking, leaf-spraying, or soil-wetting, and the remaining three groups were used as controls for each method. The number of colonies and plant growth parameters were determined at 10 days and 20 days post-inoculation to monitor the effects of the treatments. B. bassiana suspensions at a concentration of 1 × 10⁸ conidia per ml with greater than 80% viability were used as inoculants. The seed-soaking method consisted of soaking common bean seeds with 0.5% sodium hypochlorite (NaOCl) with constant shaking, soaking the seeds in 70% ethanol, then rinsing the seeds with distilled water three times (2 min per rinse) [16]. The seeds were then soaked for 30 min with B. bassiana (1 × 10⁸ conidia per ml) or with 10 ml of sterile water as a control treatment. Leaf-spraying and soil-wetting were performed 14 days after planting. Leaf-spraying was performed by manually spraying leaves with 10 ml of a B. bassiana suspension and then covering the plants with a plastic bag for 24 h to maintain humidity. Soil-wetting was performed by irrigating the soil with 10 ml of a B. bassiana suspension and then covering the top of the soil with aluminium foil. Measurement of plant growth Observations of plant growth were performed at 10 and 20 days following inoculation. Measurements included plant height, root length, and number of leaves. B. bassiana colonization Beauveria bassiana colonization tests were performed on leaves, stems, and roots. Each leaf, two pieces of root, and two pieces of stem were taken from the common bean plant and were then planted on PDA medium. Macroscopic and microscopic examinations were conducted to confirm that colonized B. bassiana were endophytes. Statistical analysis The following formula was used to calculate B. bassiana colonization: $$\%\text{ Colonization} = \frac{\text{Number of pieces exhibiting fungal growth}}{\text{Total number of pieces planted}} \times 100\%$$ The results were then analyzed by analysis of variance (ANOVA) at the 95% confidence level. Identification of B. bassiana Beauveria bassiana grows on the edges of roots, stems, and leaves on PDA medium and is white and round in appearance with a smooth edge texture (Fig. 1). During growth, B. bassiana colonies spread regularly. Macroscopic and microscopic identification was based on a previous publication [3]. Macroscopic characteristics of endophytic B.
bassiana include white color, textured smooth edges, irregular scattering, and a diffuse distribution. The surface texture of a colony is rather coarse and the colony grows rather tightly (Fig. 2a). Microscopically, endophytic B. bassiana are characterized by hyaline-colored sectional hyphae with a width of 1.10 μm, round-to-oval hyaline-colored conidia with a diameter of 1.47 μm, and a collection of conidia clustered on the conidiophore (Fig. 2b). Fig. 1: Results of B. bassiana isolation: (a) common bean plant leaves, (b) common bean roots and stems. Fig. 2: Morphology of B. bassiana isolated from common bean: (a) macroscopic (14 days post-inoculation), (b) microscopic (400×): (1) conidia, (2) conidiophore, (3) hyphae. Three inoculation methods (seed-soaking, leaf-spraying, soil-wetting) were tested using three different B. bassiana inocula amounts and two different post-inoculation times (10 and 20 days). At 10 days post-inoculation with the seed-soaking method, root colonization was 37.50 ± 12.50, stem colonization was 25.64 ± 12.50, and leaf colonization was 33.34 ± 16.67 (all controls were 0.00 ± 0.00). At 10 days post-inoculation with the leaf-spraying method, root colonization was 62.50 ± 23.94, stem colonization was 62.50 ± 12.50, and leaf colonization was 66.92 ± 18.00 (all controls were 0.00 ± 0.00). At 10 days post-inoculation with the soil-wetting method, root colonization was 87.50 ± 12.50, stem colonization was 62.50 ± 23.94, and leaf colonization was 46.09 ± 10.65 (all controls were 0.00 ± 0.00). At 20 days post-inoculation with the seed-soaking method, root colonization was 16.67 ± 12.50, stem colonization was 12.50 ± 12.50, and leaf colonization was 8.34 ± 9.62 (all controls were 0.00 ± 0.00). At 20 days post-inoculation with the leaf-spraying method, root colonization was 25.00 ± 12.50, stem colonization was 25.00 ± 14.43, and leaf colonization was 33.59 ± 30.80 (all controls were 0.00 ± 0.00). At 20 days post-inoculation with the soil-wetting method, root colonization was 37.50 ± 12.50, stem colonization was 25.00 ± 14.43, and leaf colonization was 25.00 ± 16.67 (all controls were 0.00 ± 0.00). The results of these experiments showed that B. bassiana was present in common bean tissue and that B. bassiana colonies grew in roots, stems, and leaves. The highest level of B. bassiana colonization was observed in the roots at 10 days post-inoculation using the soil-wetting and leaf-spraying methods (Table 1). Table 1 Colonization percentages of B. bassiana in leaves, stems, and roots at 10 and 20 days post-inoculation Soil-wetting method significantly enhanced common bean growth Inoculation of B. bassiana by soil-wetting and leaf-spraying resulted in enhanced plant growth, which was not observed with the seed-soaking method. Common bean growth parameters differed significantly based on the inoculation method at 10 days post-inoculation (Table 2). In this work, three different inoculation methods and two different inoculation times were used. The parameters of plant growth that were measured included plant height, leaf number, and root length. At 10 days post-inoculation with the seed-soaking method, plant height was 20.00 cm (control height was 17.75 cm), the number of leaves was 6.50 sheet (control number was 6.50 sheet), and root length was 9.50 cm (control length was 8.75 cm).
At 10 days post-inoculation with the leaf-spraying method, plant height was 43.50 cm (control height was 32.25 cm), number of leaves was 16.75 sheet (control number was 11.75 sheet), and root length was 30.00 cm (control length was 21.75 cm). At 10 days post-inoculation with the soil-wetting method, plant height was 49.25 cm (control height was 25.25 cm), the number of leaves was 19.25 sheet (control leaf number was 9.00 sheet), and root length was 32.25 cm (control root length was 24 cm) (Table 2). Table 2 Average plant growth parameters at 10 and 20 dpi (days post-inoculation) At 20 days post-inoculation with the seed-soaking method, plant height was 51.00 cm (control height was 41.75 cm), the number of leaves was 20.75 sheet (control number was 15.75 sheet), and root length was 31.00 cm (control length was 23.25 cm). At 20 days post-inoculation with the leaf-spraying method, plant height was 67.75 cm (control height was 54.50 cm), number of leaves was 29.00 sheet (control number was 21.75 sheet), and root length was 38.25 cm (control length was 33.25 cm). At 20 days post-inoculation with the soil-wetting method, plant height was 68.50 cm (control height was 60 cm), the number of leaves was 29.75 sheet (control leaf number was 25.25 sheet), and root length was 40.5 cm (control root length was 36.00 cm). Based on the results of these experiments, a post-inoculation time of 10 days was the most effective application time. Plant growth enhancement was observed with soil wetting and leaf spraying but not with seed-soaking. Macroscopic and microscopic identification of B. bassiana in this work correlated with the following observations made by Barnett and Hunter [3]: B. bassiana is known as a white muscardine because its mycelium and conidium (conidia) are white, its single oval-shaped conidia are oval-like eggs, and its conidiophore grows in a zigzag pattern. Vega et al. [23] stated that B. bassiana was found in the seeds, leaves, and shoots of coffee plants. B. bassiana not only infects insects but can also be a symbiont of plants. Thus, when B. bassiana conidia are inoculated into plants, the conidia may become an endophyte of the common bean plant. Application of B. bassiana resulted in endophytic colonization of roots, stems, and leaves of tomato and cotton plants, which protected against the plant pathogens Rhizoctonia solani and Pythium myriotylum. The degree of disease control depends upon the population density of B. bassiana conidia [15]. Colonization by B. bassiana This study examined three methods of inoculating the entomopathogenic fungus B. bassiana. The method of inoculation can impact the endophytic characteristics of B. bassiana in plants. Factors that influence the development of entomopathogenic fungi into endophytes are entomopathogenicity, fungus inoculation method, environmental compatibility, fungus life cycle, and plant reaction to the fungus [18]. The production of metabolites by fungi can help exploit these endophytic entomopathogenic properties to support plant resistance to pest and disease attacks. According to Qayyum et al. [18], B. bassiana can enter and form colonies on plant tissue via inoculation techniques. Many factors can influence the specific outcome of experiments that aim to establish a fungal entomopathogen as an endophyte. Biological factors of interest that can be addressed experimentally include the crop species or cultivar and the fungal entomopathogen species, strain, or isolate. 
Other factors to consider include the concentration of the inoculum, the age of the plant during inoculations, and the growth conditions of the plant [16]. The soil-wetting and leaf-spraying methods used in this study generated positive results, but the seed-soaking method did not. With the soil-wetting method, B. bassiana conidia attach to the root surface, then penetrate, grow and develop, and then spread to the stems and leaves. With the leaf-spraying method, B. bassiana attaches to the top and bottom leaf surfaces and grows and develops, then spreads to stems and roots. It is not yet clear why B. bassiana colonization is more effective using leaf-spraying in leaves and stems, but this may reflect different physiological conditions and microbial states in different parts of the plant. The tissue specificity of endophytic fungus may also play a role in the ability of different parts of plants to cope with and adapt to particular conditions [11, 17]. Based on previous research, for seed-treatment to increase growth of maize, high nutrient conditions are mandatory. If nutrients are not high enough, then the fungus tends to reduce plant growth [14]. Colonization levels of B. bassiana in cassava roots were higher when plants were sampled at 7–9 days post-inoculation (84%) as compared to 47–49 days post-inoculation (40%) [7]. All colonizing fungi detected by fungal cell wall (CW) staining are metabolically active in young plants, but this may not be the case in older plants (Smith et al. 1990; Abdel-Fattah 2001), as cited in [9]. According to Yoo and Ting [25], the success of endophytic fungal inoculation, including colonization and growth, depends on the host plant. According to Parsa et al. [16], a higher degree of colonization was observed for soil-wetting as compared to leaf-spraying. In coffee, foliar sprays favor leaf colonization whereas soil drenches favor root colonization [16]. According to Behie et al. [4], of a total of 514 Beauveria isolates at Ontario field sites, 74.3% were from roots, 11.9% were from the hypocotyl, and 13.8% were from the stem and leaves. The conidia of the fungus may stick to the root surface, then enter through the roots to the xylem vessels to spread to the stems and leaves [6]. The spread of B. bassiana in plants occurs through the vascular tissues of the xylem [10]. The ability of Beauveria to colonize above-ground plant tissues could allow for subsequent infection of above-ground herbivorous insects [4]. Beauveria is also capable of infecting insects found beneath the soil surface; however, host plants upon which the insects feed could potentially influence fungal pathogenicity (Santiago-Alvarez et al. 2006). According to Tefera and Vidal (2009), spraying with conidial suspensions of B. bassiana was the best method for introducing the fungus into sorghum leaves. When a B. bassiana conidia solution was sprayed onto maize, the fungus colonized the leaves and translocated to other tissues (Wagner and Lewis 2000 in Mantzoukas et al. 2015). Once hyphae on the leaf surface penetrate the cuticle, they follow the leaf apoplast and move through interconnected vascular tissues to colonize the entire plant (Mantzoukas et al. 2015). Our results demonstrated that inoculation of entomopathogenic B. bassiana improved growth of common bean plants. Inoculation of B. bassiana had a positive effect on plant growth parameters, which included plant height, number of leaves, and root length. Saikkonem et al. [19] stated that B.
bassiana helps to transfer nutrients from the soil into the plant that helps to stimulate plant growth. Several studies have shown that inoculation of entomopathogenic B. bassiana had a positive effect on growth of cotton plants [15]. According to Akutse et al. [2], the Beauveria endophyte able to colonize the roots, stems, and leaves of P. vulgaris plants by soil-wetting treatment. Soil-wetting inoculation of conidial suspensions close to cassava stem cuttings resulted in endophytic colonization of cassava roots by B. bassiana and Metarhizium anisopliae, although neither was found in the leaves or stems of the treated cassava plants [7]. Inoculation of B. bassiana into Phaseolus vulgaris L. via seed-soaking, leaf-spraying, and soil-wetting demonstrated the potential of this fungus to function as a growth-promoting endophytic agent. Use of soil-wetting to inoculate B. bassiana conidia maximally increased the growth of Phaseolus vulgaris L. at 10 days post-inoculation. Compatibility of B. bassiana with plant parts (roots, stems, leaves) was demonstrated by macroscopic and microscopic identification. BHT: Butylated hydroxytoluene NaOCl: PDA: Potato dextrose agar ANOVA: Abdel-Fattah GM. Measurement of the viability of arbuscular-mycorrhizal fungi using threedifferent stains; relation to growth and metabolic activities of soybean plants. Microbiol. Res. 2001;156:359–67. Akutse KS, Maniania NK, Fiaboe KKM, Berg JVD, Ekesi S. Endophytic colonization of Vicia faba and Phaseolus vulgaris (Fabaceae) by fungal pathogens and their effects on the life-history parameters of Liriomyza huidobrensis (Diptera: Agromyzidae). Fungal Ecol. 2013;6:293–301. https://doi.org/10.1016/j.funeco.2013.01.003. Barnett HL, Hunter BB. Illustrated Marga of imperfect fungi. 4th ed. USA: Prentice-Hall, Inc; 1998. p. 90–5. Behie SW, Jones SJ, Bidochka MJ. Plant tissue localization of the endophytic insect pathogenic fungi Metarhizium and Beauveria. Fungal Ecol. 2015;13:112–9. https://doi.org/10.1016/j.funeco.2014.08.001. Beninger & Hosfield. Antioxidant activity of extracts, condensed tannin fractions, and pure flavonoids from Phaseolus vulgaris L. seed coat color genotypes. J Agric Food Chem. 2003;51:7879–83. https://doi.org/10.1021/jf0304324. Fatahuddin, Amin N, Daud ID, Chandra Y. Uji kemampuan Beauveria bassiana Vuillemin (Hyphomycetes: Moniliales) sebagai endofit pada tanaman kubis dan penyaruh terhadap larva Plutella Xylostella L. (Lepidoptera: Yponomeutidae). FITOMRDIKA Jurnal Fitomedika. 2003;5(1):16–9. Greenfield M, Gomez-Jimenez MI, Ortiz V, Vega FE, Kramer M, Paras S. Beauveria bassiana and Metarhizium anisopliae endophytically colonize cassava roots following soil drench inoculation. Biol Control. 2016;95:40–8. Jaber LR, Araj S-E. Interactions among endophytic fungal entomopathogens (Ascomycota: Hypocreales), the green peach aphid Myzus persicae Sulzer (Homoptera: Aphididae), and the aphid endoparasitoid Aphidius colemani Viereck (Hymenoptera: Braconidae). Biol Control. 2018;116(2018):53–61. Kobae Y, Ohtomo R, Oka N, Morimoto S. A simple model system for identifying arbuscular mycorrhizal fungal taxa that actively colonize rice (Oryza sativa L.) roots grown in field soil. Soil Sci Plant Nutr. 2017;63:29–36. Landa BB, Lopez DC, Jimenez FD, Montes BM, Munoz LFJ, Ortiz UA, Quesada ME. In plant a detection and monitorization of endophytic colonization by a Beauveria bassiana strain using a new-developed nested and quantitative PCR-based assay and confocal laser scanning microscopy. J Invert Pathol. 2013;114:128–38. 
https://doi.org/10.1016/j.jip.2013.06.007. Liang-Dong G, Guo-Rui H, Yu W. Seasonal and tissue age influences on endophytic fungi of Pinus tabulaeformis (Pinaceae) in the Dongling Mountains, Beijing. J Integr Biol. 2008;50:997–1003. Lopez DC, Sword GA. The endophytic fungal entomopathogens Beauveria bassiana and Purpureocillium lilacinum enhance the growth of cultivated cotton (Gossypium hirsutum) and negatively affect survival of the cotton bollworm (Helicoverpa zea). J Biol Control. 2015;89:53–60. https://doi.org/10.1016/j.biocontrol.2015.03.010. Mantzoukas S, Chondrogiannis C, dan Grammatikopoulos G. Effects of three endophytic entomopathogens on sweet sorghum and on the larvae of the stalk borer sesamia nonagrioides. J Entomol Soc Entomol Exp Appl 2015;154:78–87. Meyling NV, Tall S. Probiotics for Plants? Growth promotion by the entomopathogenic fungus Beauveria bassiana depends on nutrient availability. Microb Ecol. 2018;76:1002–8. Ownley BH, Griffin MR, Klingeman WE, Gwinn KD, Moulton JK, Pereira RM. Beauveria bassiana: endophytic colonization and plant disease control. J Invert Pathol. 2008;98:267–70. https://doi.org/10.1016/j.jip.2008.01.010. Parsa S, Ortiz V, Vega FE. Establishing fungal entomopathogens as endophytes: towards endophytic biological control. JOVE J Vis Exp. 2013;74:1–5. https://doi.org/10.3791/50360. Petrini O, Fisher PJ. Fungal endophytes in Salicornia perennis. Trans Br Mycol Soc. 1987;87:647–51. Qayyum MA, Wakil W, Arif MJ, Sahi ST, Dunlap CA. Infection of Helicoverpa armigera by endophytic Beauveria bassiana colonizing tomato plants. J Biol Control. 2015;90:200–7. https://doi.org/10.1016/j.biocontrol.2015.04.005. Saikkonem K, Mikola J, Helander M. Endophytic phyllosphere fungi and nutrient cyling in terrestrial ecosystems. J Curr Sci. 2015;109(1):121–5. Santiago-Alvarez C, Maranhao EA, Maranhao E, Quesada-Moraga E. Host plant influence pathogenicity of Beauveria bassiana to Bemisia tabaci and its sporulation on cadavers. Biocontrol. 2006;51:519–32. Sepulveda RB, Carrillo AA. The erosion threshold for a sustainable agriculture in cultures of bean (Phaseolus vulgaris L.) under conventional tillage and no-tillage in Northern Nicaragua. Soil Use Manag. 2016;32:368–80. Tefera T, Vidal S. Effect of inoculation method and plant growth medium on endophytic colonizationof sorghum by the entomopathogenic fungus Beauveria bassiana. Biocontrol. 2009;54:663–9. Vega FE, Posada F, Aime MC, Peterson SW, Rehner SA. Fungal endophyters in green coffee seeds. Mycosystema. 2008;27(1):75–84. Wagner BL, dan Lewis LC. Colonization of Corn, Zea mays, by the Entomopathogenic Fungus Beauveria bassiana. J Appl Environ Microbiol. 2000;66:3468–73. Yoo HS, Ting ASY. In vitro endophyte-host plant interaction study to hypothetically describe endophyte survival and antifungal activities in planta. Acta Biol Szegediensis. 2017;61(1):1–11. AA designed study and write this manuscript; TW, RNSH, and AALE conducted experiments, HT and MA analyzed the data and drafted the manuscript. All authors read and approved the final manuscript. We thank the Post Graduate Program and Department of Pest Management, Faculty of Techno Agriculture, Brawijaya University for providing the facility to conduct this study. Data included in this article. The data are original, no duplication and plagiarism, also have not publish in elsewhere. Grant-in-Aid for Scientific Research (No. 13343) from the Japan Society for the Promotion of Sciences. 
Plant Protection Department, Agriculture Faculty, Universitas Brawijaya, Veteran Street, Malang, 65145, East Java, Indonesia: Aminudin Afandhi, Tita Widjayanti, Ayu Apri Leli Emi & Hagus Tarno. Study Program of Environmental Management and Development, Postgraduate, Universitas Brawijaya, M.T. Haryono Street, Number 169, Malang, 65145, East Java, Indonesia: Mufidah Afiyanti & Rose Novita Sari Handoko. Correspondence to Aminudin Afandhi. Endophytic fungi Common bean Soil-wetting Leaf-spraying
Is it possible to use one sequence of moves to solve the Rubik's cube from any position? Is there a sequence of moves that can be repeated over and over again which can solve any legal position of the Rubik's Cube? If so what is it, and if there's more than one, what's the shortest? If not, prove that it's impossible. The sequence can be of any length but cannot be broken up into multiple pieces to be executed depending on Rubik's cube patterns; it must, when repeated from start to finish, end in a perfectly solved Rubik's cube. strategy rubiks-cube mechanical-puzzles solvability abelenky warspyking Can you just do all possible (<20 move sequence> followed by <undoing previous sequence>) as a possible solution? (might not be shortest but it'll show one exists) You'll solve the cube at some point. – Sp3000 Nov 17 '14 at 0:50 @Sp3000 That's pretty long, the question asks for the shortest if one exists. – warspyking Nov 17 '14 at 0:56 Mew, was that edit some kind of joke -_- – warspyking Nov 17 '14 at 0:56 @warspyking, indeed I hope you liked it :) – Kenshin Nov 17 '14 at 1:07 Another potentially interesting question, if anyone cared to solve it, would be the shortest primitive-recursive way of writing such a sequence, if one could define sub-sequences and then unconditionally invoke each sub-sequence in its entirety simply by including its name, so [spaces added for clarity] "1 tr 2 f 1b1 3 22r12" would define "1" as "tr", "2" as "f tr b tr", and "3" as "ftrbtr ftrbtr r tr ftrbtr". – supercat Nov 17 '14 at 18:38 The only way you can have a solve-all sequence is if you have a sequence of moves that goes through all 43 quintillion configurations of the Rubik's Cube. In order to do this, you need to draw a transition graph between all the states of the Rubik's Cube and find a Hamiltonian cycle through them. This sequence of moves doesn't necessarily have to be 43 quintillion moves long - a simple sequence of 4 moves can produce a cycle of 1,260 configurations as seen in mdc32's answer, and in general a sequence of symbols in the group will produce a cycle of configurations much longer than the sequence itself. However, the sequence will still be very long, simply because 43 quintillion moves is still a lot. Micah provided a link to a page that did construct such a Hamiltonian cycle in a comment. I haven't been able to make head or tail of its notation (or to figure out how to count the number of moves from the descriptions of the cosets), but it looks like the sequence of moves that is required is billions of moves long, which is still definitely outside of the realm of plausibility for memorization. Joe Z. It wouldn't necessarily have to be 43 quintillion moves. Mine is only a four move sequence, technically, yet it cycles through 1260 states as you pointed out. – mdc32 Nov 17 '14 at 2:11 Really? That's really the ONLY way to do this? Wow... Are you sure there isn't like a 4 move sequence that can loop through them all or something? Even if it repeats a position but on a separate move? – warspyking Nov 17 '14 at 2:11 @mdc32 You're right. For example, in $\mathbb Z_2 \times \mathbb Z_2$, the sequence $(1, 0), (0, 1)$ does result in the whole group being traversed, so the length of the solve-all cycle doesn't necessarily have to be equal to the order of the group.
And, as you pointed out, a sequence of 4 moves can generate a 1,260-state cycle. But I don't see there being a solution that's shorter than 10,000 or maybe 10,000,000 moves long. – Joe Z. Nov 17 '14 at 2:26 It exists. – Micah Nov 17 '14 at 4:31 Any sequence loops after at most 1260 repetitions. (src: Wikipedia, Rubik's Cube Group) Therefore, a lower bound on the length of warspyking's sequence is 43 quintillion divided by 1260. – Lopsy Dec 16 '14 at 23:38 EDIT: This only works in a certain set of configurations, as the loop only goes through 1260 states before returning to the original position. My mistake, this is incorrect, but still useful. Great solution found here. Basically, if you rotate the right, back, left, and front faces all clockwise and in order, then it will always solve the cube - eventually. The core part of it is in one simple explanation - doing this cannot enter an inescapable loop. If you move the front face clockwise, then counter-clockwise, nothing happens to the cube. The faces are not moved with respect to each other, and you are in the same position. In order for a loop to happen, two different layouts must lead to the same configuration with the same move. But wait - if you reverse this move, then what position does it go to? The simple answer is this will never happen. Here is a great diagram in the article itself. As you can see, reversing this loop would not work, as there are two possible options. This is a proof that loops will never happen. Moving these four faces is the shortest possible combination of moves that affects each and every piece, save for the center ones which are irrelevant anyway. This means it is the shortest possible solution that you could repeat to solve the Cube. mdc32 Nice! Good find! – warspyking Nov 17 '14 at 1:46 This doesn't work for any configuration - only starting from a solved cube and the 1260 configurations you get while performing that sequence of moves. – Joe Z. Nov 17 '14 at 1:46 @Joe Doesn't this mean it will eventually lead back to a solved cube, assuming the cube can be solved? You can't enter loops, so it should eventually lead back to the solved position. – mdc32 Nov 17 '14 at 1:46 No; as soon as you perform a move that's outside of the fixed sequence, performing that fixed sequence will never get you back to a solved cube; you'll just get another cycle of 1,260 configurations. – Joe Z. Nov 17 '14 at 1:47 Note, in general, that any sequence of moves, if repeated enough times, will eventually get back to the original configuration. This is a property of groups. – Joe Z. Nov 17 '14 at 2:27 Is there a sequence of moves that can be repeated over and over again which can solve any position the Rubik's Cube can be scrambled in? This actually depends on how you define "scrambling" a cube. If you mean taking the cube apart and putting it back together with the pieces in random positions, then there is no such sequence. That's simply because most of the possible configurations of a Rubik's cube are unsolvable. I don't know about a sequence of moves that can unscramble any solvable configuration. Rob Watts Generally, unless specified otherwise, "scrambling" refers to a random sequence of valid moves. – Joe Z. Nov 21 '14 at 1:47
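As an illustrative aside (not taken from any of the answers above): Joe Z.'s construction, a sequence whose successive prefixes visit every configuration, can be worked out in full for a toy puzzle whose states form the six-element group S3 rather than the cube's 43 quintillion states. The short Python sketch below brute-forces such a "devil's sequence"; the two generators and their names are made up for the example.

```python
# Toy "devil's algorithm": find a word in the generators whose successive prefix
# products visit every element of S3 (6 states), i.e. a Hamiltonian path in the
# Cayley graph -- the same idea as the Hamiltonian-cycle answer, shrunk to 6 states.

def mul(p, q):
    """Apply permutation p first, then q (permutations stored as tuples)."""
    return tuple(q[p[i]] for i in range(len(p)))

identity = (0, 1, 2)
gens = {"a": (1, 0, 2),   # swap the first two elements
        "b": (1, 2, 0)}   # cycle all three elements

def search(current, visited, word):
    if len(visited) == 6:            # every state of S3 has been passed through
        return word
    for name, g in gens.items():
        nxt = mul(current, g)
        if nxt not in visited:
            found = search(nxt, visited | {nxt}, word + [name])
            if found is not None:
                return found
    return None

sequence = search(identity, {identity}, [])
print("universal sequence:", "".join(sequence))  # e.g. 'abbab'
```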
A single repeated move, the "sexy move" R U R' U' method (YouTube video), or a 3-move method (Peter Renzland's method). These are all very humanly intuitive, requiring minimal memorization of extra algorithms and relying on human reasoning instead. Zhe Hu I think you misread the question. The question was: "Is there an algorithm you can use from any scrambled configuration and position to solve the 3x3x3 Cube?" The answer is basically Devil's Algorithm. – Kevin Cruijssen May 9 '16 at 8:52
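One more illustrative sketch (again, not from any answer above): Lopsy's comment relies on the fact that any fixed move sequence returns the cube to its starting position after at most 1260 repetitions, 1260 being the largest order of any element of the cube group. The code below is not a cube model; the permutation and its cycle lengths (4, 9, 5 and 7) are made up purely to show how an order of 1260 = lcm(4, 9, 5, 7) arises from repeated composition.

```python
# Repeating any fixed permutation eventually returns to the identity; the number of
# repetitions needed is the lcm of its cycle lengths. Cycle lengths 4, 9, 5 and 7
# give 1260, the maximum order of a move sequence on the Rubik's cube.
from math import lcm

def compose(p, q):
    """Apply q first, then p (permutations stored as tuples)."""
    return tuple(p[q[i]] for i in range(len(p)))

def order(p):
    identity = tuple(range(len(p)))
    result, k = p, 1
    while result != identity:
        result = compose(p, result)
        k += 1
    return k

# Build a permutation of 25 points out of disjoint cycles of length 4, 9, 5 and 7.
cycles = [(0, 1, 2, 3),
          (4, 5, 6, 7, 8, 9, 10, 11, 12),
          (13, 14, 15, 16, 17),
          (18, 19, 20, 21, 22, 23, 24)]
perm = list(range(25))
for cyc in cycles:
    for a, b in zip(cyc, cyc[1:] + cyc[:1]):
        perm[a] = b
perm = tuple(perm)

print(order(perm), lcm(4, 9, 5, 7))  # both print 1260
```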
By Elliott Baxby. Posted January 24, 2023 We asked guest author Elliott Baxby to take a look at John Allen Paulos' latest book, Who's Counting. Mathematics is an increasingly complex subject, and we are often taught it in an abstract manner. John Allen Paulos delves into the hidden mathematics within everyday life, and illustrates how it permeates everything from politics to pop culture – for example, how game show hosts use mathematics for puzzles like the classic Monty Hall problem. The book is a collection of essays from Paulos' ABC News column together with some original new content written for the book, on a huge range of topics from card shuffling and the butterfly effect to error correcting codes and COVID, and even the Bible code. As it's a collection of separate columns, it doesn't always flow fluently – I did find myself losing focus on some of the topics covered, particularly ones that didn't interest me as much. This was mainly down to the content though – the writing style is extremely accessible and at times witty. The book included some interesting puzzles and questions, which were challenging and engaging, and included solutions to each problem – very helpful for a Saturday night maths challenge! I even showed some to my friends, who at times were truly puzzled. I loved the idea of puzzles being a means of sneaking cleverly designed mathematical problems onto TV game shows. It goes to show maths is everywhere! I enjoyed the sections on probability and logic as these are topics I'm particularly interested in. One chapter also explored the constant $e$, where it came from and where else it pops up – a very interesting read. It does deserve more attention, as π seems to be the main mathematical constant you hear about, and I appreciated seeing $e$ being explored in more depth. This book would suit anyone who seeks to see a different side of mathematics – which we aren't often taught in school – and how it manifests itself in politics and the world around us. That said, it would be better for someone with an A-level mathematics background, as some of the topics could be challenging for a less experienced reader. It's mostly enjoyable and has a good depth of knowledge, including questions to test your mind. While I didn't find all of it completely engaging, there are definitely some points made in the book that I'll refer back to in the future! Aperiodical News Roundup – December 2022 Here's a roundup of the maths news we missed in December 2022. Maths News The leap second, referred to in this Independent article as a 'devastating time quirk', is finally being abolished. This has been covered in a bunch of places, mostly being quite rude about the leap second, including a writeup in the New York Times where it's referred to as 'a kludge, a bane, a pain in the little hand' (£), and this Live Science article ('pesky'). A committee at the International Bureau of Weights and Measures apparently nearly unanimously voted in support of Resolution D, meaning there won't be any leap seconds from 2035 until at least 2135. Anti-maths news! Princeton mathematician Rachel Greenfield (pictured left – photo by Dan Komoda/Institute for Advanced Study), working with Fields Medalist Terry Tao, has posted a disproof of the periodic tiling conjecture. A preprint titled 'A counterexample to the periodic tiling conjecture' is now on the ArXiv, and if it's correct, it means the conjecture fails: there is a finite subset of a lattice which tiles that lattice by translations but can never tile it periodically.
There's a nice explanation in the Quanta writeup! Meanwhile there's been a new claimed proof of the 4-colour theorem, which is non-constructive (meaning it doesn't rely on finding a colouring for every possible map, but proves the theorem generally). Some people have been skeptical about the proof, including in this statement from Noam Zeilberger, which links to a Mastodon discussion with John Carlos Baez. (via Neil Calkin on Mastodon) Another claimed proof – this time of the sunflower conjecture. A k-sunflower is a family of k different sets with common pair-wise intersections, and the conjecture gives conditions for when such a thing must exist. ArXiv has posted a framework for improving the accessibility of research papers on arXiv.org – their plan is to offer html as well as PDF versions of papers. (via Deyan Ginev) Bright-trouser-wearer and mathematician Marcus Du Sautoy is offering a free OU online course, entitled 'What we cannot know'. Find out how he manages to break the rules of reality by facilitating you knowing something that it's by definition impossible to know, by signing up online for the 8-week course (which can also be accessed without signing in but then you don't get a badge). Any excuse to include a photo of HF ❤️ As part of their Elevating Mathematics video competition, the National Academies Board on Mathematical Sciences and Analytics (BMSA) invites early career professionals and students who use maths in their work to submit short video elevator speeches describing how their work in mathematics is important and relevant to our everyday lives, with a $1000 Prize for the best video. And finally, in a rare instance of us linking to the Hollywood Reporter, Hannah Fry is to front a science and tech series for Bloomberg, entitled The Future With Hannah Fry. Sounds great! It'll be available on Bloomberg's Quicktake streaming service and will explore breakthroughs in artificial intelligence, crypto (not clear if -graphy or -currency), climate, chemistry and ethics. Particularly mathematical New Years Honours 2023 By Peter Rowlett. Posted December 31, 2022 It's that time of year when we take a look at the UK Government's New Years Honours list for any particularly mathematical entries. Here is the selection for this year – if you spot any more, let us know in the comments and we'll add to the list. Paul Glaister, Professor of Mathematics and Mathematics Education, University of Reading. Appointed CBE for services to education. Dan Abramson, headteacher of King's College London Maths School. Appointed OBE for services to education. Kanti V. Mardia, Senior Research Professor, Leeds University. Appointed OBE for services to Statistical Science. Jeffrey Quaye, National Director of Education and Standards at Aspirations Academies Trust, PhD in Mathematics Education and Chartered Mathematics Teacher. Appointed OBE for services to education. Charlotte Francis, maths teacher and entrepreneur. Appointed Medallist of the Order of the British Empire for services to education. Get the full list from gov.uk. Updated 2/1 to add Dr. Jeffrey Quaye, HT The Mathematical Association on Twitter. How to fold and cut a Christmas star By Christian Lawson-Perfect. Posted December 24, 2022 This week and last I hosted a series of public maths talks featuring disabled presenters. I'll post about how that went later, but for now I just want to share this clip of me filling time spreading Christmas joy. 
This is a party trick that Katie Steckles showed me: you can fold a piece of paper and then make a single cut to produce a five-pointed star. I showed how to do it by following the instructions I'd been told, and then recreated the steps just starting from the insight that when you make the cut, all the edges of the shape need to be on top of each other. Maybe you'll show someone else how to do it during the Christmas holiday? This doesn't only work for stars: there's a theorem that you can make any polygon by folding and a single cut. Erik Demaine has made a really good page about the theorem, with some examples to print out and links to research papers. Katie can cut out any letter of the alphabet on demand, which is impressive to witness! Aperiodical News Roundup – November 2022 By Katie Steckles. Posted December 7, 2022 Here's a roundup of things that happened online in November that we didn't cover here at the time! Maths Research News According to an article on philosophy news site Daily Nous, an international symbolic logic journal printed then shortly retracted two articles, one entitled "The Twin Primes Conjecture is True in the Standard Model of Peano Arithmetic: Applications of Rasiowa–Sikorski Lemma in Arithmetic" and the other "There are Infinitely Many Mersenne Prime Numbers. Applications of Rasiowa–Sikorski Lemma in Arithmetic". After a discussion on MathOverflow, mistakes were found in both papers, and the journal's editor posted: Recently two articles on the applications of the Rasiowa-Sikorski Lemma to arithmetic were published online in Studia Logica without proper examination and beyond reasonable standards of scholarly rigor. As it turned out, they contained an irrrepairable mistake and, consequently, have been retracted from the journal's website. The papers will not appear in print. Studia Logica editor-in-chief Jacek Malinowski (via Catarina Dutilh Novaes on Twitter, whose thread includes some clarifications.) Gliders producing decimal digits According to Conway's Life, a blog which documents developments in research around Conway's Game of Life, on November 9, 2022 Pavel Grankovskiy discovered that 15 gliders can make any pattern in Conway's game of life. Given a particular shape, the gliders can be set up to create it (eventually) beating a recent record of 16 gliders. (via Oscar Cunningham on mathstodon,xyz:) Fields medalist Terry Tao reports some progress on the union closed sets conjecture, an open problem in combinatorics, which has seen rapid developments thanks to (in Tao's words) 'maths at internet speed'. As of 11th November, applications for Young Researchers for the Heidelberg Laureate Forum 2023 are open. If you or someone you know is a researcher in maths or computer science at undergrad or postgrad level, and would like to spend a week next September in a lovely town in Germany meeting the world's most decorated mathematicians and computer scientists, you should consider applying! The latest issue of The Mathematics Enthusiast is a special issue collecting 29 reviews of popular maths books by maths educators, including Matt Parker, Hannah Fry, Eugenia Cheng, Simon Singh and Jordan Ellenberg among many others. If you're looking for new pop maths book recommendations, it's a good place to start! 
Check out these absolute units (Image: NASA/Brian0918/ Wikipedia Commons) It was announced earlier this month that having discovered sufficiently many very big and very small numbers, it's time for some new SI prefixes: ronna-, ronto-, quetta- and quecto- have joined the ranks of things that make numbers bigger and smaller, allowing you to describe itty bitty quantities as small as $10^{-27}$ (ronto) and $10^{-30}$ (quecto), as well as chonky numeros in the region of $10^{27}$ (ronna) and $10^{30}$ (quetta). The earth weighs 6 ronnagrams, and Jupiter is about 2 quettagrams. "'R' and 'Q' were the only letters left in the English alphabet that hadn't been used by other prefixes." Richard Brown, National Physical Laboratory And in computer news, Google Chrome now supports MathML core, a language for describing mathematical notation embeddable in HTML and SVG. (via axel rauschmayer) What Can Mathematicians Do? A series of online talks about maths By Christian Lawson-Perfect. Posted November 9, 2022 I've put together a series of online public maths presentations, to take place in the last couple of weeks of term before Christmas. This came about after a few people on the Talking Maths in Public WhatsApp group complained that we can hardly ever take up requests for a speaker to deliver a fun maths talk due to our disabilities, usually because of the difficulty of travelling to and from an event. I quipped that we should set up a series of talks for non-commutative mathematicians, and then I was told that the department's EDI committee had a load of money sitting unused in its budget. So I decided to use some of it! Aperiodical News Roundup – October 2022 By Katie Steckles. Posted November 4, 2022 AI research company DeepMind said that their AlphaTensor system has discovered a new way to multiply matrices, citing this as the first such advance since the Strassen algorithm was proposed in 1969. AlphaTensor found thousands of algorithms for multiplying matrices of different sizes, but most were not better than the state of the art. Specifically, it found an algorithm for multiplying \(5 \times 5\) matrices in \(\mathbb{Z}_2\) in just 96 operations. There's a paper in Nature describing how the algorithm was found. It's not all over for us humans just yet, though: the DeepMind announcement prompted two algebraists at Linz University, Jakob Moosbauer and Manuel Kauers, to see if they could do even better. After a few days of thought, they published The FBHHRBNRSSSHK-Algorithm for Multiplication in $\mathbb{Z}_2^{5\times5}$ is still not the end of the story on the arXiv, giving an algorithm which does the multiplication in only 95 steps. Meanwhile, in other computers-helping-humans news, the Lean 3 library mathlib has made it to 100,000 theorems, none of which have been left as an exercise for the reader. The IMA and LMS have joined forces to offer a new university access programme called Levelling Up: Maths, which aims to address the difficulties that young people of Black heritage face in STEM. A-level students can join the programme, and will be able to access teaching and mentoring in virtual tutorial groups with Black heritage undergraduates, as well as events with Black guest speakers. The programme is also supported by the RAEng, BCS, IOP RSC, MEI and STEM Learning, as well as the Association for Black & Minority Ethnic Engineers (AFBE-UK) and Black British Professionals in STEM (BBSTEM). What Can Mathematicians Do? 
is a series of free online public maths presentations organised by Newcastle University's School of Mathematics, Statistics and Physics, covering a wide range of topics such as how colours mix, how to make a mint on the stock market, and how to pick your next Netflix binge. Aimed at students in school years 10 to 13, the talks are all given by disabled presenters: to show that anyone can be a mathematician, and mathematicians can do anything. And finally: last weekend, a group of maths communicators (including several Aperiodical editors and regulars) put together a live online 24-hour Mathematical Game Show, featuring mathematical games, games with a mathematical twist, the maths of games and games about maths. The show has raised nearly £5000 for a collection of excellent charities, and the whole show is available to watch back in half-hour or 1-hour segments. Nick Berry of the Data Genetics blog has died. The site ran for over a decade, and was described by Alex Bellos as 'one of best examples of maths outreach on the web […] A brilliant cabinet of curiosities'. Nick passed away peacefully at home on Saturday October 8th after a long battle with cancer. (via Alex Bellos on Twitter) Phil Goldstein, aka magician Max Maven, has died. Max Maven popularised the Gilbreath principle, which underlies a host of astonishing mathematical card tricks. (via Colm Mulcahy on Twitter)
Nanda Kumar Yellapu1,2, Thuc Ly2,3, Mihaela E. Sardiu1,2, Dong Pei1,2, Danny R. Welch2,3,4, Jeffery A. Thompson1,2 & Devin C. Koestler1,2 Triple-negative breast cancer (TNBC) constitutes 10–20% of breast cancers and is challenging to treat due to a lack of effective targeted therapies. Previous studies in TNBC cell lines showed in vitro growth inhibition when JQ1 or GSK2801 were administered alone, and enhanced activity when co-administered. Given their respective mechanisms of actions, we hypothesized the combinatorial effect could be due to the target genes affected. Hence the target genes were characterized for their expression in the TNBC cell lines to prove the combinatorial effect of JQ1 and GSK2801. RNASeq data sets of TNBC cell lines (MDA-MB-231, HCC-1806 and SUM-159) were analyzed to identify the differentially expressed genes in single and combined treatments. The topmost downregulated genes were characterized for their downregulated expression in the TNBC cell lines treated with JQ1 and GSK2801 under different dose concentrations and combinations. The optimal lethal doses were determined by cytotoxicity assays. The inhibitory activity of the drugs was further characterized by molecular modelling studies. Global expression profiling of TNBC cell lines using RNASeq revealed different expression patterns when JQ1 and GSK2801 were co-administered. Functional enrichment analyses identified several metabolic pathways (i.e., systemic lupus erythematosus, PI3K-Akt, TNF, JAK-STAT, IL-17, MAPK, Rap1 and signaling pathways) enriched with upregulated and downregulated genes when combined JQ1 and GSK2801 treatment was administered. RNASeq identified downregulation of PTPRC, MUC19, RNA5-8S5, KCNB1, RMRP, KISS1 and TAGLN (validated by RT-qPCR) and upregulation of GPR146, SCARA5, HIST2H4A, CDRT4, AQP3, MSH5-SAPCD1, SENP3-EIF4A1, CTAGE4 and RNASEK-C17orf49 when cells received both drugs. In addition to differential gene regulation, molecular modelling predicted binding of JQ1 and GSK2801 with PTPRC, MUC19, KCNB1, TAGLN and KISS1 proteins, adding another mechanism by which JQ1 and GSK2801 could elicit changes in metabolism and proliferation. JQ1-GSK2801 synergistically inhibits proliferation and results in selective gene regulation. Besides suggesting that combinatorial use could be useful therapeutics for the treatment of TNBC, the findings provide a glimpse into potential mechanisms of action for this combination therapy approach. Breast cancer is the most common cancer in women worldwide, aside from skin cancer, accounting for over 1.4 million cases annually [1,2,3,4]. From 1975 to 2010, the mortality rate of breast cancer declined by 34%, from 32 per 100,000 per year in 1975 to 21 per 100,000 per year in 2010. However, the incidence of localized breast cancers increased by 30% in the same time frame with no commensurate decline in the number of regional breast cancers [5]. After 2010, mortality rates continued to decline by 1.2%-2.2% per year in women aged 40–79 years, but increased by 2.8% per year in women aged 20–29 years and 0.3% per year in women aged 30–39 years [6, 7]. Existing therapies for breast cancer exhibit several different mechanisms of action, including damaging DNA (Cisplatin, Etoposide and Bleomycin), targeting overexpression of receptors (Tamoxifen, Trastuzimab, Cetuximab, Gefitinib and Imatinib) and inhibiting intracellular signal transduction (Rapamycin). 
While such therapeutic and intervention procedures have reduced mortality, overall survival rates suggest that more aggressive treatments and/or interventions may be needed [8]. Among breast cancers, Triple-Negative Breast Cancer (TNBC) represents a high-risk breast cancer because of its poor response to specific therapies [9]. TNBC accounts for a non-negligible proportion of breast cancers, accounting for 10–20% of all breast cancers [10]. TNBC is typically managed with standard, non-targeted treatment procedures [11, 12]; however, such treatments tend to be associated with high rates of relapse [10, 11]. The reason for the lack of success of current treatments is due to a poor understanding of the molecular mechanisms underlying TNBC. TNBC is devoid of the three major receptors: estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) [13]. These three receptors are the main targets for several breast cancer therapeutics [14] and absence of the aforementioned receptors leads to development of drug resistance [15, 16]. Furthermore, TNBC is often considered more aggressive, more likely to recur, has a worse prognosis, and disproportionally affects younger women compared to other breast cancer subtypes [17]. The standard of care for TNBC is cytotoxic chemotherapy; unfortunately, the development of drug resistance is common [15, 16]. The complex molecular heterogeneity of TNBC [18] means that a range of targeted therapies will likely be needed to continue to make progress in treating this disease. Therefore, it is critical to continue the search for novel therapeutic strategies and identify their targets so that appropriate biomarkers can be used to make personalized treatment decisions and ultimately, improve the prognosis associated with TNBC. While still in the early stages of investigation, small molecule bromodomain inhibitors (BDI) that reduce the pathogenicity of TNBC are a promising approach for treating TNBC. Bromodomains (BD) are 110 amino acid protein domains that recognize acetylated lysine residues on the N-terminal tails of histones [19, 20]. BD represent "readers of lysine acetylation" and are responsible for signal transduction via acetylated lysines. Since lysine acetylation is prerequisite for histone association and chromatin remodeling, BDI represent a promising and emerging agent for treating TNBC. In recent years there is an increasing popularity of BDI for their effective anti-cancer activity and have emerged as a promising class of anti-cancer drugs. CDK4/6 inhibitors and paclitaxel have higher synergies with bromodomain-extra-terminal domain (BET) inhibitors against TNBC [21]. A novel ATAD2 BDI, AM879, which was discovered by Dahong et al., presents potential inhibitory activity in breast cancer cells [22]. Guan-Jun et al. discovered a BDI which showed potential anticancer activity in NF-kappa B-active MDA-MB-231 TNBC cells [23]. Minjin et al. discovered five potential BRD4 based BDI with high binding affinity and their co-crystal structures experimentally demonstrated impressive inhibitory activity and mode of action for the treatment of cancers [24]. TRIM24 BD is a therapeutic target for several cancers including breast cancer for which Qingqing et al. developed potential BDI based on N-benzyl-3,6-dimethylbenzo[d]isoxazol-5-amine derivatives [25]. 
CREB (cyclic-AMP response element binding protein) binding protein (CBP) bromodomain is related to several human malignancies for which a potential BDI, DC-CPin734 was discovered by Xiaoyang et al. [26]. Such findings suggest that BDI have distinct clinical efficacies in cancer treatments and further suggest that combinations of BDI may also exert unique effects. Among BD-containing proteins, members of the BET domain [27], Bromo adjacent to zinc finger 2A (BAZ2A) [28] and Bromo adjacent to zinc finger 2B (BAZ2B) [29] domains play important roles in transcriptional regulation. These BD family members are potential targets in several cancer types [30]. JQ1, a BET domain inhibitor [31], and GSK2801, BAZ2A/B domain inhibitor [32], were recently tested on MDA-MB-231, HCC-1806, and SUM-159 TNBC cell line models [32]. Preclinical models were sensitive to BET-dependent TNBC inhibition [33]; however, clinical trials testing BD-based inhibition by means of BDI such as JQ1 plus GSK2801 are ongoing. The functional distinctions in the mechanisms by which BET and BAZ2A/B domains regulate transcription machinery remain poorly defined. In order to gain insight into this question, Bevill et al. (2019) performed dose-dependent drug synergy screen targeting several BD families [32]. They found little or no growth inhibition as single agents. In contrast, combined treatments resulted in strong growth inhibition of TNBC. While promising, expression of specific gene elements identified via whole transcriptome profiling under different treatment conditions were not validated nor did they examine specific target candidates. While interesting, their study was limited by the use of a single concentration which did not assess cytotoxic effects on each cell line. Since cells undergoing apoptosis or other mechanisms of cell death have inherently different expression profiles, interpretation of drug-induced expression changes is limited. To address these gaps and limitations, we expanded the work of Bevill et al. (2019). Specifically, we sought to shed light on the specific metabolic networks involved in TNBC inhibition, both in single and combined agent treatments which would be the novel strategy of the current work. We further wanted to explore the synergistic action of JQ1 and GSK2801 at different dose concentrations beyond the studies by Bevill et al. to optimize the lethal dose concentrations both as single and combined agents against multiple cancer cell types. Investigating the expression of specific genes associated with the synergic effect would provide more probable molecular mechanism associated with synergistic action and the enhanced cytotoxic effect. Further, adding molecular modelling studies will help to elucidate the intermolecular interactions of the drugs and provide strong evidence of their inhibitory actions. We adopted Next-Generation Sequencing technology (NGS)-based bioinformatics approaches to identify differentially expressed genes (DEGs) across different treatment conditions using publicly available RNASeq data collected on TNBC cell lines treated with JQ1 + GSK2801, and subsequently validated differentially expressed genes through RT-qPCR. Our analyses reveal common upregulated and downregulated genes across all treatments that could be treated as therapeutic targets. Molecular modelling studies were employed to predict the drug binding interactions with the identified target proteins. Our investigations shed light on the synergy between JQ1 and GSK2801 in TNBC cell lines. 
Furthermore, our studies support the distinct functional mechanism of JQ1 and GSK2801 through BD inhibition, along with revealing unique adaptive mechanisms of BDI that are amenable to co-inhibition of BDs to induce TNBC inhibition across the three cell line models. A general overview of the analytical strategy (e.g., tools and methods) is given in Fig. 1-A. Further description of the bioinformatics and statistical methods used in our analyses are explained as follows. A. Schematic diagram of methods and tools. The steps/tools used to identify differentially expressed genes, along with their subsequent validation through computational and in vitro methods. B. Differential expression analysis of RNASeq data. Bar plots depicting the number of upregulated/downregulated DEGs in treated samples (JQ1, GSK2801 and JQ1&GSK2801) compared to control across different TNBC cell lines RNASeq expression data collected on MDA-MB-231, HCC-1806, and SUM-159 TNBC cell lines treated with JQ1 and GSK2801 alone and in combination for 72 h [34] were retrieved from Gene Expression Omnibus (GEO) database (GEO series ID: GSE116907). RNASeq data were prepared on Illumina NextSeq 500 using high-throughput sequencing technology and included 12 short read archive (SRA) experimental datasets. Characteristics of the experimental design are represented in supplementary material (S1 File-Table S1). Data processing and generation of gene counts The quality of the sequencing reads was assessed using FastQC (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/). Low quality reads with poor base sequence quality and adaptors were removed using the Trimmomatic tool [35]. After removing poor quality reads and trimming adaptors, the remaining reads were subjected to further analysis. All the processed datasets were quantified using the RSEM tool [36]. The human hg38 reference genome was used and the genome indices were constructed using Bowtie2 [37]. The gene count result files generated from RSEM analysis were used for downstream differential expression analyses. Identification of differentially expressed genes (DEGs) We aimed to derive the drug induced response in each TNBC cell line model and hence, treatment wise differential expression analysis was carried out. For each cell line there was one control and three treated samples (JQ1, GSK2801 and JQ1 + GSK2801) making a total of 12 biological replicates across three TNBC cell lines. Differentially expressed genes were identified comparing control vs JQ1, control vs GSK2801 and control vs JQ1 + GSK2801. The quantified gene counts matrix was used to derive the differential expression patterns in the R-environment using the edgeR package [38] in Bioconductor. Genes with zero counts in all the samples were excluded prior to analysis. The data were normalized using the trimmed mean of M-values (TMM) method both within and between samples. Inter-sample relationships were examined by producing a plot based on multidimensional scaling. The primary parameters such as log2 fold change (logFC), log counts per million (logCPM), and p-value were derived for each gene and for each of the four comparisons of interest. Genes with log2 fold change > 1 or < -1 and p-values < 0.05 were considered as significantly differentially expressed genes (DEGs) and formed the basis for downstream analyses. 
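For illustration, the edgeR workflow just described could look roughly like the following minimal R sketch. The object names (counts, group) and the treatment labels are placeholders standing in for the RSEM gene-count matrix and the sample annotations; this is a sketch of the described steps under those assumptions, not code from the study.

```r
library(edgeR)

# counts: gene x sample matrix of expected counts from RSEM (illustrative object name)
# group:  factor labelling each column, e.g. "control", "JQ1", "GSK2801", "JQ1_GSK2801"
y <- DGEList(counts = counts, group = group)
y <- y[rowSums(y$counts) > 0, , keep.lib.sizes = FALSE]   # exclude genes with zero counts in all samples
y <- calcNormFactors(y, method = "TMM")                    # TMM normalisation
plotMDS(y)                                                 # inspect inter-sample relationships

y <- estimateCommonDisp(y)                                 # common dispersion
y <- estimateTagwiseDisp(y)                                # tagwise dispersion

et  <- exactTest(y, pair = c("control", "JQ1_GSK2801"))    # repeat for each treatment vs control
res <- topTags(et, n = Inf)$table                          # logFC, logCPM and p-value per gene

degs <- res[abs(res$logFC) > 1 & res$PValue < 0.05, ]      # DEG thresholds used in the text
```

The same comparison would be repeated for each of the JQ1-only, GSK2801-only and combined-treatment contrasts within each cell line.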
Gene functional enrichment analysis
Gene ontology functional annotations and pathways enriched with statistically significant DEGs were analyzed by ClueGO [39] using the Cytoscape software environment [40]. The ClueGO V.2.5.7 plugin was installed on Cytoscape V.3.7.2 and the genes were queried against updated versions of the Kyoto Encyclopedia of Genes and Genomes (KEGG) [41]. Biological terms were derived for the large clusters of DEGs using functional enrichment analysis and grouped into networks/metabolic pathways separately for upregulated and downregulated genes. A two-sided (enrichment/depletion) test based on the hypergeometric distribution was used for the gene ontology analysis. The Bonferroni step-down method was used for correction, with a threshold p-value of 0.05.
Molecular modelling studies
Homology modelling and protein processing
The protein sequences of the five downregulated proteins PTPRC (NP_002829.3), MUC19 (NP_775871.2), KCNB1 (XP_006723847.1), TAGLN (NP_001001522.1) and KISS1 (NP_002247.3) were retrieved from the National Center for Biotechnology Information (NCBI). Three-dimensional models were constructed through automated homology modelling using the ModWeb server, where the structures were built based on template homology (https://modbase.compbio.ucsf.edu/modweb/). The stereochemical quality of the homology models was validated by Ramachandran plots [42]. The validated models were further processed individually using the pre-process module of Schrödinger Maestro version 12.6 [43]. Bond orders were assigned in the protein and hydrogens were added. Disulfide linkages were created at the possible locations. Missing side chains and missing loops were added. Water molecules beyond 5 Å were removed. All the proteins were prepared by restrained minimization using the optimized potentials for liquid simulations-3e (OPLS3e) force field [44].
Ligand preparation
The three-dimensional co-ordinates of JQ1 (49871818) and GSK2801 (73010930) were retrieved from the NCBI-PubChem database and optimized in the Maestro graphical environment. The structures were processed independently using the LigPrep module under the OPLS3e force field. The ligands were subjected to ionization and possible states were generated at pH 7 ± 2 using Epik. Desalting was performed and tautomers were generated. The ligands were examined for their hydrogen bonding efficiency, hydrophobic surface area and electron density clouds around the molecules, to predict their efficiency of interaction with the target proteins at the binding sites. Individual docking runs were carried out for each protein with the JQ1 and GSK2801 ligands. The optimized conformations of the proteins were loaded into the Maestro workspace and the protein receptor grids were generated using the Glide tool. Grid centers were determined from the active residues of the receptor proteins. Glide's ligand docking module was used to dock JQ1 and GSK2801 into the specified binding grids of the PTPRC, MUC19, KCNB1, TAGLN, and KISS1 proteins [45]. The standard precision (SP) docking mode was used for flexible ligand sampling without applying any constraints. The binding efficiency and ligand affinities were assessed as docking scores and the docked ligand poses were analyzed and visualized in the Maestro interface. The docking score was calculated as follows:
$$Docking\,score = a \times vdW + b \times Coul + Hbond + Metal + Lipo + BuryP + RotB + Site$$
where:
a, b = coefficient constants for vdW and Coul, respectively
vdW = van der Waals energy
Coul = Coulomb energy
Hbond = hydrogen bonding with the receptor
Metal = metal-binding term
Lipo = constant term for lipophilic contacts
BuryP = buried polar group penalty
RotB = rotatable bond penalty
Site = active-site polar interactions
In vitro studies
Cells and cell cultures
MDA-MB-231, HCC-1806 and SUM-159 are widely used TNBC cell line models [46]. MDA-MB-231 cells were maintained in a 1:1 mixture of Dulbecco's MEM and Ham's F12 media containing 5% FBS and 1X non-essential amino acids in the absence of antibiotics. HCC-1806 cells were maintained in RPMI-1640 media with 10% FBS and PenStrep. SUM-159 cells were maintained in Ham's F12 media with 5% FBS, insulin (5 µg/ml), hydrocortisone (1 µg/ml), gentamycin (5 µg/ml), fungizone and PenStrep. The cultures were maintained at 37 °C in a humidified CO2-controlled (5%) incubator. Prior to performing in vitro investigations, all cell cultures were tested for Mycoplasma spp. by PCR assay and found to be uninfected.
Determination of cytotoxic effects of JQ1 and GSK2801 by MTT assay
Cells were seeded in 96-well plates (MDA-MB-231 cells, 5000 cells/well; HCC-1806 and SUM-159 cells, 8000 cells/well) in triplicate with 200 µl of media per well and allowed to adhere for 24 h [47]. JQ1 (SML1524) and GSK2801 (SML0768) were purchased from Sigma-Aldrich and dissolved in DMSO to make stock solutions (JQ1, 20 mg/mL; GSK2801, 10 mg/mL); further dilutions were made with culture media immediately prior to each experiment. JQ1 and GSK2801 stock solutions were stored at 4 °C and −20 °C, respectively, for no longer than 30 days. The spent medium was aspirated and fresh medium containing drug concentrations ranging from 0.125 µM to 20 µM was used to treat the cells, along with a DMSO control. The combined treatments were carried out with JQ1 (25 nM, 50 nM, 125 nM, 250 nM, 500 nM, 1000 nM) combined with 10 µM or 20 µM of GSK2801 to assess possible synergistic effects. After 72 h, spent media was aspirated; MTT was added and incubated for 3 h at 37 °C. MTT was removed, DMSO was added, and the plate was kept on a shaker for 30 min at 60 RPM to dissolve crystals completely. The plate was read at 540 nm and the OD recorded. Percent viability was calculated as \((OD\,treated \div OD\,control) \times 100\) and the viabilities were plotted [48]. The combination index (CI) for each treatment combination was determined by the Chou-Talalay method [49] using CompuSyn software [50] (an illustrative sketch of these calculations is given after this section).
RT-qPCR expression studies
RT-qPCR was used to validate the expression of the PTPRC, MUC19, KCNB1, TAGLN and KISS1 genes following treatment. The treatment plan used to evaluate the expression of these genes is summarized in Table 1. The TNBC cells were seeded in six-well plates (MDA-MB-231, 150,000 cells/well; HCC-1806 or SUM-159, 240,000 cells/well) and allowed to adhere for 24 h. Spent media was aspirated and fresh media containing drugs was added and incubated for 72 h.
Table 1 Experimental setup for the gene expression studies
Total RNA was isolated using the Quick-RNA Miniprep Kit (Zymo Research, USA-R1054) according to the manufacturer's instructions. Purity and concentrations of the isolated RNA were determined by a NanoDrop 2000 spectrophotometer (Thermo Scientific, USA). cDNA was synthesized by reverse transcription using the iScript cDNA Synthesis Kit (Bio-Rad #1708891). cDNA synthesis was performed with 1 µg of total RNA, 5 µl of 5X iScript reaction mix and 1 µl of iScript reverse transcriptase in a final volume of 20 µl. 
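As a concrete illustration of the percent-viability formula and the Chou-Talalay combination index described in the MTT assay section above, the following minimal R sketch fits the median-effect equation to single-agent dose-response data and computes a CI. The function names and any example values are illustrative assumptions; the study itself used CompuSyn for these calculations.

```r
# Percent viability from MTT optical densities (OD), relative to the DMSO control
percent_viability <- function(od_treated, od_control) {
  (od_treated / od_control) * 100
}

# Median-effect parameters (Dm, m) from a single-agent dose-response:
# log(fa / fu) = m * log(D) - m * log(Dm), where fa = fraction affected, fu = 1 - fa
median_effect_fit <- function(dose, fa) {
  fu  <- 1 - fa
  fit <- lm(log(fa / fu) ~ log(dose))
  m   <- unname(coef(fit)[2])
  Dm  <- exp(-unname(coef(fit)[1]) / m)
  list(m = m, Dm = Dm)
}

# Dose of a single agent required to reach effect level fa
dose_for_effect <- function(fit, fa) fit$Dm * (fa / (1 - fa))^(1 / fit$m)

# Chou-Talalay combination index for doses d1, d2 given in combination:
# CI < 1 indicates synergism, CI = 1 an additive effect, CI > 1 antagonism
combination_index <- function(d1, d2, fit1, fit2, fa) {
  d1 / dose_for_effect(fit1, fa) + d2 / dose_for_effect(fit2, fa)
}
```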
Quantitative RT-PCR was performed using the SYBR™ Select master mix (Applied Biosystems-4472908). The reaction was set up with 5 µl of 2X SYBR master mix, 300 nM of each primer and 100 ng of cDNA template in a total reaction volume of 10 µl. Primers were designed using the Primer3 [51] online software and are shown in the supplementary material (S1 File-Table S2). The 18S rRNA gene was used as a standard reference to normalize expression levels. Gene expression levels were calculated by the comparative ΔΔCt method. Ct values were obtained for the standard reference gene (18S rRNA) and the five target genes (PTPRC, MUC19, KCNB1, TAGLN and KISS1). Delta Ct (ΔCt) values of the samples were calculated by subtracting the reference gene Ct values from the Ct values of the samples. ΔCt values of the DMSO control samples were averaged to obtain an average control ΔCt value. This average control ΔCt value was then subtracted from the ΔCt values of all the samples, including the DMSO control, to obtain the ΔΔCt values. The relative gene expression fold change was calculated by the 2^−ΔΔCt formula for the DMSO control and treated samples. The values of each sample group were then averaged and presented as relative fold gene expression [52]. All results are displayed as mean ± SD, calculated from three independent experiments. Statistical significance between experimental conditions was determined using two-tailed, two-sample Student's t-tests and a non-parametric Wilcoxon rank sum test. A p-value < 0.05 was considered statistically significant.
Differential expression analysis using publicly available RNASeq data sets
To explore the molecular mechanisms by which JQ1 and GSK2801 exert anti-proliferative activity in TNBC cell lines, we first performed a differential expression analysis using the RNASeq data obtained from the Bevill et al. (2019) study [32]. Expression profiles were determined via RNASeq analysis of MDA-MB-231, HCC-1806 and SUM-159 cell lines treated with JQ1 and GSK2801 individually, and in combination. The gene count matrix generated from the RSEM analysis of the RNASeq data is provided as supplementary material (S2 File). After normalization, data were visualized using boxplots (S1 File-Fig. S1-A & B), heatmaps (S1 File-Fig. S1-C & D) and multidimensional scaling (MDS) plots (S1 File-Fig. S2) to understand the sources of variation across samples. Common and tagwise dispersions were estimated, exact tests were performed and differences in gene expression between treated and control cells were determined in each cell line model and across all treatment conditions. The overall differential expression in each cell line model was determined in terms of the number of DEGs and is summarized in Fig. 1-B. A list of DEGs for each treatment/treatment combination and within each TNBC cell line is provided as supplementary material S3 File. The corresponding smear and volcano plots representing the DEGs are also provided in the supplementary material (S1 File-Tables S3-S5). MDA-MB-231: A total of 730 DEGs were observed in cells treated with JQ1, among which 442 were upregulated and 288 were downregulated. In contrast, 40 DEGs were observed in cells treated with GSK2801, among which 16 were upregulated and 24 were downregulated. A total of 1915 DEGs were observed in cells treated with both JQ1 and GSK2801 (combined treatment), of which 981 were upregulated and 934 were downregulated. HCC-1806: A total of 1197 DEGs were observed in cells treated with JQ1, of which 547 were upregulated and 650 were downregulated. GSK2801-treated cells showed 33 DEGs, among which 17 were upregulated and 16 were downregulated in treated versus control cells. Treatment with both JQ1 and GSK2801 resulted in 1668 DEGs, among which 807 were upregulated and 861 were downregulated. SUM-159: A total of 695 DEGs were identified when comparing JQ1-treated and control SUM-159 cells, among which 172 were upregulated and 523 were downregulated in JQ1-treated cells. GSK2801-treated SUM-159 cells showed 65 DEGs, among which 41 were upregulated and 24 were downregulated in treated versus control cells. Treatment with both JQ1 and GSK2801 resulted in 1248 DEGs, among which 474 were upregulated and 774 were downregulated. Across all three TNBC cell line models, we observed a greater number of DEGs in the combined treatment as compared to the single-agent treatments. To identify the key elements responsible for the anti-cancer activity of JQ1 and GSK2801, the DEGs were further analyzed by constructing network maps. DEGs that were commonly and uniquely upregulated and downregulated by the three treatment conditions were visualized using these network maps (supplementary material S4 File). MDA-MB-231 cell line models showed upregulation of the MSH5-SAPCD1 and SENP3-EIF4A1 genes and downregulation of RMRP, KISS1 and TAGLN across all three treatment conditions. HCC-1806 cell lines showed the CTAGE4 and RNASEK-C17orf49 genes as commonly upregulated; however, no genes were observed to be commonly downregulated across all three treatment conditions. SUM-159 cell lines showed upregulation of GPR146, SCARA5, HIST2H4A, CDRT4 and AQP3 and downregulation of PTPRC, MUC19, RNA5-8S5 and KCNB1 across all three treatment conditions. Prioritizing genes that were consistently downregulated, our in vitro investigations focused on PTPRC, MUC19, KCNB1, TAGLN and KISS1 and the proteins that they encode (RNA5-8S5 and RMRP are not protein-coding genes). The functional association of the identified DEGs following the different treatments was assessed by enrichment analysis and the associated metabolic pathways were determined. The analysis of DEGs revealed several interesting findings regarding the potential synergistic action of JQ1 and GSK2801 across the three different cancer cell lines. MDA-MB-231: A total of 22 metabolic pathways were significantly enriched with genes upregulated following treatment with JQ1 (e.g., the PI3K-Akt signaling pathway, transcriptional misregulation in cancer and ECM-receptor interaction). In contrast, no metabolic pathways were significantly enriched with genes upregulated following treatment with GSK2801. For the combined treatment, we identified 156 pathways that were significantly enriched with upregulated genes (e.g., MAPK, Ras, Rap1 and calcium signaling pathways), and included among those 156 pathways were the 22 pathways identified from treatment with JQ1 alone. Similarly, 32 pathways (e.g., cytokine-cytokine receptor interactions and the TNF, JAK-STAT and IL-17 signaling pathways) were significantly enriched with downregulated genes based on treatment with JQ1 alone, while there were no pathways significantly enriched with downregulated genes based on treatment with GSK2801 alone. 
Finally, for the combined treatment we identified 138 metabolic pathways (e.g., cellular senescence, microRNA in cancer, NOD-like receptor signaling and AGE-RAGE signaling pathways) that were significantly enriched with the downregulated genes based on the combined treatment and which contained all 32 pathways identified when JQ1 was given alone (Fig. 2). Functional enrichment analysis of DEGs identified in MDA-MB-231 cells. Metabolic pathways enriched with (A) upregulated (B) downregulated genes in JQ1 treatment. Metabolic pathways enriched with (C) upregulated (D) downregulated genes from JQ1 + GSK2801 combined treatment HCC-1806: A total of 68 pathways were significantly enriched with genes upregulated following treatment with JQ1 (e.g., IL-17, sphingolipid signaling and circadian entrainment pathways); GSK2801 none. For the combined treatment, we identified 150 pathways that were significantly enriched with upregulated genes (e.g., AGE-RAGE, mTOR, B-cell receptor, VEGF signaling pathways), and included among those 150 pathways were the 66 pathways identified from treatment with JQ1 alone. Similarly, 108 pathways (e.g., TNF, MAPK, PI3-Akt and calcium signaling pathways) were significantly enriched with downregulated genes based on treatment with JQ1 alone, while there were no pathways significantly enriched with downregulated genes based on treatment with GSK2801 alone. Finally, for the combined treatment we identified 133 pathways (e.g., NF-kappa B, cAMP, Rap1 and Ras signaling pathways) that were significantly enriched with the downregulated genes based on the combined treatment and which contained 98 of the 108 pathways identified when JQ1 was given alone (Fig. 3). Functional enrichment analysis of DEGs identified in HCC-1806 cells. Metabolic pathways enriched with (A) upregulated (B) downregulated genes from JQ1 treatment. Metabolic pathways enriched with (C) upregulated (D) downregulated genes from JQ1 + GSK2801 combined treatment SUM-159: A total of 5 pathways were significantly enriched with genes found to be upregulated based on treatment with JQ1 in SUM-159 cells (e.g., systemic lupus erythematosus, Transcription misregulation in cancer and mineral absorption). For GSK2801, we identified a single pathway (PPAR signaling pathway) that was significantly enriched with upregulated DEGs when GSK2801 was administered alone. For the combined treatment, we identified 41 pathways that were significantly enriched with upregulated genes (e.g., TGF-beta signaling, aldosterone synthesis and regulation of lipolysis in adipocytes), which included the five pathways identified when JQ1 was administered alone, along with the PPAR signaling pathway that was identified when GSK2801 was administered alone. A total of 91 pathways (e.g., AGE-RAGE, IL-7, TGF-beta signaling pathways) were significantly enriched with downregulated genes based on treatment with JQ1 alone, while there were no pathways significantly enriched with downregulated genes based on treatment with GSK2801 alone. Finally, for the combined treatment we identified 131 pathways (e.g., ECM-receptor interaction, PPAR, TNF, cGMP-PKG signaling pathways) that were significantly enriched with the downregulated genes based on the combined treatment and which contained 85 of the 91 pathways identified when JQ1 was given alone (Fig. 4). Functional enrichment analysis of DEGs from identified from SUM-159 cells. Metabolic pathways enriched with (A) upregulated (B) downregulated genes from JQ1 treatment. 
Metabolic pathways enriched with (C) upregulated genes from GSK2801 treatment. (D) upregulated and (E) downregulated genes from JQ1 + GSK2801 combined treatment The nodes representing more than four genes are shown in each metabolic network map and the complete list of metabolic pathways enriched with upregulated and downregulated genes identified for each treatment condition and cell line are provided as supplementary material S5 File. To identify shared and unique numbers of pathways across different treatments and cell lines, pathways were represented as Venn diagrams (Fig. S3). Based on combined treatment, we found 31 pathways enriched with upregulated genes and 66 pathways enriched with downregulated genes that were common across all three cell line models (Table 2). Table 2 Metabolic pathway enrichment We next sought to explore the impact of JQ1 and GSK2801 on downregulated DEGs at the protein functional level. Our molecular modelling studies supported the inhibitory effect of JQ1 and GSK2801 drugs against TNBC though specific targets. Protein models of PTPRC, MUC19, KCNB1, KISS1 and TAGLN were constructed with validated stereochemical quality explaining their suitability for docking studies (Fig. 5-A). Ramachandran plots are provided in the supplementary material S1 File-Table S6. The optimized conformations of the ligands showed the possibility of hydrogen bonding due to possession of H-bond acceptor atoms. JQ1 and GSK2801 have a hydrophobic surface area of 709.62 Å2 and 613.66 Å2, respectively, which helps form strong hydrophobic interactions favored by ring structures. Both structures have electronic density clouds on their surface that favor electrostatic interactions with the receptor molecules (Fig. S4). A. Homology modelling of PTPRC, MUC19, KCNB1, TAGLN and KISS1 protein. The optimized conformations of five downregulated protein structures represented as cartoon models. The reactive binding domains were constructed for MUC19 and KISS1 proteins due to the lack of template availability. B. Molecular docking of JQ1 and GSK2801 against PTPRC, MUC19, KCNB1, TAGLN and KISS1 proteins. Binding mode orientation of JQ1 (Pink) and GSK2801 (Cyan) with downregulated proteins in TNBC. The ligands are shown in the binding site cavities of target proteins The molecular docking of JQ1 and GSK2801 provided information regarding the efficiency of their binding and inhibitory action on these target proteins. Both ligands showed the best docking scores, which explains the strong binding affinity with receptor proteins (Table 3). The highest negative score indicates the highest affinity of the ligand. The highest affinity of JQ1 and GSK2801 was found with KISS1, with docking scores of -2.60 and -3.01 kcal/mol, respectively. All the docking scores are between -0.95 to -3.01, which substantiates the stable binding of ligands. The ligands were observed to fit within the binding pockets of the receptor proteins making hydrogen bonds and non-bonded interactions. The ring structures of both ligands favored formation of hydrophobic interactions, which makes the complexes more stable (Fig. 5-B). Table 3 Molecular docking of JQ1 and GSK2801 against downregulated proteins Cytotoxicity assays The independent and additive/synergistic effects of JQ1 and GSK2801 on three different cell lines were determined by MTT assay after 72 h of treatment using varying doses of JQ1 and GSK2801 as explained in the methods section. 
Dose–response curves of independent and combined treatment were performed, and the viability values were plotted. Administering JQ1 alone showed efficient anti-proliferative activity in all three TNBC cell lines, while GSK2801 alone was ineffective at inducing cytotoxicity at the doses utilized. JQ1 started showing an effect at a concentration of 125 nM and GSK2801 showed a mild effect at a concentration of 20 µM on MDA-MB-231 and HCC-1806 cell lines, but no effect on SUM-159 cells (Fig. 6-A). To examine the potential synergistic effect of JQ1 + GSK2801 combined treatment, the assays were repeated with JQ1 doses of 25 nM, 50 nM, 125 nM, 250 nM, 500 nM and 1000 nM combined with 10 µM or 20 µM of GSK2801. Administering the combined treatment at the previously mentioned doses greatly enhanced the cytotoxic effect within each of the three cell lines. When combined with 10 µM of GSK2801, JQ1 enhanced the cytotoxic effect far more than treatments of JQ1 alone. Using 20 µM of GSK2801 was even more effective at inducing cytotoxicity as compared to 10 µM of GSK2801 when administered in combination with JQ1 (Fig. 6-B). After treatment, microscopic images were captured by EVOS-fl digital inverted microscope under 10X objective and quantified as percent of well of covered using ImageJ software [53]. Variation in cell densities were observed in the treated samples. The cell densities were observed to decrease as the concentration of the drugs increased. Further, the cell densities were reduced to lesser extent in the combined treatment than single agent treatments, which indicates an increase in cancer cell death (Fig. 7). These data show the cytotoxic effects of JQ1 are enhanced by GSK2801, suggesting a potential synergy between these two agents. This synergistic action was further investigated by deriving the CI values for the combined treatment doses (Table 4). The CI values of < 1, = 1, and > 1 indicate synergism, additive effect, and antagonism of drugs, respectively. As shown in Table 4, we found that all the combined doses of JQ1 and GSK2801 showed strong synergistic effect on MDA-MB-231 cells with CI values less than 1. The JQ1/GSK2801 doses at 25 nM/10 µM, 25 nM/20 µM, 50 nM/20 µM showed antagonistic effect on HCC-1806 with CI values > 1, and there appeared to be an additive effect at a dose of 50 nM/10 µM with a CI value of 1. In case of SUM-159 cell lines, most of the lower doses of JQ1 with GSK2801 showed antagonistic effects, whereas the higher doses showed synergistic effect. From the CI indices, it can be observed that JQ1 at lower doses acts antagonistically with GSK2801; however as the concentration of JQ1 increases, so too does its synergistic effect with GSK2801. From the CI index analyses, it appears that JQ1 concentration from 250 nM onwards with 10 and 20 µM of GSK2801 exhibits the strongest synergistic effects, irrespective of the cancer cell line. A. Cytotoxicity assays of JQ1 and GSK2801 against three TNBC cell lines. A. Viability curves explaining the cytotoxic effect of JQ1 and GSK2801 when treated alone on MDA-MB-231, HCC-1806 and SUM-159 TNBC cell lines. B. Viability curves explaining the cytotoxic effect of combined treatments of JQ1 and GSK2801 on MDA-MB-231, HCC-1806 and SUM-159 cell lines demonstrating the synergistic effect. Values are given as mean of three independent experiments ± SD. B. Quantification of the cell density. Bar plots showing the cell densities measured after the treatment. 
There is a progressive decrease in cell density with increasing drug concentration. The cell densities are lower in the combined treatment when compared to the single-agent treatment. Microscopic images demonstrating the decrease in cell density. With increasing drug concentration (left to right) there is a progressive decrease in the density of the cells, indicating a steady death of cancer cells. The cell density is lower in the combined treatments when compared to the single-agent treatments. Table 4 Combination index calculation using the Chou and Talalay method in three TNBC cell lines. Since anti-cancer activity is generally achieved by the downregulation of cancer-associated genes [54, 55], downregulation of selected genes was assessed by performing RT-qPCR studies. Our analyses of the RNASeq data collected by Bevill et al. (2019) revealed that MDA-MB-231 cells showed downregulation of the TAGLN and KISS1 genes (treated with JQ1-100 nM, GSK2801-10 µM, JQ1/GSK2801-100 nM/10 µM) and SUM-159 cells showed downregulation of the PTPRC, MUC19 and KCNB1 genes (treated with JQ1-300 nM, GSK2801-10 µM, JQ1/GSK2801-300 nM/10 µM). These results were consistent with our RT-qPCR investigations. MDA-MB-231: We observed downregulation of the TAGLN and KISS1 genes in these cells across all the doses of the single and combined treatments we examined (Fig. 8-A). Validation of gene expression in three TNBC cell lines. A. Effect of JQ1 and GSK2801 on the TAGLN and KISS1 genes in MDA-MB-231 cells, showing downregulation of expression in both the single and combined treatments. B. Effect of JQ1 and GSK2801 on the MUC19 and KCNB1 genes in SUM-159 cells, showing downregulation of expression. MUC19 was found to be upregulated at a higher concentration of JQ1 (250 nM). C. Effect of JQ1 and GSK2801 on the MUC19, KCNB1 and KISS1 genes in HCC-1806 cell lines. MUC19 was upregulated by GSK2801 (10 µM). Increasing the JQ1 concentration in the combined treatments increased the expression of KCNB1. KISS1 was observed to be downregulated in all the single and combined treatments. Values are given as mean of three independent experiments ± SD. Statistical significance was defined at *p < 0.05 compared to the DMSO control. DMSO: control, J: JQ1, G: GSK2801. JQ1 doses are in nM and GSK2801 in µM. SUM-159: We observed downregulation of the MUC19 and KCNB1 genes; however, we found no expression of the PTPRC gene in these cells. MUC19 was upregulated at the lower (50 nM) and higher (250 nM) doses of JQ1 and downregulated at 125 nM. There was a negligible change in the expression of MUC19 following GSK2801 treatment. The higher combined dose of JQ1/GSK2801 at 125 nM/10 µM showed downregulation, and the expression was undetermined with 250 nM/10 µM of JQ1/GSK2801. KCNB1 was observed to be downregulated across all the treatment conditions, with the lower combined dose of 50 nM/10 µM of JQ1/GSK2801 demonstrating the most significant downregulation of this gene (Fig. 8-B). HCC-1806: We observed no expression of the PTPRC and TAGLN genes in these cells. MUC19 was downregulated by JQ1 and the combined treatments but was observed to be upregulated when GSK2801 was given alone. KCNB1 was downregulated by JQ1, GSK2801 and lower doses of the combined treatment. As the JQ1 concentration increased in the combined treatment, so did the expression of KCNB1. The results are promising for KISS1, where all of the treatments resulted in its downregulation; the combined treatment effect was most prominent with 125 nM/10 µM of JQ1/GSK2801 (Fig. 8-C). 
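To make the comparative ΔΔCt computation behind these RT-qPCR results explicit, here is a minimal R sketch assuming triplicate Ct values for a target gene and the 18S rRNA reference; the numbers are invented for illustration only and are not study data.

```r
# Comparative 2^-ΔΔCt relative expression (illustrative Ct values, not study data)
relative_expression <- function(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl) {
  dct      <- ct_target - ct_ref                # ΔCt of treated replicates
  dct_ctrl <- ct_target_ctrl - ct_ref_ctrl      # ΔCt of DMSO control replicates
  ddct     <- dct - mean(dct_ctrl)              # ΔΔCt relative to the mean control ΔCt
  2^(-ddct)                                     # fold change relative to control
}

# Example with three treated and three control replicates (hypothetical values)
relative_expression(ct_target      = c(26.1, 26.4, 26.2),
                    ct_ref         = c(14.0, 14.1, 13.9),
                    ct_target_ctrl = c(24.8, 24.9, 25.0),
                    ct_ref_ctrl    = c(14.0, 14.1, 13.9))
```

Averaging the returned fold changes within each sample group gives the relative fold gene expression values plotted in Fig. 8.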
These observations suggest that lower doses of JQ1 with 10 µM GSK2801 is the best synergistic dose to treat the HCC-1806 cell lines among the doses we tested. Collectively, these results indicate BD treatments altered gene expression consistent with our RNASeq analysis. BET and BAZ2A/B domain inhibitors are currently being evaluated at different stages of clinical trials for treating a variety of different cancer types (https://clinicaltrials.gov/; Identifiers: NCT05111561, NCT05053971, NCT05301972, NCT04910152, NCT04471974). Despite modest toxicities, the administration of different BET and BAZ2A/B inhibitors in patients suggest that their co-administration may be more effective than their individual administration [56]. Previous reports on the synergistic combination of BET and BAZ2A/B inhibitors in TNBC preclinical models suggest that they might affect transcriptional regulation via epigenetic mechanisms[57]. This observation has led to the development of targeted inhibitors such as lapatinib and trametinib, which are more effective in tumor inhibition both in vitro and in vivo [58, 59]. Trametinib and Selumetinib are FDA-approved kinase inhibitors. There are also several other kinase inhibitors in various phases of clinical trials, administered alone and in diverse combinations, for targeted therapeutics and chemotherapies (https://clinicaltrials.gov/). The synergistic effects of BET and BAZ2A/B domain inhibitors highlighted here and elsewhere [32] underscore the need for trials of these agents given in combination, similar to what is presently happening with kinase inhibitors. Among the BD containing proteins, BET and BAZ2A/B domains are the most promising and novel targets for the synergistic action of diverse inhibitors to induce cytotoxicity, as assessed in multiple TNBC cell line models [56, 60,61,62]. The synergistic action observed with JQ1 and GSK2801 proteins makes a unique combination treatment for TNBC models. Their action may be regulated by transcriptional mechanisms associated with BD containing proteins [33, 63,64,65]. Our results support a synergistic mechanism of JQ1 and GSK2801 and showed more effective anti-proliferative activity in vitro when these two treatments were given in combination. The probable molecular mechanism for the observed synergy was investigated initially through a computational analysis wherein we identified upregulated and downregulated genes and their associated metabolic pathways. Several important metabolic pathways associated with cancer were enriched with both upregulated and downregulated genes. Several of these are discussed to illustrate the effectiveness of JQ1 and GSK2801 on the regulation of metabolic pathways resulting in inhibition in TNBC cell lines. Major cancer related pathways enriched with upregulated genes include ABC transporters [66], Rap1 signaling [67], necroptosis [68], ferroptosis [69], PPAR signaling [70] and TGF-beta signaling. Rap1 acts as a switch in the cellular signaling transduction process and Rap1 signaling can regulate cell invasion and metastasis in different cancers. Ferroptosis is a distinct type of regulated cell death where its metabolism and expression of specific genes is associated with the cell inhibition in breast cancer. Activation of genes in the PPAR signaling pathways inhibit tumor progression [71]. Activation of TGF-beta signaling in breast cancer is involved in the suppression of tumor growth [72]. 
The upregulated pathways in the MDA-MB-231 cell lines were predominantly enriched by the genes such as RRAS, SHC2, TRADD, CAMK2D, GNG7, MAP2K6, PLD1, ITGB3, CCND1, CDKN1A, GNAI1, ITPR1 and PRKACB. These genes are most importantly associated with cancer associated signaling pathways such as cAMP signaling, kinase signaling, NF-kappaB apoptotic signaling, calcium signaling and p38 MAP kinase mediated signal transduction. The upregulated pathways in HCC-1806 were observed to be majorly enriched with the genes such as FOS, GNG7, SHC2, CDKN1A, ITPR1, PLCG2, ITPR2, PRKACB, PRKCB and PIK3R3. These gene are associated with the several important pathways such as growth signaling, B cell activation, apoptosis induction, endothelial cell proliferation, Inositol triphosphate receptor-mediated signaling, transmembrane receptor protein tyrosine kinase signaling and G protein-coupled receptor signaling pathways. SUM-159 cells exhibit enriched expression of CALM1, FOS, CREB3L3, ADCY1 and PIK3R3, associated with growth signaling, regulation of cell proliferation, differentiation, and transformation and calmodulin signaling pathways. The cancer related pathways enriched with downregulated genes include TNF signaling [73], JAK-STAT signaling [74], IL-17 signaling [75] and NF-kappa B signaling [76] pathways. TNF-α is involved in the epithelial-to-mesenchymal transition and metastasis of breast cancer cells and represents an important target [73]. JAK-STAT signaling represents an important focus as it is a potential therapeutic target to overcome drug resistance [74]. There exists a direct association between IL-17 and breast cancer invasion since IL-17 promotes invasion in some breast cancer cell lines [75]. The NF-kappa B signaling plays a major role by contributing to the aggressiveness of breast cancer and the genes from this pathway represent novel therapeutic targets [76]. The downregulated pathways in the MDA-MB-231 cell lines were predominantly enriched by the genes such as IL1A, PLA2G4A, VEGFA, E2F2, PTGS2, IL12A, NFATC1, NFATC2, EGF, TRAF2, CXCL8, ADCY7, IL1B, IL6, PLCB4 and MAPK13. These genes were observed to be majorly associated with the cancer associated pathways such as MAP kinase, cytokine mediated inflammation, proinflammatory signaling cascade, TNF-induced apoptosis, T cell receptor stimulation and mitogenic pathways. The downregulated pathways of HCC-186 were majorly enriched by the genes such as IKBKE, PDGFB, IL12A, IL1A, IRAK1, ITGB2, MYC, CALML4, ADCY7, TLR4, CALML5 and IL1B. These genes are associated with the several important pathways such as cytokine mediated inflammation, calmodulin, Toll-like receptor, cell cycle progression, apoptosis and cellular transformation, IL1-induced upregulation of NF-kappa B and oncanonical I-kappa-B pathways. Finally, the downregulated pathways of SUM-159 cell lines were predominantly enriched with the genes such as ITGB2, PDGFB, PDGFRA, PDGFRB, TLR2, RAC3, TGFB2, ADCY7, PLCB1 and MAPK3. These genes are associated with the metabolic activities such as MAP kinase signaling, TGF-beta signaling and Toll-like receptor signaling pathways. Collectively, these signaling, and metabolic pathways all contribute to cell proliferation and aggressive properties in TNBC. It is noticed that no metabolic pathways were enriched with upregulated and downregulated genes from GSK2801 single agent treatment in MDA-MB-231 and HCC-1806 cell lines whereas only one pathway i.e., PPAR signaling was enriched with the upregulated genes of GSK2801. 
Such findings most likely reflect heterogeneity between tumor cell lines. PTPRC, which belongs to protein tyrosine phosphatase (PTP) family involves in the regulation of variety of cellular processes including: cell growth, differentiation, mitosis, and oncogenic transformation making it useful as a prognostic or predictive biomarkers and/or direct target [77]. PTPRC acts as essential regulator of T-and -B-cell antigen receptor signaling and associated with lymphocyte-specific immune recruitment thereby plays a major role in patient survival [78, 79]. MUC19 reportedly modulates proliferation, invasion and metastatic potential of breast cancer [80]. MUC19 expression was notably observed to increase in TNBC and found to be targeted by miR-593 [81]. KCNB1 is a complex class of voltage-gated ion channels and its overexpression was reported as biomarker in breast cancer [82]. TAGLN is a potentially useful diagnostic marker differentially expressed in TNBC and a potential target for treatment strategies [83]. TAGLN was reported to be frequently downregulated by DNA hypermethylation and its promotor methylation profiles also serves as diagnostic markers with a possible clinical impact in the TNBC [84, 85]. KISS1 is of tremendous utility in controlling metastasis in a therapeutic context [86]. KISS1 is known as a downstream target of the canonical TGFβ/Smad2 pathway and has been identified as a putative human metastasis suppressor gene in in TNBC [87, 88]. JQ1 and GSK2801 were able to modulate their expression triggering their downregulation and inducing the cytotoxicity in TNBC cell lines models. The results from the MTT assay and RT-qPCR studies support the inhibitory activity of JQ1 and GSK2801. We further sought to predict their inhibitory action at the protein structural level through molecular modelling methods to theoretically support their inhibitory action by means of deriving their binding energies and intermolecular interactions with the target proteins. The comprehensive binding analysis of JQ1 and GSK2801 suggest their use as combinatorial drugs because of their similar binding interactions at the defined binding sites of the target proteins. So far, the most tractable inhibitor for BET is JQ1 [31] and for BAZ2A/B is GSK2801 [32]. Inhibition of such domains noticeably contributed to the cytotoxicity of the JQ1 + GSK2801 combinatorial treatments. Use of this combination enabled us to induce anti-proliferative activity more efficiently in the three TNBC cell line models along with the plausible identification of the target genes and the mechanisms. TNBC has not previously been associated with significant mutations or copy number alterations in the BET or BAZ2A/B domains. This paves the way to develop inhibitors against BD as therapeutics and represents an ideal choice to target TNBC. The distinct functional roles of BET and BAZ2A/B domains in TNBC suggests inhibition of their BD may result in selective tumor toxicities. Our findings explain how co-inhibition of BET and BAZ2A/B BD represent an effective approach to regulate transcription of TAGLN, KISS1, PTPRC, MUC19 and KCNB1 genes thereby inducing anti-proliferative activity. BD are likely to mediate several important cancer pathways that suppress oncogenic pathologies. It is important to identify the strategies to improve the clinical utilities of BDI following the conclusions of their phase I studies to examine their safety. 
Overall, the BDI represent promising combinatorial agents and further studies into their mechanisms of action would enable us to identify the most effective combinations to fast-track their use as therapeutic drugs. In the current study, the expression of five downregulated genes were characterized through molecular modelling and RT-qPCR, anticipating that the downregulation is associated with cancer inhibition. Additionally, the alteration of metabolic pathways requires additional investigation. Finally, the effective concentrations of JQ1 and GSK2801 require in vivo evaluation. Overall, the current study advances our understanding of the potential targets of JQ1/GSK2801 and underscores the combinatorial use of BDI against TNBC. Effective doses of JQ1/GSK2801 were determined for the three different TNBC cell lines, which provides motivation for future in vivo studies. The five downregulated genes characterized in the current study may serve as biomarkers of BDI sensitivity in TNBC. Our studies present coinhibition of JQ1 and GSK2801 in combination as an effective strategy to induce cytotoxicity and arrest the progression of TNBC. While our studies support the synergistic role of JQ1 and GSK2801, the functional analysis of the candidate genes in BET/BEZ inhibition along with additional in vivo evaluations is the important next step of the current study. These investigations will help to advance the present study to determine the possible molecular mechanism involved in cell inhibition and also to conclude the synergistic activity of the drugs beyond the validation of candidate gene expressions. The combined cytotoxic effect of JQ1 and GSK2801 on MDA-MB-231, HCC-1806 and SUM-159 cells has been documented through MTT assays. Our findings provide experimental support for the hypothesis that combined JQ1-GSK2801 shows synergistic growth inhibitory activity on TNBC cell line models. Further, we observed downregulated expression of cancer associated genes such as PTPRC, MUC19, KCNB1, TAGLN and KISS1, validated through RT-qPCR studies, with additional support from molecular modelling studies. These results from the current study suggest the combination of JQ1 and GSK2801 may be effective in the treatment of TNBC. Further in vivo investigations through mouse xenograft models would help to standardize the accurate synergistic doses to move towards clinical trials. The results from the current study may warrant further investigation as a potential treatment for TNBC. The RNASeq datasets analyzed in the current study are from Bevil et al., 2019 and are available in the NCBI-GEO repository https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE116907 The research data generated during the analysis in the current study are available from the Harvard Datavserse repository. The description of each file and their repository links are provided as follows. 
TNBC: Estrogen Receptor Progesterone Receptor HER2: Human Epidermal Growth Factor Receptor 2 BDI: Bromodomain Inhibitors BD: Bromodomains BET: Bromodomain-Extra-Rerminal domain CREB: Cyclic-AMP Response Element Binding protein CBP: CREB Binding Protein BAZ2A: Bromo Adjacent to Zinc Finger 2A BAZ2B: Bromo Adjacent to Zinc Finger 2B Differentially Expressed Genes Gene Expression Omnibus SRA: Short Read Archive TMM: Trimmed Mean of M-values logFC: Log2 Fold Change logCPM: Log Counts Per Million KEGG: Kyoto Encyclopedia of Genes and Genomes NCBI: National Center for Biotechnology Information OPLS: Optimized Potentials for Liquid Simulations FBS: PTP: Protein Tyrosine Phosphatase MTT: (3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide DMSO: Dimethyl Sulfoxide Optical Density nm: Nano meters Nano moles µM: Micro moles MDS: Boyle P, Levin B: World cancer report 2008: IARC Press, International Agency for Research on Cancer; 2008. https://www.cabdirect.org/cabdirect/abstract/20103010665. Lei S, Zheng R, Zhang S, Wang S, Chen R, Sun K, Zeng H, Zhou J, Wei W. Global patterns of breast cancer incidence and mortality: A population-based cancer registry data analysis from 2000 to 2020. Cancer Commun. 2021;41(11):1183–94. Lima SM, Kehm RD, Terry MB. Global breast cancer incidence and mortality trends by region, age-groups, and fertility patterns. EClinicalMedicine. 2021;38:100985. Carioli G, Bertuccio P, Malvezzi M, Rodriguez T, Levi F, Boffetta P, La Vecchia C, Negri E. Cancer mortality predictions for 2019 in Latin America. Int J Cancer. 2020;147(3):619–32. Narod SA, Iqbal J, Miller AB. Why have breast cancer mortality rates declined? J Cancer Policy. 2015;5:8–17. Hendrick RE, Helvie MA, Monticciolo DL. Breast cancer mortality rates have stopped declining in US women younger than 40 years. Radiology. 2021;299(1):143–9. Hendrick RE, Baker JA, Helvie MA. Breast cancer deaths averted over 3 decades. Cancer. 2019;125(9):1482–8. Reddy K. Triple-negative breast cancers: an updated review on treatment options. Curr Oncol. 2011;18(4): e173. Collignon J, Lousberg L, Schroeder H, Jerusalem G. Triple-negative breast cancer: treatment challenges and solutions. Breast Cancer: Targets and Therapy. 2016;8:93. Dent R, Trudeau M, Pritchard KI, Hanna WM, Kahn HK, Sawka CA, Lickley LA, Rawlinson E, Sun P, Narod SA. Triple-negative breast cancer: clinical features and patterns of recurrence. Clin Cancer Res. 2007;13(15):4429–34. Cleator S, Heller W, Coombes RC. Triple-negative breast cancer: therapeutic options. Lancet Oncol. 2007;8(3):235–44. Kumar P, Aggarwal R. An overview of triple-negative breast cancer. Arch Gynecol Obstet. 2016;293(2):247–69. Hudis CA, Gianni L. Triple-negative breast cancer: an unmet medical need. Oncologist. 2011;16(Suppl 1):1–11. Finn RS, Press MF, Dering J, Arbushites M, Koehler M, Oliva C, Williams LS, Di Leo A. Estrogen receptor, progesterone receptor, human epidermal growth factor receptor 2 (HER2), and epidermal growth factor receptor expression and benefit from lapatinib in a randomized trial of paclitaxel with lapatinib or placebo as first-line treatment in HER2-negative or unknown metastatic breast cancer. J Clin Oncol. 2009;27(24):3908. Florea A-M, Büsselberg D. Breast cancer and possible mechanisms of therapy resistance. Journal of Local and Global Health Science. 2013;2013(1):2. Tang Y, Wang Y, Kiani MF, Wang B. Classification, treatment strategy, and associated drug resistance in breast cancer. Clin Breast Cancer. 2016;16(5):335–43. Chalakur-Ramireddy NK, Pakala SB. 
Human breast cancer cell lines, MDA-MB-231 were obtained from Dr. Janet Price, U.T. M.D. Anderson Cancer Center; HCC-1806 were obtained from Dr. Shane Stecklein, University of Kansas Medical Center; and SUM-159 cells were obtained from Dr. Sofia D Merajver, University of Michigan. We would also like to extend our gratitude to: Lisa Neums, Shachi Patel, Shelby Bell-Glenn, Whitney Shae, Bo Zhang, Samuel Boyd, Emily Nissen, Jonah Amponsah, Dr. Prabhakar Chalise, Dr. Jinxiang Hu, for their constructive feedback on this manuscript.
Research reported was primarily supported by the National Cancer Institute (NCI) Cancer Center Support Grant P30 CA168524, the Kansas Institute for Precision Medicine COBRE (supported by the National Institute of General Medical Science award P20 GM130423) and the Kansas IDEA Network of Biomedical Research Excellence (P20-GM103418). Additional funding was provided by METAvivor Research and Support, Inc. and the National Foundation for Cancer Research.

Department of Biostatistics & Data Science, University of Kansas Medical Center, Kansas City, KS, USA: Nanda Kumar Yellapu, Mihaela E. Sardiu, Dong Pei, Jeffery A. Thompson & Devin C. Koestler
The University of Kansas Cancer Center, Kansas City, KS, USA: Nanda Kumar Yellapu, Thuc Ly, Mihaela E. Sardiu, Dong Pei, Danny R. Welch, Jeffery A. Thompson & Devin C. Koestler
Department of Cancer Biology, University of Kansas Medical Center, Kansas City, KS, USA: Thuc Ly & Danny R. Welch
Departments of Molecular & Integrative Physiology and Internal Medicine, University of Kansas Medical Center, Kansas City, KS, USA: Danny R. Welch

NKY: designed the project, provided expertise to the bioinformatics and laboratory concepts, and wrote the paper. TL: assisted with the cytotoxicity assays and RT-qPCR studies. MS: assisted with the interpretation of gene network maps and edited the manuscript. DP: assisted with the computational analysis of RNASeq data, interpretation of the study findings and edited the manuscript. DRW: supervised the laboratory studies, interpretation of results and edited the manuscript. JAT: supervised the analysis, participated in the interpretation of the study findings and edited the manuscript. DCK: supervised the analysis, participated in the interpretation of the study findings and edited the manuscript. The author(s) read and approved the final manuscript.

Correspondence to Jeffery A. Thompson or Devin C. Koestler. The authors have no competing interests to disclose.

Additional file 1: S1 file. This file contains the figures and tables generated during the data analysis and is available at https://doi.org/10.7910/DVN/BEW3OR. Fig. S1. Analysis of gene counts. Boxplot (A) before and (B) after normalization explaining the distribution of gene counts; (C) heat map of samples; (D) heat map of gene counts among the samples. Fig. S2. Multidimensional analysis of samples. Multidimensional scaling plots before (left) and after (right) normalization explaining the distribution of control and treated samples among three different breast cancer cell lines. Fig. S3. Number of upregulated and downregulated metabolic pathways. The number of upregulated and downregulated pathways in the three different treatment conditions (JQ1, GSK2801 and JQ1 + GSK2801) across three different TNBC cell lines. The unique and shared number of pathways among different treatment conditions are represented. Fig. S4. Optimized conformation of JQ1 and GSK2801. The structures are represented in stick model with the hydrogen bond donor (blue) and acceptor (red) surface areas. The electron density clouds are represented in green dots. Table S1. RNASeq data sets. RNASeq expression data retrieved from the GEO database. Three different TNBC cell lines were treated with JQ1 and GSK2801 alone and in combination for 72 hours. DMSO was used as a vehicle and served as an internal control. Table S2. The genes and primers.
Primers used for the evaluation of gene expression in the breast cancer cell lines under different treatment conditions. Table S3-S5. Smear plots and volcano plots. DEGs observed in MDA-MB-231, HCC-1806 and SUM-159 cell lines among different treatment conditions. Green dots represent downregulated and red dots upregulated genes. Table S6. Ramachandran plots. The stereochemical validations were done by generating the Ramachandran plots for the homology models of five downregulated proteins.

The Gene count matrix. The gene counts generated from the RSEM analysis showing a list of 44,567 genes corresponding to all the three different TNBC (MDA-MB-231, HCC-1806 and SUM-159) cell lines treated with three treatment conditions (JQ1, GSK2801 and JQ1 + GSK2801). Available at https://doi.org/10.7910/DVN/GMSXLN.

Additional file 3: S3 file. Differentially expressed genes (DEGs). DEGs observed from three TNBC cell lines. The DEGs are separated as upregulated and downregulated genes based on the log2 fold change and p-values. Available at https://doi.org/10.7910/DVN/0SI5AB.

Grouping of DEGs. The upregulated and downregulated genes from three different TNBC cell lines are grouped based on the three treatment conditions to find out the common and unique genes among the treatments. Available at https://doi.org/10.7910/DVN/BJFDD8.

Gene enrichment analysis. Metabolic pathways enriched with upregulated and downregulated genes in three different TNBC cell lines and three different treatment conditions. Available at https://doi.org/10.7910/DVN/KWVOJV.

Yellapu, N.K., Ly, T., Sardiu, M.E. et al. Synergistic anti-proliferative activity of JQ1 and GSK2801 in triple-negative breast cancer. BMC Cancer 22, 627 (2022). https://doi.org/10.1186/s12885-022-09690-2

Differential expression analysis Expression studies MTT assay
Prediction of domestic appliances usage based on electrical consumption
Patrick Huber1, Mario Gerber1, Andreas Rumsch1 & Andrew Paice1

Forecasting or modeling the on-off times of domestic appliances has gained increasing attention in recent years. However, comparing currently published results is difficult due to the many different data-sets and performance measures employed. In this paper, we evaluate the performance of three increasingly sophisticated approaches within a common framework on three data-sets each spanning 2 years. The approaches forecast the future on-off times of the appliances for the next 24 h on an hourly basis, solely based on historic energy consumption data. The appliances investigated are driven by user behavior and consume a significant fraction of the household's total electrical energy consumption. We find that for all algorithms the average area under curve (AUC) in the receiver operating characteristic (ROC) is in the range between 72% and 73%, i.e. indicating mediocre prediction quality. We conclude that historic consumption data alone is not sufficient for a good quality hourly forecast.

Forecasting or modeling the expected on-off times of domestic appliances is motivated from two directions: (i) generation of electrical load profiles and (ii) learning and predicting user behavior. Artificial load profile generation (Pflugradt 2016) can be helpful if large numbers of profiles spanning extended durations are required because their collection typically involves arduous measurement campaigns. While the precise prediction of the switch-on/off times is in this case not the main concern, it is an essential part in applications targeting demand response systems: learning the usage pattern of appliances, and therefore knowledge of the user behavior, is a vital input to optimally plan energy usage (Chrysopoulos et al. 2014; Holub and Sikora 2013). While different prediction approaches have been published (Chrysopoulos et al. 2014; Holub and Sikora 2013; Truong et al. 2013; Barbato et al. 2011), an outstanding matter in adequately addressing the forecast of domestic appliance usage is a comparison of the available approaches: published results are difficult to compare because of diverse performance metrics, different predicted appliances and the large variety of employed datasets, either measured at different geographic locations or even simulated. It is therefore unclear how well a method generalizes (i) over extended time periods and (ii) to other datasets with different attributes such as appliances, number of inhabitants, user habits and behavior. In this work, we compare the published approaches we are aware of (Chrysopoulos et al. 2014; Truong et al. 2013; Barbato et al. 2011) and extensions from these on three datasets measured over 2 years in households located in Switzerland, Canada and the UK. We implemented these approaches into a common framework and compare their fitness in predicting the usage patterns of the appliances. In doing so, we focused on appliances whose usage is mainly driven by user behavior and whose switch-on time is flexible. In the relevant literature such appliances are commonly referred to as "shiftable loads". Examples of such loads are the washing machine, dishwasher or tumble dryer. The Python source code for the experiments can be obtained from the authors upon request. The following subsections briefly discuss the main characteristics of the three implemented algorithms.
All algorithms have been used to predict the on-off times of appliances with a resolution of 1 h.

Histogram algorithm
Assuming that household activities follow a weekly pattern, one can build up a histogram of on-times of an appliance for each weekday based on the training data (Chrysopoulos et al. 2014; Holub and Sikora 2013). The approach used in this work is shown in Eq. (1). It conditions relevant day-profiles with a Gaussian weighting around the time of interest. In this manner we allow on-events in the past that are not precisely aligned with the time of interest to influence the prediction. Based on the preceding N days, each subdivided into T time intervals, the probability that on day n at time t appliance l is running is calculated as
$$ p\left({x}_{ntl}\right)\propto {\sum}_{m\in N}{\sum}_{\tau \le T}{w}_{nm}\,{e}^{-\frac{{\left(t-\tau \right)}^2}{2{\sigma}^2}}\,{x}_{m\tau l} $$
where xmτl = 1 if appliance l was running during the interval τ on day m and xmτl = 0 otherwise, and wnm = 1 if day m falls on the same weekday as the day n to be predicted, and wnm = 0 otherwise. The width σ of the Gaussian is a model parameter that was set experimentally, see the results section.

Pattern search algorithm
Whereas the histogram-based approaches assume the weekdays to be the governing pattern defining the weights wnm, see Eq. (1), the approach by Barbato (Barbato et al. 2011) tries to identify these patterns. It does so by relying on the redundancy of variably sized day-patterns. To this end, one maps the N days preceding the day to be predicted, n, to a binary array of the form [δn − N, δn − (N − 1), …, δn − 1] of length N with δi = 1 when the appliance was running on day i, and δi = 0 otherwise. The sub-array Sn(i) is then defined as Sn(i) = [δn − i, δn − (i − 1), …, δn − 1] for a given length 1 ≤ i < N/2. The occurrences of both the sub-pattern Sn(i) and of [Sn(i), 1] = [δn − i, δn − (i − 1), …, δn − 1, 1] are counted in the original array, and the probability of a pattern of length i being followed by a conjectured on-day is calculated as
$$ {s}_n\left(i,1\right)=\left\{\begin{array}{cc}0& \mathrm{if}\ \#\left[{S}_n(i)\right]=1\\ {}\frac{\#\left[{S}_n(i),1\right]}{\#\left[{S}_n(i)\right]-1}& \mathrm{else}\end{array}\right. $$
and correspondingly sn(i, 0) for a conjectured off-day (note that sn(i, 1) + sn(i, 0) = 1 by construction). Now i is increased until either sn(i, 1) or sn(i, 0) equals 1. In the latter case, a day without any appliance usage is predicted. In the former, the days following the occurrences of the pattern Sn(i) define the relevant days used for forecasting; they replace the same-weekday days used in the Histogram algorithm. It turns out that for the investigated data, patterns are not as obvious as in (Barbato et al. 2011), i.e. there is typically not an optimal pattern length i resulting in either sn(i, 1) or sn(i, 0) being 1. We therefore extended the original approach as can be seen in Eq. (3). Day n is predicted by the sum of the K most probable patterns weighted with the probabilities sn(i, α):
$$ p\left({x}_{nlt}\right)\propto {\sum}_{i,\alpha }{s}_n\left(i,\alpha \right)\,{\delta}_{\alpha 1}{\sum}_{m\in {N}_i,\ s\le T}{e}^{-\frac{{\left(t-s\right)}^2}{2{\sigma}^2}}\,{x}_{msl} $$
where the sum over i, α runs over the K most relevant patterns. The Kronecker delta δα1 leads to a zero contribution of the patterns predicting a day with no appliance usage.
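To make the weighting in Eqs. (1) and (3) concrete, the following is a minimal Python/NumPy sketch of the simpler histogram variant, Eq. (1). It is not the authors' code (which is available on request); the array layout, function name and default σ = 1.3 are assumptions made purely for illustration.

```python
import numpy as np

def histogram_forecast(history, weekdays, target_weekday, sigma=1.3):
    """Gaussian-weighted histogram predictor in the spirit of Eq. (1).

    history        : (N, T) array; history[m, tau] = 1 if the appliance was on
                     during interval tau of past day m, else 0
    weekdays       : (N,) array with the weekday index (0..6) of each past day
    target_weekday : weekday index of the day n to be predicted
    sigma          : width of the Gaussian weighting, in time intervals

    Returns an unnormalised score per interval t; larger means "more likely on".
    """
    n_days, n_intervals = history.shape
    t = np.arange(n_intervals)
    # Gaussian kernel between the time of interest t and every past interval tau
    kernel = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2.0 * sigma ** 2))  # (T, T)
    # w_nm: only past days falling on the same weekday as the target day contribute
    w = (weekdays == target_weekday).astype(float)                          # (N,)
    day_profile = w @ history                                               # (T,)
    return kernel @ day_profile                                             # (T,)

# Example: 8 weeks of synthetic hourly data, score the next Monday
rng = np.random.default_rng(0)
history = (rng.random((56, 24)) < 0.1).astype(int)
weekdays = np.arange(56) % 7
scores = histogram_forecast(history, weekdays, target_weekday=0)
```

Thresholding the (normalised) scores then yields the binary on-off forecast; where that threshold is placed corresponds to choosing a single working point on the ROC curve discussed below.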
Bayesian inference algorithm
The third investigated method (Truong et al. 2013) uses Bayesian inference, which differs fundamentally from the previous approaches. It uses a Markov-Chain Monte-Carlo approach to sample the posterior distribution of the model parameters. The key elements of the model are the latent day-types k. They are used to create day profiles and to record correlations between the use of individual appliances. In summary, the probability p(xnlt) of appliance l running at time t on day n is calculated as
$$ p\left({x}_{nlt}\right)\propto {\sum}_{k\le K}p\left(k\mid n\right)\,{\mu}_{kl}(t) $$
where k goes over all K day-types and p(k | n) is the probability of day n being described by day-type k. One of the advantages of this approach is that it infers the parameters for each appliance l from the data of all appliances, resulting in an effective training set of N · L data points, L being the total number of appliances.

Data and methods
Various datasets containing electrical consumption data of individual households are available (Murray et al. 2017). The three datasets employed in this investigation are GH9, collected by the authors, AMPds2 (Makonin et al. 2016) and REFIT, House 5 (Murray et al. 2017). They all cover at least two continuous years of data records from a single-family house, stem from Switzerland, Canada and the UK respectively, and include sub-metered data for dishwasher, washing machine and tumble dryer. In order to produce hourly on- and off-times of the appliances, the measurement data was preprocessed by imposing (i) minimal on- and off-times, i.e. removing noise spikes and preventing double-counting due to intra-cyclic pauses, and (ii) a minimal power level. It was then downsampled to hourly intervals.

Performance metric
Binary classifiers can be assessed with a variety of performance metrics. We compare the predictive quality of the tested algorithms on the basis of the so-called receiver operating characteristic (ROC) curves because of their independence from the relative weight of the ground-truth's classes. The ROC method is well suited as an a posteriori measure of the prediction quality, but for an actual predictive algorithm a single working point along the curve (i.e. a single fixed threshold) must be chosen in advance. To average the ROC curves over individual samples, each predicting the on-off behavior of an appliance during 1 week, and to estimate the resulting statistical variance, methods described in (Macskassy and Provost 2004) are employed. To allow for simple comparison with other experiments, the ROC curve is integrated, resulting in the area under the curve (AUC). Where not otherwise mentioned, results stem from an average over 90 samples, where each individual sample predicted the on-off behavior of an appliance during 1 week based on the eight preceding weeks, hence covering in total roughly 2 years of data.
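As an illustration of this evaluation scheme, the sketch below collects one ROC-AUC value per predicted week from a rolling 8-week training window. It mirrors the description above rather than the authors' actual pipeline; the helper name, the callable interface and the use of scikit-learn's roc_auc_score are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

HOURS_PER_WEEK = 168

def weekly_auc_samples(on_off, score_next_week, train_weeks=8):
    """Slide a training window of `train_weeks` over hourly on/off ground truth
    and collect one AUC per predicted week.

    on_off          : (n_hours,) binary ground truth for one appliance
    score_next_week : callable(train_window) -> (168,) scores for the next week
    """
    aucs = []
    n_weeks = len(on_off) // HOURS_PER_WEEK
    for w in range(train_weeks, n_weeks):
        train = on_off[(w - train_weeks) * HOURS_PER_WEEK : w * HOURS_PER_WEEK]
        truth = on_off[w * HOURS_PER_WEEK : (w + 1) * HOURS_PER_WEEK]
        scores = score_next_week(train)
        if truth.min() == truth.max():
            continue  # AUC is undefined for a week containing only one class
        aucs.append(roc_auc_score(truth, scores))
    return np.array(aucs)  # mean and standard deviation then summarise the samples
```

Weeks in which the appliance never runs (or never stops) are skipped here, which loosely corresponds to the interrupted curves mentioned for Fig. 1 below.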
The basic histogram method was tested on all three datasets with the model parameter σ (the width of the Gaussian weighting) varying between 0 and 2. Overall best performance with respect to AUC was achieved with σ = 1.3, which was used for all further experiments. The average performance improves by increasing the training window, i.e. increasing the individual train-sets, but saturates for lengths above about 3 months. As a trade-off between prediction quality and a quickly increasing computational effort for the more elaborate algorithms, a training window of 8 weeks was chosen to ensure comparability of the results. Table 1 summarizes the results. The algorithm generally performs in a medium quality range with AUC values around 0.7. Differences in the AUC of different appliances and datasets are large but are not significant due to the large uncertainty, as illustrated in Fig. 1.

Table 1: Area under curve for predicting 1 week based on the preceding 8 weeks. The results are averaged over 2 years (90 samples) with the corresponding standard deviation.

Fig. 1: AUC of the Histogram algorithm plotted over a 2-year horizon, illustrating the large variations of the mean values in Table 1 for all three datasets. The black curve is averaged over appliances, whereas the colored lines depict the individual appliances' performance. Red: tumble dryer, blue: dishwasher, green: washing machine. Interruptions in the lines result from a lack of sufficient values to confidently assess an AUC value for certain validation periods.

Whereas the Histogram algorithm and the Bayesian Inference algorithm can 'predict' arbitrarily far into the future, the Pattern Search algorithm adapted from (Barbato et al. 2011) is only able to predict the day immediately following the training set. Thus, for the latter, the window of training- and validation-set was not shifted by intervals of a week but day by day. Seven day-predictions were then summarized to a 1-week prediction. Barbato's approach has been modified to include the K most relevant patterns. Experimentally we found that K = 14 leads to satisfactory performance. As can be seen in Table 1, the AUC values are around 0.7 as for the Histogram algorithm, but the standard deviation for the Pattern Search algorithm is mostly reduced. In contrast to the results discussed so far, results from the Bayesian Inference algorithm originate from averaging not only over 90 samples, i.e. validation weeks, but in addition each sample was obtained by averaging the prediction of ten independent Markov Chains. Tests showed a fast convergence of the individual Markov Chains independent of the initialization. With a burn-in period of 500 steps, the individual prediction was calculated by averaging over 2000 Gibbs iterations. Results are summarized in Table 1.

The results summarized in Table 1 lead to the following observations:
i) The overall performance of the three algorithms is essentially the same: averaging all appliances in all houses leads to the following values: 0.72 for Histogram and 0.73 for both Pattern Search and Bayesian Inference algorithms.
ii) With increasing complexity of the algorithm, not only do the variances over 2 years decrease, but the prediction quality across appliances and data-sets also becomes more similar.
iii) An algorithm's (relative) performance for a given appliance and data-set does not necessarily relate to another algorithm's performance on the same data-set. That is, an algorithm performing particularly well on a given appliance of a given data-set does not necessarily imply that other algorithms perform similarly, i.e. the governing reason for the large differences does not seem to be the underlying data.
iv) Similarly, no statement is possible about certain appliances performing markedly worse or better across all datasets for a specific algorithm, i.e. no particular algorithm is especially good at predicting a certain appliance.

As discussed above, the results imply that the mean predictive performance is not affected by the choice of the employed model. Because of the complexity of the implementation and computational considerations, this would favor the Histogram algorithm over the two others. It performs at least 2–3 orders of magnitude faster than the algorithm based on Bayesian inference.
For most real-world applications, it is, however, not the mean performance that counts most but a reliable performance on any given data. Here the reduced variance of the Bayesian algorithm with respect to the Histogram and Pattern Search approaches speaks in favor of the former. From our viewpoint, a limitation of the Bayesian algorithm in its current form is the fact that it strongly relies on weekly patterns despite its introduction of the latent day-types. This could be addressed by making minor changes to include more data such as weather or schedules (Truong et al. 2013). An alternative could be to combine the Bayesian and the Pattern Search algorithms so that day n would also be predicted based upon the inferred day-types of the immediately preceding days.

One aim of this study was to investigate if the on-off times of domestic appliances can be predicted solely based on electrical usage data. From our results, we tend to negate this hypothesis: on a coarse-grained timescale of 1 h, we achieved on average a mediocre prediction performance with a large variance. However, we believe that for domestic load optimization an improved performance and, in particular, a smaller variance of the prediction would be desirable if not necessary. Our choice of algorithms is far from exhaustive and one can think of various improvements of the examined algorithms. Nevertheless, from our point of view, the presented results based on 2 years of data from three different households reflect a general limit for the hourly predictability of an individual household's electrical appliances. We conclude that, taking solely the electrical data of a single family into account, every stochastic approach must suffer from a lack of information, independently of its complexity.

ROC: Receiver operating characteristic

Barbato A, Capone A, Rodolfi M, Tagliaferri D (2011) Forecasting the usage of household appliances through power meter sensors for demand management in the smart grid. In: 2011 IEEE International Conference on Smart Grid Communications (SmartGridComm). IEEE, Brussels, pp 404–409
Chrysopoulos A, Diou C, Symeonidis AL, Mitkas PA (2014) Bottom-up modeling of small-scale energy consumers for effective demand response applications. Eng Appl Artif Intell 35:299–315
Holub O, Sikora M (2013) End user models for residential demand response. In: Innovative Smart Grid Technologies Europe (ISGT Europe) 4th IEEE PES, pp 1–4
Macskassy S, Provost F (2004) Confidence bands for ROC curves: methods and an empirical study. In: ROC Analysis in AI, First Int Workshop (ROCAI-2004), Valencia, Spain
Makonin S, Ellert B, Bajić IV, Popowich F (2016) Electricity, water, and natural gas consumption of a residential house in Canada from 2012 to 2014. Sci Data 3:160037
Murray D, Stankovic L, Stankovic V (2017) An electrical load measurements dataset of United Kingdom households from a two-year longitudinal study. Sci Data 4:160122
Pflugradt ND (2016) Modellierung von Wasser und Energieverbräuchen in Haushalten. TU Chemnitz
Truong NC, McInerney J, Tran-Thanh L, Costanza E, Ramchurn SD (2013) Forecasting multi-appliance usage for smart home energy management. Proc Twenty-Third Int Jt Conf Artif Intell, Beijing, China 4:3–9

This work has been financially supported through the Swiss Competence Centers for Energy Research – Future Energy Efficient Buildings and Districts. Publication costs for this article were sponsored by the Smart Energy Showcases - Digital Agenda for the Energy Transition (SINTEG) programme.
The datasets analyzed during the current study are available from (AMPds2) Harvard Dataverse at https://doi.org/10.7910/DVN/FIE0S and (REFIT) University of Strathclyde's PURE data repository at https://doi.org/10.15129/9ab14b0e-19ac-4279-938f-27f643078cec. The dataset GH9 is for the moment only available from the corresponding author on reasonable request.

Engineering and Architecture, Lucerne University of Applied Sciences and Arts, Technikumstrasse 21, 6048 Horw, Switzerland: Patrick Huber, Mario Gerber, Andreas Rumsch & Andrew Paice

MG implemented the algorithms, and analyzed and interpreted their performance on the different datasets. The manuscript was written jointly by PH and MG and critically revised by both AR and AP. All authors read and approved the final manuscript. Correspondence to Patrick Huber.

Huber, P., Gerber, M., Rumsch, A. et al. Prediction of domestic appliances usage based on electrical consumption. Energy Inform 1, 16 (2018) doi:10.1186/s42162-018-0035-1

Load forecasting Shiftable loads Experimental comparison
Can a spaceship traveling close to light speed be knocked off course by a gamma ray burst?

This was spawned from this question about Blueshifting (BS) when traveling near the speed of light. I had found this while learning about the other question.

@99.99995 percent c
And interestingly, the students also realized that, when traveling at such an intense speed, a ship would be subject to incredible pressure exerted by X-rays — an effect that would push back against the ship, causing it to slow down. The researchers likened the effect to the high pressure exerted against deep-ocean submersibles exploring extreme depths. To deal with this, a spaceship would have to store extra amounts of energy to compensate for this added pressure.

So then I began to wonder. I am assuming that the higher the EM energy to begin with, the more 'pressure' is exerted back: starting with X-rays, they'd be BSed to gamma and would cause more pressure than visible light pushed back to soft X-ray. If that assumption is correct, would a star going nova with a gamma ray burst cause enough pressure to throw a ship way off course? And would it be (reasonably) possible to cause a 'burst' large enough to push a ship off course (assuming we already have the tech to go near the speed of light)? On top of that, if you have a gamma ray machine and point it at a ship going at .9999995 light speed, would Newton's 3rd law kick in with an equal and opposite reaction? Because if so, then we have another issue: mass increases with speed, and near the speed of light it would be downright dangerous for anything in its path.

science-based spaceships space-travel relativity
bowlturner

I assume you assume "If it is possible to use enough radiation energy to do this, the ship will remain intact". No calculations, but I can very well imagine that at the energies required the ship would disintegrate under the radiation bombardment. – Jan Doggen Nov 21 '14 at 15:09
Possible, but I was guessing that the ship was already built to deal with the pressure; I was more assuming that it would be an unexpected 'pressure' from an angle knocking it off course, like a ricochet. – bowlturner Nov 21 '14 at 15:11
Don't forget your radiation-proofing! I recommend a lead umbrella. – Crabgor Nov 21 '14 at 16:55
A ship being knocked directly backwards along its line of flight will not be "off course". – Oldcat Nov 21 '14 at 18:47
I don't know enough about relativistic speeds to post an answer, but what you describe sounds very much like wave drag which plagues sailing vessels and trans-sonic aircraft. It might be worth looking into that. – Cort Ammon Nov 22 '14 at 4:23

Let's first do some math, the first part taken from this pdf regarding (solar) radiation pressure (the formulas should be applicable for any source of electromagnetic radiation). The intensity $I$ depends on the power $P$ and the distance from the source $r$. We can write the expression $$I=\frac{P}{4 \pi r^2}$$ The force $F$ depends on the intensity and the area $A$ of the object being buffeted. It is $$F=\frac{2I}{c}A$$ and, substituting, we get $$F=\frac{2P}{4 \pi r^2c}A$$ which is $$F=\frac{P}{2 \pi r^2c}A$$ Given the mass of the ship $M$, we find that the acceleration $a$ is $$a=\frac{P}{2 M\pi r^2c}A$$ How much power does a gamma-ray burst emit?
From Wikipedia:
Because their energy is strongly focused, the gamma rays emitted by most bursts are expected to miss the Earth and never be detected. When a gamma-ray burst is pointed towards Earth, the focusing of its energy along a relatively narrow beam causes the burst to appear much brighter than it would have been were its energy emitted spherically. When this effect is taken into account, typical gamma-ray bursts are observed to have a true energy release of about 10^44 J, or about 1/2000 of a Solar mass energy equivalent—which is still many times the mass energy equivalent of the Earth (about 5.5 × 10^41 J).

We have to divide this number by two to account for the fact that only one of the two beams is hitting the ship - and this is still a little inaccurate, as it assumes the ship is hit by the whole beam. Anyway, gamma-ray bursts, on average, can last from anywhere between less than a second to 30 seconds. Let's say ours lasts for 10 seconds. Because the definition of power is $\frac{E}{t}$, where $E$ is work and $t$ is time, we can say that the power here is $$P=\frac{E}{t}$$ $$P=\frac{5 \times 10^{43}}{10}$$ $$P= 5 \times 10^{42} \text{ Watts}$$ Plugging into our earlier equation for acceleration, we get $$a=\frac{\frac{E}{t}}{2 M\pi r^2c}A$$ $$a=\frac{ 5 \times 10^{42}}{2 M\pi r^2c}A$$ Assuming a mass of the ship similar to the Space Shuttle Orbiter (109,000 kilograms), which is admittedly an unlikely comparison, this becomes $$a=\frac{5 \times 10^{42}}{2 \times 1.09 \times 10^5 \pi r^2c}A$$ $$a=\frac{5 \times 10^{37}}{2.18 \pi r^2c}A$$ If you want, you can plug in the area of the underside of the Space Shuttle Orbiter (a stat I can't find, at the moment) and discover that the space shuttle would take quite a hit if it was near a gamma-ray burst. Note that this is only valid for a ship traveling at a slow speed. At near-light speeds, the relativistic mass would increase (although I don't know if this is valid perpendicular to the direction of its initial motion). This would impact your calculations; I'll try to figure out the corrections later.

Belated Conclusion
The answer is a definitive yes. A ship moving at "normal" speed (i.e. something we could make today - think a successor to the space shuttle) would meet some severe buffeting if it was anywhere near a gamma-ray burst and was hit by one of the beams emitted from the progenitor. If it was hit full-on, it would be severely pushed back; if it was hit partially on, it could be sent spinning. Either way, things wouldn't turn out well. Your ship, though, is a bit more advanced and is traveling at a speed pretty close to $c$. This means that it is highly unlikely that it would get hit with the beam for an extended period of time if it was anywhere close to the source. If it was further out, the cross-section of the beam would be a lot larger, though (eventually, on the order of hundreds of thousands of miles), and the ship could continue to travel through it for the duration of the burst. The downside is that the energy would be greatly dissipated over the beam's cross-section. But the answer is yes, the ship would be buffeted if it was reasonably close (i.e. about an AU away, though that's an estimate) to the source, and would most likely be impacted in some way if it was further out.
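For anyone who wants to put numbers on the formula above, here is a small Python check. The Orbiter's underside area is exactly the stat the answer says it couldn't find, so the 250 m² below is a purely illustrative guess, as is the 1 AU distance.

```python
import math

C = 299_792_458.0   # speed of light, m/s
AU = 1.496e11       # astronomical unit, m

def radiation_acceleration(power_w, distance_m, mass_kg, area_m2):
    """a = P * A / (2 * pi * r^2 * c * M), i.e. the formula derived above
    for a fully reflecting surface hit face-on."""
    return power_w * area_m2 / (2.0 * math.pi * distance_m ** 2 * C * mass_kg)

power = 5e42    # one beam of the burst spread over 10 s, W (from the answer)
mass = 1.09e5   # Space Shuttle Orbiter, kg
area = 250.0    # assumed underside area, m^2 -- illustrative only

a = radiation_acceleration(power, 1 * AU, mass, area)
print(f"acceleration at 1 AU: {a:.2e} m/s^2, or about {a / 9.81:.1e} g")
```

Even with these rough inputs the kick at 1 AU comes out enormous, which is consistent with the "would take quite a hit" conclusion above.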
SJuan76 and Oldcat pointed out that Lorentz contraction would impact the area of the side of the ship receiving radiation pressure if the ship was moving tangentially to the beam. At speeds nearing $c$, this phenomenon would have huge implications for the pressure on the ship. This answer is already math-heavy, so I figure adding a few more equations can't hurt. Haters of algebra beware. The length of an object due to Lorentz contraction can be found by $$L=L_0 \sqrt{1-v^2/c^2}$$ This means that the area (previously the height times the length, $A=H \times L$) is now written as $$A=H \times L_0 \sqrt{1-v^2/c^2}$$ and so the original equation becomes $$a=\frac{5 \times 10^{37}}{2.18 \pi r^2c} \times H \times L_0 \sqrt{1-v^2/c^2}$$
HDE 226868♦

Also, to note, at relativistic speeds Lorentz contraction would make the (lateral) area smaller. – SJuan76 Nov 21 '14 at 21:55
Re length contraction: objects travelling across your vision at relativistic speeds don't look contracted; they actually appear rotated because the object moves out of the way of light emitted from the back side. Would this work in reverse? Incoming light would not just be diluted due to foreshortening, but would hit the front surface as well as it hit the light crossing in front on the ship's path. – JDługosz Jul 5 '15 at 9:47
But what about the ship's reference frame? There the gamma ray burst's area is contracted. I think you need to consider en.wikipedia.org/wiki/Ladder_paradox. The expression you are using for intensity is only for isotropic (spherically symmetric) sources. The two in the numerator of your force equation means that the ship is reflecting 100% of the gamma rays (which I guess is reasonable as, if it wasn't, it wouldn't be a ship anymore). Flying within 1 AU of a GRB source at .99c doesn't make very much sense if you want to continue flying at .99c for very long. It's rather soupy. – alessandro Jul 6 '15 at 5:24

I suppose I should take my comments and make them a response.

First - if a gamma ray burst has no effect on changing the course of a ship at rest, it will not have any greater effect on a ship at near-light speed. So let's assume this burst would not shove a ship at rest at all. What about all the blue-shifting? The blue-shifted extra momentum will all be in a direction that opposes the flight of the near-C ship. When you change frame to the rest frame of the ship, the energy and direction added will be directly opposing the flight path. So if this blue shift does anything, it merely slows down the ship and does not 'knock it off course'.

Second Case - The burst would add a significant kick to an at-rest ship. When the ship is traveling at near C, the impact of this kick is reduced in a number of ways: first, the mass is greatly increased, reducing the resulting sideways velocity. The Lorentz contraction reduces the apparent area of the ship, which might reduce the ability of the storm to push the ship off course. It would take an amazing impact to still push the ship off course, but it could possibly happen. But the near light speed does not make it worse. The one thing that would be massively increased is the in-line "braking" momentum of anything that the fast ship ran into in flight. But I wouldn't consider a slowing, or a requirement to fire the engines more to overcome it, "knocking the ship off course".
Oldcat

Relativistic mass isn't mass, it's a (rather cludgy I think) way to think about how momenta work in special relativity. See: en.wikipedia.org/wiki/…. Note the bit about transverse and longitudinal mass.
In reality, as you quite correctly put in the last paragraph of your first case, only the components of the GRB which hit the ship parallel to the axis of travel need to have any relativistic corrections applied to them. The perpendicular kick in your second case would be the same as if the ship was at rest. – alessandro Jul 6 '15 at 5:30
Obesity and ENT manifestations — a tertiary care centre study
Aditiya Saraf1, Monica Manhas2, Amit Manhas3 & Parmod Kalsotra1
The Egyptian Journal of Otolaryngology volume 39, Article number: 22 (2023)

The aim of our study was to assess whether there is a role of obesity in ENT diseases like otitis media with effusion, chronic otitis media, chronic rhinosinusitis, sudden sensorineural hearing loss and chronic tonsillitis. The present prospective study, after approval by the institutional ethics committee, was conducted in the Department of ENT, SMGS Hospital, GMC Jammu from January 2021 to February 2022 on 590 patients, who were divided into 6 groups — group A — otitis media with effusion (n = 95 patients), group B — chronic otitis media (n = 171 patients), group C — sudden SNHL (n = 43 patients), group D — chronic rhinosinusitis (n = 102 patients), group E — chronic tonsillitis (n = 67 patients) and group F (control group) — patients (aged 11–50 years) coming to the ENT OPD with other problems, except those mentioned in the inclusion and exclusion criteria (n = 112 patients). Severity of disease was evaluated using the Adelaide Disease Severity Score (CRS patients), otoscopy and pure-tone audiometry (OME and COM), pure-tone audiometry (sudden SNHL) and the Brodsky grading scale (chronic tonsillitis). Mean BMI and the percentage of obese patients were calculated for each group.

The mean age of presentation in our study was 40.66 ± 7.25 years. Male to female ratio was 1:1.6 in our study. The mean BMI in the control group (group F) was 22.51 ± 3.01 kg/m2. The mean BMI was 25.41 ± 2.81 kg/m2 in group A, 25.33 ± 2.34 kg/m2 in group B, 25.12 ± 3.14 kg/m2 in group C, 25.78 ± 2.33 kg/m2 in group D and 25.03 ± 1.84 kg/m2 in group E, the difference between each of these groups and the control group being statistically significant (p < 0.005). The percentage of obese patients in group F was 20.5% (23 patients). The percentage of obese patients was 53.6% (51 patients) in group A, 49.7% (85 patients) in group B, 39.5% (17 patients) in group C, 54.9% (56 patients) in group D and 31.3% (21 patients) in group E. Upon comparison with group F, the difference in percentage of obese patients was statistically significant in each group. Obese patients were more likely to have otitis media with effusion (OR 1.85, 95% CI 0.15 to 6.49), chronic otitis media (OR 1.80, 95% CI 0.15 to 6.33), sudden SNHL (OR 1.62, 95% CI 0.21 to 6.40), chronic rhinosinusitis (OR 2.05, 95% CI 0.15 to 6.55) and chronic tonsillitis (OR 1.60, 95% CI 0.16–6.13) than the control group.

Obesity leads to various ENT problems by altering the immune system. In our study, mean BMI was significantly higher in patients with otitis media with effusion, chronic otitis media, chronic rhinosinusitis, sudden sensorineural hearing loss and chronic tonsillitis; in addition, the severity of disease increased with increasing BMI category, showing a positive correlation in all study groups, thus establishing an association between obesity and these common otorhinolaryngological conditions.

Obesity is officially considered a disease by WHO and various national/international organisations [1]. Around 650 million people are categorised as obese globally. In India, there are about 135 million obese people. The current trajectory of prevalence acceleration would result in about 50% of the world's population being obese by 2030 [2]. Obesity is the accumulation of abnormal or excessive fat that may interfere with the maintenance of an optimal state of health.
Obesity is recognised as a rapidly increasing major health problem globally, with people having a body mass index (BMI = weight divided by the square of height) greater than 25 kg/m2 considered obese. Body mass index (BMI) is the most widely used representative measure of obesity and is well correlated with chronic diseases and mortality risk [3]. Obesity is basically a systemic chronic inflammatory disease, leading to musculoskeletal overload and chronic diseases such as diabetes, heart diseases, renal diseases and cerebrovascular disease [4]. Obesity is associated with an increased number of macrophages in adipose tissue, which stimulates a higher release of inflammatory mediators such as tumour necrosis factor alpha, interleukin-6, C-reactive protein and adiponectin, as compared to underweight or normal-weight individuals. This leads to the generation of a pro-inflammatory state and oxidative stress in the human body [5]. Increased adipose tissue located in the upper airway and head-neck region acts as an endocrine organ and modifies immunity, thus leading to various otorhinolaryngological conditions such as otitis media (otitis media with effusion, chronic otitis media, acute otitis media, recurrent otitis media), hearing loss (sudden sensorineural hearing loss, age-related hearing loss, noise-induced hearing loss), chronic rhinosinusitis, obstructive sleep apnoea, laryngopharyngeal reflux and head-neck cancers [6]. Also, there is chronic sympathetic overactivity in obese individuals, leading to dysregulation of the immune system, induction of a chronic inflammatory state and increased insulin resistance. All these pathological mechanisms further pave the way for the occurrence of various otorhinolaryngological diseases [7]. Despite many research studies on the association of obesity-induced systemic inflammation with various chronic diseases affecting the human body, like type 2 diabetes mellitus and cardiovascular diseases, very little literature shows any association of obesity with otorhinolaryngological inflammatory conditions other than obstructive sleep apnoea, especially in our local population. In this study, we aim to assess whether there is a role of obesity in ENT diseases like otitis media with effusion, chronic otitis media, chronic rhinosinusitis, sudden sensorineural hearing loss and chronic tonsillitis.

The present prospective study, after approval by the institutional ethics committee, was conducted in the Department of ENT, SMGS Hospital, GMC Jammu from January 2021 to February 2022.
Inclusion criteria:
▪ Age: 11 to 50 years
▪ Otitis media with effusion, diagnosed as per otoscopy and impedance audiometry
▪ Chronic otitis media, diagnosed as per otoscopy and CT scan of the temporal bone
▪ Sudden SNHL, diagnosed as per pure-tone audiometry (reduction in hearing threshold of 30 dB or more, over 3 days, in three consecutive frequencies)
▪ Chronic rhinosinusitis, diagnosed as per CT scan of the nose and paranasal sinuses
▪ Chronic tonsillitis, diagnosed clinically with a confirmed history of recurrent sore throat/fever

Exclusion criteria:
▪ Age less than 11 years or more than 50 years
▪ The presence of systemic diseases
▪ Head-neck malignancies
▪ Congenital or acquired immunodeficiencies
▪ Other predisposing factors for these ENT conditions, like habitual factors (smoking, alcoholism, lifestyle, sporting habitus), environmental factors, occupational factors (noise exposure, exposure to allergens), sanitation/hygiene factors, local anatomical variations (deviated nasal septum) and a history of atopy

A total of 590 patients were included in our study after informed written consent. Relevant clinical history was taken, and general/systemic/local ENT examination was done. Routine blood investigations (CBC, RFT, LFT, lipid profile and coagulation profile) and relevant radiological and audiological investigations were done. All subjects were divided into 6 groups:
▪ Group A — Otitis media with effusion (OME) (n = 95 patients)
▪ Group B — Chronic otitis media (COM) (n = 171 patients)
▪ Group C — Sudden sensorineural hearing loss (sudden SNHL) (n = 43 patients)
▪ Group D — Chronic rhinosinusitis (CRS) (n = 102 patients)
▪ Group E — Chronic tonsillitis (n = 67 patients)
▪ Group F (control group) — patients (aged 11–50 years) coming to the ENT OPD with other problems, except those mentioned in the inclusion and exclusion criteria (n = 112 patients)

BMI (body mass index) was calculated for all patients in each group, using the following formula:
$$\mathrm{BMI}=\frac{\mathrm{weight\ (in\ kilograms)}}{\mathrm{height\ (in\ metres)}^2}$$
Mean BMI was calculated in each of the six groups. Also, the total number of obese patients was calculated for each group based on BMI, using the standards of WHO for the Asia–Pacific region (Table 1).

Table 1: Standards of WHO for the Asia–Pacific region [3]

Severity of disease was evaluated using the Adelaide Disease Severity Score (CRS patients), otoscopy and pure-tone audiometry (OME and COM), pure-tone audiometry (sudden SNHL) and the Brodsky grading scale (chronic tonsillitis). Adelaide Disease Severity Score [8]: it is a simplified tool to assess CRS symptoms and quality of life. Individual symptom scores were added to a total out of 25. A VAS was completed by the subject so as to indicate their quality of life on a scale ranging from 0 to 7, 0 indicating no effect and 7 indicating maximal effect (Table 2). The combined symptom score and VAS score were added to a total score out of 32.

Table 2: Adelaide Disease Severity Score

Brodsky tonsil size grading (Table 3) [9]

Table 3: Brodsky tonsil size grading

All data was entered in a Microsoft Excel spreadsheet and analysed and compared using the Statistical Package for Social Sciences (SPSS) software (version 21 for Windows). Appropriate statistical analytical tests were applied as per the advice of the statistician.
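To illustrate this part of the work-up numerically, here is a small Python sketch that assigns a WHO Asia-Pacific BMI category and compares the proportion of obese patients between a case group and the control group with a chi-square test. The cut-offs used (<18.5 underweight, 18.5-22.9 normal, 23-24.9 overweight, >=25 obese) are the commonly quoted Asia-Pacific thresholds and are stated as an assumption here, since Table 1 itself is not reproduced above; the counts in the example are those reported for group A and group F in the results below, while the single example patient is made up.

```python
from scipy.stats import chi2_contingency

def bmi(weight_kg, height_m):
    """BMI = weight (kg) / height (m)^2, as in the formula above."""
    return weight_kg / height_m ** 2

def who_asia_pacific_category(bmi_value):
    """Assumed WHO Asia-Pacific cut-offs; the study's Table 1 is the authority."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 23.0:
        return "normal"
    if bmi_value < 25.0:
        return "overweight"
    return "obese"

# Proportion of obese patients in a case group vs. the control group
# (counts as reported for group A and group F in the results).
obese_case, n_case = 51, 95
obese_ctrl, n_ctrl = 23, 112
table = [[obese_case, n_case - obese_case],
         [obese_ctrl, n_ctrl - obese_ctrl]]
chi2, p, dof, expected = chi2_contingency(table)

example = bmi(72, 1.68)  # hypothetical 72 kg, 1.68 m patient
print(f"example BMI: {example:.1f} -> {who_asia_pacific_category(example)}")
print(f"chi-square p-value for the obesity proportion: {p:.4f}")
```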
A total of 590 patients were included in our study. The mean age of presentation in our study was 40.66 ± 7.25 years. The mean age of presentation was 39.16 ± 6.55 years in group A, 41.22 ± 5.32 years in group B, 33.12 ± 8.17 years in group C, 44.19 ± 7.15 years in group D, 35.26 ± 5.33 years in group E and 37.11 ± 4.19 years in group F, the difference between each of the experimental groups (groups A, B, C, D, E) and the control group (group F) being statistically insignificant (p-value = 0.65). Male to female ratio was 1:1.6 in our study — M:F ratio being 1:1.7 in group A, 1:1.3 in group B, 1:1.3 in group C, 1:1.1 in group D, 1:1.6 in group E and 1:1.1 in group F. There was female preponderance in all groups, with the difference between each of the experimental groups (groups A, B, C, D, E) and the control group (group F) being statistically insignificant (p-value = 0.34). The percentage of obese patients (Fig. 1) in group F was 20.5% (23 patients). The percentage of obese patients was 53.6% (51 patients) in group A, 49.7% (85 patients) in group B, 39.5% (17 patients) in group C, 54.9% (56 patients) in group D and 31.3% (21 patients) in group E. Upon comparison with group F, the difference in percentage of obese patients was statistically significant in each of the experimental groups (group A: p = 0.0001, group B: p = 0.0001, group C: p = 0.0029, group D: p = 0.001 and group E: p = 0.034). Odds ratios — OR (adjusted for age, sex and local/general predisposing factors) — for each case group compared to the control group were calculated: obese patients were more likely to have otitis media with effusion (OR 1.85, 95% CI 0.15 to 6.49), chronic otitis media (OR 1.80, 95% CI 0.15 to 6.33), sudden SNHL (OR 1.62, 95% CI 0.21 to 6.40), chronic rhinosinusitis (OR 2.05, 95% CI 0.15 to 6.55) and chronic tonsillitis (OR 1.60, 95% CI 0.16–6.13) than the control group.

Fig. 1: Percentage of obese patients in each group

As per the WHO Asia–Pacific BMI categories, in the OME group (n = 95) the average hearing impairment was 28.5 dB in obese patients (n = 51), 26 dB in overweight (n = 32), 22.5 dB in normal (n = 8) and 22 dB in underweight patients (n = 4), showing a positive correlation between severity of OME and BMI category (Pearson correlation coefficient = 0.77). In the COM group (n = 171), average hearing impairment was 41.5 dB in obese (n = 85), 35 dB in overweight (n = 61), 32.5 dB in normal (n = 11) and 31 dB in underweight (n = 14), showing a positive correlation between severity of COM and BMI (Pearson correlation coefficient = 0.90). In the sudden SNHL group (n = 43), average hearing impairment was 44 dB in obese (n = 17), 40 dB in overweight (n = 15), 38.5 dB in normal (n = 6) and 38 dB in underweight (n = 5), showing a positive correlation between severity of sudden SNHL and BMI (Pearson correlation coefficient = 0.49). In the CRS group (n = 102), the mean Adelaide score was 22.31 in obese patients (n = 56), 22.1 in overweight patients (n = 26), 17.5 in normal (n = 14) and 17.2 in underweight (n = 6), showing a positive correlation between severity of CRS and BMI (Pearson correlation coefficient = 0.83). In the chronic tonsillitis group (n = 67), the majority of patients (81.2%) in the obese category (n = 21) had grade 4 tonsil size, the majority (79.4%) of overweight patients (n = 17) had grade 3 tonsil size, the majority (78.1%) of patients with normal BMI (n = 15) had grade 2 tonsil size and the majority (77.9%) of underweight patients (n = 14) had grade 2 tonsil size, showing a positive correlation between severity of chronic tonsillitis and BMI (Pearson correlation coefficient = 0.10).
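The Pearson coefficients above relate the ordinal BMI category to a severity measure; the snippet below shows one way such a coefficient could be computed, with made-up per-patient values standing in for the study data.

```python
from scipy.stats import pearsonr

# BMI category coded 0 = underweight, 1 = normal, 2 = overweight, 3 = obese;
# hearing thresholds in dB. Both lists are illustrative, not the study data.
bmi_category = [0, 1, 1, 2, 2, 2, 3, 3, 3, 3]
hearing_db   = [22, 23, 24, 26, 25, 27, 28, 29, 27, 30]

r, p = pearsonr(bmi_category, hearing_db)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```

Since the BMI category is an ordinal variable, a rank-based coefficient such as Spearman's rho would be an equally reasonable choice here.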
The mean BMI (Table 4) in the control group (group F) was 22.51 ± 3.01 kg/m2. The mean BMI was 25.41 ± 2.81 kg/m2 in group A, the difference between groups A and F being statistically significant (p = 0.0001). The mean BMI was 25.33 ± 2.34 kg/m2 in group B, the difference between groups B and F being statistically significant (p = 0.0001). The mean BMI was 25.12 ± 3.14 kg/m2 in group C, the difference between groups C and F being statistically significant (p = 0.0012). The mean BMI was 25.78 ± 2.33 kg/m2 in group D, the difference between groups D and F being statistically significant (p = 0.0001). The mean BMI was 25.03 ± 1.84 kg/m2 in group E, the difference between groups E and F being statistically significant (p = 0.004).
Table 4 Mean BMI value in each group
Obesity, defined as excess body weight for a given height, is a major global health problem with life-threatening consequences. The associated excess adiposity develops slowly through a long-term positive energy balance. Apoptosis of adipocytes leads to tissue remodelling and, eventually, a high number of infiltrating macrophages, all of which produces a low-grade systemic inflammatory state [10,11,12]. Obese individuals tend to be at higher risk of developing not only diseases such as type 2 diabetes mellitus, fatty liver disease and cardiovascular disease, but also various otorhinolaryngological diseases such as otitis media with effusion, rhinosinusitis and allergy, laryngopharyngeal reflux disease and peri-/post-operative complications of adenotonsillectomy [13]. The mean age of presentation in our study was 40.66 ± 7.25 years, with the difference between each of the experimental groups (A, B, C, D, E) and the control group (F) being statistically insignificant (p-value = 0.65). This observation was consistent with the study conducted by Kim T. H. et al. (2015) [14]. The male-to-female ratio was 1:1.6 in our study; there was a female preponderance in all groups, with the difference between each of the experimental groups (A, B, C, D, E) and the control group (F) being statistically insignificant (p-value = 0.34). This observation was consistent with the study conducted by Nam J. S. et al. (2021) [15]; however, Ahmed S. et al. (2014) [16] showed a male preponderance in their study. The difference in gender distribution from previous studies could be due to variation in demographic distribution from one population to another. The percentage of obese patients in group F was 20.5% (23 patients). The percentage of obese patients was 53.6% (51 patients) in group A, 49.7% (85 patients) in group B, 39.5% (17 patients) in group C, 54.9% (56 patients) in group D and 31.3% (21 patients) in group E. Upon comparison with group F, the difference in the percentage of obese patients was statistically significant in each of the experimental groups (group A: p = 0.0001, group B: p = 0.0001, group C: p = 0.0029, group D: p = 0.001 and group E: p = 0.034). Obese patients were more likely to have otitis media with effusion (OR 1.85, 95% CI 0.15 to 6.49), chronic otitis media (OR 1.80, 95% CI 0.15 to 6.33), sudden SNHL (OR 1.62, 95% CI 0.21 to 6.40), chronic rhinosinusitis (OR 2.05, 95% CI 0.15 to 6.55) and chronic tonsillitis (OR 1.60, 95% CI 0.16 to 6.13) than the control group. In addition, disease severity increased with increasing BMI grade, showing a positive correlation in all study groups.
As detailed in the Results, the average hearing impairment likewise increased with BMI grade in the OME, COM and sudden SNHL groups (Pearson correlation coefficients of 0.77, 0.90 and 0.49, respectively). Now, considering each case group individually, we attempt to explain our study findings. It was found that the mean BMI in the group of patients with otitis media with effusion (group A) was 25.44 ± 2.81 kg/m2, which was significantly higher than in the control group (mean BMI = 22.51 ± 3.01 kg/m2). This was consistent with studies conducted by Kaya S. et al. (2017) [17] and Choi H. G. et al. (2015) [18]. Obesity is characterised by higher concentrations of inflammatory mediators such as IL-6 and TNF alpha, and some studies have shown the presence of both these mediators in the middle ear fluid of OME patients [19]. Besides this known association between OME and obesity, we also found that, of the 95 patients with OME, 42 patients (44.2%) had associated features suggestive of allergic rhinitis and 11 patients (11.5%) gave an associated history of GERD. Obesity leads to reduced adiponectin levels, which downregulates T cells, leading to altered host immunity and allergy. Allergic rhinitis induces mast cells in the nasal mucosa to secrete various inflammatory mediators, resulting in Eustachian tube occlusion and OME [20]. Huang S. L. et al. (1999) [21] also showed in their cross-sectional study that BMI was a significant predictor of allergic rhinitis and OME. However, Sybilski D. et al. (2015) [22] could not find any correlation between obesity and allergic rhinitis. In addition, the reason for GERD in the obese individuals of our study could be increased intragastric pressure and/or lower oesophageal sphincter incompetence. Thus, obesity led to GERD, which in turn, through gastric reflux reaching the middle ear via the nasopharynx and Eustachian tube, resulted in OME [23]. Rodrigues M. M. et al. (2014) [24] also suggested in their study that GERD and obesity are positively correlated. In our study, the mean BMI was 25.12 ± 2.34 kg/m2 in group B (chronic otitis media), the difference between group B and the control group being statistically significant (p = 0.0021). A similar finding was shown by Kim T. H. et al. (2015) [14]. Obesity-associated low-grade inflammation leads to the expression of cytokines (such as IL-6, TNF alpha, fibroblast growth factor and bone morphogenetic proteins) involved in the pathogenesis of chronic otitis media, resulting in tissue remodelling and inflammatory cell proliferation [25]. In our study, the mean BMI was 25.78 ± 3.14 kg/m2 in the sudden SNHL group (Table 4), the difference in mean BMI between this group and the control group being statistically significant (p = 0.0001). Lee J. S. et al. (2015) [26] and Lalwani et al.
(2013) [27] also in their study showed that increased BMI is significantly associated with prevalence of sudden SNHL. However, Hwang J. H. et al. (2015) [28] stated that there was no association between BMI and sudden SNHL. The reason for association between obesity and sudden SNHL could be due to obesity-associated microangiopathy in the vascular supply of cochlea, leading to pathological damage to inner ear [29]. In our study, out of 43 patients with sudden SNHL, 17 patients had hyperlipidaemia. The reason for sudden SNHL in these patients could be due to atherosclerosis in cochlear microvasculature due to elevated blood lipid levels. In addition, the mean BMI of chronic rhinosinusitis group was 25.78 ± 2.33 kg/m2 which was significantly higher than mean BMI of control group. Bhattacharya N. et al. (2013) [30] and Chung S. D. et al. (2014) [31] also in their study showed increased association between chronic rhinosinusitis and obesity. The reason for association between obesity and chronic rhinosinusitis could be due to the changes in expression of obesity-linked cytokines, these cytokines being associated with chronic rhinosinusitis as well [32]. Besides this reason, in our study, we also found that 33 obese patients (32.3%) out of total 102 patients of group D with chronic rhinosinusitis also gave clinical history of GERD. Loehrl et al. (2012) [33] also suggested association between GERD and chronic rhinosinusitis. The reason could be due to impaired mucociliary clearance due to direct exposure of nasal mucosa to gastric acid [34]. However, there is very limited data support of this and needs further survey as both GERD and CRS are very commonly prevalent conditions, which could occur independently in a person as well. In our study, the mean BMI was 25.03 ± 1.84 kg/m2 in group E (chronic tonsillitis), the difference between group E and control group (group F) being statistically significant (p = 0.004). This finding was consistent with observations made by Kim T. H. et al. (2015) [14] and Narang I. et al. (2012) [35]. Besides the known reason of obesity-associated cytokine expression, another reason for this could be mouth-breathing tendency among obese people and obesity-associated endocrine-mediated somatic growth, thus predisposing to recurrent larger tonsils. Obesity leads to various ENT problems by altering the immune system. In our study, mean BMI was significantly higher in patients with otitis media effusion, chronic otitis media, chronic rhinosinusitis, sudden sensorineural hearing loss and chronic tonsillitis and also, as the severity of disease increased with increase in severity of BMI, showing positive correlation for all study groups, thus establishing association of obesity and these common otorhinolaryngological conditions. As obesity could be an important risk factor, early diagnosis and treatment of these ENT disorders is of prime importance in obese individuals. Available with corresponding author upon reasonable request. Muller MJ, Geisler C (2017) Defining obesity as a disease. Eur J Clin Nutr 71:1256–1258 Pradeepa R, Anjana RM, Joshi SR (2015) Prevalence of generalised and abdominal obesity in urban and rural India: ICMR-INDIAB study (phase-I) [ICMR-INDIAB-3]. Indian J Med Res 142:139–150 World Health Organization Western Pacific Region (2000) The Asia-Pacific perspective: redefining obesity and its treatment. Health Communications Australia Pty Ltd, Sydney Sturm R, Ringel JS, Andreyeva T (2004) Increasing obesity rates and disability trends. 
Health Aff (Millwood) 23(2):199–205 Monteiro R, Azevedo I (2010) Chronic inflammation in obesity and the metabolic syndrome. Mediators Inflamm 2010:289645 Krajewska J, Krajewski W, Zatonski T (2019) The association between ENT diseases and obesity in paediatric population: a systemic review of current knowledge. Ear Nose Throat J 98(5):E32–E43 Tchkonia T, Thomou T, Zhu Y, Karagiannides I, Pothoulakis C, Jensen MD et al (2013) Mechanisms and metabolic implications of regional differences among fat depots. Cell Metab 17(5):644–656 Lanza DC, Kennedy DW (1997) Adult rhinosinusitis defined. Otolaryngol Head Neck Surg 117:S1-7 Brodsky L (1989) Modern assessment of tonsils and adenoids. Paediatr Clin North Am 36(6):1551–1569 Heymsfield SB, Wadden TA (2017) Mechanisms, pathophysiology and management of obesity. N Engl J Med 376(15):1492 Heymsfield SB, Gonzalez MC, Shen W, Redman L, Thomas D (2014) Weight loss composition is one-fourth fat free mass: a critical review and of this widely cited rule. Obes Rev 15(4):310–321 Grant RW, Dixit VD (2015) Adipose tissue as an immunological organ. Obesity (Silver Spring) 23(3):512–518 Sidell D, Shapiro NL, Bhattacharyya N (2013) Obesity and the risk of chronic rhinosinusitis, allergic rhinitis and acute otitis media in school age children. Laryngoscope 123(10):2360–2363 Kim TH, Kang HM, Oh Ih, Yeo SG (2015) Relationship between otorhinolaryngological diseases and obesity. Clin Exp Otorhinolaryngol 8(3):194–197 Nam JS, Roh YH, Fahad WA, Noh HE, Ha JG, Yoon JH et al (2021) Association between obesity and chronic rhinosinusitis with nasal polyps: a national population based study. BMJ Open 11:e047230 Ahmed S, Arjmand E, Sidell D (2014) Role of obesity in otitis media in children. Curr Allergy Asthma rep 14(11):469 Kaya S, Selimoglu E, Cureoglu S, Selimoglu MA (2017) Relationship between chronic otitis media with effusion and overweight or obesity in children. J Laryngol Otol 131(10):866–870 Choi HG, Sim S, Kim SY, Lee HJ (2015) A high fat diet is associated with otitis media with effusion. Int J Paediatr Otorhinolaryngol 79(12):2327–2331 Yellon RF, Doyle WJ, Whiteside TL, Diven WF, March AR, Fireman P (1995) Cytokines, immunoglobulins and bacterial pathogens in middle ear effusions. Arch Otolaryngol Head Neck Surg 121(8):865–869 Bernstein JM (1996) Role of allegy in Eustachian tube blockage and otitis media with effusion: a review. Otolaryngol Head Neck Surg 114:562–568 Huang SL, Shiao G, Chou P (1999) Association between body mass index and allergy in teenage girls in Taiwan. Clin Exp Allergy 29(3):323–329 Sybilski AJ, Raciborski F, Lipiec A, Tomaszewska A, Lusawa A, Furmanczyk K et al (2015) Obesity- a risk factor for asthma, but not for atopic dermatitis, allergic rhinitis and sensitization. Public Health Nutr 18(3):530–536 Al-Saab F, Manoukian JJ, Al-Sabah B, Almot S, Nguyen LHP, Tewfik TL et al (2008) Linking laryngopharyngeal reflux to otitis media with effusion: pepsinogen study of adenoid tissue and middle ear fluid. J Otolaryngol Head Neck Surg 37:565–571 Rodrigues MM, Dibbern RS, Santos VJ, Passeri LA (2014) Influence of obesity on the correlation between laryngopharyngeal reflux and obstructive sleep apnoea. Braz J Otorhinolaryngol 80(1):5–10 MacArthur CJ, Pillers DA, Pang J, Kmepton JB, Trune DR (2011) Altered expression of middle and inner ear cytokines in mouse otitis media. 
Laryngoscope 121(2):365–371 Lee JS, Kim DH, Lee HJ, Kim HJ, Koo JW, Choi HG et al (2015) Lipid profiles and obesity as potential risk factors of sudden sensorineural hearing loss. PLoS One 10(4):e0122496 Lalwani AK, Katz K, Liu YH, Kim S, Weitzman M (2013) Obesity is associated with sensorineural hearing loss in adolescents. Laryngoscope 123:3178–3184 Hwang JH (2015) Role of obesity on the prognosis of sudden sensorineural hearing loss in adults. Otolaryngol Head Neck Surg 153(2):251–256 Jung SY, Shim HS, Hah YM, Kim SH, Yeo SG (2018) Association of metabolic syndrome with sudden hearing loss. JAMA Otolaryngol Head Neck Surg 144(4):308–314 Bhattacharyya N (2013) Associations between obesity and inflammatory sinonasal disorders. Laryngoscope 123(8):1840–1844 Chung SD, Chen PY, Lin HC, Hung SH (2014) Comorbidity profile of chronic rhinosinusitis: a population based study. Laryngoscope 124(7):1536–1541 Ghanim H, Aljada A, Hofmeyer D, Syed T, Mohanty P, Dandona P (2004) Circulating mononuclear cells in the obese are in a pro-inflammatory state. Circulation 110:1564–1571 Loehrl TA, Samuels TL, Poetker DM, Tochill RJ, Blumin JH, Johnston N (2012) The role of extraesophageal reflux in medically and surgically refractory rhinosinusitis. Laryngoscope 122:1425–1430 Delehaye E, Dore MP, Bozzo C, Mameli L, Delitala G, Meloni F (2009) Correlation between nasal mucociliary clearance time and gastroesophageal reflux disease: our experience on 50 patients. Auris Nasus Larynx 36:157–161 Narang I, Mathew JL (2012) Childhood obesity and obstructive sleep apnea. J Nutr Metab 2012:134202 Department of ENT and Head and Neck Surgery, SMGS Hospital, Government Medical College Jammu, Jammu, Jammu and Kashmir, India Aditiya Saraf & Parmod Kalsotra Department of Physiology, Government Medical College Jammu, Jammu, Jammu and Kashmir, India Monica Manhas Department of Anaesthesia, Government Medical College Jammu, Jammu, Jammu and Kashmir, India Amit Manhas Aditiya Saraf Parmod Kalsotra AS made contribution in design/writing of manuscript. AM and PK made contribution in data collection. AS and MM made contribution in statistical analysis. The authors read and approved the final manuscript. Correspondence to Amit Manhas. The study was conducted after approval by Institutional Ethics Committee of Government Medical College, Jammu. Written informed consent was taken from all subjects or their legal guardian in case of age of patient being less than 18 years. Written informed consent to publish patient's clinical details was obtained from all subjects or their legal guardian in case of subject under 18 years. Saraf, A., Manhas, M., Manhas, A. et al. Obesity and ENT manifestations — a tertiary care centre study. Egypt J Otolaryngol 39, 22 (2023). https://doi.org/10.1186/s43163-023-00378-3 SNHL
Simulating and quantifying legacy topographic data uncertainty: an initial step to advancing topographic change analyses Thad Wasklewicz ORCID: orcid.org/0000-0002-2609-395X1, Zhen Zhu2 & Paul Gares1 Rapid technological advances, sustained funding, and a greater recognition of the value of topographic data have helped develop an increasing archive of topographic data sources. Advances in basic and applied research related to Earth surface changes require researchers to integrate recent high-resolution topography (HRT) data with the legacy datasets. Several technical challenges and data uncertainty issues persist to date when integrating legacy datasets with more recent HRT data. The disparate data sources required to extend the topographic record back in time are often stored in formats that are not readily compatible with more recent HRT data. Legacy data may also contain unknown error or unreported error that make accounting for data uncertainty difficult. There are also cases of known deficiencies in legacy datasets, which can significantly bias results. Finally, scientists are faced with the daunting challenge of definitively deriving the extent to which a landform or landscape has or will continue to change in response natural and/or anthropogenic processes. Here, we examine the question: how do we evaluate and portray data uncertainty from the varied topographic legacy sources and combine this uncertainty with current spatial data collection techniques to detect meaningful topographic changes? We view topographic uncertainty as a stochastic process that takes into consideration spatial and temporal variations from a numerical simulation and physical modeling experiment. The numerical simulation incorporates numerous topographic data sources typically found across a range of legacy data to present high-resolution data, while the physical model focuses on more recent HRT data acquisition techniques. Elevation uncertainties observed from anchor points in the digital terrain models are modeled using "states" in a stochastic estimator. Stochastic estimators trace the temporal evolution of the uncertainties and are natively capable of incorporating sensor measurements observed at various times in history. The geometric relationship between the anchor point and the sensor measurement can be approximated via spatial correlation even when a sensor does not directly observe an anchor point. Findings from a numerical simulation indicate the estimated error coincides with the actual error using certain sensors (Kinematic GNSS, ALS, TLS, and SfM-MVS). Data from 2D imagery and static GNSS did not perform as well at the time the sensor is integrated into estimator largely as a result of the low density of data added from these sources. The estimator provides a history of DEM estimation as well as the uncertainties and cross correlations observed on anchor points. Our work provides preliminary evidence that our approach is valid for integrating legacy data with HRT and warrants further exploration and field validation. Topographic data sources are an essential source of information used to not only address scientific questions related to landscape evolution, but also to inform policy makers and land managers of recent environmental changes. Earth scientists have also begun to carve out a role in the global climate agenda (Lane 2013) and topographic data will play a significant role in these types of analyses. 
Recent examples of this research avenue place greater emphasis on understanding how humans modify the physical environment (Tarolli and Sofia 2016), as well as examining the likelihood of extreme weather leading to changes in the magnitude and frequency of geomorphic processes and surficial features (Spencer et al. 2017). These two recent research trends also highlight a need to be able to examine recent topographic changes in the context of longer-term rates of change. This task will require greater reliance on integrating legacy and more recent high-resolution topography (HRT) data sources to aid in our understanding of environmental change (Glennie et al. 2014; Pelletier et al. 2015). However, the integration of disparate topographic data sources introduces numerous opportunities to increase uncertainty in measurements through space and time. Herein, we provide a means to assay and portray data uncertainty when fusing legacy and recent HRT data to assess change in landforms. Statistical and stochastic models are employed that are not only consistent with the topographic data structures simulated in this research, but can also track the correlation of errors over time and across different data sources to establish reliable detection levels. Our findings represent an initial step toward providing researchers with the ability to detect topographic changes with a known degree of certainty. Topographic legacy datasets come from a variety of field and remotely sensed sampling techniques and instrumentation (Wasklewicz et al. 2013). Data quality, resolution, and temporal availability from these disparate sources have varied significantly over their historical development. Current research challenges in earth science rely on a deeper understanding of how to integrate these varied datasets and their associated data uncertainty. Advancement of scientific research on landscape evolution and environmental change detection relies on definitively measuring how the earth surface is changing. More exacting measures would aid in the development of more precise early warning systems where landscape dynamics pose hazards and risks to society, and could lead to potential innovations in the design of infrastructure that is more resilient to dynamic surficial processes. Research focused on topographic changes has relied heavily on pre-event topographic data sets (e.g., Wasklewicz and Hattanji 2009; Wheaton et al. 2010). Some studies use the same instrumentation and apply a consistent methodology throughout the observation period (e.g., Wester et al. 2014; Staley et al. 2014; Wasklewicz and Scheinert 2015). The integration of topographic data from a single source, with the aid of a repeatable surveying campaign over time, has presented an opportunity to reduce systematic errors while accounting for other positional errors and surface representation uncertainties. However, when repeatable surveying campaigns are not followed, researchers have found that data can possess erroneous calibration and improper error modeling (Oskin et al. 2012; Glennie et al. 2014). These inherent issues are expressed as substantial systematic errors, which lead to improper measurements when compared with the post-event data. Glennie et al. (2014) warn that systematic errors must be minimized or removed prior to differencing the pre- and post-event airborne laser scanning (ALS) data sets. Schaffrath et al.
(2015) identified comparable issues with both vertical and horizontal measures from pre- and post-flood ALS data being inadequately defined from the use of different geoid models and poor co-registration of flight lines, respectively. The complexity of the data uncertainty increases with the incorporation of legacy data. A single instrument is never used in these cases. Instead, researchers attempt to fuse topographic data from multiple instruments as technological advances in data collection have occurred over time (Crowell et al. 1991; Carley et al. 2012; Schaffrath et al. 2015). A variety of topographic data sources that include contour maps, cross-sections or topographic profiles, raster, triangulated irregular networks (TINs), and point clouds can be integrated as legacy data into these types of analyses. Each different data source introduces varying: data quality, spatial resolution, and temporal consistency. These items increase the spatially variable uncertainty of the measurements taken during analyses of these disparate data sets, which must be accounted for in the presentation of the findings. The recent incorporation of SfM-MVS (structure from Motion-Multi-View Stereo) techniques has the benefit of permitting researchers to extend back further the legacy record by using archived aerial photographs to reconstruct a point cloud and digital elevation model (DEM) (Gomez et al. 2015; Micheletti et al. 2015), but also adding further complications to examining spatially variable uncertainty. While all signs from these studies indicate a strong potential to use these data sources to measure topographic changes from legacy data, several issues must be addressed to accurately use this approach. Some of the common photogrammetric issues included inconsistent image quality, varied scales, objects in motion, clouds, and other superfluous information in the photos (Gomez et al. 2015; Micheletti et al. 2015). Initial results indicated aerial imagery at a scale of 1:20,000 may only be able to detect changes in the 1 to 1.5 m range with quality ground control points (Micheletti et al. 2015), but in some instances this value can be at a coarser-scale (Fonstad et al. 2013; Gomez et al. 2015). Another major concern with applying this technique is overestimation of the topography within areas of vegetation and locations where there is disparate topographic relief (Gomez et al. 2015; Micheletti et al. 2015). However, some these obstacles can be overcome with the appropriate establishment of ground control points and consistent field validation of the results where possible and others will require development of uncertainty measures capable of uncovering errors in complex topography. Robust estimates of spatially variable uncertainty have received more recent attention, but remain in their infancy (Schaffrath et al. 2015). As highlighted in Carley et al. (2012), researchers have generally used three different approaches when examining uncertainty in topographic legacy data: (1) a uniform error threshold (Brasington et al. 2003; Lane et al. 2003; Milan et al., 2007), which have been noted to bias data in various topographic sequences; (2) spatial variance thresholds used to produce a minimum level of detection raster file (Heritage et al. 2009; Milan et al. 2011), which work well in settings with high-density point clouds and mid- to fine-scale roughness features; and (3) error-source threshold methods developed from fuzzy inference systems (Wheaton et al. 2010, Schaffrath et al. 
2015), which has been proven to assess the spatial variability of uncertainty often inherent in the digital elevation models (DEM[s]) used in topographic changes. Carley et al. (2012) employed a hybrid method of the spatial variance approach (#2) to consider the addition of legacy topographic map data to their analyses. They applied a survey and interpolation error (SIE) equation (Heritage et al. 2009) to produce SIE point grids for the DEMs used in the DEM-of-difference and combined these to produce a critical error threshold (Brasington et al. 2000, 2003; Lane et al. 2003) for each cell and a level-of-detection surface was generated from this style of analysis. The issue of incorporating additional measurements with existing models or maps has also been encountered in the robotics and computer vision communities. For example, a robotic mapping problem can be solved by using discretized map representation. A common approach of mapping a 2D or 3D space is to abstract the whole space into a list of objects with corresponding properties, such as location and uncertainties (Thrun et al. 2005). For example, a 2D space can be generalized into an "occupancy grid" (Elfe 1989). The occupancy grid represents a dense 3D map with a finite array of points. Any additional measurements or data sources over the same space can then relate to one or several grid points. A similar concept of sparse discretized mapping has also been applied into our approach to model topographic features. A 3D surface in space can be approximated using a finite set of known points, on which any given point can be predicted via a wide variety of interpolation techniques. Some of these techniques are commonly used in geosciences, including Kriging models (Krige 1951) and other types of regression methods (Williams 1998; Dumitru et al. 2013). There is also a realization that it is equally important, if not more so, to accurately represent the uncertainty function across the whole space. Spatial uncertainty modeling is a key element in the regression-based prediction processes. Moreover, the spatial uncertainty functions may change over time due to the underlying geomorphic processes, and this can be evaluated with a stochastic estimator, such as a Kalman filter (Kalman 1960). For example, Mardia et al. (1998) and Cortes (2009) suggested the combination of spatial and temporal modeling by using Kriging and Kalman filters. Here, our focus is on the spatio-temporal uncertainty model itself, instead of any specific interpolation method. Our approach produces an efficient, accurate, and robust uncertainty model that opens the door to the integration of legacy data and new sensors, and provides more definitive measures of landform and landscape evolution from a variety of sources. A major benefit and advance of our adopted approach is that an optimum interpolation method can be applied to this model, which would estimate or predict the elevation and the associated uncertainty for any location in the entire region, at any given time in history or even in the future. Spatio-temporal uncertainty model Here, we consider topographic uncertainty as a stochastic process that takes into consideration spatial and temporal variations. The spatial uncertainty can be modeled as a Gaussian process, which tends to vary across the region of interest, and yet often has local correlations. 
A higher level of spatial correlation may be expected from smooth surfaces, whereas sudden elevation changes, such as a steep cliff, will result in lower correlation. A covariance function can describe this type of uncertainty model and is required by most interpolation or prediction techniques. Although it could be challenging to obtain the complete covariance function, especially if such function is also changing over time, following Williams (1998), we estimate a covariance matrix over a finite set of hypothetical anchor points. The covariance matrix is updated over time, based on input from a variety of sensors, such as laser scanners and global navigation satellite system (GNSS) survey. Although the filter does benefit from direct observations whenever available, sensors are not expected to make direct observation of the anchor points. For example, a terrestrial laser scanner (TLS) provides a high-resolution scan of the region, but the point cloud is not guaranteed to overlap the hypothetical anchor points. Static GNSS surveys in historical data, on the other hand, may only be available on a few points in a region, which are even less likely to overlap with the anchor points. The lack of coincident data with the anchor does not preclude the use of this method, rather this method does not require any direct sensor observations in our experimental design or in any future field applications of this methodology. Anchor point distributions are instead dependent upon the complexity of the topographical features, which is independent of any specific sensor measurements. However, sensor observation made at any given location within the region can still be used to update the uncertainty model of the neighboring anchor point(s). To achieve that, we further assume that more points will be used to model an area with lower spatial correlation, such that the elevation at a given location can be interpolated or extrapolated with neighboring anchors with sufficient accuracy. Thus, a surficial feature with topographic complexity is covered with densely populated anchor points, whereas less topographically complex areas are covered with a sparse scatter of anchor points. The uncertainty model and associated anchor points not only account for the spatial uncertainty, but also consider the function of time. A stochastic estimator traces the evolution of such randomness, in which the anchor points form a space of "states". Thus, the elevation and uncertainty of any given point at any time can be approximated with a combination of these states. In a software simulation, we created a landform of 150 by 150 m, and simulated the elevation change over the course of 30 years. Both the landform and the elevation change have non-linear surfaces (Fig. 1). To demonstrate the concept of using anchor points, we first formulated a 5 by 5 array, which has sufficient density for a discretized representation of the simulated landform. However, anchor points do not necessarily form any regular shape in general. Simulated landform (left) and accumulated changes over 30 years (right). The magnitude of change is approximately ± 5 m Data from a TLS, an ALS, a 2D aerial camera, a stereo vision system, structure from motion SfM-MVS from a limited number of historic aerial photographs taken from a nadir-looking camera, static and Kinematic GNSS surveys are considered in our simulations (Table 1). These sensors differ greatly in accuracy and resolution. 
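Returning to the discretized representation introduced above, the following sketch builds the 5 by 5 anchor array over the 150 by 150 m simulated landform and initializes a prior covariance with a squared-exponential (Gaussian) correlation model, so that nearby anchors are strongly correlated and distant anchors are nearly independent. The 1 m prior standard deviation and 50 m correlation length are illustrative assumptions only, not values taken from the simulation.

```python
import numpy as np

# 5 x 5 anchor points spanning the 150 m x 150 m simulated landform
xs = np.linspace(0.0, 150.0, 5)
ys = np.linspace(0.0, 150.0, 5)
anchors = np.array([(x, y) for y in ys for x in xs])   # shape (25, 2)

def init_covariance(points, sigma0=1.0, corr_length=50.0):
    """Prior covariance over anchor elevations using a squared-exponential
    correlation model: nearby anchors are strongly correlated, distant ones
    are nearly independent. sigma0 and corr_length are illustrative values."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return sigma0 ** 2 * np.exp(-0.5 * (d / corr_length) ** 2)

P0 = init_covariance(anchors)     # 25 x 25 prior covariance matrix
x0 = np.zeros(len(anchors))       # prior elevation-error states
print(P0.shape, x0.shape)         # (25, 25) (25,)
```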
We assume all the sensor errors can be projected onto the vertical direction, and thus only the vertical errors are of concern in developing an objective comparison among the various sensors. Table 1 Sensor resolution and error Gaussian error models are used for GNSS positioning in this simulation. Naturally, the static surveys are more accurate. However, the Kinematic GNSS survey will gather more data points (Table 1). The uncertainty in laser ranging measurements can be modeled with two main components, angular and radial. Although the complete ranging error model may be sophisticated (Thrun et al. 2005), it is often sufficient to assume that in practice, both error sources follow a normal distribution. The underlying mechanisms are independent from each other (Glennie 2007). Figure 2 illustrates a typical application of laser scanner, which applies to both TLS and ALS. The laser scanner is elevated from the ground by h, and it scans a location on the ground at a line of sight distance r. Errors in the horizontal (azimuth) and vertical (elevation) angles result from laser beam width and the precision in orientation measurements and follow a 2D normal distribution (red ellipse in Fig. 2). Since the focus of this work is not on the high-fidelity simulation of ALS data, the comprehensive error model (Glennie 2007) is abstracted and represented with two components: a radial error σ r , and a vertical angular error σ θ . A target observed at distance r has the total uncertainty of σ v . Compared to an ALS, a TLS is closer to the ground, produces a denser point cloud, and has more accurate ranging measurements. The ALS, on the other hand, scans the ground from a high altitude, in which case the vertical error is less sensitive to the angular uncertainty. Laser scanner error model. θ: vertical angle, r: range, σr: standard deviation of radial ranging error, σθ: standard deviation of vertical angular error, σv: standard deviation of total vertical error Data from a 2D camera image does not provide direct observation of elevation, as can be seen in Table 1. However, 2D images can be related to the 3D landform via a 3D-2D projection model (Hartley and Zisserman 2003). Let vector [u, v]T represent the 2D location of a point L on the image of a landform, observed using a camera with lens focal length f, and vector \( {\left[{x}_L^C,{y}_L^C,{z}_L^C\right]}^T \) be the 3D location of point L, as observed in the camera frame (C frame). The 3D-2D relationship can be modeled with $$ \left[\begin{array}{c}u\\ {}v\end{array}\right]=\frac{f}{z_L^C}\left[\begin{array}{c}{x}_L^C\\ {}{y}_L^C\end{array}\right], $$ which is implemented in the estimator. Any change in \( {x}_L^C \) and \( {y}_L^C \) are thus observable in 2D images. The landform elevation change is observed on the z axis in a Global frame G, \( \Delta {z}_L^G \). As shown in Fig. 3, point L is located at \( {\boldsymbol{X}}_L^G \), and it is being observed by a camera located at \( {X}_C^G \), both in the global frame. The elevation error \( \Delta {z}_L^G \) is related to the observation error made in the camera frame \( \Delta {x}_L^C \). \( \Delta {x}_L^C \) can be geometrically decomposed into errors in the horizontal direction, \( \Delta {x}_L^G \) and the vertical direction, \( \Delta {z}_L^G \) in the global frame. The camera associated with the aerial photography is typically located above the landform (Fig. 3). 
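Before continuing with the vertical observability argument, a compact numerical sketch of the two geometric error models just described may be useful: the total vertical error of a laser range measurement obtained from its radial and angular components, and the pinhole projection of a camera-frame point onto 2D image coordinates. The angle convention (measured from the horizontal) and all numerical values are illustrative assumptions rather than the sensor specifications used in the study.

```python
import numpy as np

def vertical_error(r, theta, sigma_r, sigma_theta):
    """Total 1-sigma vertical error of a laser range measurement, propagating
    the radial error sigma_r and vertical angular error sigma_theta (radians)
    through z = r*sin(theta); theta is assumed measured from the horizontal."""
    return np.sqrt((np.sin(theta) * sigma_r) ** 2 +
                   (r * np.cos(theta) * sigma_theta) ** 2)

def project_point(p_cam, f):
    """Pinhole projection of a camera-frame point [x, y, z] to image
    coordinates: [u, v] = (f / z) * [x, y]."""
    x, y, z = p_cam
    return np.array([f * x / z, f * y / z])

# Illustrative numbers only: a TLS shot at 50 m near the horizontal versus
# an ALS shot at 1000 m near nadir, and an overhead camera observation.
print(vertical_error(r=50.0,   theta=np.deg2rad(5.0),  sigma_r=0.005, sigma_theta=1e-4))
print(vertical_error(r=1000.0, theta=np.deg2rad(85.0), sigma_r=0.05,  sigma_theta=1e-4))
print(project_point(np.array([2.0, 1.0, 100.0]), f=0.05))
```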
The vertical observable \( \Delta {z}_L^G \) is thus smaller than the horizontal component \( \Delta {x}_L^G \) in this geometry. Therefore, it tends to be less effective (Table 1) when elevation changes are detected with an overhead image collected with an airborne camera. 2D camera image error model in a typical aerial photography application The aforementioned projection model should not be confused with SfM-MVS, which operates on the same principle of multi-view geometry (Hartley and Zisserman 2003), as does stereo vision or triangulation. A camera in motion (with a limited overlapping nadir-looking images to simulate a historic aerial photogrammetric approach) will provide repeated observations of the same 3D landform [u, v]T 1. . n , collected from n perspectives (unstructured photographs). In this process, the camera orientation and location are known at all n perspectives, referenced to frame G. The 2D images, [u, v]T 1. . n will then be converted into unit vectors in frame G, U G 1. . n . At the ith perspective, U G i points from the camera position \( {\boldsymbol{X}}_{C,i}^G \) to the landform position \( {\boldsymbol{X}}_L^G \). Since \( {\boldsymbol{X}}_L^G \) does not change over the different perspectives, it is solved by using U G 1. . n and \( {\boldsymbol{X}}_{C,1..n}^G \) via an optimization process (Hartley and Zisserman 2003). Figure 4 illustrates the geometric relationship between the landform and the camera on perspectives 1 and n. The uncertainty of \( {\boldsymbol{X}}_L^G \) will also be dependent on the accuracy of U G 1. . n and \( {\boldsymbol{X}}_{C,1..n}^G \), and the geometric relationship between \( {\boldsymbol{X}}_{C,1..n}^G \) and \( {\boldsymbol{X}}_L^G \). We conservatively assume a short displacement between camera perspectives in this experiment, which results in greater vertical errors in SfM. SfM error model with two perspectives. A long displacement example is shown on the left, with the error of landmark location illustrated in a purple ellipse. On the right is a short displacement example, where the error is greater in the vertical direction The stochastic estimator uses measurements that become available sequentially over time. Let a state vector x represent the vertical errors on the 25 anchors, located at m x [1..25] , and a 25 by 25 matrix P for the covariance among states. x and P are initialized with some measurements, such as a TLS scan at time t k-1, and then propagated to another time t k using a Brownian process. $$ {\boldsymbol{x}}_{\boldsymbol{k};\boldsymbol{k}-1}=\boldsymbol{F}{\boldsymbol{x}}_{\boldsymbol{k}-1;\boldsymbol{k}-1}+\boldsymbol{\mu} $$ $$ {\boldsymbol{P}}_{\boldsymbol{k};\boldsymbol{k}-1}=\boldsymbol{F}{\boldsymbol{P}}_{\boldsymbol{k}-1;\boldsymbol{k}-1}\boldsymbol{F}+\boldsymbol{Q}, $$ where F is a matrix describing the dynamic relationship between states, which will be the identity matrix in this case. μ is the process noise component, corresponding to the Brownian process, of which the covariance matrix is defined with Q. Without additional measurements, the matrix P, will grow as a function of time. The Brownian process may be established with the best-known model of terrain change, and will only be used to predict the increase of uncertainty if the model is valid for the underlying geomorphic process. The fundamental concept can be illustrated using just two of the anchor points, located at m x[1] and m x[2], respectively. The initial uncertainty of both points can be represented with two circles at t 0 (Fig. 5). 
As time propagates to t 1, the uncertainty grows for both points, illustrated with greater circles. A new measurement becomes available at t 1, made at location m z [t 1]. The uncertainty of this measurement is represented with an ellipse. This measurement is then used to update both anchor points. Right after this update, at t 1+, the uncertainty of both points is reduced to ellipses. Since the update is closer to m x[2], the corresponding ellipse becomes smaller than that of m x[1]. Without additional measurements, both ellipses are propagated to t 2, which result in greater ellipses. A second measurement was made at t 2, which occurred much closer to m x[1] this time. It effectively reduced the size of the ellipses, more so on m x[1] than m x[2]. Two anchor points located mx[1] (blue) and mx[2] (red) show what happens when new measurements become available over time. Uncertainty in both points are represented by circles and the circles increase and decrease in size with increasing or decreasing uncertainty, respectively The geometric relationship between anchor points and any measurement can still be represented or approximated with a linear/linearized model, which can be established by using an interpolation method. Notice that the interpolation model used in this step estimates the covariance function, instead of predicting the landform as does Kriging or polynomial regression. The bicubic interpolation method introduced in Keys (1981) has been widely used in the computer vision community, and it has been adopted in digital elevation surface modeling (Dumitru et al. 2013). With bicubic interpolation, a linearized relationship between a given point and the anchor points, denoted with matrix H, can be easily estimated. This approach handles non-linear landforms, and remains relatively computationally efficient. The elevation at m z now has a direct measurement z, and a prediction from the estimator states, Hx k; k − 1 . Any disagreement between them is likely a result of uncertainties associated with both and new information that can be observed as "innovation" on the states, $$ \boldsymbol{y}=\boldsymbol{z}-\boldsymbol{H}{\boldsymbol{x}}_{\boldsymbol{k};\boldsymbol{k}-1}, $$ with a de-facto standard estimation method, the Kalman filter (Kalman 1960), x and P are updated with a gain K: $$ \boldsymbol{K}={\boldsymbol{P}}_{\boldsymbol{k};\boldsymbol{k}-1}{\boldsymbol{H}}^{\prime }{\left(\boldsymbol{H}{\boldsymbol{P}}_{\boldsymbol{k};\boldsymbol{k}-1}{\boldsymbol{H}}^{\prime }+\boldsymbol{R}\right)}^{-1} $$ which allows us to update the states x and covariance P using this new information: $$ {\boldsymbol{x}}_{\boldsymbol{k};\boldsymbol{k}}={\boldsymbol{x}}_{\boldsymbol{k};\boldsymbol{k}-1}+\boldsymbol{Ky} $$ $$ {\boldsymbol{P}}_{\boldsymbol{k};\boldsymbol{k}}=\left(\boldsymbol{I}-\boldsymbol{KH}\right){\boldsymbol{P}}_{\boldsymbol{k};\boldsymbol{k}-1} $$ After this step, the states are further propagated onto time t k + 1 , with the integration of another measurement (such as aerial photo). The time history of states x and covariance matrix P are the outcome of this approach. Notice that only a few key steps of the estimator are highlighted here for the sake of conciseness. For details on the derivation and implementation, please refer to (Kalman 1960) and (Smith et al. 1962). At any given point in time, x and P are used as input to predict the elevation and uncertainty of any given point on the landform with a standard interpolation method. 
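The prediction and update steps described above can be sketched as follows. The linearized model H is built here with simple inverse-distance weights purely for illustration (the study uses bicubic interpolation), and the process noise, measurement locations, measurement values and noise levels are assumed for the example.

```python
import numpy as np

def predict(x, P, Q):
    """Time propagation with a Brownian process: with F = identity the anchor
    states are carried forward unchanged and only the uncertainty grows,
    x_k|k-1 = x_k-1|k-1 and P_k|k-1 = P_k-1|k-1 + Q."""
    return x.copy(), P + Q

def interpolation_row(anchors, loc, power=2.0, eps=1e-9):
    """One row of the linearized model H relating a measurement at `loc` to the
    anchor points. Inverse-distance weighting is used purely for illustration;
    the study builds H with bicubic interpolation."""
    d = np.linalg.norm(anchors - loc, axis=1)
    w = 1.0 / (d + eps) ** power
    return w / w.sum()

def update(x, P, z, H, R):
    """Kalman measurement update: y = z - Hx, K = P H'(H P H' + R)^-1,
    x <- x + K y, P <- (I - K H) P."""
    y = z - H @ x
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Illustrative run over the 5 x 5 anchor grid (all values are assumptions)
anchors = np.array([(x, y) for y in np.linspace(0, 150, 5)
                            for x in np.linspace(0, 150, 5)])
x, P = np.zeros(25), 0.04 * np.eye(25)      # states/covariance from a prior update
Q = (0.02 ** 2) * np.eye(25)                # assumed annual process noise
for _ in range(6):                          # six years without measurements
    x, P = predict(x, P, Q)
# two survey-style observations made in year 6 at assumed locations
H = np.vstack([interpolation_row(anchors, np.array([40.0, 60.0])),
               interpolation_row(anchors, np.array([110.0, 20.0]))])
z = np.array([0.03, -0.05])                 # observed elevation errors (m)
R = (0.02 ** 2) * np.eye(2)                 # assumed 2 cm (1-sigma) measurement noise
x, P = update(x, P, z, H, R)
```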
An accurate and consistent covariance matrix P is the key to an optimum interpolation process. Consistency of the covariance is defined by how well P describes and over-bounds the actual error in the states x. To verify the estimation, x will be compared against a truth reference x ref used in the software simulation. With field data, the truth reference can be generated from a high-resolution, high-fidelity sensor such as TLS. The variance and covariance of the error ∆x = x − x ref are expected to be closely bounded by P at each time step t k . Since it may be difficult to visualize the comparison between matrices, the standard error σ ∆x [k] will be compared against an uncertainty level for this step, computed as the root mean square of all the diagonal elements in P. $$ \overline{\sigma}_{est}\left[k\right]=\sqrt{\overline{\operatorname{diag}\left(\boldsymbol{P}_{k;k}\right)}} $$ In the software simulation, the actual elevation change is not provided to the estimator. Rather, an over-bounding Brownian process is used, and we expect to have \( {\overline{\sigma}}_{\boldsymbol{est}}\ge {\sigma}_{\Delta \boldsymbol{x}} \) over the 30-year history.
Physical modeling experiment
A physical model was developed in the Terrain Analysis Lab at East Carolina University as a further means of validating our uncertainty measures. The controlled simulation is designed around a fan-shaped surficial feature (i.e. an alluvial fan or washover fan) built in a stainless-steel stream table (3.7 m long by 1.8 m wide) using coarse sand placed on a white sheet of plotter paper to reduce the reflectivity of the stainless-steel table (Fig. 6). The fan-shaped feature is 0.974 m wide and 0.392 m in length, with a relief of 0.05 m from the bottom of the stream table (Fig. 6). The experimental fan model. a Location of control points and cameras. b SfM image of the fan on the plotter paper with the circle targets (image rotated 90 CCW to A). c A topographic map of the experimental fan. d Elevation difference from t 1 to t 0 A Leica P20 laser scanner was inverted and mounted on an aluminum swing set (Lisenby et al. 2014). Targets mounted on the wall of the lab and within the stainless-steel table, as well as the corner points of the table, were used to register the TLS data (Fig. 6). Cartesian coordinates associated with the TLS point clouds provided locations for the control points used in the SfM. A single scan from the nadir-looking position of the scanner captured the entire simulated fan surface at t 0 and t 1. SfM data were collected with a Ricoh GR II digital camera. The camera was positioned at two different heights at 32 locations looking obliquely and inward at the fan surface (Fig. 6). A total of 64 images were used to generate a point cloud with the aid of Agisoft Photoscan software. A set of 14 12-bit targets from the Agisoft software was printed on sticky-back paper and adhered to the stainless-steel table, along with 10 small circle targets of known dimensions. The targets and circles were used to aid the photo alignment process. The original landform (t 0) was modified (t 1) by adding approximately 2 cm of coarse sand onto a segment of the fan (Figs. 6 and 7) to emulate fan segment aggradation over an arbitrary time-period (for example, t 1 is set to 30 years). TLS and SfM-MVS data were collected at both t 0 and t 1. TLS at t 1 is used as a truth reference.
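The consistency check defined by the equation above can be computed directly from the filter covariance and a truth reference; a minimal sketch (with placeholder inputs) follows.

```python
import numpy as np

def estimated_sigma(P):
    """Filter-estimated uncertainty: root mean square of the diagonal of P."""
    return np.sqrt(np.mean(np.diag(P)))

def actual_sigma(x, x_ref):
    """Standard error of the estimated states against a truth reference
    (for example, the t1 TLS scan in the physical experiment)."""
    x = np.asarray(x)
    x_ref = np.asarray(x_ref)
    return np.sqrt(np.mean((x - x_ref) ** 2))

# Consistency requires estimated_sigma(P) >= actual_sigma(x, x_ref) at each step.
```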
We initialize the estimator at t 0 with TLS and SfM-MVS, propagate and update at t 1 using SfM-MVS. The estimated \( {\overline{\sigma}}_{\boldsymbol{est}}\left[1\right] \) is compared against the actual error σ ∆x [1] in this case. Elevation model from TLS point cloud of the simulated fan model at t 1 The estimator in the software simulation is initialized with a TLS scan. Subsequently, five additional sensor data sets become available over 30 years, to observe the geomorphic change. A general pattern evolves whereby the estimated uncertainty provides an over bound of the actual error (Fig. 8), i.e., \( {\overline{\sigma}}_{\boldsymbol{est}}\ge {\sigma}_{\Delta \boldsymbol{x}} \) . Both \( {\overline{\sigma}}_{\boldsymbol{est}} \) and σ ∆x grow over time during the simulation, and are dramatically reduced by some measurements, such as ALS, SfM-MVS and Kinematic GNSS surveys. Other sensors, such as 2D photos and static GNSS, are less effective, as expected. Although the SfM-MVS measurements are not nearly as accurate as static GNSS survey, it offers a much higher point density and therefore, is also able to reduce the error more effectively. The over-bounding associated with the static GNSS has more to do with the lack of data produced with this technique than the accuracy of the data collected. The truth reference used in this software simulation can be used to verify the estimation in x (Fig. 9). Uncertainty of software simulated landform over 30 years, estimated sigma \( {\overline{\boldsymbol{\sigma}}}_{\boldsymbol{est}} \) vs. actual error σ ∆x Actual vs. estimated terrain change in year 7 (left), year 19 (middle) and year 31 (right). Upper: actual terrain change x ref ; lower: estimation in states x The estimator is also applied to the changes associated with the lab-simulated fan model. As this terrain is more sophisticated, a 20 by 20 mesh of anchors is used. The actual change x ref is obtained by comparing the two TLS scans (Fig. 10). The estimated change is observed on the anchor points, by using a less accurate and lower-resolution SfM-MVS update. At t 1 = 30 years, \( {\overline{\sigma}}_{\boldsymbol{est}}=5.6 mm \) and σ ∆x = 3.7mm, \( {\overline{\sigma}}_{\boldsymbol{est}} \) is greater than, and yet reasonably close to σ ∆x . The difference is below the detectable limits for topographic change associated with the data source instrumentation (Wester et al. 2014, Staley et al. 2014; Wasklewicz and Scheinert 2015). Actual vs. estimated terrain change in lab-simulated fan model. Upper: actual terrain change x ref ; lower: estimation in states x The software simulation and physical modeling experiment both show that the estimator can provide a reliable and consistent measure of uncertainty in legacy data. Although some sensors are more desired than others, the availability of sensors in legacy data and sometimes also in new data collection campaigns dictates that data from all sensors must be incorporated. The estimator will allow us to retrieve information from sensors of low accuracy, low observability (2D images) or low spatial resolution (static surveys). We further emphasize that the discussed estimator is not limited to just the Kalman filter. Other Bayesian estimators, such as particle filters (Montemerlo and Thrun 2003), are applicable as well. Similarly, bicubic interpolation is not the only choice for interpolation either. 
Other methods that can yield a linearized relationship between measurements and states are also valid options, which will provide various types of advantages (Dumitru et al. 2013). The statistical consistency of uncertainties in all the sensor measurements and estimates are critical in our application. In practice, repeated measurements could also lead to a covariance matrix P that is too small (Julier and Uhlmann 2001), such as high-density data from SfM-MVS. As x and P can be used to predict the elevation and uncertainty of the whole landform, an overly optimistic P can easily cause unexpected errors in the prediction. For instance, few erroneous points in x with small variances would dominate the prediction and generate a bias in a neighborhood. Overly optimistic P also would prevent x from being updated with other sensors. Additional steps will be taken in future work to guarantee the consistency of P. The successful outcomes and corroboration in our initial software and physical modeling simulations, the ability to adopt and use multiple estimators and interpolation techniques, and an ability to be employed across a variety of environmental settings provides a new alternative to integrating legacy and HRT data. Applications of existing and planned methodology to field research will help extend our understanding of the dynamic landform and landscape changes that take place through space and time. At present, earth science researchers conducting studies with the aid of HRT data have placed a great deal of emphasis on methods and techniques that provide rapid, accurate, and spatially continuous topographic data. HRT provide the user with unparalleled information of short-term changes in processes and forms. Subtle topographic changes that earth scientists must measure in the future, for example those associated with climate change (Lane 2013), require detailed data that can be precisely acquired and accurately assessed to address scientific questions and inform policy makers. However, the sole use of HRT data limits the temporal framework for which earth scientists can consider these changes because these data sets have only been recently developed. Legacy topographic data allow us to extend this temporal perspective of change backward in time to better inform us of variations on decadal time-scales and in some instances, as far back as a half-century or more (Corsini et al. 2009; Carley et al. 2012; James et al. 2012; Schaffrath et al. 2015). While we appreciate that this represents a miniscule amount of time from a geological perspective, the ability to use legacy topographic data increases the temporal sample size earth scientists are working from and allows us to make more authoritative statements about landform evolution than the shorter-term view supplied by research solely using HRT data. Scientists need to be able to definitively measure the environmental changes and communicate these measures to a broad audience of possible users that include their peers, public officials, and the community. Expressing the uncertainty associated with these measurements or reducing the uncertainty in the data production process is a critical first step to achieving more definitive measures and communication. Here, our uncertainty model, associated anchor points, and stochastic estimator clearly highlight that spatial uncertainty can be accounted for across the time-scales of the legacy and HRT data. 
Each elevation point/grid has an associated error, which can be used to produce better maps when interpolating to raster data and provide better measurement tolerances as they relate to topographic changes and sediment transport. The ability to accurately assess spatially variable uncertainty across a broad range of temporal scales also provides a means to exchange information between researchers, engineers, policy makers, and environmental managers and planners. The longer-term understanding of topographic change and sediment transport and the uncertainties associated with these dynamics permits greater understanding of issues such as: water quality, habitat loss, and risks to built-environments and human lives from environmental hazards. Furthermore, an ability to measure the level of data uncertainty associated with past landform and landscape dynamics will help advance landscape evolution models because of the ability to corroborate existing landscape evolutionary models with the longer-term record of topographic change derived from legacy topographic data sets and recent HRT data. This is a necessary advancement as we may come to rely on landscape evolution models as an integral tool in the decision and management schemes as we face climate change and increased scenarios of extreme weather. Our software simulation and physical modeling experiments provide a new approach to measuring topographic data uncertainty where legacy data from a variety of data sources can be integrated with HRT data to expand the time-scales of topographic change detection. As anticipated, the difference in the actual and expected errors of our HRT physical model experiment was quite small (< 2 mm). Current instrumentation and field methods often have a higher minimal level of detection, so this value is quite acceptable for HRT data sources and represents a value comparable to or lower than most uncertainty measures found in current research exploring topographic change detection. The uncertainty model, associated anchor points, and stochastic estimator was further applied to a software simulation whereby a variety of remote-sensing data sources were used to simulate data capture from legacy data sources. Our findings show the estimated error coincides with the actual error using certain sensors (Kinematic GNSS, ALS, TLS, and SfM-MVS). Data from 2D imagery and static GNSS did not perform as well at the time the sensor is integrated into estimator. Nevertheless, the software simulation shows the approach can be used to estimate the error associated with all elevation values in the legacy and HRT data over the time-period of the simulation. Our findings show a strong potential for this technique to be applied in a variety of field settings. We anticipate further development of the uncertainty model in future research. Our goal is to expand the capability to densify the anchor points in areas that are more topographically complex and test by overfitting the surface how well the uncertainty model is performing in these locations. At present, it is not clear if the use of sparse points (for example, techniques that rely on interpolation from topographic maps) under sample the topographic surface and therefore, do not accurately represent the surface or the spatially variable uncertainty in the measurement of topographic change. 
We intend to expand the capabilities of the model by conducting further software simulations that analyze and visualize topographic change over time using varied densification of anchor points. The advances in the uncertainty model will then be further evaluated using real-world data from a variety of sources. For example, ALS data can be used to simulate different types of data by varying the data density, data noise, etc., to examine known differences in the data and the capability to measure these data perturbations using the refined uncertainty model. Once these items have been worked out, our intent is to apply the uncertainty model to real-world topographic and environmental changes within various environmental settings, from more topographically complex areas such as mountainous environments to less topographically complex locales such as barrier islands.

Our existing uncertainty model shows clear evidence that the temporal component of spatially variable uncertainty of legacy topographic data sets can be measured in both of our simulations. A capacity to integrate legacy topographic data expands earth scientists' ability to understand topographic changes and sediment transport over longer time-scales. This should lead to more definitive measurements and answers regarding how environments have changed over the past 50 to 100 years (depending upon the temporal availability of topographic data sources). More conclusive measures and findings stem from our ability to assess uncertainty across the spectrum of legacy data in relation to more recent HRT data using techniques like those documented in this research. New results from these approaches should provide valuable information to the broader scientific community and society. Societal benefits will likely be recognized in the form of delivering relevant information that shows the potential range of environmental changes over time-scales demanded by managers and policy makers.

ALS: Airborne laser scanning
DEM: Digital elevation model; DEM-of-difference
GNSS: Global navigation satellite system
HRT: High-resolution topography
SfM-MVS: Structure from Motion-Multi-View Stereo
Survey and interpolation error
TIN: Triangular irregular networks
TLS: Terrestrial laser scanning

Brasington J, Langham J, Rumsby B (2003) Methodological sensitivity of morphometric estimates of coarse fluvial sediment transport. Geomorphology 53:299–316
Brasington J, Rumsby BT, McVey RA (2000) Monitoring and modelling morphological change in a braided gravel-bed river using high resolution GPS-based survey. Earth Surf Process Landf 25:973–990
Carley JK, Pasternack GB, Wyrick JR, Barker JR, Bratovich PM, Massa DA, Reedy GD, Johnson TR (2012) Significant decadal channel change 58–67 years post-dam accounting for uncertainty in topographic change detection between contour maps and point cloud models. Geomorphology 179:71–88
Corsini A, Borgatti L, Cervi F, Dahne A, Ronchetti F, Sterzai P (2009) Estimating mass-wasting processes in active earth slides–earth flows with time-series of high-resolution DEMs from photogrammetry and airborne LiDAR. Nat Hazards Earth Syst Sci 9:433–439
Cortes J (2009) Distributed kriged Kalman filter for spatial estimation. IEEE Trans Autom Control 54:2816–2827
Crowell M, Leatherman SP, Buckley MK (1991) Historical shoreline change: error analysis and mapping accuracy. Journal of Coastal Research 839–852
Dumitru PD, Marin P, Dragos B (2013) Comparative study regarding the methods of interpolation. In: Recent advances in geodesy and geomatics engineering, pp 45–52
Elfes A (1989) Using occupancy grids for mobile robot perception and navigation. Computer 22:46–57
Fonstad MA, Dietrich JT, Courville BC, Jensen JL, Carbonneau PE (2013) Topographic structure from motion: a new development in photogrammetric measurement. Earth Surf Process Landf 38:421–430
Glennie C (2007) Rigorous 3D error analysis of kinematic scanning LIDAR systems. J Appl Geodesy 1:147–157
Glennie CL, Hinojosa-Corona A, Nissen E, Kusari A, Oskin ME, Arrowsmith JR, Borsa A (2014) Optimization of legacy lidar data sets for measuring near-field earthquake displacements. Geophys Res Lett 41:3494–3501
Gomez C, Hayakawa Y, Obanawa H (2015) A study of Japanese landscapes using structure from motion derived DSMs and DEMs based on historical aerial photographs: new opportunities for vegetation monitoring and diachronic geomorphology. Geomorphology 242:11–20
Hartley R, Zisserman A (2003) Multiple view geometry in computer vision. Cambridge University Press, Cambridge
Heritage GL, Milan DJ, Large AR, Fuller IC (2009) Influence of survey strategy and interpolation model on DEM quality. Geomorphology 112:334–344
James LA, Hodgson ME, Ghoshal S, Latiolais MM (2012) Geomorphic change detection using historic maps and DEM differencing: the temporal dimension of geospatial analysis. Geomorphology 137:181–198
Julier SJ, Uhlmann JK (2001) General decentralized data fusion with covariance intersection (CI). In: Handbook of data fusion. CRC Press, Boca Raton
Kalman RE (1960) A new approach to linear filtering and prediction problems. J Basic Eng 82:35–45
Keys R (1981) Cubic convolution interpolation for digital image processing. IEEE Trans Acoust Speech Signal Process 29:1153–1160
Krige DG (1951) A statistical approach to some mine valuations and allied problems at the Witwatersrand. Master's thesis, University of Witwatersrand, Witwatersrand
Lane SN (2013) 21st century climate change: where has all the geomorphology gone? Earth Surf Process Landf 38:106–110
Lane SN, Westaway RM, Murray-Hicks D (2003) Estimation of erosion and deposition volumes in a large, gravel-bed, braided river using synoptic remote sensing. Earth Surf Process Landf 28:249–271
Lisenby PE, Slattery MC, Wasklewicz TA (2014) Morphological organization of a steep, tropical headwater stream: the aspect of channel bifurcation. Geomorphology 214:245–260
Mardia KV, Goodall C, Redfern EJ, Alonso FL (1998) The Kriged Kalman filter. TEST 7:217–285
Micheletti N, Lane SN, Chandler JH (2015) Application of archival aerial photogrammetry to quantify climate forcing of alpine landscapes. Photogramm Rec 30:143–165
Milan DJ, Heritage GL, Hetherington D (2007) Application of a 3D laser scanner in the assessment of erosion and deposition volumes and channel change in a proglacial river. Earth Surf Process Landf 32:1657–1674
Milan DJ, Heritage GL, Large AR, Fuller IC (2011) Filtering spatial error from DEMs: implications for morphological change estimation. Geomorphology 125:160–171
Montemerlo M, Thrun S (2003) Simultaneous localization and mapping with unknown data association using FastSLAM. In: Proceedings of the IEEE International Conference on Robotics and Automation, Taipei
Oskin ME, Arrowsmith JR, Corona AH, Elliott AJ, Fletcher JM, Fielding EJ, Gold PO, Garcia JJG, Hudnut KW, Liu-Zeng J, Teran OJ (2012) Near-field deformation from the El Mayor–Cucapah earthquake revealed by differential LIDAR. Science 335:702–705
Pelletier JD, Brad Murray A, Pierce JL, Bierman PR, Breshears DD, Crosby BT, Ellis M, Foufoula-Georgiou E, Heimsath AM, Houser C, Lancaster N (2015) Forecasting the response of Earth's surface to future climatic and land use changes: a review of methods and research needs. Earth's Future 3(7):220–251
Schaffrath KR, Belmont P, Wheaton JM (2015) Landscape-scale geomorphic change detection: quantifying spatially variable uncertainty and circumventing legacy data issues. Geomorphology 250:334–348
Smith GL, Schmidt SF, McGee LA (1962) Application of statistical filter theory to the optimal estimation of position and velocity on board a circumlunar vehicle. National Aeronautics and Space Administration, Washington, DC
Spencer T, Naylor L, Lane S, Darby S, Macklin M, Magilligan F, Möller I (2017) Stormy geomorphology: an introduction to the special issue. Earth Surf Process Landf 42:238–241
Staley DM, Wasklewicz TA, Kean JW (2014) Characterizing the primary material sources and dominant erosional processes for post-fire debris-flow initiation in a headwater basin using multi-temporal terrestrial laser scanning data. Geomorphology 214:324–338
Tarolli P, Sofia G (2016) Human topographic signatures and derived geomorphic processes across landscapes. Geomorphology 255:140–161
Thrun S, Burgard W, Fox D (2005) Probabilistic robotics. MIT Press, Boston
Wasklewicz T, Scheinert C (2015) Development and maintenance of a telescoping debris flow fan in response to human-induced fan surface channelization, Chalk Creek valley natural debris flow laboratory, Colorado, USA. Geomorphology 252:51–65
Wasklewicz T, Staley DM, Reavis K, Oguchi T (2013) Digital terrain modeling. In: Shroder J, Bishop MP (eds) Treatise on geomorphology, vol 3. Academic Press, San Diego, pp 130–161
Wasklewicz TA, Hattanji T (2009) High-resolution analysis of debris flow–induced channel changes in a headwater stream, Ashio Mountains, Japan. Prof Geogr 61:231–249
Wester T, Wasklewicz T, Staley D (2014) Functional and structural connectivity within a recently burned drainage basin. Geomorphology 206:362–373
Wheaton JM, Brasington J, Darby SE, Sear DA (2010) Accounting for uncertainty in DEMs from repeat topographic surveys: improved sediment budgets. Earth Surf Process Landf 35:136–156
Williams CKI (1998) Prediction with Gaussian processes: from linear regression to linear prediction and beyond. Learn Graphical Models 89:599–621

We thank Kailey Adams and Robert Rankin for their assistance in our experiments. This work was funded by an East Carolina University Interdisciplinary Research Collaboration Award entitled "Development of Scalable Sensor Payload for Rapid High Resolution Topographic Scanning via Micro Aerial Vehicles". Funding for this award was supplied by the Division of Research and Graduate Studies in partnership with the Divisions of Academic Affairs and Health Sciences. The Terrain Analysis Lab and ECU Department of Geography, Planning, and Environment provided the physical modeling facilities and equipment to simulate field measurements in a controlled setting.
Department of Geography, Planning, and Environment, East Carolina University, Brewster Building A227, Greenville, NC, 27858, USA (Thad Wasklewicz & Paul Gares)
College of Engineering and Technology, East Carolina University, Science and Technology Complex Suite 100, Greenville, NC, 27858, USA (Zhen Zhu)
TW proposed the topic, helped conceive the study design, analyzed the data, helped with the interpretation of the data, co-wrote the paper, and is corresponding author. ZZ helped conceive the study design, carried out the experimental study, analyzed the data, interpreted the data, designed figures, and co-wrote the paper. PG helped in the interpretation of the data and co-wrote the paper. All authors read and approved the final manuscript.
Correspondence to Thad Wasklewicz.
TW, ZZ, and PG declare that they have no competing interests.
Wasklewicz, T., Zhu, Z. & Gares, P. Simulating and quantifying legacy topographic data uncertainty: an initial step to advancing topographic change analyses. Prog Earth Planet Sci 4, 32 (2017). doi:10.1186/s40645-017-0144-7
Accepted: 28 September 2017
Keywords: Digital terrain models; Geomorphic change detection; High-definition topographic and geophysical data in geosciences
Haplotype-aware diplotyping from noisy long reads
Jana Ebler, Marina Haukness, Trevor Pesout, Tobias Marschall & Benedict Paten

Current genotyping approaches for single-nucleotide variations rely on short, accurate reads from second-generation sequencing devices. Presently, third-generation sequencing platforms are rapidly becoming more widespread, yet approaches for leveraging their long but error-prone reads for genotyping are lacking. Here, we introduce a novel statistical framework for the joint inference of haplotypes and genotypes from noisy long reads, which we term diplotyping. Our technique takes full advantage of linkage information provided by long reads. We validate hundreds of thousands of candidate variants that have not yet been included in the high-confidence reference set of the Genome-in-a-Bottle effort.

Reference-based genetic variant identification comprises two related processes: genotyping and phasing. Genotyping is the process of determining which genetic variants are present in an individual's genome. A genotype at a given site describes whether both chromosomal copies carry a variant allele, whether only one of them carries it, or whether the variant allele is not present at all. Phasing refers to determining an individual's haplotypes, which consist of variants that lie near each other on the same chromosome and are inherited together. To completely describe all of the genetic variation in an organism, both genotyping and phasing are needed. Together, the two processes are called diplotyping. Many existing variant analysis pipelines are designed for short DNA sequencing reads [1, 2]. Though short reads are very accurate at a per-base level, they can suffer from being difficult to unambiguously align to the genome, especially in repetitive or duplicated regions [3]. The result is that millions of bases of the reference human genome are not currently reliably genotyped by short reads, primarily in multi-megabase gaps near the centromeres and short arms of chromosomes [4]. While short reads are unable to uniquely map to these regions, long reads can potentially span into or even across them. Long reads have already proven useful for read-based haplotyping, large structural variant detection, and de novo assembly [5–8]. Here, we demonstrate the utility of long reads for more comprehensive genotyping. Due to the historically greater relative cost and higher sequencing error rates of these technologies, little attention has been given thus far to this problem. However, long-read DNA sequencing technologies are rapidly falling in price and increasing in general availability. Such technologies include single-molecule real-time (SMRT) sequencing by Pacific Biosciences (PacBio) and nanopore sequencing by Oxford Nanopore Technologies (ONT), both of which we assess here. The genotyping problem is related to the task of inferring haplotypes from long-read sequencing data, on which a rich literature and many tools exist [8–14], including our own software WhatsHap [15, 16]. The most common formalization of haplotype reconstruction is the minimum error correction (MEC) problem. The MEC problem seeks to partition the reads by haplotype such that a minimum number of errors need to be corrected in order to make the reads from the same haplotype consistent with each other.
In principle, this problem formulation could serve to infer genotypes, but in practice, the "all heterozygous" assumption is made: tools for haplotype reconstruction generally assume that a set of heterozygous positions is given as input and exclusively work on these sites. Despite this general lack of tools, some methods for genotyping using long reads have been proposed. Guo et al. [17] describe a method for long-read single-nucleotide variant (SNV) calling and haplotype reconstruction which identifies an exemplar read at each SNV site that best matches nearby reads overlapping the site. It then partitions reads around the site based on similarity to the exemplar at adjacent SNV sites. However, this method is not guaranteed to discover an optimal partitioning of the reads between haplotypes, and the authors report a comparatively high false discovery rate (15.7%) and false-negative rate (11.0%) for PacBio data of NA12878, which corresponds to an F1 score of only 86.6%. Additionally, two groups are presently developing learning-based variant callers which they show can be tuned to work using long, noisy reads: In a recent preprint, Luo et al. [18] describe a method which uses a convolutional neural network (CNN) to call variants from long-read data, which they report to achieve an F1 score between 94.90 and 98.52%, depending on parametrization (when training on read data from one individual and calling variants on a different individual, see Table 3 of [18]). Poplin et al. [19] present another CNN-based tool, which achieves an F1 score of 92.67% on PacBio data (according to Supplementary Table 3 of [19]). These measures appear promising; however, these methods do not systematically exploit the linkage information between variants provided by long reads. Thus, they do not leverage one of the key advantages of long reads. For an illustration of the potential benefit of using long reads to diplotype across adjacent sites, consider Fig. 1a. There are three SNV positions shown which are covered by long reads. The gray sequences represent the true haplotype sequences, and reads are colored in blue and red, where the colors correspond to the haplotype which the respective read stems from: the red ones from the upper sequence, and the blue ones from the lower one. Since sequencing errors can occur, the alleles supported by the reads are not always equal to the true ones in the haplotypes shown in gray. Considering the SNVs individually, it would be reasonable to call the first one as A/C, the second one as T/G, and the third one as G/C, since the number of reads supporting each allele is the same. This leads to a wrong prediction for the second SNV. However, if we knew which haplotype each read stems from, that is, if we knew their colors, then we would know that there must be sequencing errors at the second SNV site. Since the reads stemming from the same haplotypes must support the same alleles and there are discrepancies between the haplotyped reads at this site, any genotype prediction at this locus must be treated as highly uncertain. Therefore, using haplotype information during genotyping makes it possible to detect uncertainties and potentially compute more reliable genotype predictions. Motivation and overview of diplotyping. a Gray sequences illustrate the haplotypes; the reads are shown in red and blue. The red reads originate from the upper haplotype, the blue ones from the lower. Genotyping each SNV individually would lead to the conclusion that all of them are heterozygous. 
Using the haplotype context reveals uncertainty about the genotype of the second SNV. b Clockwise starting top left: first, sequencing reads aligned to a reference genome are given as input; second, the read alignments are used to nominate candidate variants (red vertical bars), which are characterized by the differences to the reference genome; third, a hidden Markov model (HMM) is constructed where each candidate variant gives rise to one "row" of states, representing possible ways of assigning each read to one of the two haplotypes as well as possible genotypes (see the "Methods" section for details); fourth, the HMM is used to perform diplotyping, i.e., we infer genotypes of each candidate variant as well as how the alleles are assigned to haplotypes.

In this paper, we show that for contemporary long read technologies, read-based phase inference can be simultaneously combined with the genotyping process for SNVs to produce accurate diplotypes and to detect variants in regions not mappable by short reads. We show that key to this inference is the detection of linkage relationships between heterozygous sites within the reads. To do this, we describe a novel algorithm to accurately predict diplotypes from noisy long reads that scales to deeply sequenced human genomes. We then apply this algorithm to diplotype one individual from the 1000 Genomes Project, NA12878, using long reads from both PacBio and ONT. NA12878 has been extensively sequenced and studied, and the Genome in a Bottle Consortium has published sets of high confidence regions and a corresponding set of highly confident variant calls inside these genomic regions [20]. We demonstrate that our method is accurate, that it can be used to confirm variants in regions of uncertainty, and that it allows for the discovery of variants in regions which are unmappable using short DNA read sequencing technologies.

A unified statistical framework to infer genotypes and haplotypes
We formulated a novel statistical framework based upon hidden Markov models (HMMs) to analyze long-read sequencing data. In short, we identify potential SNV positions and use our model to efficiently evaluate the bipartitions of the reads, where each bipartition corresponds to assigning each read to one of the individual's two haplotypes. The model ensures that each read stays in the same partition across variants, and hence does not "switch haplotypes," something which is key to exploiting the inherent long-range information. Based on the read support of each haplotype at each site, the model determines the likelihood of the bipartition. By using the forward-backward algorithm, we pursue "global" diplotype inference over whole chromosomes, a process that yields genotype predictions by determining the most likely genotype at each position, as well as haplotype reconstructions. In contrast to panel-based methods, like the Li-Stephens model [21], our method relies on read data instead of using knowledge of existing haplotypes. In Fig. 1b, we give a conceptual overview of our approach and describe it in more detail in the "Methods" section. Adding robustness to our analysis, we provide two independent software implementations of our model: one is made available as an extension to WhatsHap [16, 22] and the other is a from-scratch implementation called MarginPhase.
While the core algorithmic ideas are the same, MarginPhase and WhatsHap differ primarily in their specialized preprocessing steps, with the former being developed to work primarily with nanopore data and the latter developed to work primarily with PacBio (although both can work with either). The MarginPhase workflow includes an alignment summation step described in the "Allele supports" section, whereas WhatsHap performs a local realignment around analyzed sites explained in the "Allele detection" section.

Data preparation and evaluation
To test our methods, we used sequencing data for NA12878 from two different long-read sequencing technologies. NA12878 is a participant from the 1000 Genomes Project [2] who has been extensively sequenced and analyzed. This is the only individual for whom there are both PacBio and Nanopore sequencing reads publicly available. We used Oxford Nanopore reads from Jain et al. [7] and PacBio reads from the Genome in a Bottle Consortium [23]. Both sets of reads were aligned to GRCh38 with minimap2, a mapper designed to align error-prone long reads [24] (version 2.10, using default parameters for PacBio and Nanopore reads, respectively). To ensure that any variants we found were not artifacts of misalignment, we filtered out the reads flagged as secondary or supplementary, as well as reads with a mapping quality score less than 30. Genome-wide, this left approximately 12 million Nanopore reads and 35 million PacBio reads. The Nanopore reads had a median depth of 37× and median length of 5950 bp, including a set of ultra-long reads with lengths up to 900 kb. The PacBio reads had a median depth of 46× and median length of 2600 bp. To validate the performance of our methods, we used callsets from Genome in a Bottle's (GIAB) benchmark small variant calls v3.3.2 [20]. First, we compared against GIAB's set of high confidence calls, generated by a consensus algorithm spanning multiple sequencing technologies and variant calling programs. The high confidence regions associated with this callset exclude structural variants, centromeres, and heterochromatin. We used this to show our method's accuracy in well-understood and easy-to-map regions of the genome. We also compared our results to two more expansive callsets, which cover a larger fraction of the genome and were used in the construction of GIAB's high confidence variants, one made by GATK HaplotypeCaller v3.5 (GATK/HC, [1]) and the other by FreeBayes 0.9.20 [25], both generated from a 300× PCR-free Illumina sequencing run [20].

Evaluation statistics
We compute the precision and recall of our callsets using the tool vcfeval from Real Time Genomics [26] (version 3.9) in order to analyze our algorithm's accuracy of variant detection between our query callsets and a baseline truth set of variants. All variants that identically match between the truth and query callsets (meaning they share the same genomic position, alleles, and genotype) are considered true positive calls. Calls that do not match any variants in the truth callset are false positives, and truth callset variants that are not matched in our callset are false negatives. In order to evaluate the ability of our algorithm to genotype a provided set of variant positions, we compute the genotype concordance.
Here, we take all correctly identified variant sites (correct genomic position and alleles), compare the genotype predictions (homozygous or heterozygous) made by our method to the corresponding truth set genotypes, and compute the fraction of correct genotype predictions. This enables us to analyze how well the genotyping component of our model performs regardless of errors arising from wrongly called variant sites in the detection stage of the algorithm. We evaluate the phasing results by computing the switch error rate between the haplotypes our algorithms predict and the truth set haplotypes. We take all variants into account that were correctly genotyped as heterozygous in both our callset and the truth set. Switch errors are calculated by counting the number of times a jump from one haplotype to the other is necessary within a phased block in order to reconstruct the true haplotype sequence [16]. We restrict all analysis to SNVs, not including any short insertions or deletions. This is due to the error profile of both PacBio and Nanopore long reads, for which erroneous insertions and deletions are the most common type of sequencing error by far, particularly in homopolymers [27, 28]. Comparison to short read variant callers We explored the suitability of the current state-of-the-art callers for short reads to process long-read data (using default settings) but were unsuccessful. The absence of base qualities in the PacBio data prevented any calling; for Nanopore data, FreeBayes was prohibitively slow and neither Platypus nor GATK/HC produced any calls. Long read coverage We determined the regions where long and short reads can be reliably mapped to the human genome for the purpose of variant calling, aiming to understand if long reads could potentially make new regions accessible. In Fig. 2, various coverage metrics for short and long reads are plotted against different genomic features, including those known for being repetitive or duplicated. These metrics are described below. Reach of short read and long read technologies. The callable and mappable regions for NA12878 spanning various repetitive or duplicated sequences on GRCh38 are shown. Feature locations are determined based on BED tracks downloaded from the UCSC Genome Browser [48]. Other than the Gencode regions [49, 50], all features are subsets of the Repeat Masker [51] track. Four coverage statistics for long reads (shades of red) and three for short reads (shades of blue) are shown. The labels "PacBio Mappable" and "Nanopore Mappable" describe areas where at least one primary read with GQ ≥ 30 has mapped, and "Long Read Mappable" describes where this is true for at least one of the long read technologies. "Long Read Callable" describes areas where both read technologies have coverage of at least 20 and less than twice the median coverage. "GIAB High Confidence," "GATK Callable," and "Short Read Mappable" are the regions associated with the evaluation callsets. For the feature-specific plots, the numbers on the right detail coverage over the feature and coverage over the whole genome (parenthesized) The callsets on the Illumina data made by GATK/HC and FreeBayes come with two BED files describing where calls were made with some confidence. The first, described in Fig. 2 as Short Read Mappable, was generated using GATK CallableLoci v3.5 and includes regions where there is (a) at least a read depth of 20 and (b) at most a depth of twice the median depth, only including reads with mapping quality of at least 20. 
This definition of callable only considers read mappings. The second, described as GATK Callable, was generated from the GVCF output from GATK/HC by excluding the areas with genotype quality less than 60. This is a more sophisticated definition of callable as it reflects the effects of homopolymers and tandem repeats. We use these two BED files in our analysis of how short and long reads map differently in various areas of the genome. For long reads, we show four coverage statistics. The entries marked as "mappable" describe the areas where there is at least one high-quality long-read mapping (PacBio Mappable, Nanopore Mappable, and Long Read Mappable for regions where at least one of the sequencing technologies mapped). The Long Read Callable entries cover the regions where our methods should be able to call variants due to having a sufficient depth of read coverage. In these regions, both sequencing technologies had a minimum read depth of 20 and a maximum of twice the median depth (this is similar to the GATK CallableLoci metric, although made from BAMs with significantly less read depth). Figure 2 shows that in almost all cases, long reads map to a higher fraction of the genome than short reads map to. For example, nearly half a percent of the whole genome is mappable by long reads but not short reads. Long reads also map to 1% more of the exome, and 13% more of segmental duplications. Centromeres and tandem repeats are outliers to this generalization, where neither PacBio nor Nanopore long reads cover appreciably more than Illumina short reads. Comparison against high confidence truth set To validate our method, we first analyzed the SNV detection and genotyping performance of our algorithm using the GIAB high confidence callset as a benchmark. All variants reported in these statistics fall within both the GIAB high confidence regions and regions with a read depth between 20 and twice the median depth. Variant detection Figure 3 (top) shows precision and recall of WhatsHap run on PacBio data and MarginPhase on Oxford Nanopore data, which gives the best performance for these two data types (see Additional file 1: Figure S1 for the results for WhatsHap on ONT and MarginPhase on PacBio). On PacBio reads, WhatsHap achieves a precision of 97.9% and recall of 96.3%. On Nanopore reads, MarginPhase achieves a precision of 76.9% and a recall of 80.9%. We further stratify the performance of our methods based on the variant type. For homozygous variants, WhatsHap on PacBio data has a precision of 98.3% and a recall of 99.3%, MarginPhase on Nanopore data has a precision of 99.3% and a recall of 84.5%. For heterozygous variants, WhatsHap on PacBio data has a precision of 96.8% and a recall of 93.8%; MarginPhase on Nanopore data has a precision of 66.5% and a recall of 78.6%. The high error rate of long reads contributes to the discrepancy in the performance between homozygous and heterozygous variant detection, making it more difficult to distinguish the read errors from the alternate allele for heterozygous variants. In Section 5 of Additional file 1, we further discuss the precision and recall as a function of read depth, and we report more performance based on variant type in Section 6. Precision and recall of MarginPhase on Nanopore and WhatsHap on PacBio datasets in GIAB high confidence regions. Genotype concordance (bottom) (wrt. GIAB high confidence calls) of MarginPhase (mp, top) on Nanopore and WhatsHap (wh, middle) on PacBio (PB). 
Furthermore, genotype concordance for the intersection of the calls made by WhatsHap on the PacBio and MarginPhase on the Nanopore reads is shown (bottom).

Long reads have the ability to access regions of the genome inaccessible to short reads ("Long read coverage" section). To explore the potential of our approach to contribute to extending gold standard sets, such as the one produced by the GIAB effort, we produced a combined set of variants which occur in both the calls made by WhatsHap on the PacBio reads and MarginPhase on the Nanopore data, where both tools report the same genotype. This improves the precision inside the GIAB high confidence regions to 99.7% with a recall of 78.7%. In further analysis, we refer to this combined variant set as Long Read Variants. It reflects a high precision subset of variants validated independently by both sequencing technologies. While data from both technologies are usually not available for the same sample in routine settings, such a call set can be valuable for curating variants on well-studied individuals such as NA12878. In order to further analyze the quality of the genotype predictions of our methods (heterozygous or homozygous), we computed the genotype concordance (defined in the "Data preparation and evaluation" section) of our callsets with respect to the GIAB ground truth inside of the high confidence regions. Figure 3 (bottom) shows the results. On the PacBio data, WhatsHap obtains a genotype concordance of 99.79%. On the Nanopore data, MarginPhase obtains a genotype concordance of 98.02%. Considering the intersection of the WhatsHap calls on PacBio and the MarginPhase calls on Nanopore data (i.e., the Long Read Variants set), we obtain a genotype concordance of 99.99%. We detail the genotype performances for different thresholds on the genotype quality scores that our methods report for each variant call (Additional file 1: Section 7). In addition to genotyping variants, MarginPhase and WhatsHap can also phase them. We evaluated the results of both methods by computing switch error rates (defined in the "Data preparation and evaluation" section) inside the GIAB high-confidence regions for correctly located and genotyped GIAB truth set variants. We computed the switch error rate of MarginPhase on Nanopore and WhatsHap on PacBio reads. For both datasets, we achieved a low switch error rate of 0.17%. In Additional file 1: Table S1, corresponding per-chromosome switch error rates are given.

Cutting and downsampling reads
Our genotyping model incorporates haplotype information into the genotyping process by using the property that long sequencing reads can cover multiple variant positions. Therefore, one would expect the genotyping results to improve as the length of the provided sequencing reads increases. In order to examine how the genotyping performance depends on the length of the sequencing reads and the coverage of the data, the following experiment was performed using the WhatsHap implementation. The data were downsampled to average coverages of 10×, 20×, 25×, and 30×. All SNVs inside of the high confidence regions in the GIAB truth set were re-genotyped from each of the resulting downsampled read sets, as well as from the full coverage data sets. Two versions of the genotyping algorithm were considered. First, the full-length reads as given in the BAM files were provided to WhatsHap.
Second, in an additional step prior to genotyping, the aligned sequencing reads were cut into shorter pieces such that each resulting fragment covered at most two variants. Additionally, we cut the reads into fragments covering only one variant position. The genotyping performances of these genotyping procedures were finally compared by determining the amount of incorrectly genotyped variants. Figure 4 shows the results of this experiment for the PacBio data. The genotyping error increases as the length of reads decreases. Especially at lower coverages, the genotyping algorithm benefits from using the full length reads, which leads to much lower genotyping errors compared to using the shorter reads that lack information of neighboring variants. For the Nanopore reads, the results were very similar (Additional file 1: Figure S2). In general, the experiment demonstrates that incorporating haplotype information gained from long reads does indeed improve the genotyping performance. This is especially the case at low coverages, since here, the impact of sequencing errors on the genotyping process is much higher. Computing genotypes based on bipartitions of reads that represent possible haplotypes of the individual helps to reduce the number of genotyping errors, because it makes it easier to detect sequencing errors in the given reads. Genotyping errors (with respect to GIAB calls) as a function of coverage. The full length reads were used for genotyping (blue), and additionally, reads were cut such as to cover at most two variants (red) and one variant (yellow) Callset consensus analysis Call sets based on long reads might contribute to improving benchmark sets such as the GIAB truth set. We analyze a call set created by taking the intersection of the variants called by WhatsHap on PacBio reads and MarginPhase on Nanopore reads, which leaves variants that were called identically between the two sets. In Fig. 5, we further dissect the relation of this intersection callset, which we call Long Read Variants, to the GIAB truth set, as well as its relation to the callsets from GATK HaplotypeCaller and FreeBayes, which both contributed to the GIAB truth set. Confirming short-read variants. We examine all distinct variants found by our method, GIAB high confidence, GATK/HC, and FreeBayes. Raw variant counts appear on top of each section, and the percentage of total variants is shown at the bottom. a All variants. b Variants in GIAB high-confidence regions. c Variants outside GIAB high-confidence regions Figure 5a reveals that 404,994 variants in our Long Read Variants callset were called by both the GATK Haplotype Caller and FreeBayes, yet are not in the GIAB truth set. To gather additional support for the quality of these calls, we consider two established quality metrics: the transition/transversion ratio (Ti/Tv) and the heterozygous/non-ref homozygous ratio (Het/Hom) [29]. The Ti/Tv ratio of these variants is 2.09, and the Het/Hom ratio is 1.31. These ratios are comparable to those of the GIAB truth set, which are 2.10 and 1.55, respectively. An examination of the Platinum Genomes benchmark set [30], an alternative to GIAB, reveals 78,493 such long-read validated variants outside of their existing truth set. We hypothesized that a callset based on long reads is particularly valuable in the regions that were previously difficult to characterize. 
To investigate this, we separately examined the intersections of our Long Read Variants callset with the two short-read callsets both inside the GIAB high confidence regions and outside of them; see Fig. 5b and c, respectively. These Venn diagrams clearly indicate that the concordance of GATK and FreeBayes was indeed substantially higher in high confidence regions than outside. An elevated false-positive rate of the short-read callers outside the high confidence regions is a plausible explanation for this observation. Interestingly, the fraction of calls concordant between FreeBayes and GATK for which we gather additional support is considerably lower outside the high confidence regions. This is again compatible with an increased number of false positives in the short-read callsets, but we emphasize that these statistics should be interpreted with care in the absence of a reliable truth set for these regions.

Candidate novel variants
To demonstrate that our method allows for variant calling on more regions of the genome than short-read variant calling pipelines, we have identified 15,498 variants which lie outside of the Short Read Mappable area, but inside the Long Read Callable regions. These variants therefore fall within the regions in which there is a sequencing depth of at least 10 and not more than 2 times the median depth for both long-read sequencing technologies, yet the regions are unmappable by short reads. We determined that 4.43 Mb of the genome are only mappable by long reads in this way. Table 1 provides the counts of all variants found in each of the regions from Fig. 2, as well as the counts for candidate novel variants, among the different types of genomic features described in the "Long read coverage" section. Over two thirds of the candidate variants occurred in the repetitive or duplicated regions described in the UCSC Genome Browser's repeatMasker track. The transition/transversion ratio (Ti/Tv) of NA12878's 15,498 candidate variants is 1.64, and the heterozygous/homozygous ratio (Het/Hom) of these variants is 0.31. Given that we observe 1 candidate variant in every 325 haplotype bases of the 4.43 Mb of the genome only mappable by long reads, compared to 1 variant in every 1151 haplotype bases in the GIAB truth set on the whole genome, these candidate variants exhibit a 3.6× increase in the haplotype variation rate.
Table 1 Distribution of candidate novel variants across different regions of interest

Whole-genome variant detection using WhatsHap took 166 CPU hours on PacBio reads, of which genotyping took 44 h. Per chromosome, a maximum of 4.2 GB of memory was required for genotyping, and additionally, at most 2.6 GB was needed for phasing. The MarginPhase implementation took 1550 CPU hours on ONT data, broken down into 330 h for diplotyping and 1220 h for read realignment (described in the "Allele supports" section). The MarginPhase workflow breaks the genome into 2-Mb overlapping windows, and on each of these windows, MarginPhase required on average 22.6 GB of memory and a maximum of 30.2 GB. We found that the time-consuming realignment step significantly improved the quality of the ONT results and consider this the major cause of the difference in runtimes. Furthermore, the methods employed to find candidate sites differ between the implementations.
WhatsHap performs genotyping and phasing in two steps, whereas MarginPhase handles them simultaneously after filtering out the sites that are likely homozygous (in the case of ONT data, this is between 98 and 99% of sites). The filtration heuristic used during our evaluation resulted in MarginPhase analyzing roughly 10× the number of sites than WhatsHap, increasing the runtime and memory usage. We introduce a novel statistical model to unify the inference of genotypes and haplotypes from long (but noisy) third-generation sequencing reads, paving the way for genotyping at intermediate coverage levels. We emphasize that our method operates at coverage levels that preclude the possibility of performing a de novo genome assembly, which, until now, was the most common use of long-read data. Furthermore, we note that unlike the approaches using a haplotype reference panel of a population for statistical phasing and/or imputation [31], our approach only uses sequencing data from the individual; hence, its performance does not rely on the allele frequency within a population. Our method is based on a hidden Markov model that partitions long reads into haplotypes, which we found to improve the quality of variant calling. This is evidenced by our experiment in cutting and downsampling reads, where reducing the number of variants spanned by any given read leads to decreased performance at all levels of read coverage. Therefore, our method is able to translate the increased read lengths of third generation platforms into increased genotyping performance for these noisy long reads. Our analysis of the methods against a high confidence truth set in high confidence regions shows false discovery rates (corresponding to one minus precision) between 3 and 6% for PacBio and between 24 and 29% for Nanopore. However, when considering a conservative set of variants confirmed by both long read technologies, the false discovery rate drops to around 0.3%, comparable with contemporary short-read methods in these regions. In analyzing the area of the genome with high-quality long-read mappings, we found roughly a half a percent of the genome (approximately 15 Mb) that is mappable by long reads but not by short reads. This includes 1% of the human exome, as well as over 10% of segmental duplications. Even though some of these areas have low read counts in our experimental data, the fact that they have high-quality mappings means that they should be accessible with sufficient sequencing. We note that this is not the case for centromeric regions, where Illumina reads were able to map over twice as much as we found in our PacBio data. This may be a result of the low quality in long reads preventing them from uniquely mapping to these areas with an appreciable level of certainty. We demonstrate that our callset has expected biological features, by showing that over our entire set of called variants, the Ti/Tv and Het/Hom ratios were similar to those reported by the truth set. The Ti/Tv ratio of 2.18 is slightly above the 2.10 reported in the GIAB callset, and the Het/Hom ratio of 1.36 is slightly lower than the 1.55 found in the GIAB variants. In the 15,498 novel variant candidates produced by our method in regions unmappable by short reads, the Ti/Tv ratio of 1.64 is slightly lower than that of the truth set. This is not unexpected as gene-poor regions such as these tend to have more transversions away from C:G pairs [32]. 
We also observe that the Het/Hom ratio dropped to 0.31, which could be due to the systematic biases in our callset or in the reference genome. The rate of variation in these regions was also notably different than in the high confidence regions, where we find three variants per thousand haplotype bases (3.6× the rate in high confidence regions). A previous study analyzing NA12878 [33] also found an elevated variation rate in the regions where it is challenging to call variants, such as low-complexity regions and segmental duplications. The study furthermore found clusters of variants in these regions, which we also observe. The high precision of our set containing the intersection of variants called on Nanopore reads and variants called on PacBio reads makes it useful as strong evidence for confirming existing variant calls. As shown in the read coverage analysis, in both the GIAB and Platinum Genomes efforts many regions could not be called with high confidence. In the regions excluded from GIAB, we found around 400,000 variants using both Nanopore and PacBio reads with our methods, which were additionally confirmed by 2 other variant callers, FreeBayes and GATK/HC, on Illumina reads. Given the extensive support of these variants from multiple sequencing technologies and variant callers, these 400,000 variants are good candidates for addition to the GIAB truth set. Expansion of benchmark sets to harder-to-genotype regions of the human genome is generally important for the development of more comprehensive genotyping methods, and we plan to work with these efforts to use our results. Variant calling with long reads is difficult because they are lossy and error-prone, but the diploid nature of the human genome provides a means to partition reads to lessen this effect. We exploit the fact that reads spanning heterozygous sites must share the same haplotype to differentiate read errors from true variants. We provide two implementations of this method in two long-read variant callers, and while both implementations can be run on either sequencing technology, we currently recommend that MarginPhase is used on ONT data and that WhatsHap is used on PacBio data. One way we anticipate improvement to our method is by incorporating methylation data. Hidden Markov models have been used to produce methylation data for ONT reads using the underlying signal information [34, 35]. As shown by the read-cutting experiment, the amount of heterozygous variants spanned by each read improves our method's accuracy. We predict that the inclusion of methylation into the nucleotide alphabet will increase the amount of observable heterozygosity and therefore further improve our ability to call variants. Work has begun to include methylation probabilities into our method. The long-read genotyping work done by Luo et al. [18] using CNNs does not account for haplotype information. Partitioning reads into haplotypes as a preprocessing step (such as our method does) may improve the CNN's performance; we think this is an interesting avenue of exploration. Further, our method is likely to prove useful for future combined diplotyping algorithms when both genotype and phasing is required, for example, as may be used when constructing phased diploid de novo assemblies [36, 37] or in future hybrid long/short-read diplotyping approaches. Therefore, we envision the statistical model introduced here to become a standard tool for addressing a broad range of challenges that come with long-read sequencing of diploid organisms. 
We describe a probabilistic model for diplotype inference, and in this paper use it, primarily, to find maximum posterior probability genotypes. The approach builds upon the WhatsHap approach [22] but incorporates a full probabilistic allele inference model into the problem. It has similarities to that proposed by Kuleshov [38], but we here frame the problem using Hidden Markov models (HMMs). Alignment matrix Let M be an alignment matrix whose rows represent sequencing reads and whose columns represent genetic sites. Let m be the number of rows, let n be the number of columns, and let Mi,j be the jth element in the ith row. In each column, let Σj⊂Σ represent the set of possible alleles such that Mi,j∈Σj∪{−}, the " −" gap symbol representing a site at which the read provides no information. We assume no row or column is composed only of gap symbols, an uninteresting edge case. An example alignment matrix is shown in Fig. 6. Throughout the following, we will be informal and refer to a row i or column j, being clear from the context whether we are referring to the row or column itself or the coordinate. Alignment matrix. Here, the alphabet of possible alleles is the set of DNA nucleotides, i.e., Σ={A,C,G,T} Genotype inference problem overview A diplotype H=(H1,H2) is a pair of haplotype (segments); a haplotype (segment)\(H^{k} = H_{1}^{k}, H_{2}^{k}, \ldots, H_{n}^{k}\) is a sequence of length n whose elements represents alleles such that \(H_{j}^{k} \in \Sigma _{j}\). Let B=(B1,B2) be a bipartition of the rows of M into two parts (sets): B1, the first part, and B2, the second part. We use bipartitions to represent which haplotypes the reads came from, of the two in a genome. By convention, we assume that the first part of B are the reads arising from H1 and the second part of B are the reads arising from H2. The problem we analyze is based upon a probabilistic model that essentially represents the (weighted) minimum error correction (MEC) problem [39, 40], while modeling the evolutionary relationship between the two haplotypes and so imposing a cost on bipartitions that create differences between the inferred haplotypes. For a bipartition B, and making an i.i.d. assumption between sites in the reads: $${}P(H | B, \mathbf{M}) = \prod_{j=1}^{n} \sum_{Z_{j} \in \Sigma_{j}} P\left(H_{j}^{1} | B^{1}, Z_{j}\right) P\left(H_{j}^{2} | B^{2}, Z_{j}\right) P(Z_{j}) $$ Here, P(Zj) is the prior probability of the ancestral allele Zj of the two haplotypes at column j, by default we can use a simple flat distribution over ancestral alleles (but see below). The posterior probability \(P(H_{j}^{k}|B^{k}, Z_{j}) = \) $$\frac{ P\left(H_{j}^{k} | Z_{j}\right) \prod_{\left\{ i \in B^{k} : \mathbf{M}_{i,j} \not= -\right\}} P\left(\mathbf{M}_{i,j} | H_{j}^{k}\right)}{\sum_{Y_{j} \in \Sigma_{j}} P(Y_{j} | Z_{j}) \prod_{\left\{ i \in B^{k} : \mathbf{M}_{i,j} \not= -\right\}} P(\mathbf{M}_{i,j} | Y_{j})} $$ for k∈{1,2}, where the probability \(P\left (H^{k}_{j} | Z_{j}\right)\) is the probability of the haplotype allele \(H^{k}_{j}\) given the ancestral allele Zj. For this, we can use a continuous time Markov model for allele substitutions, such as Jukes and Cantor [41], or some more sophisticated model that factors the similarities between alleles (see below). Similarly, \(P\left (\mathbf {M}_{i,j} | H_{j}^{k}\right)\) is the probability of observing allele Mi,j in a read given the haplotype allele \(H_{j}^{k}\). 
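As a minimal numeric illustration of the per-column quantities just defined, the following sketch evaluates P(H_j^k | B^k, Z_j) for one read partition under a uniform ancestral prior, a simple symmetric substitution model for P(H_j^k | Z_j), and a symmetric sequencing-error model for P(M_i,j | H_j^k). All names, rates, and values are illustrative assumptions rather than the settings used by WhatsHap or MarginPhase.

```python
def allele_posterior(alleles, observations, ancestral, error_rate=0.1, mutation_rate=0.001):
    """P(H_j^k | B^k, Z_j) for each candidate haplotype allele at one column.

    alleles      -- the allele set Sigma_j at this column, e.g. ("A", "C")
    observations -- alleles M_{i,j} observed in the reads of partition B^k (gaps omitted)
    ancestral    -- the ancestral allele Z_j
    """
    def p_obs(obs, hap):          # P(M_{i,j} | H_j^k): symmetric sequencing-error model
        return 1.0 - error_rate if obs == hap else error_rate / (len(alleles) - 1)

    def p_hap(hap, anc):          # P(H_j^k | Z_j): simple substitution model
        return 1.0 - mutation_rate if hap == anc else mutation_rate / (len(alleles) - 1)

    weights = {}
    for hap in alleles:
        w = p_hap(hap, ancestral)
        for obs in observations:
            w *= p_obs(obs, hap)
        weights[hap] = w
    total = sum(weights.values())
    return {hap: w / total for hap, w in weights.items()}

# Four reads assigned to one haplotype at a bi-allelic column: three support A, one supports C
print(allele_posterior(("A", "C"), ["A", "A", "C", "A"], ancestral="A"))
```

In the full model these per-partition terms are combined across both partitions and summed over the ancestral allele, as in the expression for P(H | B, M) above.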
The genotype inference problem we consider is finding for each site: $$\underset{\left(H^{1}_{j}, H^{2}_{j}\right)}{\text{arg\,max}} P\left(H^{1}_{j}, H^{2}_{j} |\mathbf{M}\right) = \underset{\left(H^{1}_{j}, H^{2}_{j}\right)}{\text{arg\,max}} \sum_{B} P\left(H^{1}_{j}, H^{2}_{j} | B,\mathbf{M}\right) $$ i.e., finding the genotype \(\left (H^{1}_{j}, H^{2}_{j}\right)\) with maximum posterior probability for a generative model of the reads embedded in M.

A graphical representation of read partitions
For column j in M, row i is active if the first non-gap symbol in row i occurs at or before column j and the last non-gap symbol in row i occurs at or after column j. Let Aj be the set of active rows of column j. For column j, row i is terminal if its last non-gap symbol occurs at column j or j=n. Let \(A^{\prime }_{j}\) be the set of active, non-terminal rows of column j. Let \(B_{j} = \left (B_{j}^{1}, B_{j}^{2}\right)\) be a bipartition of Aj into the first part \(B_{j}^{1}\) and a second part \(B_{j}^{2}\). Let Bj be the set of all possible such bipartitions of the active rows of j. Similarly, let \(C_{j} = \left (C_{j}^{1}, C_{j}^{2}\right)\) be a bipartition of \(A^{\prime }_{j}\) and Cj be the set of all possible such bipartitions of the active, non-terminal rows of j. For two bipartitions B=(B1,B2) and C=(C1,C2), B is compatible with C if the subset of B1 in C1∪C2 is a subset of C1, and, similarly, the subset of B2 in C1∪C2 is a subset of C2. Note this definition is symmetric and reflexive, although not transitive. Let G=(VG,EG) be a directed graph. The vertices VG are the set of bipartitions of both the active rows and the active, non-terminal rows for all columns of M and a special start and end vertex, i.e., \(V_{G} = \{\mathrm{start, end}\} \cup (\bigcup_{j} \mathbf{B}_{\mathbf{j}} \cup \mathbf{C}_{\mathbf{j}})\). The edges EG are a subset of compatibility relationships, such that (1) for all j, there is an edge (Bj∈Bj,Cj∈Cj) if Bj is compatible with Cj; (2) for all 1≤j<n, there is an edge (Cj∈Cj,Bj+1∈Bj+1) if Cj is compatible with Bj+1; (3) there is an edge from the start vertex to each member of B1; and (4) there is an edge from each member of Bn to the end vertex (note that Cn is empty and so contributes no vertices to G). Figure 7 shows an example graph. Example graph. Left—an alignment matrix. Right—the corresponding directed graph representing the bipartitions of active rows and active non-terminal rows, where the labels of the nodes indicate the partitions, e.g., "1,2 /." is shorthand for A=({1,2},{}) The graph G has a large degree of symmetry and the following properties are easily verified: For all j and all Bj∈Bj, the indegree and outdegree of Bj is 1. For all j, the indegree of all members of Cj is equal. Similarly, for all j, the outdegree of all members of Cj is equal. Let the maximum coverage, denoted maxCov, be the maximum cardinality of a set Aj over all j. By definition, maxCov≤m. Using the above properties it is easily verified that (1) the cardinality of G (number of vertices) is bounded by this maximum coverage, being less than or equal to 2+(2n−1)·2^maxCov and (2) the size of G (number of edges) is at most 2n·2^maxCov. Let a directed path from the start vertex to the end vertex be called a diploid path, D=(D1=start,D2,…,D2n+1=end).
The graph is naturally organized by the columns of M, so that \(D_{2j} = \left (B_{j}^{1}, B_{j}^{2}\right) \in \mathbf {B}_{\mathbf {j}}\) and \(D_{2j+1} = \left (C_{j+1}^{1}, C_{j+1}^{2}\right) \in \mathbf {C}_{\mathbf {j}}\) for all 0<j≤n. Let \(B_{D} = \left (B_{D}^{1}, B_{D}^{2}\right)\) denote a pair of sets, where \(B_{D}^{1}\) is the union of the first parts of the vertices of D2,…,D2n+1 and, similarly, \(B_{D}^{2}\) is the union of second parts of the vertices of D2,…,D2n+1. \(B_{D}^{1}\) and \(B_{D}^{2}\) are disjoint because otherwise there must exist a pair of vertices within D that are incompatible, which is easily verified to be impossible. Further, because D visits a vertex for every column of M, it follows that the sum of the cardinalities of these two sets is m. BD is therefore a bipartition of the rows of M which we call a diploid path bipartition.
Lemma 1 The set of diploid path bipartitions is the set of bipartitions of the rows of M and each diploid path defines a unique diploid path bipartition.
We first prove that each diploid path defines a unique bipartition of the rows of M. For each column j of M, each vertex Bj∈Bj is a different bipartition of the same set of active rows. Bj is by definition compatible with a diploid path bipartition of a diploid path that contains it and incompatible with every other member of Bj. It follows that for each column j, two diploid paths with the same diploid path bipartition must visit the same node in Bj, and, by identical logic, the same node in Cj, but then two such diploid paths are therefore equal. There are 2^m partitions of the rows of M. It remains to prove that there are 2^m diploid paths. By the structure of the graph, the set of diploid paths can be enumerated backwards by traversing right-to-left from the end vertex by depth-first search and exploring each incoming edge for all encountered nodes. As stated previously, the only vertices with indegree greater than one are for all j the members of Cj, and each member of Cj has the same indegree. For all j, the indegree of Cj is clearly \(2^{|C_{j}| - |B_{j}|}\): two to the power of the number of active, terminal rows at column j. The number of possible paths must therefore be \(\prod _{j=1}^{n} 2^{|C_{j}| - |B_{j}|}\). As each row is active and terminal in exactly one column, we obtain \(m = \sum _{j} |C_{j}| - |B_{j}|\) and therefore: $$2^{m} = \prod_{j=1}^{n} 2^{|C_{j}| - |B_{j}|} $$

A hidden Markov model for genotype and diplotype inference
In order to infer diplotypes, we define a hidden Markov model which is based on G but additionally represents all possible genotypes at each genomic site (i.e., in each B column). To this end, we define the set of states Bj×Σj×Σj, which contains a state for each bipartition of the active rows at position j and all possible assignments of alleles in Σj to the two partitions. Additionally, the HMM contains a hidden state for each bipartition in Cj, exactly as defined for G above. Transitions between states are defined by the compatibility relationships of the corresponding bipartitions as before. This HMM construction is illustrated in Fig. 8. Genotyping HMM. Colored states correspond to bipartitions of reads and allele assignments at that position. States in C1 and C2 correspond to bipartitions of reads covering positions 1 and 2 or 2 and 3, respectively.
In order to compute genotype likelihoods after running the forward-backward algorithm, states of the same color have to be summed up in each column.

For all j and all Cj∈Cj, each outgoing edge has transition probability \(P(a_{1}, a_{2}) = \sum _{Z_{j}} P(a_{1} | Z_{j}) P(a_{2} | Z_{j})P(Z_{j})\), where (Bj,a1,a2)∈Bj×Σj×Σj is the state being transitioned to. Similarly, each outgoing edge of the start node has transition probability P(a1,a2). The outdegree of all remaining nodes is 1, so these edges have transition probability 1.

The start node, the end node, and the members of Cj for all j are silent states and hence do not emit symbols. For all j, members of Bj×Σj×Σj output the entries in the jth column of M that are different from "–." We assume every matrix entry to be associated with an error probability, which we can compute from \(P(\mathbf {M}_{ij} | H_{j}^{k})\) defined previously. Based on this, the probability of observing a specific output column of M can be easily calculated.

Computing genotype likelihoods

The goal is to compute the genotype likelihoods for the possible genotypes for each variant position using the HMM defined above. Performing the forward-backward algorithm returns forward and backward probabilities of all hidden states. Using those, the posterior distribution of a state (B,a1,a2)∈Bj×Σj×Σj, corresponding to bipartition B and assigned alleles a1 and a2, can be computed as:

$$ \begin{aligned} P((B,a_{1}, a_{2})\vert \mathbf{M})\! =\! \frac{\alpha_{j}(B,a_{1}, a_{2}) \cdot \beta_{j}(B,a_{1}, a_{2})}{\sum\limits_{B^{\prime} \in \mathcal{B}(A_{j})} \sum\limits_{a_{1}^{\prime}, a_{2}^{\prime} \in \Sigma_{j}} \alpha_{j}\left(B^{\prime},a_{1}^{\prime},a_{2}^{\prime}\right)\cdot \beta_{j}\left(B^{\prime},a_{1}^{\prime},a_{2}^{\prime}\right)} \end{aligned} $$

where αj(B,a1,a2) and βj(B,a1,a2) denote forward and backward probabilities of the state (B,a1,a2) and \(\mathcal {B}(A_{j})\), the set of all bipartitions of Aj. The above term represents the probability for a bipartition B=(B1,B2) of the reads in Aj and alleles a1 and a2 assigned to these partitions. In order to finally compute the likelihood for a certain genotype, one can marginalize over all bipartitions of a column and all allele assignments corresponding to that genotype. In order to compute genotype likelihoods for each column of the alignment matrix, posterior state probabilities corresponding to states of the same color in Fig. 8 need to be summed up. For the first column, adding up the red probabilities gives the genotype likelihood of genotype T/T, blue of genotype G/T, and yellow of G/G.

We created two independent software implementations of this model, one based upon WhatsHap and one from scratch, which we call MarginPhase. Each uses different optimizations and heuristics that we briefly describe.

WhatsHap implementation

We extended the implementation of WhatsHap ([22], https://bitbucket.org/whatshap/whatshap) to enable haplotype-aware genotyping of bi-allelic variants based on the above model. WhatsHap focuses on re-genotyping variants, i.e., it assumes SNV positions to be given. In order to detect variants, a simple SNV calling pipeline was developed. It is based on samtools mpileup [42], which provides information about the bases supported by each read covering a genomic position.
A set of SNV candidates is generated by selecting genomic positions at which the frequency of a non-reference allele is above a fixed threshold (0.25 for PacBio data, 0.4 for Nanopore data), and the absolute number of reads supporting the non-reference allele is at least 3. These SNV positions are then genotyped using WhatsHap.

Allele detection

In order to construct the alignment matrix, a crucial step is to determine whether each read supports the reference or the alternative allele at each of n given genomic positions. In WhatsHap, this is done based on re-aligning sections of the reads [16]. Given an existing read alignment from the provided BAM file, its sequence in a window around the variant is extracted. It is aligned to the corresponding region of the reference sequence and, additionally, to the alternative sequence, which is artificially produced by inserting the alternative allele into the reference. The alignment cost is computed by using affine gap costs. Phred scores representing the probabilities for opening and extending a gap and for a mismatch in the alignment can be estimated from the given BAM file. The allele leading to a lower alignment cost is assumed to be supported by the read and is reported in the alignment matrix. If both alleles lead to the same cost, the corresponding matrix entry is "–." The absolute difference of both alignment scores is assigned as a weight to the corresponding entry in the alignment matrix. It can be interpreted as a phred-scaled probability for the allele being wrong and is utilized for the computation of output probabilities.

Read selection

Our algorithm enumerates all bipartitions of reads covering a variant position and thus has a runtime exponential in the maximum coverage of the data. To ensure that this quantity is bounded, the same read selection step implemented previously in the WhatsHap software is run before constructing the HMM and computing genotype likelihoods. Briefly, a heuristic approach described in [43] is applied, which selects phase-informative reads iteratively, taking into account the number of heterozygous variants covered by each read and its quality.

Defining separate states for each allele assignment in Bj enables easy incorporation of prior genotype likelihoods by weighting transitions between states in Cj−1 and Bj×Σj×Σj. Since there are two states corresponding to a heterozygous genotype in the bi-allelic case (0|1 and 1|0), the prior probability for the heterozygous genotype is equally spread between these states. In order to compute such genotype priors, the same likelihood function underlying the approaches described in [44] and [45] was utilized. For each SNV position, the model computes a likelihood for the variant to be absent, heterozygous, or homozygous, based on all reads that cover the site. Each read contributes a probability term to the likelihood function, which is computed based on whether it supports the reference or the alternative allele [44]. Furthermore, the approach accounts for statistical uncertainties arising from read mapping and has a runtime linear in the number of variants to be genotyped [45]. Prior genotype likelihoods are computed before read selection; in this way, information from all input reads covering a position can be incorporated.
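For illustration, here is a minimal sketch of the candidate-selection rule described at the start of this section. It is not the actual WhatsHap or samtools code; `pileup_counts`, a mapping from positions to reference and non-reference read counts, is a hypothetical input that would in practice be parsed from the mpileup output.

```python
# Keep a position if the non-reference allele frequency exceeds a
# technology-specific threshold and at least 3 reads support the
# non-reference allele.
THRESHOLDS = {"pacbio": 0.25, "nanopore": 0.4}
MIN_ALT_READS = 3

def candidate_snvs(pileup_counts, technology):
    threshold = THRESHOLDS[technology]
    candidates = []
    for pos, (ref_count, alt_count) in sorted(pileup_counts.items()):
        depth = ref_count + alt_count
        if depth == 0:
            continue
        alt_freq = alt_count / depth
        if alt_freq > threshold and alt_count >= MIN_ALT_READS:
            candidates.append(pos)
    return candidates

# Toy example: position -> (reference reads, non-reference reads)
counts = {100: (20, 2), 250: (10, 6), 400: (3, 4)}
print(candidate_snvs(counts, "pacbio"))   # -> [250, 400]
```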
MarginPhase implementation

MarginPhase (https://github.com/benedictpaten/marginPhase) is an experimental, open source implementation of the described HMM written in C. It differs from the WhatsHap implementation in the method it uses to explore bipartitions and the method used to generate allele support probabilities from the reads.

Read bipartitions

The described HMM scales exponentially in terms of increasing read coverage. For typical 20–60× sequencing coverage (i.e., average number of active rows per column), it is impractical to store all possible bipartitions of the rows of the matrix. MarginPhase implements a simple, greedy pruning and merging heuristic outlined in recursive pseudocode in Algorithm 1. The procedure computePrunedHMM takes an alignment matrix and returns a connected subgraph of the HMM for M that can be used for inference, choosing to divide the input alignment matrix into two if the number of rows (termed maxCov) exceeds a threshold t, recursively. The sub-procedure mergeHMMs takes two pruned HMMs for two disjoint alignment matrices with the same number of columns and joins them together in the natural way such that if at each site i there are \(\left |\mathbf {B}_{\mathbf {i}}^{\mathbf {1}}\right |\) states in HMM1 and \(\left |\mathbf {B}_{\mathbf {i}}^{\mathbf {2}}\right |\) in HMM2, then the resulting HMM will have \(\left |\mathbf {B}_{\mathbf {i}}^{\mathbf {1}}\right | \times \left |\mathbf {B}_{\mathbf {i}}^{\mathbf {2}}\right |\) states. This is illustrated in Fig. 9. In the experiments reported here, t=8 and v=0.01.

The merger of two read partitioning HMMs with the same number of columns. Top and middle: two HMMs to be merged; bottom: the merged HMM. Transition and emission probabilities not shown.

Allele supports

In MarginPhase, the alignment matrix initially has a site for each base in the reference genome. To generate the allele support for each reference base from the reads, for each read we calculate the posterior probability of each allele (reference base) using the implementation of the banded forward-backward pairwise alignment described in [46]. The result is that for each reference base, for each read that overlaps (according to an initial guide alignment extracted from the SAM/BAM file) the reference base, we calculate the probability of each possible nucleotide (i.e., { 'A', 'C', 'G', 'T' }). The gaps are ignored and treated as missing data. This approach allows summation over all alignments within the band. Given the supports for each reference base, we then prune the set of reference bases considered to those with greater than (by default) three expected non-reference alleles. This expectation is merely the sum of non-reference allele base probabilities over the reads. This reduces the number of considered sites by approximately two orders of magnitude, greatly accelerating the HMM computation.

Substitution probabilities

We set the read error substitution probabilities, i.e., \(P\left (\mathbf {M}_{i,j} | H_{j}^{k}\right)\), empirically and iteratively. Starting from a 1% flat substitution probability, we generate an ML read bipartition and pair of haplotypes; we then re-estimate the read error probabilities from the differences between the reads and the haplotypes. We then rerun the model and repeat the process to derive the final probabilities. For the haplotype substitution probabilities, i.e., \(P\left (H_{j}^{k} | Z_{j}\right)\), we use substitution probabilities of 0.1% for transversions and 0.4% for transitions, reflecting the facts that transitions are twice as likely empirically but that there are twice as many possible transversions.
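A small sketch of the haplotype substitution model just described, using the stated per-event probabilities (this is not MarginPhase source code, and the choice of putting the remaining probability mass on the diagonal is an assumption made here for illustration):

```python
# 0.4% probability per transition, 0.1% per transversion, diagonal absorbs
# the remaining mass so each row of the substitution matrix sums to 1.
BASES = "ACGT"
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

P_TRANSITION = 0.004    # 0.4%
P_TRANSVERSION = 0.001  # 0.1%

def substitution_prob(true_base, observed_base):
    """P(H_j^k = observed_base | Z_j = true_base) under the stated assumptions."""
    if true_base == observed_base:
        # Each base has 1 transition partner and 2 transversion partners.
        return 1.0 - P_TRANSITION - 2 * P_TRANSVERSION
    if (true_base, observed_base) in TRANSITIONS:
        return P_TRANSITION
    return P_TRANSVERSION

for z in BASES:
    row = [substitution_prob(z, h) for h in BASES]
    assert abs(sum(row) - 1.0) < 1e-12
    print(z, [f"{p:.3f}" for p in row])
```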
Phase blocks

MarginPhase divides the read partitioning HMMs into phase sets based on the number of reads which span adjacent likely heterozygous sites. The bipartitioning is performed on each of these phase sets individually. MarginPhase's output includes a BAM which encodes the phasing of each read, including which phase set it is in, which haplotype it belongs to, and what part of the aligned portion falls into each phase set. Reads which span a phase set boundary have information for both phase sets encoded in them.

References

Van der Auwera GA, Carneiro MO, Hartl C, Poplin R, Del Angel G, Levy-Moonshine A, Jordan T, Shakir K, Roazen D, Thibault J, et al. From FastQ data to high-confidence variant calls: the genome analysis toolkit best practices pipeline. Curr Protoc Bioinforma. 2013; 43(1):11–0.
1000 Genomes Project Consortium. A global reference for human genetic variation. Nature. 2015; 526(7571):68.
Li W, Freudenberg J. Mappability and read length. Front Genet. 2014; 5:381.
Altemose N, Miga KH, Maggioni M, Willard HF. Genomic characterization of large heterochromatic gaps in the human genome assembly. PLoS Comput Biol. 2014; 10(5):1003628.
Porubsky D, Garg S, Sanders AD, Korbel JO, Guryev V, Lansdorp PM, Marschall T. Dense and accurate whole-chromosome haplotyping of individual genomes. Nat Commun. 2017; 8(1):1293.
Chaisson MJ, Sanders AD, Zhao X, Malhotra A, Porubsky D, Rausch T, Gardner EJ, Rodriguez O, Guo L, Collins RL, et al. Multi-platform discovery of haplotype-resolved structural variation in human genomes. bioRxiv. 2018. https://doi.org/10.1101/193144.
Jain M, Koren S, Miga KH, Quick J, Rand AC, Sasani TA, Tyson JR, Beggs AD, Dilthey AT, Fiddes IT, et al. Nanopore sequencing and assembly of a human genome with ultra-long reads. Nat Biotechnol. 2018; 36(4):338.
Sedlazeck FJ, Lee H, Darby CA, Schatz M. Piercing the dark matter: bioinformatics of long-range sequencing and mapping. Nat Rev Genet. 2018; 19(6):1.
Bonizzoni P, Vedova GD, Dondi R, Li J. The haplotyping problem: an overview of computational models and solutions. J Comput Sci Technol. 2003; 18(6):675–88.
Glusman G, Cox HC, Roach JC. Whole-genome haplotyping approaches and genomic medicine. Genome Med. 2014; 6(9):73.
Rhee J-K, Li H, Joung J-G, Hwang K-B, Zhang B-T, Shin S-Y. Survey of computational haplotype determination methods for single individual. Genes Genomics. 2016; 38(1):1–12.
Klau GW, Marschall T. A guided tour to computational haplotyping. In: Unveiling Dynamics and Complexity. Lecture Notes in Computer Science. Cham: Springer: 2017. p. 50–63.
Pirola Y, Zaccaria S, Dondi R, Klau GW, Pisanti N, Bonizzoni P. HapCol: accurate and memory-efficient haplotype assembly from long reads. Bioinformatics. 2015; 32(11):1610–7.
Bansal V, Bafna V. HapCUT: an efficient and accurate algorithm for the haplotype assembly problem. Bioinformatics. 2008; 24(16):153–9.
Patterson M, Marschall T, Pisanti N, Van Iersel L, Stougie L, Klau GW, Schönhuth A. WhatsHap: weighted haplotype assembly for future-generation sequencing reads. J Comput Biol. 2015; 22(6):498–509.
Martin M, Patterson M, Garg S, Fischer S, Pisanti N, Klau GW, Schoenhuth A, Marschall T. WhatsHap: fast and accurate read-based phasing. bioRxiv. 2016. https://doi.org/10.1101/085050.
Guo F, Wang D, Wang L. Progressive approach for SNP calling and haplotype assembly using single molecular sequencing data. Bioinformatics. 2018; 34(12):2012–8.
Luo R, Sedlazeck FJ, Lam T-W, Schatz M. Clairvoyante: a multi-task convolutional deep neural network for variant calling in single molecule sequencing. bioRxiv. 2018. https://doi.org/10.1101/310458.
Poplin R, Chang P-C, Alexander D, Schwartz S, Colthurst T, Ku A, Newburger D, Dijamco J, Nguyen N, Afshar PT, Gross SS, Dorfman L, McLean CY, DePristo MA. A universal SNP and small-indel variant caller using deep neural networks. Nat Biotechnol. 2018; 36(10):983–7.
Zook J, McDaniel J, Parikh H, Heaton H, Irvine SA, Trigg L, Truty R, McLean CY, De La Vega FM, Salit M, et al. Reproducible integration of multiple sequencing datasets to form high-confidence SNP, indel, and reference calls for five human genome reference materials. bioRxiv. 2018. https://doi.org/10.1101/281006.
Li N, Stephens M. Modeling linkage disequilibrium and identifying recombination hotspots using single-nucleotide polymorphism data. Genetics. 2003; 165(4):2213–33.
Zook JM, Catoe D, McDaniel J, Vang L, Spies N, Sidow A, Weng Z, Liu Y, Mason CE, Alexander N, et al. Extensive sequencing of seven human genomes to characterize benchmark reference materials. Sci Data. 2016; 3:160025.
Li H. Minimap and miniasm: fast mapping and de novo assembly for noisy long sequences. Bioinformatics. 2016; 32(14):2103–10.
Garrison E, Marth G. Haplotype-based variant detection from short-read sequencing. arXiv. 2012. arXiv:1207.3907.
Cleary JG, Braithwaite R, Gaastra K, Hilbush BS, Inglis S, Irvine SA, Jackson A, Littin R, Rathod M, Ware D, et al. Comparing variant call files for performance benchmarking of next-generation sequencing variant calling pipelines. bioRxiv. 2015. https://doi.org/10.1101/023754.
Korlach J. Perspective - understanding accuracy in SMRT sequencing. 2013. www.pacb.com. Accessed 30 Apr 2019.
O'Donnell CR, Wang H, Dunbar WB. Error analysis of idealized nanopore sequencing. Electrophoresis. 2013; 34(15):2137–44.
Wang J, Raskin L, Samuels DC, Shyr Y, Guo Y. Genome measures used for quality control are dependent on gene function and ancestry. Bioinformatics. 2014; 31(3):318–23.
Eberle MA, Fritzilas E, Krusche P, Källberg M, Moore BL, Bekritsky MA, Iqbal Z, Chuang H-Y, Humphray SJ, Halpern AL, et al. A reference data set of 5.4 million phased human variants validated by genetic inheritance from sequencing a three-generation 17-member pedigree. Genome Res. 2016; 27(1).
Browning SR, Browning BL. Haplotype phasing: existing methods and new developments. Nat Rev Genet. 2011; 12(10):703–714.
Arndt PF, Hwa T, Petrov DA. Substantial regional variation in substitution rates in the human genome: importance of GC content, gene density, and telomere-specific effects. J Mol Evol. 2005; 60(6):748–63.
Weisenfeld NI, Yin S, Sharpe T, Lau B, Hegarty R, Holmes L, Sogoloff B, Tabbaa D, Williams L, Russ C, et al. Comprehensive variation discovery in single human genomes. Nat Genet. 2014; 46(12):1350.
Rand AC, Jain M, Eizenga JM, Musselman-Brown A, Olsen HE, Akeson M, Paten B. Mapping DNA methylation with high-throughput nanopore sequencing. Nat Methods. 2017; 14(4):411.
Simpson JT, Workman RE, Zuzarte P, David M, Dursi L, Timp W. Detecting DNA cytosine methylation using nanopore sequencing. Nat Methods. 2017; 14(4):407.
Chin C-S, Peluso P, Sedlazeck FJ, Nattestad M, Concepcion GT, Clum A, Dunn C, O'Malley R, Figueroa-Balderas R, Morales-Cruz A, et al. Phased diploid genome assembly with single-molecule real-time sequencing. Nat Methods. 2016; 13(12):1050.
Garg S, Rautiainen M, Novak AM, Garrison E, Durbin R, Marschall T. A graph-based approach to diploid genome assembly. Bioinformatics. 2018; 34(13):105–14.
Kuleshov V. Probabilistic single-individual haplotyping. Bioinformatics. 2014; 30(17):379–85.
Cilibrasi R, Van Iersel L, Kelk S, Tromp J. The complexity of the single individual SNP haplotyping problem. Algorithmica. 2007; 49(1):13–36.
Greenberg HJ, Hart WE, Lancia G. Opportunities for combinatorial optimization in computational biology. INFORMS J Comput. 2004; 16(3):211–31.
Jukes TH, Cantor CR. Evolution of protein molecules. Mamm Protein Metab. 1969; 1:22–123.
Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R. The sequence alignment/map format and SAMtools. Bioinformatics. 2009; 25(16):2078–9.
Fischer SO, Marschall T. Selecting reads for haplotype assembly. bioRxiv. 2016. https://doi.org/10.1101/046771.
Hehir-Kwa JY, Marschall T, Kloosterman WP, Francioli LC, Baaijens JA, Dijkstra LJ, Abdellaoui A, Koval V, Thung DT, Wardenaar R, et al. A high-quality human reference panel reveals the complexity and distribution of genomic structural variants. Nat Commun. 2016; 7:12989.
Ebler J, Schönhuth A, Marschall T. Genotyping inversions and tandem duplications. Bioinformatics. 2017; 33(24):4015–23.
Jain M, Fiddes IT, Miga KH, Olsen HE, Paten B, Akeson M. Improved data analysis for the MinION nanopore sequencer. Nat Methods. 2015; 12(4):351.
Ebler J, Haukness M, Pesout T, Marschall T, Paten B. Haplotype-aware diplotyping from noisy long reads data sets. 2018. https://doi.org/10.5281/zenodo.2616973.
Karolchik D, Hinrichs AS, Furey TS, Roskin KM, Sugnet CW, Haussler D, Kent WJ. The UCSC table browser data retrieval tool. Nucleic Acids Res. 2004; 32(suppl_1):493–6.
Harrow J, Frankish A, Gonzalez JM, Tapanari E, Diekhans M, Kokocinski F, Aken BL, Barrell D, Zadissa A, Searle S, et al. GENCODE: the reference human genome annotation for the ENCODE project. Genome Res. 2012; 22(9):1760–74.
Rosenbloom KR, Sloan CA, Malladi VS, Dreszer TR, Learned K, Kirkup VM, Wong MC, Maddren M, Fang R, Heitner SG, et al. ENCODE data in the UCSC genome browser: year 5 update. Nucleic Acids Res. 2012; 41(D1):56–63.
Smit A, Hubley R, Green P. RepeatMasker open-4.0. 2013-2015. 2017. http://repeatmasker.org.

We thank the GIAB project for providing the data sets used. In particular, we thank Justin Zook for the helpful discussions on how to use GIAB data and Miten Jain for the help with the nanopore data.

This work was supported, in part, by the National Institutes of Health (award numbers: 5U54HG007990, 5T32HG008345-04, 1U01HL137183 R01HG010053, U01HL137183, and U54HG007990 to BP), and by the German Research Foundation (DFG) under award number 391137747 and grants from the W.M. Keck foundation and the Simons Foundation.

The datasets generated and analyzed during the current study as well as the version of the source code used are available at http://doi.org/10.5281/zenodo.2616973 [47]. MarginPhase and WhatsHap are released as Open Source software under the MIT licence. MarginPhase is available at https://github.com/benedictpaten/marginPhase, and WhatsHap is available at https://bitbucket.org/whatshap/whatshap.

Review history

The review history is available as Additional file 2.

Jana Ebler, Marina Haukness, and Trevor Pesout contributed equally to this work.
Center for Bioinformatics, Saarland University, Saarland Informatics Campus E2.1, Saarbrücken, 66123, Germany (Jana Ebler & Tobias Marschall)
Max Planck Institute for Informatics, Saarland Informatics Campus E1.4, Saarbrücken, Germany
Graduate School of Computer Science, Saarland University, Saarland Informatics Campus E1.3, Saarbrücken, Germany
UC Santa Cruz Genomics Institute, University of California Santa Cruz, Santa Cruz, 95064, CA, USA (Marina Haukness, Trevor Pesout & Benedict Paten)

MH, TP, and BP implemented MarginPhase. JE and TM implemented the genotyping module of WhatsHap. All authors contributed to the evaluation and wrote the manuscript. All authors read and approved the final manuscript.

Correspondence to Tobias Marschall or Benedict Paten. Tobias Marschall and Benedict Paten are joint last authors.

Additional file 1: We provide precision, recall, and genotype concordance of MarginPhase on PacBio and WhatsHap on Nanopore reads. Furthermore, we give the results of cutting and downsampling Nanopore reads. We also provide more detailed phasing statistics and report results on re-typing the GIAB indels. Additionally, we show how precision, recall, and F-measure of our callsets behave as a function of read depth and analyze homozygous and heterozygous calls separately. We also provide figures that show the genotyping performance of our methods for different thresholds on the reported genotype qualities. (PDF 695 kb)

Additional file 2: Review history. (PDF 220 kb)

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Ebler, J., Haukness, M., Pesout, T. et al. Haplotype-aware diplotyping from noisy long reads. Genome Biol 20, 116 (2019). doi:10.1186/s13059-019-1709-0

Received: 25 July 2018. Accepted: 06 May 2019.

Keywords: Computational genomics, Haplotypes, Diplotypes
Level C Unit 13 definitions

Adapt (v.) to adjust or change to suit conditions
Attest (v.) to bear witness, affirm to be true or genuine
Dovetail (v.) to fit together exactly; to connect so as to form a whole; (n.) a carpentry figure resembling a dove's tail
Enormity (n.) the quality of exceeding all moral bounds; an exceedingly evil act; huge size, immensity
Falter (v.) to hesitate, stumble, lose courage; to speak hesitatingly; to lose drive, weaken, decline
Foreboding (n.) a warning or feeling that something bad will happen; (adj.) marked by fear, ominous
Forlorn (adj.) totally abandoned and helpless; sad and lonely; wretched or pitiful; almost hopeless
Haughty (adj.) chillingly proud and scornful
Impediment (n.) a physical defect; a hindrance, obstacle
Imperative (adj.) necessary, urgent; (n.) a form of a verb expressing a command; that which is necessary or required
Loiter (v.) to linger in an aimless way, hang around, dawdle, tarry
Malinger (v.) to pretend illness to avoid duty or work, lie down on the job
Pithy (adj.) short but full of meaning
Plunder (v.) to rob by force, especially during wartime; to seize wrongfully; (n.) property stolen by force
Simper (v.) to smile or speak in a silly, forced way; (n.) a silly, forced smile
Steadfast (adj.) firmly fixed; constant, not moving or changing
Vaunted (adj.) much boasted about in a vain or swaggering way
Vilify (v.) to abuse or belittle unjustly or maliciously
Waif (n.) a person (usually a child) without a home or friend; a stray person or animal; something that comes along by chance, a stray bit
Wry (adj.) twisted, turned to one side; cleverly and often grimly humorous

From the list below, supply the words needed to complete the paragraph. Some words will not be used. admonish, affliction, lax, imperturbable, delete, obeisance, oust.

Everyone knew that Kiplar was an outsider when he failed to give _____ to the passing Queen. One of the royal escorts immediately spotted Kiplar and walked over to _____ him for his failure to show respect. "What is this _____ of yours that prevents you from taking a knee upon sight of our queen? Shall we _____ you from your sleepy village and see how you fare in a dungeon?" The guard's threat did not faze the _____ man. Kiplar, still standing, simply smiled.

From the list below, supply the words needed to complete the paragraph. Some words will not be used. oeuvre, approbation, arbiter, coup, attrition, secular, archetype, vagary.

J.T. Fleming, donning a beret and sitting in a raised, wooden folding chair, seemed to be a perfect _____ of the Hollywood director. With two science fiction blockbusters in his _____ that had made over $1 billion worldwide, Fleming had come to enjoy the _____ of all his investors, until his latest production. In a departure from _____ films, Fleming now wanted to make a masterpiece based on religious themes. Afraid that the film might offend many patrons, the investors began to plot a[n] _____ against Fleming and slowly pulled their funding for the project.
The consequent lawsuits over the contracts between the studio and the investors kept _____ working for months determining who owed money, and the lengthy delay in production caused a[n] _____ of interest in the film, eventually leading it to open poorly at the box office.

From the list below, supply the words needed to complete the paragraph. Some words will not be used.

$$ \begin{matrix} \text{fluent} & \text{pariah} & \text{ensue} & \text{rambunctious}\\ \text{fiat} & \text{melee} & \text{desecrate}\\ \end{matrix} $$

Community outrage ______ when the groundskeeper reported that a group of ______ teenagers had ______ the cemetery. "What kind of ______ sinks so low as to destroy that which cannot be defended?" asked the mayor when told of the crime. "These are memorials to our forefathers, our families." The mayor, forced to take some type of action to prevent such outrageous crimes, issued a[n] ______ mandating a curfew on the small town.

Correct the word in parentheses. If the word is correct, write C in the blank.

_____ Hemingway went to Paris where he met American authors such as F. Scott Fitzgerald and Gertrude Stein who were not so (different than) himself.
A relaxation result for state constrained inclusions in infinite dimension

Helene Frankowska (1), Elsa M. Marchini (2) and Marco Mazzola (3)

Mathematical Control & Related Fields, March 2016, 6(1): 113-141. doi: 10.3934/mcrf.2016.6.113

(1) CNRS, Institut de Mathématiques de Jussieu - Paris Rive Gauche, Sorbonne Universités, UPMC Univ Paris 06, Univ Paris Diderot, Sorbonne Paris Cité, Case 247, 4 Place Jussieu, 75252 Paris, France
(2) Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo Da Vinci 32, 20133 Milano, Italy
(3) UPMC Univ Paris 06, Institut de Mathématiques de Jussieu - Paris Rive Gauche, Sorbonne Universités, CNRS, Univ Paris Diderot, Sorbonne Paris Cité, Case 247, 4 Place Jussieu, 75252 Paris, France

Received December 2014; Revised July 2015; Published January 2016.

In this paper we consider a state constrained differential inclusion $\dot x\in \mathbb A x+ F(t,x)$, with $\mathbb A$ generator of a strongly continuous semigroup in an infinite dimensional separable Banach space. Under an "inward pointing condition" we prove a relaxation result stating that the set of trajectories lying in the interior of the constraint is dense in the set of constrained trajectories of the convexified inclusion $\dot x\in \mathbb A x+ \overline{\textrm{co}}F(t,x)$. Some applications to control problems involving PDEs are given.

Keywords: relaxation, state constraints, mild solution, differential inclusion, inward pointing condition.

Mathematics Subject Classification: 34G25, 34K09, 49J4.

Citation: Helene Frankowska, Elsa M. Marchini, Marco Mazzola. A relaxation result for state constrained inclusions in infinite dimension. Mathematical Control & Related Fields, 2016, 6 (1): 113-141. doi: 10.3934/mcrf.2016.6.113
Determine a sufficient condition for a Hessenberg matrix to be nonsingular

Consider $A\in\mathbb{R}^{n\times n}$ whose nonzero elements are restricted to the main diagonal, the strict upper triangular part, and the first subdiagonal. For $n = 8$ the locations that must be zero are indicated by $0$ and the positions that may be nonzero are indicated by $\alpha_{ij}$: $$\begin{pmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} & \alpha_{14} & \alpha_{15} & \alpha_{16} & \alpha_{17} & \alpha_{18}\\ \alpha_{21} & \alpha_{22} & \alpha_{23} & \alpha_{24} & \alpha_{25} & \alpha_{26} & \alpha_{27} & \alpha_{28}\\ 0 & \alpha_{32} & \alpha_{33} & \alpha_{34} & \alpha_{35} & \alpha_{36} & \alpha_{37} & \alpha_{38}\\ 0 & 0 & \alpha_{43} & \alpha_{44} & \alpha_{45} & \alpha_{46} & \alpha_{47} & \alpha_{48}\\ 0 & 0 & 0 & \alpha_{54} & \alpha_{55} & \alpha_{56} & \alpha_{57} & \alpha_{58}\\ 0 & 0 & 0 & 0 & \alpha_{65} & \alpha_{66} & \alpha_{67} & \alpha_{68}\\ 0 & 0 & 0 & 0 & 0 & \alpha_{76} & \alpha_{77} & \alpha_{78}\\ 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{87} & \alpha_{88}\\ \end{pmatrix}$$

i.) Suppose the subdiagonal elements $\alpha_{i+1,i} \neq 0$ (this is called an unreduced Hessenberg matrix). Determine a necessary and sufficient condition for $A$ to be nonsingular.

Attempted solution - If $\det(A)\neq 0$ then $A$ is nonsingular.

ii.) Describe an efficient algorithm to solve $Ax = b$ via factorization and determine the order of computational complexity, i.e., give $k$ in $O(n^k)$. Your solution should include a description of how you exploit the structure of the matrix and how it influences the structure of your factors.

Attempted solution - I am thinking of just using the $LU$ factorization and getting $A$ such that $A = L + D + L^T$, then I can just calculate $Lx$, $Dx$, and $L^T x$ and sum the results (Carl Christian recommended this in another exercise). Also, since $A$ is almost upper trapezoidal we could simply apply the Gauss transform matrices $M_1, M_2,\ldots, M_7$ to get $U$, then we can easily find $L$, and then we would just use a forward and backward solve to compute $Ax = b$. This will still result in $O(n^2)$ computations. Anyway, these types of questions are challenging for me, so if anyone has any suggestions I would greatly appreciate it. Also, I want to know what constitutes a complete solution for ii.), as in what do I need to show in my solution to satisfy the conditions asked.

linear-algebra matrix

Hints. 1) What is a simple condition for a triangular matrix to be nonsingular? 2) How might you convert a Hessenberg matrix into a triangular matrix? – Richard Zhang
For 1) if the determinant of a matrix is non zero then it is nonsingular. 2) we can convert the Hessenberg matrix into a triangular matrix by multiplying the Gaussian transform matrices in which I described above – Wolfy
And what's the determinant of a triangular matrix?
The determinant of a triangular matrix is the product of the diagonal entries

$$\mathrm{A} = \begin{bmatrix} \mathrm{r}^{\top} & \alpha_{18}\\ \mathrm{U} & \mathrm{c}\end{bmatrix}$$ where $\mathrm{U} \in \mathbb{R}^{(n-1) \times (n-1)}$ is an upper triangular matrix.
There is a permutation matrix $\mathrm{P}$ such that $$\mathrm{A} \mathrm{P} = \begin{bmatrix} \alpha_{18} & \mathrm{r}^{\top}\\ \mathrm{c} & \mathrm{U}\end{bmatrix}$$ whose determinant is $$\det (\mathrm{AP}) = \det (\mathrm{A}) \cdot \underbrace{\det(\mathrm{P})}_{=\pm1} = \det (\mathrm{U}) \cdot (\alpha_{18} - \mathrm{r}^{\top} \mathrm{U}^{-1} \mathrm{c})$$ As $\mathrm{U}$ is upper triangular, its determinant is the product of its entries on the main diagonal. Thus, if there are no zero entries on the main diagonal of $\mathrm{U}$, then $\mathrm{U}$ is invertible. If $\mathrm{U}$ is invertible and $\alpha_{18} \neq \mathrm{r}^{\top} \mathrm{U}^{-1} \mathrm{c}$, then we have $\pm \det (\mathrm{A}) \neq 0$, i.e., $\mathrm{A}$ is non-singular. To summarize, we have the following sufficient condition: $$\left(\displaystyle\prod_{i=1}^{n-1} u_{ii} \neq 0\right) \land \left(\alpha_{18} \neq \mathrm{r}^{\top} \mathrm{U}^{-1} \mathrm{c}\right)$$ Note that if $\mathrm{U}$ is invertible, then $\mathrm{U}^{-1} \mathrm{c}$ is the unique solution to the linear system $\mathrm{U} \mathrm{y} = \mathrm{c}$, whose augmented matrix is $[\mathrm{U}\,|\,\mathrm{c}]$, which is a submatrix of $\mathrm{A}$ (namely, its last $n-1$ rows).

Rodrigo de Azevedo

You can strengthen your proof to show that your condition is both necessary and sufficient. Suppose the block matrix $M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$ has $A$ nonsingular, then $M$ is row equivalent to $\begin{bmatrix} I & A^{-1}B \\ 0 & S \end{bmatrix}$ where $S = D - CA^{-1} B$. Hence $M$ is nonsingular if and only if $S$ is nonsingular. Your approach is perfectly fine, it is force of habit which has me forming the Schur complement in the lower right corner rather than the upper left corner. Kind regards – Carl Christian
I am sure this solution is correct, I just don't understand it at all and I would have no idea how to duplicate this approach towards a similar problem. – Wolfy
@CarlChristian just curious if you would be able to tutor me on Skype for $x an hour but I assume you are probably too busy. Let me know if you would be interested, though. – Wolfy
@Wolfy I am merely permuting the columns so that I get a "nice" block matrix. The original matrix wasn't "nice enough" because it didn't have square blocks in the northwest and southeast corners. – Rodrigo de Azevedo
@Wolfy $\mathrm{U}$ has that name because it's upper triangular. Don't think of $\mathrm{A}$ as an "almost upper triangular" matrix with a nonzero subdiagonal. Think of it as an upper triangular matrix to which a row and a column were "glued": $$\left[\begin{array}{ccccccc|c} * & * & * & * & * & * & * & \alpha_{18}\\ \hline * & * & * & * & * & * & * & *\\ 0 & * & * & * & * & * & * & *\\ 0 & 0 & * & * & * & * & * & *\\ 0 & 0 & 0 & * & * & * & * & *\\ 0 & 0 & 0 & 0 & * & * & * & *\\ 0 & 0 & 0 & 0 & 0 & * & * & *\\ 0 & 0 & 0 & 0 & 0 & 0 & * & *\\ \end{array}\right]$$ – Rodrigo de Azevedo
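For part ii of the question, the structure can be exploited exactly as the asker suggests: only one subdiagonal needs to be eliminated, so Gaussian elimination costs $O(n^2)$ overall rather than $O(n^3)$. The sketch below (an illustration in Python/NumPy, not taken from the thread) does this with partial pivoting between the only two candidate rows in each column. Note that with an unreduced subdiagonal the pivot candidates are never both zero during elimination, so singularity can only show up on the diagonal of the resulting triangular factor.

```python
import numpy as np

def solve_upper_hessenberg(H, b):
    """Solve H x = b for an upper Hessenberg H by eliminating the single
    subdiagonal (one Gaussian step per column, O(n^2) total) and then
    back-substituting on the resulting upper triangular system."""
    H = np.array(H, dtype=float).copy()
    b = np.array(b, dtype=float).copy()
    n = H.shape[0]
    for k in range(n - 1):
        # Partial pivoting between the only two rows with nonzeros in column k.
        if abs(H[k + 1, k]) > abs(H[k, k]):
            H[[k, k + 1], k:] = H[[k + 1, k], k:]
            b[[k, k + 1]] = b[[k + 1, k]]
        if H[k, k] == 0.0:
            raise np.linalg.LinAlgError("singular Hessenberg matrix")
        m = H[k + 1, k] / H[k, k]
        H[k + 1, k:] -= m * H[k, k:]
        b[k + 1] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        if H[i, i] == 0.0:
            raise np.linalg.LinAlgError("singular Hessenberg matrix")
        x[i] = (b[i] - H[i, i + 1:] @ x[i + 1:]) / H[i, i]
    return x

# Quick check on a random unreduced upper Hessenberg matrix.
rng = np.random.default_rng(0)
n = 6
A = np.triu(rng.normal(size=(n, n)), k=-1)  # keep diagonal, upper part, first subdiagonal
b = rng.normal(size=n)
x = solve_upper_hessenberg(A, b)
print(np.allclose(A @ x, b))  # True
```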
How are the constellation point coordinates determined in digital modulation

I need to understand where the concept of the distance comes from. What I mean is, how do we know what coordinates to assign to points in the constellation. For example, here for QPSK the points are given coordinates $$(\pm\sqrt{E_b},\, \pm\sqrt{E_b})$$ For BPSK, on the other hand, they are $$(\pm\sqrt{E_b},\, 0)$$ This question was answered here as: "A symbol with coordinates $(x,y)$ has energy $x^2+y^2$", and then it makes sense: for example, for BPSK the energy per symbol is just $E_b$, and $(\sqrt{E_b})^2+0^2=E_b$. But where does this come from? How do I derive it? Thanks.

digital-communications modulation signal-energy

Hey, has your question been answered? – Marcus Müller Sep 17 '20 at 21:34

Constellation diagrams exist in what is called signal space, which is an abstraction used to describe finite-energy signals. The coordinate axes, even if they are marked $x$ and $y$ as in Marcus Muller's answer, really represent unit-energy signals such as $$s_I(t) = \sqrt{\frac 2T}\cos(2\pi f_c t)\mathbf 1_{t \in [0,T)}(t),$$ $$s_Q(t) = -\sqrt{\frac 2T}\sin(2\pi f_c t)\mathbf 1_{t \in [0,T)}(t)$$ where $T = \frac{n}{f_c}$ and $\mathbf 1_{t \in [0,T)}(t)$ means a rectangular pulse of duration $T$. Note that both sinusoidal pulses consist of an integer number $n$ of periods of a sinusoid of frequency $f_c$. Verify for yourself that $s_I(t)$ and $s_Q(t)$ are indeed unit energy signals and that they are orthogonal, that is, verify that $$\int_0^T [s_I(t)]^2 \mathrm dt = \int_0^T [s_Q(t)]^2 \mathrm dt = 1;\quad \int_0^T s_I(t)s_Q(t) \mathrm dt =0.$$ If we use $\pm As_I(t)$ as our PSK signals, then the signal energy is $A^2$ (why?) and so if we use $E_b$ to denote the energy per bit, then with respect to the signal space with axes $s_I(t)$ and $s_Q(t)$, the two signals used in PSK can be represented by the constellation points $(\sqrt{E_b},0)$ and $(-\sqrt{E_b},0)$. Left-handed folks might prefer to use PSK signals $\pm As_Q(t)$ in which case the signal constellation would have points $(0,\sqrt{E_b})$ and $(0,-\sqrt{E_b})$. Mealy-mouthed ambidextrous people might want to use $$\pm A[s_I(t)\cos(\theta)+s_Q(t)\sin(\theta)] = \pm A\sqrt{\frac 2T}\cos(2\pi f_c t+\theta)\mathbf 1_{t \in [0,T)}(t)$$ as PSK signals in which case in the signal space with axes $s_I(t)$ and $s_Q(t)$, the constellation points would be $\pm (\sqrt{E_b}\cos\theta,\sqrt{E_b}\sin\theta)$. But if we had chosen axes $\sqrt{\frac 2T}\cos(2\pi f_c t+\theta)$ and $-\sqrt{\frac 2T}\sin(2\pi f_c t+\theta)$ instead (this just corresponds to a rotation of axes by $\theta$ from the previous signal space), then the constellation points would once again be $(\sqrt{E_b},0)$ and $(-\sqrt{E_b},0)$. Thus, in signal space in which the coordinate axes represent unit-energy orthogonal signals, the energy of the signal corresponding to a constellation point equals the squared distance from the origin. Note that in (antipodal) PSK considered above, this squared distance from the origin is $E_b$ for each of the two constellation points. For QPSK (which I have ignored for the most part but you can read about it in this answer of mine), each of the four constellation points is at squared distance $2E_b$ from the origin, which makes sense in that $E_b$ is the energy per bit and a QPSK signal carries two bits.
Finally, with regard to the importance of distance as a concept, given two constellation points in signal space, the probability of error (in an AWGN channel with two-sided power spectral density $N_0/2$) is $Q\left(\frac{d}{\sqrt{2N_0}}\right)$ where $d$ is the distance between the two points. For the PSK examples considered above, $d = 2\sqrt{E_b}$, which gives the familiar result $$P_e = Q\left(\frac{d}{\sqrt{2N_0}}\right) = Q\left(\frac{2\sqrt{E_b}}{\sqrt{2N_0}}\right) = Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$$

"What I mean is, how do we know what coordinates are to assign to points in the constellation."

We don't; you can rotate your PSK however you like it. You can also vary the diameter however you like it. It's all convention. All that PSK means is: "the information is in the phase of the symbols, all other parameters (which, for PSK, means the amplitude) are left at a constant value".

"A symbol with coordinates $(x,y)$ has energy $x^2+y^2$ … but where does it come from?"

Math. That's simply how power is defined: magnitude squared of an amplitude.
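A few lines of Python make the bookkeeping in the first answer concrete: the energy of a constellation point is its squared distance from the origin, and for antipodal PSK the distance-based error formula reduces to the familiar $Q\left(\sqrt{2E_b/N_0}\right)$. This is only an illustrative sketch, with arbitrary toy values for $E_b$ and $N_0$.

```python
import math
import numpy as np

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Antipodal (BPSK) constellation at (+sqrt(Eb), 0) and (-sqrt(Eb), 0): each
# point has energy x^2 + y^2 = Eb, and the distance between them is 2*sqrt(Eb).
Eb = 1.0
points = np.array([[+math.sqrt(Eb), 0.0], [-math.sqrt(Eb), 0.0]])
energies = (points ** 2).sum(axis=1)       # -> [Eb, Eb]
d = np.linalg.norm(points[0] - points[1])  # -> 2*sqrt(Eb)

N0 = 0.5  # two-sided PSD is N0/2; Eb/N0 = 2 is just a toy value
print(energies)
print(qfunc(d / math.sqrt(2 * N0)))    # Q(d / sqrt(2 N0))
print(qfunc(math.sqrt(2 * Eb / N0)))   # same number: Q(sqrt(2 Eb / N0))
```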
Putting up the lights two

I hope you are familiar with this. Each strand of 20 lights is wired like a 5x4 grid, and each bulb can be either blue, red, purple, or green. However, when you change the color of 1 bulb, the adjacent ones in the grid will cycle to the next color as well. You change the color of the 1st light, and the 2nd and 6th lights also change to red. You change it again, and the same three become purple. Cycling again makes them green, and one last time brings them back to blue. Just to make sure, you cycle through the colors of the 7th light and notice the 2nd, 6th, 8th, and 12th lights follow suit. Now you've just drawn out some really cool random patterns! (for example, red blue green red green blue purple etc. etc.) Is it always possible to change the lights to match your pattern, or do there exist some configurations for which it is impossible?

mathematics seasonal lights-out

Your explanation is somewhat not clear. Do you start with Blue? When you change 1, 2 and 6 change - does it impact their adjacent bulbs? What is the starting pattern, all blue? – Moti Dec 30 '16 at 5:19
Yeah I believe so, but if you think about it the question is equivalent to: is it possible to do it given any starting and ending configuration? And when you change 1, ONLY 2 and 6 change - nothing else. If you're still confused try the link I put at the top. – Wen1now Dec 30 '16 at 5:22
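One way to experiment with the wiring rule described above is to simulate it. The sketch below is only an illustration (it assumes the row-major numbering implied by the question, so light 1 is the top-left bulb and light 7 sits in the second row, second column).

```python
# Colors are numbered 0=blue, 1=red, 2=purple, 3=green; pressing a bulb
# advances it and its existing grid neighbours by one colour, wrapping
# around after green.
ROWS, COLS = 4, 5
COLORS = ["blue", "red", "purple", "green"]

def press(grid, r, c):
    for dr, dc in [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < ROWS and 0 <= cc < COLS:
            grid[rr][cc] = (grid[rr][cc] + 1) % 4

grid = [[0] * COLS for _ in range(ROWS)]  # start with every light blue
press(grid, 0, 0)   # "the 1st light": changes itself plus lights 2 and 6
press(grid, 1, 1)   # "the 7th light": changes itself plus 2, 6, 8 and 12
for row in grid:
    print([COLORS[v] for v in row])
```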
Now, here's where we see why this numbering is useful. In other words, here's where we get mathy. If we change one of the lights, then that light and its two, three, or four neighbors (whichever of the top, bottom, left, and right neighbors actually exist in the grid) also get changed. And here's the important part: "Getting changed" means we add 1 to the light's numeric value, modulo 4. "Modulo 4" means if we have $3+1 = 4$, then we replace $4$ with $0$ (because 4 modulo 4 is 0, i.e., the remainder is 0 when 4 is divided by 4). Here is a complete list of how a single light can change: If a blue light (value $0$) gets changed, it becomes red (value $1$, which is $0 + 1$). If a red light (value $1$) gets changed, it becomes purple (value $2$, which is $1 + 1$). If a purple light (value $2$) gets changed, it becomes green (value $3$, which is $2 + 1$). If a green light (value $3$) gets changed, it becomes blue (value $0$, which is $3 + 1$ modulo $4$). So, continuing with our example configuration above, if we were to change the light in row 3, column 4, boxed here for emphasis: \begin{matrix} 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 & 2\\ 3 & 1 & 2 & \boxed 3 & 2\\ 2 & 2 & 3 & 3 & 3 \end{matrix} Then it would change (i.e., increase by $1$, modulo $4$) the values of each of its four neighbors, also boxed below: \begin{matrix} 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & \boxed 1 & 2\\ 3 & 1 & \boxed 2 & \boxed 3 & \boxed 2\\ 2 & 2 & 3 & \boxed 3 & 3 \end{matrix} So our result would be this configuration: \begin{matrix} 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 2 & 2\\ 3 & 1 & 3 & 0 & 3\\ 2 & 2 & 3 & 0 & 3 \end{matrix} For one more example, if we were to take this new configuration and then change the light in row 1, column 2, boxed here for emphasis: \begin{matrix} 0 & \boxed 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 2 & 2\\ 3 & 1 & 3 & 0 & 3\\ 2 & 2 & 3 & 0 & 3 \end{matrix} Then it would change that light and its three neighbors, boxed here: \begin{matrix} \boxed 0 & \boxed 0 & \boxed 0 & 0 & 0\\ 1 & \boxed 1 & 1 & 2 & 2\\ 3 & 1 & 3 & 0 & 3\\ 2 & 2 & 3 & 0 & 3 \end{matrix} So our configuration would be: \begin{matrix} 1 & 1 & 1 & 0 & 0\\ 1 & 2 & 1 & 2 & 2\\ 3 & 1 & 3 & 0 & 3\\ 2 & 2 & 3 & 0 & 3 \end{matrix} So the question now is, what exactly are we doing? We're just doing matrix addition! 
If we view our original example configuration as a $4\times5$ matrix (4 rows and 5 columns), then our first light change was just this matrix addition (don't forget the modulo 4) here: $$ \begin{bmatrix} 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 & 2\\ 3 & 1 & 2 & 3 & 2\\ 2 & 2 & 3 & 3 & 3 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 2 & 2\\ 3 & 1 & 3 & 0 & 3\\ 2 & 2 & 3 & 0 & 3 \end{bmatrix} $$ And our second light change example was just this matrix addition here: $$ \begin{bmatrix} 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 2 & 2\\ 3 & 1 & 3 & 0 & 3\\ 2 & 2 & 3 & 0 & 3 \end{bmatrix} + \begin{bmatrix} 1 & 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 0 & 0\\ 1 & 2 & 1 & 2 & 2\\ 3 & 1 & 3 & 0 & 3\\ 2 & 2 & 3 & 0 & 3 \end{bmatrix} $$ Now, every time we change the light in row 3, column 4, we are simply adding this matrix (modulo 4!!!), which I'll call $A_{34}$ (because row 3, column 4), to our existing configuration: $$ A_{34} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 1 & 0 \end{bmatrix} $$ Similarly, every time we change the light in row 1, column 2, we are simply adding this matrix (modulo 4!!!), which I'll call $A_{12}$ (because row 1, column 2), to our existing configuration: $$ A_{12} = \begin{bmatrix} 1 & 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} $$ And here's a very important note: Each light in the grid has its own corresponding matrix! For example, the light in row 4, column 5 has this matrix: $$ A_{45} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 & 1 \end{bmatrix} $$ And so on. So, to summarize the main points so far: Blue = 0, red = 1, purple = 2, green = 3. Changing a light means adding 1 (modulo 4) to its numeric value and to the numeric values of its existing neighbors. Each light in our grid has its own special matrix $A_{ij}$, where $i$ is the row of the light and $j$ is the column of the light. Changing a light in row $i$ and column $j$ in the configuration is represented mathematically by adding $A_{ij}$ (modulo 4) to the numeric representation of the configuration. Recall the question we're addressing: Note that "all lights are blue" corresponds to the following matrix: $$ B = \begin{bmatrix} 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} $$ Based on the discussion so far, hopefully everyone is convinced that scenario 2 is equivalent to the following more mathematical question: Given any matrix with $4$ rows and $5$ columns where each entry is one of the numbers $0$, $1$, $2$, or $3$, is it possible to add some combination of the $A_{ij}$ matrices (as defined above) modulo 4, so that we end up with the matrix $B$ (also defined above)? (Scenario 3a) Using more sophisticated mathematical notation, we can rephrase Scenario 3a as follows: Fix $S \in \Bbb{Z}_4^{4 \times 5}$. Can we always find $x_{ij} \in \Bbb{Z}_4$ that satisfy the following equation, where $A_{ij}$ and $B$ are as defined above? 
$$ S + \sum_{i=1}^4 \sum_{j=1}^5 x_{ij} A_{ij} = B $$ (Scenario 3b) Note that since $B$ has only zeros in it, then this equation is equivalent to: $$ \sum_{i=1}^4 \sum_{j=1}^5 x_{ij} A_{ij} = -S $$ (Since we're working modulo 4 then the $-S$ can be simplified in practice, but this is irrelevant for what we're trying to show.) Let's break down what all of this means. First, the big $\sum$ ("sigma") is just a shorthand way of writing a sum of a bunch of numbers. Second, $S \in \Bbb{Z}_4^{4 \times 5}$ is just a mathy (and perhaps non-standard) way of saying what we said in Scenario 3a: $S$ is a matrix with 4 rows and 5 columns, and each entry of $S$ is one of $0$, $1$, $2$, or $3$. Next, what are the $x_{ij} \in \Bbb{Z}_4$? The notation just means each $x_{ij}$ can only be $0$, $1$, $2$, or $3$. And what do these $x_{ij}$ actually represent? Well, $x_{ij}$ represents the number of times that we add $A_{ij}$ to our configuration. In other words, $x_{ij}$ represents the number of times we change the light in row $i$ and column $j$. Note that this is why each $x_{ij}$ can only be one of $0$, $1$, $2$, or $3$: Because if we change a light, say, 4 times, that's the same as doing absolutely nothing, because after 4 changes it's back to its original color (mathematically, it's because 4 modulo 4 is 0). And if we change a light, say, 6 times, that's the same as changing it 4 times and then 2 more times, and since those first 4 times are the equivalent of doing nothing, then we really only changed the light twice, with those last two changes (mathematically, it's because 6 modulo 4 is 2). So now that we've got this down to a linear algebra problem, how do we actually answer it? Luckily, we actually know what all of the $A_{ij}$ are. We explicitly wrote down a few of them earlier in this post. Here is one more, just as another example: $$ A_{24} = \begin{bmatrix} 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} $$ Here's where I need to be a little bit hand-wavy, for a few reasons. One, this post is long enough already. Two, I don't want to scare off too many more non-mathy people by giving too many more mathy details. Three, most of the details I'm not showing next are just routine yet tedious linear algebra calculations. Anyway, since we explicitly know all 20 of the $A_{ij}$ for $i = 1,2,3,4$ and $j=1,2,3,4,5$, then we can explicitly write down what the following sum is: $$ \sum_{i=1}^4 \sum_{j=1}^5 x_{ij} A_{ij} $$ It's going to be a very messy matrix with 4 rows and 5 columns and a bunch of $x_{ij}$ (where $i=1,2,3,4$ and $j=1,2,3,4,5$) all over the place. And we can rewrite that matrix in a "nicer" form as a product of a $20 \times 20$ matrix with a $20 \times 1$ vector, where the vector consists of the values $x_{ij}$. Note that this sort of thing is exactly what they did in the link I provided near the beginning of the post, where they ended up with a $9 \times 9$ matrix when considering a $3 \times 3$ grid. 
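Before displaying that matrix, here is a short sketch (an editorial illustration, not part of the original answer) of how the $20 \times 20$ matrix can be built programmatically. It assumes the same row-by-row ordering of grid cells that is used for the $x_{ij}$ below; the names `press_matrix` and `A_big` are invented for this sketch.

```python
import numpy as np

ROWS, COLS = 4, 5  # the grid has 4 rows and 5 columns

def press_matrix(i, j):
    """A_ij: the 4x5 matrix of 0s and 1s that gets added (mod 4) to the
    configuration when the light in row i, column j (1-based) is changed."""
    A = np.zeros((ROWS, COLS), dtype=int)
    for dr, dc in [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = i - 1 + dr, j - 1 + dc
        if 0 <= r < ROWS and 0 <= c < COLS:
            A[r, c] = 1
    return A

# Flatten each A_ij (row by row) into a column of length 20; placing those 20
# columns side by side, in the order x_11, x_12, ..., x_45, gives the
# 20 x 20 matrix displayed next.
A_big = np.column_stack([press_matrix(i, j).ravel()
                         for i in range(1, ROWS + 1)
                         for j in range(1, COLS + 1)])
print(A_big.shape)  # (20, 20)
```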
Anyway, when all is said and done, we end up with this as our $20 \times 20$ matrix: $$ A = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \end{bmatrix} $$ And similar to what's in the link I provided above, we can define a vector $x$ to be $$ x = \begin{bmatrix} x_{11}\\ x_{12}\\ x_{13}\\ x_{14}\\ x_{15}\\ x_{21}\\ x_{22}\\ x_{23}\\ x_{24}\\ x_{25}\\ x_{31}\\ x_{32}\\ x_{33}\\ x_{34}\\ x_{35}\\ x_{41}\\ x_{42}\\ x_{43}\\ x_{44}\\ x_{45} \end{bmatrix} $$ (the entries are listed row by row: first the lights in row 1 of the grid, then row 2, and so on). Finally, also similar to what they did in the link, let's define the vector $M$ by taking our $-S$ from way up above and rewriting it as a single column, in the same row-by-row order we just used for $x$. That is, we'll write the first row of $-S$ as a column of five entries, then stick the second row of $-S$ underneath it, then the third row, then the fourth. (The order has to match the order of the entries of $x$; beyond that, if you're unclear on this stacking process, don't worry, because all that really matters at this point is our matrix $A$. I'm just continuing in this manner to explain why our matrix $A$ matters.) So, after all this, we end up with the standard linear algebra equation $$ Ax = M,$$ where $A$ is the $20 \times 20$ matrix we just wrote above, $x$ is the vector we also just wrote above, and $M$ is the vector that consists of the rows of $-S$ stacked on top of one another (so $M$ is a $20 \times 1$ vector, just like $x$ is). Recall that our goal was to determine whether or not we can always find values $x_{ij}$ that solve the equation in Scenario 3b. Well, this is equivalent to solving the equation $Ax = M$ for $x$. And since $A$ is a square matrix, then we know that $x = A^{-1}M$, provided $A^{-1}$ exists. If $A^{-1}$ does exist, then the answer to OP's original question is "Yes, it's always possible." If $A^{-1}$ does not exist, then the answer to OP's original question is "No, it's not always possible." So how do we know whether or not $A^{-1}$ exists?
Since $A$ is a square matrix then we can just calculate its determinant. This is hilariously nontrivial (to do by hand) for a $20 \times 20$ matrix. Computers to the rescue! Using Maple, we see that the determinant of this matrix is $55$, and $55$ modulo $4$ is $3$, which means $A$ is invertible. Therefore $A^{-1}$ exists, so we can always write $x = A^{-1}M$.

(Side note: Some care needs to be taken when dealing with the determinant modulo $4$. It's not enough to just have the determinant be nonzero. A discussion is beyond the scope of this post, but see the comment thread on this answer for some details.)

Proof of equivalency between scenario 1 and scenario 2: We'll show that each scenario implies the other. Let's first suppose scenario 1 is true, i.e., if we have any two given configurations of lights, then each can reach the other. Since this is possible for any two configurations, then we can simply fix one of the configurations to have all blue lights (ABL). Since the other configuration can be any configuration we want, then we see that any configuration we want can reach ABL. This shows that scenario 1 implies scenario 2. Now suppose scenario 2 holds. Then for any two light configurations we have, we know that each can reach ABL. Therefore for one configuration $C_1$ to reach any other configuration $C_2$, all we need to do is get $C_1$ to ABL, and then follow $C_2$'s path to ABL "backwards" so that we go from ABL to $C_2$. Therefore scenario 2 implies scenario 1.

– tilper

Comments:

You added that the determinant value seems to be 77.5. Of course that is the determinant if you work in the reals (or rationals), but I'm not so sure that actually tells you whether the determinant is zero or not when working mod 2. Especially since 77.5 indicates there was a division by 2, which you obviously cannot do modulo 2. If you work out the actual value of the determinant while working in the integers mod 2, then you will get 1, because that is the only non-zero value available in that field. – Jaap Scherphuis Jan 3 '17 at 17:38

@JaapScherphuis, thanks, didn't even think of that, but why mod 2 and not 4? – tilper Jan 3 '17 at 17:50

Yes, I should have said mod 4 - I was thinking about the standard Lights Out game. That makes the situation a bit worse actually, since the integers mod 4 do not form a field, which means you can have a non-zero determinant (namely 2) and still have a matrix that is not invertible. I'm sure however that this matrix is invertible so it will have determinant 1 or 3. – Jaap Scherphuis Jan 3 '17 at 18:03

@JaapScherphuis, thanks for pointing this out since this clears up something that made no sense to me. Without the modular requirement on the determinant, there appeared to be no relevance to the fact that we're working mod 4 since the construction of the matrix $A$ is completely independent of that. This would've meant that any $5 \times 4$ grid would be solvable no matter how many different types of lights we would have, which seemed very counterintuitive. But the modular condition on $\operatorname{det} A$ definitely addresses that. I'll verify with a CAS if I have time much later today. – tilper Jan 3 '17 at 18:10

...Actually, none of this makes sense. $77.5$ is $1.5$ mod 4, which doesn't exist. And taking the modulus at the very end makes no difference since a determinant calculation is basically just addition, subtraction, and multiplication.
So it makes no sense that $A$ has a non-integer determinant when $A$ consists only of $0$ and $1$. I think the problem might be that the PHP script at the website I used isn't really designed to handle $20 \times 20$ matrices. That's no fault of the author, it's just that determinant calculation is perhaps borderline intractable. – tilper Jan 3 '17 at 18:32

The answer is yes, because the matrix transforming "# of times the buttons have been pressed" to "end state of lights":
$$ \left( \begin{array}{cccccccccccccccccccc} 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \\ \end{array} \right) $$
has an inverse mod 4 (which transforms "what state I want the lights to be in" to "how many times I should press each button").
– 2012rcampion
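As a quick cross-check of both answers, the same computation can be done with sympy instead of the Maple and PHP tools mentioned above. This block is an editorial illustration, not part of either answer; `press_matrix` is the same invented helper sketched earlier. The determinant comes out to 55, which is 3 mod 4 (odd, hence coprime to 4), so the inverse mod 4 exists, in line with the comment thread.

```python
import numpy as np
from sympy import Matrix

ROWS, COLS = 4, 5

def press_matrix(i, j):
    """Same A_ij as above: 1s at (i, j) and at its existing neighbors."""
    A = np.zeros((ROWS, COLS), dtype=int)
    for dr, dc in [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = i - 1 + dr, j - 1 + dc
        if 0 <= r < ROWS and 0 <= c < COLS:
            A[r, c] = 1
    return A

# Columns of A are the flattened A_ij, in the order x_11, x_12, ..., x_45.
A = Matrix([[int(v) for v in press_matrix(i, j).ravel()]
            for i in range(1, ROWS + 1)
            for j in range(1, COLS + 1)]).T

print(A.det())        # 55, and 55 mod 4 = 3 (odd, so coprime to 4)
A_inv = A.inv_mod(4)  # exists because gcd(det A, 4) = 1

# Example: presses (mod 4) that clear the sample configuration from earlier.
S = Matrix(4, 5, [0, 0, 0, 0, 0,
                  1, 1, 1, 1, 2,
                  3, 1, 2, 3, 2,
                  2, 2, 3, 3, 3])
M = Matrix([(-v) % 4 for v in S])            # row-by-row flattening of -S
x = (A_inv * M).applyfunc(lambda t: t % 4)   # x[k] = how many times to press light k
print(x.T)
```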
Optimal growth conditions and economic analysis of sea cucumber releasing Cheol Lee1 & Sang Duk Choi ORCID: orcid.org/0000-0002-2516-76462 Fisheries and Aquatic Sciences volume 23, Article number: 11 (2020) Cite this article We tried to find the optimal growth conditions of sea cucumber and to analyze the economic effectiveness of the sea cucumber seedling release project in Korea. We first examined the optimal growth conditions of sea cucumber in the relating literatures. Then, we analyzed the economic effectiveness of the sea cucumber seedling release project of the Woncheon fishing village union of Gyeongnam Province in 2016–2018 by using the cost benefit analysis method. The net income of the release project of the Woncheon fishing village union was 69,850 Korean won. The benefit to cost ratio of the sea cucumber seedling release project of the Woncheon fishing village union was estimated to be 1.7, indicating that the project was economically feasible. In order to improve the economic feasibility of the sea cucumber release project, as we see in the case of the Woncheon fishing village union, it is necessary to manage the purchase of the sea cucumber seedling, to improve the recapture rate of sea cucumber, and to manage marketing of sea cucumber. Recently, due to the economic importance of sea cucumber, technological developments on artificial seed production and aquaculture have been carried out in many countries including Korea, China, and Japan. Even though China has some challenges restricting development of the sea cucumber industry, which includes weaknesses in the basic biological research, the issue of germplasm degradation, environmental stress caused by global climate change, and food safety issues. The production of Apostichopus japonicus has become commercially lucrative and successful after nearly 30 years of development (Ru et al. 2019). Apostichopus japonicus is mainly distributed in the northern waters of the western Pacific, including the Bohai Sea and Yellow Sea of China, as well as the eastern coast of Russia and the coasts of Japan and South Korea (Yanagisawa 1998, Zhang et al. 2017). Now the Japanese sea cucumber, Apostichopus japonicus is the most economically important and valuable species in China (Ru et al. 2019). In terms of supply and demand of sea cucumber, China's sea cucumber consumption was 200,969 tons in 2014, which exceeded its supply (China Fishery Group 2015). Now, China is looking for alternative sea cucumber farms in Russia, Japan, and North Korea and is attempting joint sea cucumber farming in South Korea. In Japan, in response to the growing demand for dried sea cucumber in China and other countries, large-scale fisheries companies are, in connection with village fisheries, expanding the sea cucumber production (Aoki Prefecture Fisheries Technology Center 2012a). In Korea, sea cucumber production reached 2491 tons in 1990, which declined due to shrinking habitats caused by pollution of coastal fisheries and coastal landfills, resulting in 833 tons in 2002 (Kang et al. 2012). However, sea cucumber production increased, through the release and aquaculture of sea cucumber seedlings, to 2045 tons in 2017 (Korean Statistical Information Service 2019). In order to solve the shortage and price increase of sea cucumbers, sea cucumber seedlings have been released and farmed in Korea, China, and Japan, and research on the optimum growth environment of sea cucumber has been carried out to enhance the effectiveness of such releases. 
In this study, we try to find the optimal growth conditions of sea cucumbers by examining various relating literature in Korea, China, and Japan, and suggest the results with respect to the factors that compose the habitat environments. Then, we analyze the effectiveness of a sea cucumber release project in Korea using the sea cucumber seedling release project of the Woncheon fishing village union of Gyeongnam Province in 2016–2018. In Korea, sea cucumbers are released by individual fishers, and fishing village unions subsidized by central and local governments in sites considered to satisfy the optimal growth conditions of sea cucumbers. Subsidies to releasing of sea cucumbers are usually given to a couple of fishing village unions representing one province. Then, the subsidies are offered to other fishing village unions or other regions in the province if they are economically feasible. The Woncheon fishing village union or Woncheon village is one of the representative releasing sites designated by Gyeongnam Province in 2016–2018. The results of the feasibility analysis of the releasing project by Woncheon fishing village unions are important because, as mentioned earlier, the subsidized releasing project of sea cucumber can be extended to the fishing village unions or other areas that satisfy the optimal growth conditions of sea cucumbers in Gyeongnam Province. Examination of optimal growth conditions for sea cucumber In order for the released sea cucumbers to settle stably and grow rapidly, various environmental conditions must be suitable for the habitat or site of release. Some of the factors that compose the habitat environments favorable to growth of sea cucumbers include water temperature, salinity, depth of water, velocity of water current, and quality of the substrate or farming site. We examined the optimal growth conditions of sea cucumber in the relating literatures in in Korea, China, and Japan. Cost and benefit analysis The cost and benefit analysis method is used to analyze the economic feasibility of the sea cucumber discharge project in Woncheon fishing village union of Gyeongnam Province. The cost of purchasing and releasing sea cucumber seedlings is used as a cost in B-C analysis. The collecting or harvesting cost, which is the cost of collecting the sea cucumber after the growth period of about 2 years, should be included as a cost, if the individual business operators or the fishing villages implement the discharge project by themselves and hire the harvesters to collect the sea cucumber after the growth period. However, if the municipal, provincial, or central government carry out seedling discharge project and the produced sea cucumber is collected by the harvesting company and distributed at a certain agreed rate between the fishing village and the harvesting company, the harvester's share will not be counted as a cost in the B-C analysis because the share will be the harvester's income. As was stated earlier, the harvesters take 45% of the harvest and the fishing village takes 55% of the harvest. The benefit required for the B-C analysis is the amount of production that can be gained from the sales of sea cucumber. If individual business operators or fishing villages directly collect the released sea cucumber and sell them, the total amount of the sea cucumber production will be the direct income of the individual business operators or the fishing village. 
However, when the fishing village produces sea cucumber by hiring harvesters instead of collecting the sea cucumber directly, and distributes the produced sea cucumber at a certain rate with the harvesters, the benefit of the sea cucumber seed release project is the total production amount of sea cucumber. The distribution ratio between the fishing village and the harvester multiplied by the total production amount will be their respective income. In order to calculate the benefit of sea cucumber discharge, we first estimate the yield of the resources by examining the release of the sea cucumber, the growth period, the final average weight at the time of final harvest, and the recapture rate, which is the ratio between initial discharge and final harvest. The equation can be given as follows: $$ Q=N\times R\times W $$ Q is the quantity of production N is the number of cucumber released R is the recapture rate W is the average weight As shown in the above Eq. (1), in order to increase the production of sea cucumber, the value of each variable constituting the production amount should be increased. In order to increase the recapture rate, the survival rate of the released sea cucumber should be high, and the sea cucumber should not move away and stay in the vicinity of the release sites, so that efforts and costs for harvesting should not increase. If the sea cucumber cannot be recovered from the place where the sea cucumber have been released, the final recapture rate of released cucumber will be very low and the production amount of sea cucumber and the economic effectiveness of sea cucumber seedling release will be significantly lowered. When the production of sea cucumber is estimated, the market price of sea cucumber is multiplied, and the total production amount of sea cucumber, which is a benefit required for economic analysis, can be obtained. $$ A=Q\times P $$ A is the aggregate production amount P is the average market price When the total production amount of sea cucumber is obtained, it can be compared with the sea cucumber seedling release cost, from which the economic effect can be obtained. The equation can be given as follows. $$ \mathrm{EE}=\left(B-C\right)=A-C $$ $$ \mathrm{EE}=\left(B/C\right)=A/C $$ (B − C) is the difference between benefits and costs of sea cucumber seedling release. (B/C) is the benefits and cost ratio of sea cucumber seedling release A is the total amount of sea cucumber C is the costs of sea cucumber seedling discharge project Premise of analysis The release of sea cucumber seedling by Woncheon fishing village union of Gyeongnam Province was carried out several times from 2014 to 2018 and the harvest of sea cucumber was carried out between 2014 and April 2019. We considered the case that the release of sea cucumber seedling was carried out on 2 May 2017, and the harvest of sea cucumber was carried out between November 2018 and April 2019, for which the economic feasibility of the sea cucumber release project was examined. The release cost of sea cucumber seedlings released in 2017 was KRW 180 million and the number of released stocks reached 370,849. In the case of Woncheon fishing village union, harvesting of sea cucumber was handled by the harvesters, and 45% of the harvested sea cucumbers are distributed to the harvesters, so they are not included in the harvest costs. The recapture rate of sea cucumber from November 2018 to April 2019 was 51.2%, and the average weight was 168.9 g. 
The price of sea cucumber was assumed to be KRW 9500/kg, reflecting the price of sea cucumber from 2017 to 2019. Woncheon fishing village union has received public support for the release of sea cucumber seedlings since 2014. The members of the fishing village union had installed natural rocks in the left and right coasts of the Woncheon port and carried out the project for release of sea cucumber seedlings. And they have been experiencing an increase in the production of sea cucumber since then. The release of sea cucumber seedlings from 2014 to 2018 is shown in Table 1 below. Table 1 Release of sea cucumber seedlings of Woncheon fishing village Union The Woncheon fishing village union contracts the sea cucumber harvesting with woman divers and the diving apparatus fishery (DAF) divers on every 2 to 3 years and allows them to collect sea cucumber. The cost of purchasing the sea cucumber seedlings is borne by the Woncheon fishing village union. While the seedlings are growing after release, the harvesters are responsible for the custody and harvest of the sea cucumber. After the sea cucumber is harvested, the harvesters take 45% of the harvest and the fishing village takes 55% of the harvest. More than 50% of the sea cucumber harvests are sold by free sales system and the rest are sold by obligatory sales system. The price of sea cucumber was higher than KRW 10,000/kg until 2015, but it has been declining since then, reaching KRW 9000~10,000/kg in 2019. The production of sea cucumber, the amount of production, and the unit price per kilogram from 2012 to 2018 are shown in Table 2 below. Table 2 Annual production of sea cucumber, production amount, and unit price per kilogram Optimal growth conditions for sea cucumber Korean case Water temperature is one of the most important ecological factors that affect growth and physiological processes in aquatic eurytherms. According to the 2014 Fisheries Seedling Management Project Guidelines, sea cucumbers grow most stably at 10~15 °C, begin to prepare for aestivation at 20 °C or higher, and enter aestivation at the temperature up to 24 °C (Korea Fisheries Resources Agency 2014a). However, if the water temperature persists above 28 °C, it is difficult for sea cucumbers to adapt to that temperature. Thus the habitat with the temperature higher than 28 °C is considered unsuitable for released sea cucumbers (Korea Fisheries Resources Agency 2014b). Lee et al. (2018) regarded the water temperature, 10~15 °C as suitable for growth of sea cucumbers. Unlike others, however, Jeong et al. (2019) reported that sea cucumbers grow even at the temperature higher than 28 °C. For salinity, 28~34 psu was reported to be suitable for a habitat for the released sea cucumbers (Kang et al. 2012). Lee et al. (2018) also reported salinity levels of 28~34 psu as appropriate salinity. For water depth, 5~10 m was reported as a suitable depth for the released sea cucumbers in the development project of sea cucumber island in Yangyang-gun, Korea (Korea Fisheries Resources Agency 2013a). In addition, 5~20 m was reported as a suitable depth for the released sea cucumbers in the sea cucumber aquaculture guidebook (Ministry of Agriculture, Food and Rural Affairs 2011). In the plan report for the establishment sea cucumber aquacultural complex, the depth less than 30 m was reported as a suitable depth (Korea Fisheries Resources Agency 2013b). Lee et al. (2013) reported 6~10 m as a depth suitable for sea cucumbers. Lee et al. 
(2018) also reported 5~10 m as the depth suitable for inhabit of sea cucumbers. For the velocity of water, less than 0.6 m/s and less than 2 m/s were reported as adequate velocity in Korea. Considering this, Lee et al. regarded 0.2~0.4 m/s as adequate velocity (Lee et al. 2018). The Ministry of Maritime Affairs and Fisheries reported the substrate with rock, sand, and mud as adequate for sea cucumber aquaculture. The Ministry of Agriculture, Food and Rural Affairs reported the substrate with rock, amber stone, and mud was adequate for sea cucumber aquaculture (Ministry of Agriculture, Food and Rural Affairs 2011). Korea Fisheries Resources Agency reported the substrate with coarse gravel and sand as suitable for sea cucumber aquaculture (Korea Fisheries Resources Agency 2013c). Lee et al. (2013) reported the substrate with rock and muddy sand was adequate for sea cucumber aquaculture. Lee et al. (2018) also reported the substrate with rock, stone, and muddy sand with less than 20% sand in the substrate as adequate. Chinese case A few reports showed that empirically, Apostichopus japonicus could grow at a temperature range of 10~20 °C (Sui 1990; Yu and Song 1999; Chen 2004; Yang et al. 2005). Dong et al. (2006) found that the optimum temperature for growth of Apostichopus japonicus was 15~18 °C. Based upon which, Ji et al. (2008) chose 20 °C as optimal temperature for growth of sea cucumbers. The water temperature of 10~15 °C, was regarded as adequate in the cucumber farming areas in Shandong (Kang et al. 2012). Chen (2004), Yang et al. (2005) and Ji et al. (2008) chose 26 °C as the threshold temperature to aestivation. The Yellow Sea Fisheries Research Institute reported that 70% of sea cucumbers of less than 85 g appeared at the depth of 1.7 m and 90% of sea cucumbers of more than 126 g appeared at the depth of 7.7 m (Kang et al. 2012). Choi (1963) reported that sea cucumbers of 50~100 g appeared at depth of 5 m and below, 100~150 g at depths of 5~10 m, and 150~200 g at depths of 10~15 m. The velocity of 0.5 m/s or less was reported as adequate. For the substrate, it was reported that sea cucumber distribution was high at the habitat with low mud, and that reefs and sands with high sea algae and seagrass were adequate (Kang et al. 2012). Japanese case Aomori Fisheries Research Institute reported that the period when the water temperature was 2~10 °C met the habitat conditions of released sea cucumbers (Aoki Prefecture Fisheries Technology Center 2012a, b). For water depth, the project to increase of releasing sea cucumber reported 10 m as the adequate for the release of sea cucumbers (Hokkaido Research Organization 2012). Lee et al. (2018) also considered 5~10 m suitable for growing sea cucumbers. For the velocity, the site with slow water flow was regarded as adequate. For the substrate, it was reported that sea cucumbers were distributed in the habitat with amber stone and sands, and less than 10% of mud, but sea cucumbers did not inhabit in the site with 27.8% or more of mud (Aoki Prefecture Fisheries Technology Center 2012a). Economic effectiveness of sea cucumber seedling release Based on the above analysis, the economic feasibility of the sea cucumber seedling release project in Woncheon fishing village union of Gyeongnam Province is as follows. The production of sea cucumber obtained using the released sea cucumber, recapture rate, and average weight of the harvested sea cucumber seedlings is 32,070 kg, and the total amount is KRW 307,044. 
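The arithmetic behind these figures can be reproduced directly from Eqs. (1)–(4). The short script below is an illustrative sketch (the variable names are ours, not the authors'); it uses the inputs stated above (370,849 seedlings, a 51.2% recapture rate, 168.9 g average weight, an assumed price of KRW 9,500/kg, and a KRW 180 million release cost) and reproduces, up to rounding, the production, benefit-cost figures, and break-even points reported in the surrounding paragraphs. The amounts reported in the text (e.g., KRW 307,044) appear to be expressed in thousands of KRW.

```python
# Illustrative reproduction of the cost-benefit arithmetic (Eqs. 1-4);
# variable names are ours, figures are those stated in the text.
N = 370_849        # seedlings released on 2 May 2017
R = 0.512          # recapture rate, Nov 2018 - Apr 2019
W = 0.1689         # average harvest weight in kg (168.9 g)
P = 9_500          # assumed market price, KRW per kg
C = 180_000_000    # seedling purchase and release cost, KRW (KRW 180 million)

Q = N * R * W      # Eq. (1): production in kg, ~32,070 kg
A = Q * P          # Eq. (2): total production amount, ~KRW 305 million
print(f"Q     = {Q:,.0f} kg")
print(f"B - C = {A - C:,.0f} KRW")   # Eq. (3): ~KRW 125 million (text: ~127 million)
print(f"B / C = {A / C:.2f}")        # Eq. (4): ~1.7

# Break-even values implied by setting the benefit A = N*R*W*P equal to the cost C:
print(f"break-even recapture rate    : {C / (N * W * P):.2%}")          # ~30.25%
print(f"break-even sea cucumber price: {C / (N * R * W):,.0f} KRW/kg")  # ~5,613
print(f"break-even seedling price    : {A / N:,.0f} KRW each")          # ~822
```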
The production cost of sea cucumber reaches KRW 180,000 thousand (KRW 180 million), which is the sea cucumber seedling release cost. The economic effect of the sea cucumber seedling release project is as shown in the following Table 3. Table 3 Economic effects of the sea cucumber seedling release project in Woncheon fishing village union The economic effectiveness of the sea cucumber seedling release project in Woncheon fishing village union is KRW 127 million, which is the total production amount of released sea cucumber minus the release project cost. The fishing village union and the harvesters distribute the production amount at a ratio of 55:45; the net income of the fishing village union comes to KRW 69,850 thousand and the income of the harvesters to KRW 57,150 thousand (55% and 45% of the net effect, respectively). The benefit to cost ratio of the sea cucumber seedling release project in Woncheon fishing village union is estimated to be 1.7, indicating that the project is economically feasible. When the price of sea cucumber and the price of sea cucumber seedlings are given as assumed, the break-even point of the release project in terms of the recapture rate is 30.25%; when the recapture rate is higher than 30.25%, the project is economically feasible. Similarly, when the recapture rate and the seedling price are given as assumed, the break-even point in terms of the price of harvested sea cucumber is KRW 5613/kg; when the price of harvested sea cucumber is higher than this, the project is economically feasible. Finally, when the recapture rate and the market price of sea cucumber are given as assumed, the break-even point in terms of the seedling price is KRW 822 each; when the seedling price is lower than this, the project is economically feasible. The optimum growth conditions for sea cucumber were similar in all three countries, although there were some differences. The adequate water temperature is 5~15 °C in Korea and China, but 2~10 °C in Japan, of which the lower limit is lower than in the other two countries. For the water depth, 6~10 m seems to be appropriate in all three countries. The substrate with rock and muddy sand seems to be adequate in all three countries. Sensitivity analysis The cost and benefit of the sea cucumber seedling release project can be affected by the recapture rate, the price of the seedling, and the market price of the collected sea cucumber. Therefore, a sensitivity analysis should be performed to analyze the economic feasibility under changing conditions. The higher the market price of sea cucumber, the lower the mobility of the released sea cucumber, and the higher the survival rate of the seedlings, the higher the economic effectiveness of the seedling release project.
According to the FIRA (Korea Fisheries Resources Agency), the B to C ratio of the sea cucumber release projects in Jeonnam Province and Gyeongnam Province in 2008 ~ 2010 were 3.18. The B to C ratio of the sea cucumber release projects in Chungnam Province, Gangwon Province, Gyeongbuk Province, Incheon City, Jeonnam Province and Gyeongnam Province in 2013~2014 was 1.45, indicating that both were effective. Park et al. (2013) analyzed the economic effects of the sea cucumber seedlings project in Gyeongbuk Province and found that the B to C ratio was 2, which is efficient. Sensitivity analysis also showed that the economic feasibility varied greatly depending on changes in recapture rate, market price, and seedling price. And it had been shown that there were cases in which the economic effectiveness was lost. Sensitivity analysis was carried out for the range of 10 to 70% of recapture rate in consideration of the precedent cases to see how the economic effectiveness of the sea cucumber seedling release project in Woncheon fishing village union changes according to the change of the recapture rate. The analysis results are shown in the following Table 4. Table 4 Sensitivity analysis for recapture rate As shown in Table 4, the economic feasibility of the project for the release of the sea cucumber seedlings decreases with the decrease in the recapture rate. If the recapture rate drops to the value less than 30.25%, the economic effectiveness of the sea cucumber seedling release project in Woncheon fishing village union is negative, which means there is no economic feasibility. On the contrary, if the recapture rate increases, the economic effectiveness of the sea cucumber seedling release project in Woncheon fishing village union will be improved. In order to improve the economic feasibility of the sea cucumber discharge project in the Woncheon fishing village union of Gyeongnam Province, it is necessary to increase the recapture rate of released sea cucumber. In order to raise the recapture rate, the survival rate of the seedlings must be high, and the sea cucumber should stay in the vicinity of the release sites, so that efforts and costs for harvesting should not increase. The survival rate of seedlings is related to the quality of seedlings and this is also related to the cost of seedlings. Therefore, the selection of seedlings should be based on the consideration of cost, quality, or survival rate. In order to increase the recapture rate, the seedling should not move far away from release sites during the growing period, so that it can be harvested in large quantity at harvest time. In order to prevent the seedlings from moving far away, it is necessary to select the release sites with good nourishing environments. Park et al. (2013) also emphasized the importance of selection of release sites for the good results of the sea cucumber seedling release project in Gyeongbuk Province area. In addition, the cost for harvesting should be small, which will improve the economic feasibility by reducing the value of the cost in the benefit cost analysis. In order to see how the economic effect of the sea cucumber seedling release project in Woncheon fishing village union changes with the change of the sea cucumber price, the prices of sea cucumber from 2012 to 2019 are included in the sensitivity analysis. The price of sea cucumber ranged from KRW 5000 to KRW 20,000/kg. The results of the sensitivity analysis are shown in Table 5 below. 
Table 5 Sensitivity analysis of sea cucumber prices As shown in Table 5, the economic feasibility of the sea cucumber seedling release project decreases as the price of sea cucumber falls. If the price of sea cucumber drops below KRW 5613, the economic effectiveness of the sea cucumber seedling release project in Woncheon fishing village union becomes negative. On the contrary, if the price of sea cucumber rises, the economic effectiveness increases. Marketing management of sea cucumber, needed to improve economic effectiveness, is an attempt to raise the shipping price of sea cucumber and increase the value of the benefit in the economic analysis. Since the price of sea cucumber is determined by the market, such management cannot directly affect the price of sea cucumber, but the price can be kept high by adjusting the shipping time and amount, and the economic effectiveness can be increased accordingly. Sensitivity analysis was carried out for the case where the price of sea cucumber seedlings ranged from KRW 200 to KRW 1000 to see how the economic effectiveness of the sea cucumber seedling release project in Woncheon fishing village union changes according to the change of the sea cucumber seedling price. The results of the analysis are shown in Table 6 below. Table 6 Sensitivity analysis of sea cucumber seed price As shown in Table 6, the economic effectiveness of the sea cucumber seedling release project decreases as the seedling price rises. If the price of sea cucumber seedlings rises above KRW 822, the economic effectiveness of the sea cucumber seedling release project in Woncheon fishing village union becomes negative. As the price of sea cucumber seedlings declines, the economic effectiveness of the release project increases significantly. In order to improve the economic feasibility of the sea cucumber release project in Woncheon fishing village union of Gyeongnam Province, it is necessary to manage the purchase of the sea cucumber seedlings, to improve the recapture rate of sea cucumber, and to manage the marketing of sea cucumber. Through adequate management of seedling purchase, the purchase cost of seedlings can be lowered, for example by lowering the purchase price through mass purchase or adjustment of purchase timing. However, the purchase of high-quality seedlings will increase the survival rate of the seedlings and contribute to the improvement of the recapture rate, which will raise the value of the benefit in the B-C analysis. Therefore, the purchase of good seedlings should not be compromised in order to lower the purchase price of seedlings. Limitation of this study Since the main purpose of this study was to verify the effectiveness of the sea cucumber seedling release program, not all the literature covering all aspects of the optimal growth conditions for sea cucumber was examined. In this study, only the optimal conditions regarding sea water temperature, depth, and the composition of the substrate, which are some of the factors that indicate the optimal conditions of the sea cucumber release site, were examined. In the future, it is necessary to study all aspects of the optimal growth conditions for sea cucumber. This study examined the economic effectiveness of sea cucumber seedling release in a region on the south coast of Korea. Thus, it is not appropriate to expect that the sea cucumber seedling release program in Korea as a whole will be effective based on these results.
It would be more reasonable to determine that sea cucumber seedling release project in Korea would be effective after a review of the economic effectiveness of all the release program in various regions of the Korean coast. In this study, the causal relationship between the economic effectiveness of sea cucumber releasing and the proper growth condition of sea cucumber was not addressed because of insufficient data for the relationship. Therefore, what is needed more in this study is to find out how the optimal growth conditions of sea cucumbers affect the economic effectiveness of sea cucumber releasing, and thus further studies regarding this relationship are needed. In addition, a more detailed and extensive comparative studies of the optimal growth conditions of sea cucumbers in China, Japan, and Korea are needed to find out the factors pivotal to the growth of sea cucumbers and the relationship between those factors and economic effectiveness of sea cucumber releasing. We tried to find the optimal growth conditions of sea cucumber and to analyze the economic effectiveness of the sea cucumber seedling release project in Korea. We first presented some optimal growth conditions of sea cucumber in Korea, China, and Japan by reviewing the relating literatures. Then, we presented the economic effectiveness of the sea cucumber seedling release project of the Woncheon fishing village union of Gyeongnam Province in 2016–2018 by using the cost benefit analysis method. The net income of the release project of the Woncheon fishing village union was 69,850 Korean won. The benefit to cost ratio of the sea cucumber seedling release project of the Woncheon fishing village union was estimated to be 1.7, indicating that the project was economically feasible. The sea cucumber release projects, which are carried out by the private sector with the help of the central and local governments of Korea, are carried out under tight budget constraints, so improving the economic feasibility of the release is essential to the success and sustainability of such release projects. In order to improve the economic feasibility of the sea cucumber release project, as we saw in the case of the Woncheon fishing village union, it is necessary to manage the purchase of the sea cucumber seedling, to improve the recapture rate of sea cucumber, and to manage marketing of sea cucumber. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Aoki Prefecture Fisheries Technology Center. 2012a. The ecology and resource management of sea cucumber, Stichopus japonicas A. http://www.aomori-itc.or.jp/public/zoshoku/dayori/114g/114_p8-10.pdf. Accessed 15 Oct 2019. Aoki Prefecture Fisheries Technology Center. 2012b. The releasing and seedling manual of sea cucumber, Stichopus japonicas B. https://www.aomori-itc.or.jp/_files/00093750/Namako_houryuu.pdf. Chen JX. Present status and prospects of sea cucumber industry in China. In: Lovatelli A, Conand C, Purcell S, Uthicke S, Hamel JF, Mercier A, editors. Advances in sea cucumber aquaculture and management. Rome: FAO; 2004. p. 25–38. China Fishery Group. The consumption of sea cucumber in China. Beijing: China Fishery Group; 2015. p. 19. Choi S. Study of sea cucumber, Stichopus japonicus. Tokyo: Kaibuto; 1963. p. 35. Dong YW, Dong SL, Tian XL, Wang F, Zhang MZ. Effects of diel temperature fluctuations on growth, oxygen consumption and proximate body composition in the sea cucumber Apostichopus japonicus Selenka. 
Aquaculture. 2006;255:514–21. Hokkaido Research Organization. The project to increase of releasing sea cucumber, Stichopus japonicus. Sapporo: Hokkaido Research Organization; 2012. p. 64. Jeong UC, Anisuzzaman M, Jin F, Choi JK, Han JC, Choi BD, Kang SJ. Effect of High Water Temperature on the Growth and Lipid Compositions of the Sea Cucumber Apostichopus japonicus. Korean J Fish Aquat Sci. 2019:52(4);400–407. https://doi.org/10.5657/KFAS.2019.0400. Ji T, Dong YW, Dong SL. Growth and physiological responses in the sea cucumber, Apostichopus japonicus Selenka: aestivation and temperature. Aquaculture. 2008;283:180–7. Kang SJ, Kang SW, Kang JH, Jung WC, Jin SD, Choi BD, Han JC. Sea cucumber aquaculture technology. Seoul: Aquainfo; 2012. p. 426. Korea Fisheries Resources Agency. The development project of sea cucumber island in Yangyang-gun A: Korea Fisheries Resources Agency, FIRA-PR-2013-027; 2013a. p. 173. Korea Fisheries Resources Agency. The plan report for the establishment of sea cucumber aquacultural complex: Korea Fisheries Resources Agency, FIRA-PR-2013-032; 2013b. p. 119. Korea Fisheries Resources Agency. Eco-friendly seedling test of sea cucumber, Stichopus japonicas: Korea Fisheries Resources Agency; 2013c. p. 266. Korea Fisheries Resources Agency. 2014 Fisheries seedling management guidelines: Ministry of Oceans and Fisherie, FIRA-PR-2014-050; 2014a. p. 35. Korea Fisheries Resources Agency. Estimation of optimum releasing rate of sea cucumber, Stichopus japonicas: Korea Fisheries Resources Agency, FIRA-PR-2014-022; 2014b. p. 61. Korean Statistical Information Service. 2019. Production amount of sea cucumber in Republic of Korea from 1990 to 2019. http://www.kostat.go.kr. Accessed 18 Dec 2019. Lee CH, Lee DH, Kwak SN, Kim HW. A preliminary study in habitat characteristics and settlement of released sea cucumber, Stichopus japonicus in the coastal waters of Korean peninsula. J Fish Resour Manag. 2013;3:113–27. Lee JW, Gill HW, Lee DH, Kim JK, Hur JW. Variations of size and density of sea cucumber (Stichopus japonicus) released to the habitat conditions. Ocean Polar Res. 2018;40(2):69–75. Ministry of Agriculture, Food and Rural Affairs. The guidebook for aquaculture of sea cucumber, Stichopus japonicas, vol. 134. Sejong: Ministry of Agriculture, Food and Rural Affairs; 2011. Park KI, Kim YJ, Kim DH. Analyzing Economic Effectiveness of the Sea Cucumber Seed Releasing Program in Gyeongsangbuk-do Region. Journal of Fisheries Business Administration. 2013;44(1);81–90. https://doi.org/10.12939/FBA.2013.44.1.081. Ru XS, Zhang LB, Li XN, Liu SL, Yang HS. Developing strategies for the sea cucumber industry in China. J Oceanol Limnol. 2019;37(1):300–12. https://doi.org/10.1007/s00343-019-7344-5. Sui X. Culture and enhance of sea cucumber. Beijing: China Agriculture Publishing House; 1990. (In Chinese). Yanagisawa T. Aspects of the biology and culture of the sea cucumber. In: Tropical mariculture. London: Academic; 1998. p. 291–308. Yang HS, Yuan XT, Zhou Y, Mao YZ, Zhang T, Liu Y. Effects of body size and water temperature on food consumption and growth in the sea cucumber Apostichopus japonicus (Selenka) with special reference to aestivation. Aquac Res. 2005;36:1085–92. Yu DX, Song BX. Variation of survival rates and growth characteristics of pond cultural juvenile Apostichopus japonicus. J Fish Sci China. 1999;6:109–10 (In Chinese with English abstract). Zhang XF, Liu Y, Li Y, Zhao XD. 
Identification of the geographical origins of sea cucumber (Apostichopus japonicus) in northern China by using stable isotope ratios and fatty acid profiles. Food Chem. 2017;218:269–76. Cheol Lee and Sang-Duk Choi contributed equally to this work. Division of Logistics and International Commerce, Chonnam National University, Yeosu, 59626, South Korea Cheol Lee Division of Marine Technology, Chonnam National University, Yeosu, 59626, South Korea Sang Duk Choi The authors contributed to this manuscript as follows: CL contributed to the design of the study and cost-benefit analysis and sensitivity analysis, and writing of the manuscript. CDC contributed to drafting and reviewing of the manuscript. All authors reviewed, edited, and approved the manuscript for submission. Correspondence to Sang Duk Choi. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Lee, C., Choi, S.D. Optimal growth conditions and economic analysis of sea cucumber releasing. Fish Aquatic Sci 23, 11 (2020). https://doi.org/10.1186/s41240-020-00151-0 Optimal growth condition Seedling release Ecology and Fisheries resource management
The Dynamical Prediction of Wind Tides on Lake Erie DOI:10.1007/978-1-940033-54-9_1 In book: The Dynamical Prediction of Wind Tides on Lake Erie (pp.1-44) George W. Platzman

Lake Erie is an enclosed, shallow sea with approximate mean dimensions of 60 feet in depth, 240 miles in length, and 40 miles in width. It is located in the region of confluence of the principal winter-time tracks associated with Alberta and Colorado lows, and therefore is exposed to wind action from severe cyclonic storms many of which reach their full intensity while well within range of influence upon the Lake. The associated wind tides are well known and in extreme cases have produced wind set-up in excess of 13 feet difference between Buffalo and Toledo at opposite ends of the longitudinal axis. In this investigation numerical computations have been made for nine cases of record of extreme wind tide on Lake Erie. The computations are based upon an approximate, two-dimensional form of the Ekman boundary-layer equations, in which the viscous dimension is parameterized by an Ekman number. Effects of gravity, friction (with an eddy viscosity of 40 cm² sec⁻¹) and the earth's rotation are included. The prediction equations are amenable to numerical integration by standard methods applicable to the momentum form of the dynamical equations; a pair of conjugate Richardson lattices is used for this purpose. Wind stress was obtained by an interpolation procedure based upon hourly surface-wind observations at six first-order stations located on the periphery of the Lake. A quadratic resistance formula with skin-friction coefficient 3.0 × 10⁻⁸ gave good results for computed wind set-up. Although prediction of resurgences associated with the 14-hr free period was unsatisfactory, the average coefficient of correlation obtained between computed and observed set-up at various stations where hourly lake-level data are available is greater than 0.90. In general, the results of the investigation may be regarded as confirming that a sound basis exists for operational prediction of wind tides on Lake Erie by dynamical methods.

... A different procedure has been employed by KIELMANN and KOWALIK [31]. The most elegant method has, however, been proposed by PLATZMAN [43]. He treats the time derivative ∂(·)/∂t in (9.153) as if it were a real (or complex) number and writes (9.153) for constant ν as ... ... (9.174) Equation (9.173) is in the desired form. The four coefficient functions have been plotted by PLATZMAN [43] as functions of σ₀. These graphs show that |D| is small in comparison to the moduli of the other coefficients for all values of σ₀. ... ... The distinguished feature of this equation is that it has been obtained without taking side boundaries into account. In PLATZMAN's [43] words: 'An important step in delineating predictive characteristics of this equation is the determination of frequencies of free modes of vibration. In general, it is very difficult to do this if one requires that boundary conditions be met on the sides of a basin of even the simplest shape.' ...
Vertical Structure of Wind-Induced Currents in Homogeneous and Stratified Waters Columban Hutter Yongqi Wang Irina Chubarenko In this chapter the intention is to describe the vertical and (eventually) also horizontal structure of the horizontal current in lakes which are subjected to external wind forces. The water will be assumed to be homogeneous or stratified in two layers, and the internal friction and the effects of the rotation of the Earth will play an important role in the establishment of the current distribution. ... We are not interested in the temporal development of the flow field, which can be studied with time-varying solutions specifically designed for that purpose (e.g. Fjeldstad, 1930; Hidaka, 1933; Platzman, 1963; Madsen, 1977). The numerical results show that in many cases an equilibrium condition is achieved after a spin-up time that varies from some hours to some days of constant and uniform wind, a condition that many lakes experience from local breezes to large-scale synoptic winds. ... The path from Ekman's original solution to the description of steady currents in real enclosed basins was certainly long, but for the range of intermediate conditions several generalizations of the original Ekman theory have been proposed (see also Simons (1980), Hutter et al. (2011) and Defant (1962) for a review). Finite-depth and time-dependent solutions followed the initial suggestion available in Ekman's work (Fjeldstad, 1930; Hidaka, 1933; Platzman, 1963; Madsen, 1977), together with the inclusion of horizontal pressure gradients associated with wind set-up (Welander, 1957; Birchfield, 1972), which were particularly relevant for the case of enclosed basins and were successfully applied to describe circulation in large lakes (Gedney and Lick, 1972). Additional complexity was added by taking into account density stratification (Lee and Ligget, 1970), with relevant contributions from estuarine studies (Kasai et al., 2000; Valle-Levinson et al., 2003), and non-constant vertical turbulence (Thomas, 1975; Madsen, 1977; Svensson, 1979). ... What makes an elongated lake 'large'? Scales from wind-driven steady circulation on a rotating Earth J GREAT LAKES RES Marina Amadori Sebastiano Piccolroaz Henk A. Dijkstra Marco Toffolon When investigating wind-induced steady circulation, the effect of the acceleration due to Earth's rotation is often neglected in narrow lakes, but the argument behind this assumption is blurred. Commonly, when the horizontal dimension is smaller than the Rossby radius, the Coriolis force is considered unimportant, but this is correct only for inertial currents and barotropic and baroclinic waves. In this work, we revisit the classical Ekman transport solution for wind stress acting along the main axis of an elongated lake in steady-state conditions. We demonstrate that a secondary circulation develops and that the resulting crosswise volume transport, constrained in the closed domain, produces downwelling and upwelling that cannot be predicted by the standard Ekman formulas. We claim that the Rossby radius does not play any role in this process, which on the contrary is governed by the ratio between the actual depth and the thickness of the Ekman layer. The theoretical analysis is supported by numerical experiments to show the dependence on latitude, width, depth and turbulence closure.
... This procedure appears most convenient for dealing with variable stratification and topography combined with baroclinicity. In the horizontal, the variables are staggered to form a lattice structure (Platzman, 1963). In a horizontal plane, the free surface, the vertical velocity, and the density are located at the center of squares, at the sides of which the velocity components are defined. ... As wind stress was used with a coefficient equal to 1.6 × 10⁻³. A linear bottom stress, inversely proportional to the square of the depth with a coefficient of 50 cm²/s, was incorporated in the first run, approximating Platzman's (1963) solution of the Ekman equations. The results were averaged over 28 hours, eliminating inertial motions with periods of about 14 hours, as well as the major contributions of the free surface modes with periods a little over 1 day. ... Wind-driven circulations in the southwest Baltic T. J. Simons
Breaching of coastal barriers under extreme storm surges and implications for groundwater contamination: State of the Art Report. Saber Abdelaal Hocine Oumeraci In many countries, coastal dunes have proven environmentally and functionally efficient as important components of entire defence systems, which generally consist of both man-made and natural barriers against coastal flood and erosion. However, for dunes and natural barriers, such as sand spits and narrow islands, which may constitute an important part of the entire defence line, no systematic research has yet started to assess the safety under extreme storm conditions and the consequences of possible barrier breaching and overwash on the subsequent flooding and saltwater intrusion into the groundwater. Even the existing state-of-the-art prediction models for the erosion and breaching of dunes and coastal barriers (e.g. XBeach) still have serious limitations. This report attempts to summarize the current knowledge on all relevant aspects related to the processes and models associated with (i) the storm surges, (ii) the initiation and development of breaches in dunes and similar coastal barriers induced by extreme water levels and waves, (iii) the flood wave propagation and inundation of the hinterland resulting from the barrier breaching, and (iv) the infiltration of saltwater into coastal aquifers and contamination of groundwater. A particular focus is put on the coastal processes that might initiate a coastal barrier breaching either from the seaward side, such as wave impact (wave collision), wave run-up and run-down, or from the landward side, such as overtopping, overflow, combined flow, seepage, overwash, and washover processes. A brief overview of the state-of-the-art breaching models, their capabilities and their limitations is provided. Based on this overview, the XBeach model has been selected as the most appropriate model to study the breaching of coastal barriers under extreme storm surges. A very detailed overview of the XBeach model and its limitations is presented in order to improve the model in the future during this PhD study. The coastal flooding of the hinterland is recognised as a direct consequence of the breaching. Therefore, coastal flood risk and flood propagation modelling are also discussed in order to predict the inundation extent and the resulting inundation depths. It is found that saltwater inundation, owing to coastal barrier breaching, results in aquifer salinization. Furthermore, the natural remediation of contaminated aquifers takes up to 20 years. Consequently, groundwater contamination, as an intangible and indirect damage resulting from coastal flooding, should be added to the overall flood damages in order to estimate coastal flood risks more precisely. In order to accommodate the variety of processes (breaching, inundation, and infiltration and solute transport) and to simplify the transfer of flow and transport boundary conditions from one process to another, the possibility of coupling the modelling of these processes is considered. It is found that coupling the modelling of these processes is feasible through XBeach. Wind‐driven circulations in the southwest Baltic Tayler Simons Wind-driven circulations in the southwest Baltic are analyzed with reference to the combined effects of topography and stratification. Numerical results from a multi-layer nested model are compared with observations taken during the spring of 1975. It appears that the bathymetry and stratification of the southwest Baltic lead to substantial coupling of topographic and baroclinic effects.
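Several of the wind-tide studies cited below rest on the steady, depth-averaged balance between water-surface slope and wind stress, dη/dx = τs/(ρw g h), usually with a quadratic-drag wind stress τs = ρa CD U². The short Python sketch below evaluates this balance for a hypothetical shallow lagoon; the drag coefficient, fetch, depth and wind speed are assumed values, not data from the monitored sites.

```python
# Minimal sketch of the steady, depth-averaged wind set-up balance referred to in the
# wind-tide studies cited below: surface slope = tau_s / (rho_w * g * h), with a
# quadratic-drag wind stress tau_s = rho_air * C_D * U^2. All numbers are assumptions.
RHO_AIR = 1.2        # kg/m^3
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def wind_stress(u10_m_s, drag_coefficient=1.5e-3):
    """Surface wind stress (N/m^2) from a constant drag coefficient (assumed value)."""
    return RHO_AIR * drag_coefficient * u10_m_s**2

def wind_setup(u10_m_s, fetch_m, depth_m, drag_coefficient=1.5e-3):
    """End-to-end set-up (m) over a basin of uniform depth, from the steady 1-D balance."""
    slope = wind_stress(u10_m_s, drag_coefficient) / (RHO_WATER * G * depth_m)
    return slope * fetch_m

if __name__ == "__main__":
    # Hypothetical shallow lagoon: 15 km fetch, 1.5 m deep, 20 m/s wind.
    print(f"set-up = {wind_setup(20.0, 15_000.0, 1.5):.3f} m")
```

With these assumptions the end-to-end set-up comes out at roughly 0.7 m, which also makes plain why the same wind raises a much larger set-up in a shallower basin.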
... A series of studies have been carried out to quantify, understand and model wind tides in the field. Among them, we can cite Kenney (1979) in Lake Winnipeg in Canada, Hellström (1941); Platzman (1963); Keulegan (1953); Gillies (1959) in Lake Erie in Canada and the United States, Farrer (1957); Kivisild (1954); Saville (1952) in Lake Okeechobee in the United States, Harris (1957); Platzman (1965); Hugues (1965) in Lake Michigan in the United States, Nomitsu (1935); Hayami et al. (1996) in Lake Biwa in Japan, Nomitsu (1935) in Lake Kariba in Zambia and Zimbabwe, De Lauro et al. (2018) in Venice lagoon in Italy or Metler et al. (1975) in Lake Balaton in Hungary. ... Wind tides and surface friction coefficient in semi-enclosed shallow lagoons ESTUAR COAST SHELF S Emmanuelle Migne Caroline Paugam Damien Sous Vincent Rey The present paper is specifically focused on enclosed or semi-enclosed basins where the wind is the dominant driver of water surface tilting, leading to the so-called wind tide contributing to water level rise. Wind-induced free surface tilting is studied using the 1-D steady form of the depth-averaged shallow water (Saint-Venant) momentum equation which reflects the depth-averaged local balance between surface slope and wind stress. Two contrasted field sites, the Berre and Vaccarès lagoons, have been monitored providing water level data along a reference axis. This study highlighted the occurrence of wind tides at the two field sites. The bimodal wind exposure ensured the robustness of the observations, with non-linear but symmetric behaviour patterns observed for winds from opposite directions. It is observed that the higher the wind speed, the steeper the slope of the free surface, in accordance with the well-known basic trend. In addition, a significant effect of depth is observed, with greater surface tilting in the shallower lagoon. The data analysis confirmed the robustness of such a simple approach in the present context. Using the additional assumption of constant, i.e. wind-independent, drag coefficients (CD) allowed a good match with the observations for moderate wind speeds for both sites. However, the depth effect required the CD to be increased in the shallower basin. Classical empirical wind-dependent CD parameterizations provide better wind-tide predictions than the constant-CD approach in very strong wind conditions but totally fail in predicting surface tilting in the shallower site, suggesting that physical parameters other than wind speed should be taken into account for the CD parameterization in very shallow lagoons. ... While hurricane-induced wind set-up events are more extreme than the types of events involved in the 2011 Richelieu River flooding, Chimney (2005) suggests that the use of calibrated, relatively simple steady-state models can lead to satisfactory results, even in a similar context. A study by Platzman (1963) used more complex dynamic models to analyse wind set-up on Lake Erie. ... A Semi‐Empirical Wind Set‐up Forecasting Model for Lake Champlain Guillaume Loiselle Jean-Luc Martel Annie Poulin Richard Arsenault The precision of Lake Champlain's water level estimation is a key component in the flood forecasting process for the Richelieu River. Hydrological models do not typically take into consideration the effects of the wind on the water level (also known as the wind set‐up). The objective of this study is to create an empirical wind set‐up forecast model for Lake Champlain during high wind events. The proposed model uses wind speed and direction across the Lake, as well as wind gusts as inputs.
The model is calibrated to a subset of observations and evaluated on an independent sample, considering four wind speed bins. It is tested and compared to a variant of the Zuider Zee equation on twenty wind set‐up events that occurred between 2017 and 2019 using hindcast data from five different numerical weather prediction systems (GDPS, RDPS, HRDPS, NOAA and ECMWF). A quantile mapping‐based forecast calibration scheme is implemented for each of the forecast products to correct their biases. Results show that events are successfully predicted by the proposed model at least 72 hours in advance. These results are better than the other comparative models found in the literature and tested herein. Overall, significant improvements are obtained by including wind speed and wind gusts from different weather stations. This article is protected by copyright. All rights reserved. ... The aim of the study is to understand the wind effect on mean water level variation in semienclosed shallow basins. The studied physical phenomenon is nearly steady water surface tilting due to wind stress, the so-called wind tide (Platzman (1963)). During strong wind conditions, wind tides can have significant consequences on low-lying areas such as submersion and flooding. ... FIELD STUDY OF WIND TIDE IN SEMI-ENCLOSED SHALLOW BASINS Samuel Meulé The aim of the study is to understand the wind effect on mean water level variation in semi-enclosed shallow basins. The studied physical phenomenon is nearly steady water surface tilting due to wind stress, the so-called wind tide (Platzman (1963)). During strong wind conditions, wind tides can have significant consequences on low-lying areas such as submersion and flooding. Two field sites are monitored in the S-E of France to characterize wind tides and more specifically to understand the relative effect of wind magnitude and depth on the mean water level dynamics.Recorded Presentation from the vICCE (YouTube Link): https://youtu.be/Q30I0taty9w ... The second group predicts storm surge numerically by using fluid dynamics and atmospheric driving forces [15]. Two-dimensional depth-integrated hydrodynamic models in the Cartesian structure grid system are commonly utilized to simulate storm surge, where governing equations are nonlinear shallow water equations with Boussinesq and hydrostatic approximations (e.g., [23]). The driving forces (i.e., pressure and wind fields) can be specified using parametric cyclone models [24,25] for hindcast/diagnostic purposes. ... Long-Lead-Time Prediction of Storm Surge Using Artificial Neural Networks and Effective Typhoon Parameters: Revisit and Deeper Insight wei-ting Chao Chih-Chieh Young T.-W. Hsu Chian-Yi Liu Storm surge induced by severe typhoons has caused many catastrophic tragedies to coastal communities over past decades. Accurate and efficient prediction/assessment of storm surge is still an important task in order to achieve coastal disaster mitigation especially under the influence of climate change. This study revisits storm surge predictions using artificial neural networks (ANN) and effective typhoon parameters. Recent progress of storm surge modeling and some remaining unresolved issues are reviewed. In this paper, we chose the northeastern region of Taiwan as the study area, where the largest storm surge record (over 1.8 m) has been observed. 
To develop the ANN-based storm surge model for various lead-times (from 1 to 12 h), typhoon parameters are carefully examined and selected by analogy with the physical modeling approach. A knowledge extraction method (KEM) with backward tracking and forward exploration procedures is also proposed to analyze the roles of hidden neurons and typhoon parameters in storm surge prediction, as well as to reveal the abundant, useful information covered in the fully-trained artificial brain. Finally, the capability of the ANN model for long-lead-time predictions and the influence of the controlling parameters are investigated. Overall, excellent agreement with observations (i.e., the coefficient of efficiency CE > 0.95 for training and CE > 0.90 for validation) is achieved in one-hour-ahead prediction. When the typhoon affects coastal waters, contributions of wind speed, central pressure deficit, and relative angle are clarified via influential hidden neurons. A general pattern of maximum storm surge under various scenarios is also obtained. Moreover, satisfactory accuracy is successfully extended to a much longer lead time (i.e., CE > 0.85 for training and CE > 0.75 for validation in 12-h-ahead prediction). Possible reasons for further accuracy improvement compared to earlier works are addressed. ... Storm surge is the combined effect of (a) low atmospheric pressure allowing water to increase in elevation, an effect that is significant in large storms (Feagin et al. 2010), and (b) wave-induced set-up, as driven by radiation stress (Longuet-Higgins and Stewart 1962). Wind tides are changes in water level that occur within enclosed basins, such that the wind drives currents which push water into or out of the basins (Keulegan 1951; Platzman 1963). Surge is predominantly a regional-scale process; to model it requires knowledge of atmospheric pressure, waves, wind, and bathymetry. ... Enhanced tide model: Improving tidal predictions with integration of wind data Thomas P. Huff Rusty A. Feagin Jens Figlus Publicly available tidal predictions for coastlines are predominantly based on astronomical predictions. In shallow water basins, however, water levels can deviate from these predictions by a factor of two or more due to wind-induced fluctuations from non-regional storms. To model and correct these wind-induced tidal deviations, a two-stage empirical model was created: the Enhanced Tidal Model (ETM). For any given NOAA tide gauge location, this model first measured the wind-induced deviation based on a compiled dataset, and then adjusted the astronomical predictions into the future to create a 144-hour forecast. The ETM, when incorporating wind data, had only 76% of the error of NOAA astronomical tidal predictions (e.g. if NOAA had 1.0 ft. of error, ETM had only 0.76 ft. error from the observed water level). Certain ETM locations had approximately half (49%) as much prediction error as NOAA. With the improvement in tidal prediction accuracy, the ETM has the ability to significantly aid in navigation along with coastal flood prediction. We envision the ETM as a resource for industry and the public to make informed decisions that impact their livelihood. ... The SPLASH model has been partially documented in three publications: the mathematical technique in Jelesnianski (1967), and the operational techniques to run and interpret the results for forecasting purposes in Jelesnianski (1972, 1976). The mathematical techniques are adapted from Platzman (1963).
The tropical storm model used to generate surges is discussed in Jelesnianski and Taylor (1973). ... SPLASH - A MODEL FOR FORECASTING TROPICAL STORM SURGES Celso S. Barrientos CHESTER P. JELESNIANSKI A significant portion of the damage caused by hurricanes is due to storm surges. The National Weather Service has developed a dynamical-numerical model to forecast hurricane storm surges. The model is used operationally for prediction, warning, and planning purposes. The model requires fixed oceanographic and real-time meteorological input data. The oceanographic data were prepared for the Gulf and East coasts of the U.S. and are stored as an essential part of the program. Meteorological data for any tropical storm are supplied by the forecasters or planners using the model. The model was applied to Hurricane Camille (1969). Comparison between the observed and computed surges for Camille was satisfactory for prediction purposes. ... The majority of the research on surface seiche damping has been based upon the measurements made by Endrös. Despite numerous reproductions of and references to these measurements [3, 14–18], no verification of the values he obtained, reproduced in "Appendix 1" (Table 4), has ever been carried out. This paper seeks to present a method through which the damping ratios and periods of surface seiches may be extracted using existing data without the need for labor-intensive and subjective visual inspection of water elevation records. ... A novel technique for experimental modal analysis of barotropic seiches for assessing lake energetics ENVIRON FLUID MECH Zachariah Wynne Thomas Reynolds Damien Bouffard Danielle J. Wain Basin scale seiches in lakes are important elements of the total energy budget and are a driver of fluxes of important ecological parameters, such as oxygen, nutrients, and sediments. At present, the extraction of the damping ratios of surface seiches, which are directly related to the capacity of seiches to drive these fluxes through the increased mixing of the water column, is reliant on spectral analysis, which may be heavily influenced by the transformation of water level records from the time domain to the frequency domain, and which is sensitive to the level of noise present within the data. Existing spectral-based methods struggle to extract the periods of surface seiches which are of similar magnitude due to the overlap between their spectral responses. In this study, the principles of operational modal analysis, through the random decrement technique (RDT), currently used primarily in the analysis of high-rise structures and in the aeronautical industry and not previously applied within the fields of limnology or ecology, are applied to barotropic seiches through the analysis of water level data for Lake Geneva, Switzerland, and Lake Tahoe, USA. Using this technique, the autocorrelation of the measurements is estimated using the RDT and modal analysis can then be carried out on this time-domain signal to estimate periods of the dominant surface seiches and the corresponding damping ratios. The estimated periods show good agreement with experimental results obtained through conventional spectral techniques and consistent damping ratios are obtained for the dominant surface seiche of Lake Tahoe. The effect of input parameters is discussed, using data for the two lakes, alongside discussion of the application of RDT to the study of internal seiches and current barriers to its application.
RDT has great potential for the analysis of both surface and internal seiches, offering a method through which accurate damping ratios of seiche oscillations may be obtained using readily available data without necessitating spectral analysis. ... For the first time, a viscous damping effect in the sea current dynamics was studied by Ekman (1905), then by Welander (1957), Platzman (1963), Jelesnianski (1970), and Jordan and Baker (1980). Mofjeld (1980) also investigated the influence of vertical viscosity on the propagation of barotropic Kelvin wave. ... Viscous waves on a beta-plane and its zonal asymmetry V. N. Zyryanov Marianna Chebanova Effects of the viscosity, Earth rotation, and sphericity (beta-effect) on the long-wave dynamics are investigated based on the linear model. The basic equation for the complex amplitudes of gravitational long waves is obtained. It is shown that the viscosity plays a significant role in the long-wave dynamics. Stokes' layer thickness is the criterion which separates two regimes of long-wave evolution: low viscosity and viscous flows. Two Stokes' layers occur in the rotating fluid. The thickness of the first approaches to infinity as the frequency tends to inertial frequency. Considering the role of the Stokes' layer as a criterion of viscosity influence, we can conclude that for the waves of the near-inertial frequency, viscosity always plays a significant role irrespective of ocean depths. The beta-effect leads to the planetary drift velocity occurrence in the equation. The planetary drift velocity can have either eastward or westward direction depending on the wave frequency. Thus, Earth sphericity causing the planetary drift plays a major role in the asymmetry of the eastward and westward directions in wave dynamics. Friction is another reason for the asymmetry of the eastward and westward directions in wave dynamics. Damping decrements of the westward and eastward waves differ with the biggest difference for waves with the near-inertial frequencies. Group velocities of eastward and westward waves are nonsymmetrical too. Moreover, in a certain range of the near-inertial frequencies, group velocities of both westward and eastward waves are directed exceptionally eastward. Thus, the beta-effect and fluid viscosity can be the reasons for the asymmetry of western and eastern bays in the tidal wave dynamics. ... The horizontal transport equations are solved through the application of the Navier-Stokes momentum equations for incompressible and turbulent flow and solved for the pressure, Coriolis and frictional forces every time step. The governing equations are integrated over the entire depth of the water column as derived by Platzman 1963, in which the dissipation is determined solely by an eddy viscosity coefficient, and modified with a bottom slip coefficient proposed by Jelesniansky (1967). The bottom stress is not determined by the depth-averaged velocity. ... Hurricane flood risk assessment for the Yucatan and Campeche State coastal area NAT HAZARDS Wilmer Rey Ernesto Tonatiuh Mendoza Paulo Salles Gemma L. Franklin In this study, the first ever Sea, Lake, Overland Surges from Hurricanes (SLOSH) grid was built for the Yucatan Peninsula. The SLOSH model was used to simulate storm surges in the coastal area of the states of Yucatan and Campeche (Mexico). Based on climatology, more than 39,900 hypothetical hurricanes covering all possible directions of motion were synthesized. 
The storm intensity (category), forward speed, radius of maximum winds and the tide anomaly were varied for each hypothetical track. According to these scenarios, the potential storm surge and associated inundation threat were computed. Subsequently, the Maximum Envelope of Water (MEOW) and the Maximum of the MEOWs (MOMs) were calculated to assess the flood hazard induced by tropical cyclones under varying conditions. In addition, for each MOM, the socioeconomic vulnerability aspects were taken into account in order to assess the hurricane flood risk for the states of Yucatan and Campeche. Results show that the most vulnerable areas are the surroundings of Terminos lagoon, Campeche City and its neighboring areas in the state of Campeche. For Yucatan, the towns located in the Northwest (Celestun, Hunucma and Progreso) and the eastern part of the state presented the highest risk values. The methodology used in this study can be applied to other coastal zones of Mexico as well as places with similar attributes. Furthermore, the MEOW and MOM are very useful as a decision-making tool for prevention, preparedness, evacuation plans, mitigation of the flood hazard and its associated risk, and also for insurance companies. ... For Lake Ontario, Simons (1973a, b) gave a value of 2 × 10⁴–4 × 10⁴ cm² s⁻¹ in shallow as well as deep water, and this leads to B = 50/D² to 100/D² in C.G.S. units. Another approach for prescribing the bottom stress is to specify the vertical turbulent diffusion of momentum by a constant eddy viscosity ν. Platzman (1963) deduced a bottom friction coefficient as a function of the Ekman number, D√(f/2ν), in such a way that B → 0 for great depths and B = 2.5ν/D² for shallow water. For Lake Erie, Platzman took ν = 40 cm² s⁻¹, which gives B = 100/D² in C.G.S. units. ... A review of cyclone track shifts over the Great Lakes of North America: implications for storm surges Tew-Fik Mahdi Gaurav Jain Shay Patel Aman Sidhu Cyclone tracks over the Great Lakes of North America shift, both East–West as well as North–South. The reasons for the shifts are various small-scale as well as large-scale processes associated with the general circulation of the atmosphere. The East–West shift has an approximate periodicity of 10 years, while the North–South shift occurs roughly with a periodicity of 20 years. The East–West shift is more important than the North–South shift. The amount of shift could be as much as a few hundred kilometers. The implication of these shifts for storm surges in the Great Lakes is considered. ... Taylor 1919 [9]. Ramming and Kowalik 1980 [7]; however, recall that (23.5)₂ is only one possible form of the functional relation between R and u; see in particular Platzman's parameterization [6], treated in detail in Vol. I, Chap. ... Barotropic Wind-Induced Motions in a Shallow Lake In this chapter the intention is to describe the horizontal velocity distribution in a homogeneous lake by the spatially three-dimensional dynamical equations, based on the hydrostatic pressure assumption on the one hand, and their spatially two-dimensional depth-integrated reduction on the other hand. Comparison of the two sets of solutions for wind forcing, uniform in space and Heaviside in time, from various directions discloses the conditions when the depth-averaged equations likely yield valid approximations of the three-dimensional situation. Lake Zurich is used as an example.
The extensive computations reveal that the problem of approximate determination of the barotropic velocity distribution in a homogeneous lake needs careful scrutiny. We shall analyze this problem by applying layered versions of the equations of motion in the hydrostatic pressure assumption to Lake Zurich and comparing the wind-induced current distribution obtained for a number of wind scenarios of a one layer and an eight-layer model. It may be deduced that depth integrated models deliver horizontal currents in homogeneous lakes of extremely shall depths (ca 5 m) only. ... The equations of motion in the Cartesian coordinate system used in SLOSH were first developed by Platzman (1963) and later modified by Jelesnianski (1967) to include a bottom slip coefficient: ... Joint Distributions of Hurricane Wind and Storm Surge for the City of Charleston in South Carolina Nadarajah Ravichandran Bin Pei Weichiang Pang Firat Y Testik Major coastal cities, which have large populations and economies, easily suffer from the losses due to hurricane wind and storm surge hazards. Although current design codes consider the joint occurrence of high wind and surge, information on site specific joint distributions of hurricane wind and storm surge along the U.S. Eastern Coast and Gulf of Mexico is still sparse and limited. In this paper, joint probability distributions of combined hurricane wind and storm surge for the City of Charleston, SC is developed. A stochastic hurricane model was used to simulate 5,000 years of synthetic hurricanes. The simulated hurricanes were inputted into the ADCIRC (Advanced Circulation) surge prediction model to compute the surge heights at selected locations. The calculated peak wind speeds and surge heights were employed to generate the joint probability distributions at each location. These joint distributions developed can be used in a multi-hazard design or risk assessment framework to consider the combined effects of hurricane wind and storm surge hazards. Mapping joint hurricane wind and surge hazards for Charleston, South Carolina Fangqian Liu Combined effects of hurricane wind and surge can pose significant threats to coastal cities. Although current design codes consider the joint occurrence of wind and surge, information on site-specific joint distributions of hurricane wind and surge along the US Coast is still sparse and limited. In this study, joint hazard maps for combined hurricane wind and surge for Charleston, South Carolina (SC), were developed. A stochastic Markov chain hurricane simulation program was utilized to generate 50,000 years of full-track hurricane events. The surface wind speeds and surge heights from individual hurricanes were computed using the Georgiou's wind field model and the Sea, Lake and Overland Surges from Hurricanes (SLOSH) model, respectively. To validate the accuracy of the SLOSH model, the simulated surge levels were compared to the surge levels calculated by another state-of-the-art storm surge model, ADCIRC (Advanced Circulation), and the actual observed water elevations from historical hurricane events. Good agreements were found between the simulated and observed water elevations. The model surface wind speeds were also compared with the design wind speeds in ASCE 7-10 and were found to agree well with the design values. Using the peak wind speeds and maximum surge heights, the joint hazard surfaces and the joint hazard maps for Charleston, SC, were developed. 
As part of this study, an interactive computer program, which can be used to obtain the joint wind speed and surge height distributions for any location in terms of latitude and longitude in the Charleston area, was created. These joint hazard surfaces and hazard maps can be used in a multi-hazard design or risk assessment framework to consider the combined effects of hurricane wind and surge. ... By an analysis similar to the one discussed by Platzman [6], the stability condition for the system ... An Energy-Conserving Difference Scheme for the Storm Surge Equations Anita Sielecki A system of finite difference equations for storm surge prediction has been constructed, using forward time differences. The scheme was tested for special simple geometrical configurations, and it was found to be stable without introducing smoothing operators. The variation with time of the total energy was, in each case, the test of stability. The small-scale oscillation of the energy with time (characteristic of forward difference schemes) was studied in detail. A method of reducing this effect is suggested. A completely implicit finite difference scheme is discussed from the point of view of stability and convergence. It is shown how the requirement of a convergent iterative process actually introduces a severe restriction on the ratio Δt/Δs, thus canceling the advantages of the otherwise unconditionally stable implicit schemes. ... Most lakes are, however, not sufficiently large that the rotation of the Earth would, even in marginal form, play a role in the oscillation characteristics of homogeneous lakes. The Great Lakes, the Baltic Sea, the (semi)-bounded ocean basins (say, the Adriatic Sea in the Mediterranean Sea, the Black Sea, Lake Baikal, the Caspian Sea) are safe candidates where the rotation of the Earth plays a significant role. References on these are by Mortimer [35,37,38], in particular with emphasis on Lakes Michigan and Superior by Mortimer and Fee [42] and Mortimer [41], on Lake Erie by Platzman [44] and Platzman and Rao [50], on Lakes Ontario and Superior by Rao and Schwab [52] and on Lake Huron by Schwab and Rao [57]. Platzman [48] reports on a barotropic seiche analysis of the Atlantic and Indian Oceans and [49] on gravitational seiches of the entire World Ocean. ...
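To make the energy-based stability check described in the Sielecki abstract above concrete, the sketch below integrates the 1-D linearized shallow-water equations with a simple forward-time (forward-backward) scheme and prints the total energy as the run proceeds. It is an illustrative toy problem with assumed parameters, not a reproduction of the cited storm-surge scheme.

```python
# Sketch of a forward-time ("forward-backward") integration of the 1-D linearized
# shallow-water equations with the total energy tracked as a stability diagnostic,
# in the spirit of the energy test described above. Toy model with assumed values.
import numpy as np

G, H, RHO = 9.81, 20.0, 1000.0          # gravity, resting depth (m), water density
N, DX = 200, 500.0                      # number of cells and grid spacing (m)
C = np.sqrt(G * H)                      # gravity-wave speed
DT = 0.5 * DX / C                       # time step safely inside the Courant limit

eta = np.exp(-((np.arange(N) - N / 2) ** 2) / 50.0)   # initial hump of surface elevation (m)
u = np.zeros(N + 1)                     # velocities on cell faces; u[0] = u[N] = 0 (closed basin)

def total_energy(eta, u):
    """Potential + kinetic energy per unit width (J/m), used as the stability check."""
    kinetic = 0.5 * RHO * H * np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2)) * DX
    potential = 0.5 * RHO * G * np.sum(eta ** 2) * DX
    return kinetic + potential

for step in range(1, 2001):
    # forward step for velocity from the current surface slope ...
    u[1:-1] -= G * DT * (eta[1:] - eta[:-1]) / DX
    # ... then a "backward" step for elevation using the freshly updated velocity
    eta -= H * DT * (u[1:] - u[:-1]) / DX
    if step % 500 == 0:
        print(f"step {step:5d}: total energy = {total_energy(eta, u):.6e} J/m")
```

If the time step is pushed beyond the Courant limit, the printed energy grows without bound, which is exactly the behaviour such an energy diagnostic is meant to expose.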
Barotropic and Baroclinic Basin-Scale Wave Dynamics Affected by the Rotation of the Earth We have already given a detailed description of rotation-affected external and internal waves in idealized containers of constant depth: straight channels, gulfs, rectangles and circular and elliptical cylinders. Pure Kelvin and Poincaré waves were shown to describe the oscillating motion in straight channels and their combination yielded the solution of the reflection of the rotation-affected waves at the end wall of a rectangular gulf. The typical characterizations of Kelvin and Poincaré waves were seen to prevail (with some modification) in the fluid motion of rotating circular and elliptical cylinders with constant depth. The behaviour was termed Kelvin-type if for basin-scale dynamics the amplitudes of the surface and isopycnal displacements and velocities are shore-bound (i.e. large close to the boundaries and smaller in the interior of the basin), the motion cyclonic (that is counter-clockwise on the N.H.) and frequencies sub- or (less often) superinertial. Alternatively, for Poincaré-type behaviour, the surface and isopycnal displacements and velocities have large amplitudes in off-shore regions, their motion is anti-cyclonic and frequencies are strictly superinertial. ... Such investigations have been carried out by Hansen (1956) and Miyazaki (1965). Instead of treating bottom stress as an extrapolation of present forces, Platzman (1963) considered the time history of present and past forces using Ekman's theory and derived a differential operator for bottom stress in series form. Jelesnianski (1967) used a modification of this scheme in numerical computations of hurricane-generated storm surges along coastal areas. ... Bottom stress time history in linearized equations of motion for storm surges A transient Ekman's transport equation, in which bottom stress is formed as a convolution integral in terms of surface stress and surface slope, and a continuity equation are used as predictors to compute storm surges in a model basin. Driving forces in the basin are analytically computed, using a model storm to represent actual meteorological conditions. A coastal boundary condition that relates surface slope to surface stress is developed by balancing slope and drift transports normal to a vertical wall. At interior grid points of the basin, sea-surface heights are computed by numerical means, using the prediction equations. These sea-surface heights are then extrapolated to the coast to agree with the coastal surface slope given by the boundary condition. Coastal storm surges computed in this manner are compared with observed surges to test the model developed in this study. ... The above scheme is centrally differenced in time and uses the Jacobian difference operators devised by Arakawa (1966) to ensure that various integral constraints on the convective terms of the differential equations are preserved in the finite-difference formulation. Finally, the diffusion terms are evaluated at the preceding time level, as indicated by the subscript n−1, to prevent computational instability arising from these terms (Platzman 1963). The numerical procedure consists of successively solving (8)–(10) until the desired state of flow development, usually the steady state, has been reached. ... The role of dynamic pressure in generating fire wind J FLUID MECH B. R. Morton Lance M. Leslie Earlier models of fire plumes based on simple entrainment laws and neglecting dynamic pressure have failed to produce the relatively shallow inflow over the fire perimeter known as fire wind. This inflow is of prime importance in fire modelling as it normally provides much of the air required for combustion; for this reason we have carried out a very simple numerical experiment on two-dimensional natural convection above a strip heat source with the intention of simulating those aspects of fire behaviour involved in the generation of fire wind without attempting the formidably difficult task of detailed fire modelling. Our results show clearly that fire wind is driven by the dynamic pressure field which is generated by and intimately related to the region of strong buoyant acceleration close above the ground boundary. Throughout our parametric range there is a concentrated region of large horizontal pressure gradient in a neighbourhood above the perimeter of the fire, and elsewhere the pressure gradients play a lesser role. ...
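The "lagged" treatment of diffusion mentioned in the citation contexts above and below (evaluating the diffusion terms at time level n−1 in a centred-in-time integration) can be demonstrated on a toy 1-D advection-diffusion problem. The Python sketch below uses leapfrog time stepping with the diffusion term lagged by one level; all parameters are arbitrary assumed values and the code is not taken from the cited studies.

```python
# Sketch of lagged diffusion in a leapfrog (centred-in-time) integration: advection is
# evaluated at level n, diffusion at level n-1, which keeps the scheme linearly stable.
# Toy 1-D advection-diffusion problem on a periodic domain; all parameters are assumed.
import numpy as np

N, DX = 100, 1.0
C_ADV, NU = 1.0, 0.05                 # advection speed and diffusivity (assumed)
DT = 0.4 * DX / C_ADV                 # satisfies both the advective and diffusive limits here

x = np.arange(N) * DX
q_old = np.exp(-((x - 30.0) ** 2) / 20.0)     # level n-1
# one forward Euler step to obtain level n and start the leapfrog
q_now = q_old + DT * (
    -C_ADV * (np.roll(q_old, -1) - np.roll(q_old, 1)) / (2 * DX)
    + NU * (np.roll(q_old, -1) - 2 * q_old + np.roll(q_old, 1)) / DX**2
)

for step in range(500):
    advection = -C_ADV * (np.roll(q_now, -1) - np.roll(q_now, 1)) / (2 * DX)       # level n
    diffusion = NU * (np.roll(q_old, -1) - 2 * q_old + np.roll(q_old, 1)) / DX**2  # level n-1 ("lag")
    q_new = q_old + 2 * DT * (advection + diffusion)
    q_old, q_now = q_now, q_new

print(f"max |q| after 500 steps: {np.abs(q_now).max():.4f} (bounded -> linearly stable)")
```

Evaluating the diffusion term at level n instead, i.e. removing the lag, makes the leapfrog treatment of diffusion linearly unstable, which is the point of the device described in these excerpts.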
If these constraints are not observed, computational instability will arise owing to the uncontrolled aliasing of long and short wavelengths (Phillips 1959). The diffusion terms also need careful treatment, and Platzman (1963) has shown that the diffusion terms of (12) and (13) must be evaluated at times (n−1)Δt to ensure (linear) computational stability of the difference equations. In the notation of Lilly (1964) this is indicated by the subscript 'lag'. ... The development of concentrated vortices: A numerical study Amongst the more important laboratory experiments which have produced concentrated vortices in rotating tanks are the sink experiments of Long and the bubble convection experiments of Turner & Lilly. This paper describes a numerical experiment which draws from the laboratory experiments those features which are believed to be most relevant to atmospheric vortices such as tornadoes and waterspouts. In the numerical model the mechanism driving the vortices is represented by an externally specified vertical body force field defined in a narrow neighbourhood of the axis of rotation. The body force field is applied to a tank of fluid initially in a state of rigid rotation and the subsequent flow development is obtained by solving the Navier–Stokes equations as an initial-value problem. Earlier investigations have revealed that concentrated vortices will form only for a restricted range of flow parameters, and for the numerical experiment this range was selected using an order-of-magnitude analysis of the steady Navier–Stokes equations for sink vortices performed by Morton. With values of the flow parameters obtained in this way, concentrated vortices with angular velocities up to 30 times that of the tank are generated, whereas only much weaker vortices are formed at other parametric states. The numerical solutions are also used to investigate the comparative effect of a free upper surface and a no-slip lid. The concentrated vortices produced in the numerical experiment grow downwards from near the top of the tank until they reach the bottom plate whereupon they strengthen rapidly before reaching a quasi-steady state. In the quasi-steady state the flow in the tank typically consists of the vortex at the axis of rotation, strong inflow and outflow boundary layers at the bottom and top plates respectively, and a region of slowly-rotating descending flow over the remainder of the tank. The flow is cyclonic (i.e. in the same sense as the tank) in the vortex core and over most of the bottom half of the tank and is anticyclonic over the upper half of the tank away from the axis of rotation. Analysis of Flood Vulnerability and Transit Availability with a Changing Climate in Harris County, Texas TRANSPORT RES REC Joshua Pulcinella Arne M.E. Winguth Diane Jones Allen Niveditha Dasa Gangadhar Hurricanes and other extreme precipitation events can have devastating effects on population and infrastructure that can create problems for emergency responses and evacuation. Projected climate change and associated global warming may lead to an increase in extreme weather events that results in greater inundation from storm surges or massive precipitation. For example, record flooding during Hurricane Katrina or, more recently, during Hurricane Harvey in 2017, led to many people being cut off from aid and unable to evacuate.
This study focuses on the impact of severe weather under climate change for areas of Harris County, TX that are susceptible to flooding either by storm surge or extreme rainfall and evaluates the transit demand and availability in those areas. Future risk of flooding in Harris County was assessed by GIS mapping of the 100-year and 500-year FEMA floodplains and most extreme category 5 storm tide and global sea level rise. The flood maps have been overlaid with population demographics and transit accessibility to determine vulnerable populations in need of transit during a disaster. It was calculated that 70% of densely populated census block groups are located within the floodplains, including a disproportional amount of low-income block groups. The results also show a lack of transit availability in many areas susceptible to extreme storm surge exaggerated with sea level rise. Further study of these areas to improve transit infrastructure and evacuation strategies will improve the outcomes of extreme weather events in the future. Simple Two- and Three-Dimensional Flow Problems of the Navier-Stokes Equations This chapter begins with studying steady state layer flows through cylindrical conduits (ellipse, triangle, rectangle) and use of the Prandtl membrane analogy. This study of the Navier-Stokes fluids is important in geophysical fluid dynamics and is manifest in Ekman's theory and its extensions, where non-inertial effects chiefly influence the details of the fluid flow, evidenced in the Ekman spiral in atmospheric and oceanic boundary flows and in free geostrophic flows as their outer solutions. Extensions of the behavior exhibited by the assumption of constant (turbulent) viscosity are based on influences of depth dependence of the viscosity which influences the circulation pattern of such steady flows. Unsteady flows are analyzed for viscous flows along an oscillating wall and the growth of a viscous boundary layer as a response of an initial tangential velocity jump with time. The chapter closes with the study of an axial laminar jet and viscous flows in a converging two-dimensional channel. Modeling of Hydraulic Systems by Finite-Element Methods RALPH TA-SHUN CHENG This article attempts to present the state of the art of the finite-element modeling techniques in surface and subsurface hydraulic problems. Fundamentals of finite-element techniques are introduced and presented based on the convective-dispersion equation. Before discussing any applications, the author shows the Rayleigh-Ritz and the Galerkin finite-element formulations, their equivalence, and their essential difference. He examines in detail the finite-element approximations, and outlines the finite-element algorithm for preparing a computer code. Internal Waves in Lakes: Generation, Transformation, Meromixis – An Attempt at a Historical Perspective We review experimental and theoretical studies of linear and nonlinear internal fluid waves and argue that their discovery is based on a systematic development of thermometry from the early reversing thermometers to the moored thermistor chains. The latter (paired with electric conductivity measurements) allowed development of isotherm (isopycnal) time series and made the observation of large amplitude internal waves possible. 
Such measurements (particularly in the laboratory) made identification of solitary waves possible and gave rise to the emergence of very active studies of the mathematical description of the motion of internal waves in terms of propagating time-dependent interface motions of density interfaces or isopycnal surfaces. As long as the waves remain stable, i.e., do not break, they can mathematically be described for two-layer fluids by the Korteweg-de Vries equation and its generalization. When the waves break, the turbulent analogs of the Navier–Stokes equations must be used with appropriate closure conditions to adequately capture their transformation and flux of matter to depth, which is commonly known as meromixis. Mathematical Modelling of Global Storm Surges Problems T. S. Murty Storm surges are the world's foremost natural hazard. The global storm surge problems are reviewed, starting with global climatology, the tracks of tropical and extra-tropical cyclones, and the region where major surges occur. The storm surge prediction problems are discussed from a mathematical point of view, and the input data, boundary conditions and meteorological forcing terms are explained. The particular uses of various types of grids are elucidated; special problems such as inclusion of tidal flats and ice cover are considered. The similarities and differences between tropical cyclone-generated surges and extra-tropical cyclone-generated surges are discussed. Fundamental Equations and Approximations The fundamental physical principles governing the motion of lake waters are the conservation laws of mass, momentum and energy. When diffusion processes of active or passive tracer substances are also considered, these laws must be complemented by transport equations of tracer mass. All these statements have the form of balance laws and in each of them flux terms arise, for which, in order to arrive at field equations, phenomenological postulates must be established. Hydrodynamics of lakes can be described by a Navier-Stokes-Fourier-Fick fluid or its simplifications. Its field equations are partial differential equations for the velocity vector v, the pressure p, the temperature T and, possibly, the mass concentrations cα (α = 1, ..., N) of N different tracers (e.g. suspended sediment, phosphate, nitrate, salinity, etc.). Boundary conditions for v, p, T and cα must also be established; in view of the fact that surfaces may deform and that evaporation may occur, these are not altogether trivial. In fact the equations of motion of the free or of internal surfaces of density discontinuity — these are the so-called kinematic surface equations — serve as further field equations with the surface displacements as unknown boundary variables. Additional boundary conditions have to be formulated at the lake bottom and along the shore. The latter play a more significant role in physical limnology than in oceanography because for many phenomena the boundedness of the lake domain will affect the details of the processes while oceans may for the same processes be regarded as infinite or semi-infinite. This, for instance, implies that by and large wave spectra in the ocean are continuous, while they are often quantized in lakes. Vertical Structure of Current in Homogeneous and Stratified Waters N.S. Heaps In this section the hydrodynamic equations are formulated, mainly in order to state basic principles and introduce a notation.
Simple solutions are then developed for water movements in a narrow rectangular basin subjected to steady wind directed along its length. Vertical structures of current are derived for both one- and two-layered systems representing, respectively, a lake during conditions of winter homogeneity and summer stratification. In spite of their simplicity, for the most part achieved by linearization, the use of constant coefficients of eddy viscosity and the neglect of the Coriolis force, the solutions illustrate some important facts about the dynamics of wind-driven flows in a long narrow lake. Perhaps the main interest of the analysis lies in the actual construction of closed solutions, satisfying appropriate boundary conditions, for lake circulation. Insurance Rate Filings and Hurricane Loss Estimation Models Charles C. Watson Mark E. Johnson Martin Simons E Johnson Insurance rate filings involving hurricane perils are generally based on complex, numerical models. Evaluation of such rate filings are further complicated if the model is proprietary so that state regulators are shielded from the inner workings of the models. To circumvent this difficulty while adhering to proprietary restrictions, Watson and Johnson (2003) developed an ensemble of 324 public domain models that bracket the published results of proprietary models while having the advantage of full disclosure of methodology. Moreover, the collection of models can provide regulators an appreciation of the state of the art of hurricane risk modeling to assist them in evaluating future rate filings. This methodology was applied for the North Carolina Department of Insurance but similar studies can be rapidly completed for other states as well. The results provide regulators with an independent, public domain spectrum of values to assess specific rate filings. U.S. IOOS coastal and ocean modeling testbed: Inter‐model evaluation of tides, waves, and hurricane surge in the Gulf of Mexico Patrick C. Kerr Aaron S. Donahue Joannes J. Westerink Andrew T. Cox A Gulf of Mexico performance evaluation and comparison of coastal circulation and wave models was executed through harmonic analyses of tidal simulations, hindcasts of Hurricane Ike (2008) and Rita (2005), and a benchmarking study. Three unstructured coastal circulation models (ADCIRC, FVCOM, and SELFE) validated with similar skill on a new common Gulf scale mesh (ULLR) with identical frictional parameterization and forcing for the tidal validation and hurricane hindcasts. Coupled circulation and wave models, SWAN+ADCIRC and WWMII+SELFE, along with FVCOM loosely coupled with SWAN, also validated with similar skill. NOAA's official operational forecast storm surge model (SLOSH) was implemented on local and Gulf scale meshes with the same wind stress and pressure forcing used by the unstructured models for hindcasts of Ike and Rita. SLOSH's local meshes failed to capture regional processes such as Ike's forerunner and the results from the Gulf scale mesh further suggest shortcomings may be due to a combination of poor mesh resolution, missing internal physics such as tides and nonlinear advection, and SLOSH's internal frictional parameterization. In addition, these models were benchmarked to assess and compare execution speed and scalability for a prototypical operational simulation. 
It was apparent that a higher number of computational cores are needed for the unstructured models to meet similar operational implementation requirements to SLOSH, and that some of them could benefit from improved parallelization and faster execution speed. A numerical model of the Adriatic for the prediction of high tides at Venice Q J ROY METEOR SOC Carlo Finizio S. Palmieri A. Riccucci The present study deals with the application of a physical mathematical model of the Adriatic Sea to the problem of predicting the sea level variations at Venice. The equations of motion are considered. Taking into account the peculiar topography of the region, a unidimensional model of the Adriatic Sea is formulated. The conditions for the computation stability are determined; a first test of the model is carried out by computing the variation in time of some fundamental quantities like the mass and energy of the basin. Thereafter, the main effects due to the geometry of the basin are studied by using ideal 'wind quanta'. Agreement is found between the results of the model and those obtained by means of classical hydrodynamics. Applications of the model to actual cases of sea level rise at Venice follow. Finally, on the basis of a critical analysis of the results, some considerations on the predictability of the phenomenon and on the requirements of the predicted meteorological fields are offered. Simulation of Water Circulations and Chloride Transports in Lake Superior for Summer 1973 D.C.L. Lam The mean water circulations of Lake Superior during June-September, 1973, is obtained by hydrodynamical modeling. The time interval covers most of the stratification period during which the temperature data have been adequately collected and analysed. The computed results show reasonable agreement with observed current meter readings. In particular, the generally counterclockwise circulation and some features of the Keweenaw current are obtained in the results. A computed map showing the frequent upwelling and downwelling zones is also given. These zones are often referred to in the description of the physical, chemical and biological regimes of the lake.The computed currents provide adequate description of the advective transports during the period. By parameterizing the turbulent diffusion in an advection-diffusion model, it is shown that the formulation proposed in previous studies appears to be also applicable for simulating the chloride transport in Lake Superior. Based on these studies, the mixing times of conservative materials have been estimated to be about 2 to 3 years for Lake Superior, depending on the location of the source. The Lake Erie Response to the January 26, 1978, Cyclone J. Steven Dingman Keith Bedford A descriptive analysis of the response of Lake Erie to the passage of the blizzard cyclone of January 26, 1978, is presented. This intense extratropical cyclone, the worst ever to cross the Ohio Valley and Great Lakes regiion of the United States, set numerous record low sea level pressure readings at nearly every recording station surrounding Lake Erie and subjected the lake region to sharp temperature drops and high winds. The lake surface was significantly ice covered during the storm event; it remained virtually intact on the entire Western Basin, and partial ice cover breakup occurred over the Central Basin. 
The investigation of the water level fluctuations induced by the cyclone are based on data acquired from normal meterological and water level monitoring stations surrounding the lake. The most unusual aspects of the water surface fluctuations include the observance of a pressure suction induced rise in water level in the Western Basin before the storm passed north of the lake; a maximum storm surge setup occurring between Marblehed, Ohio, and Port Colborne, Canada, and not between the ends of the lake; and a separate oscillatory surge occurring at Port Stanley, Canada. The probable causes and reasons for these fluctuations are thoroughly analyzed in the context of existing theories that deal with how a lake surface responds to external atmospheric forcing functions such as wind stress, sea level pressure changes, and resonance. The effect of the ice cover on the water level fluctuations is also presented. Wind-Induced and Thermally Induced Currents in the Great Lakes Kwang K. Lee A linearized stratified lake of rectangular shape with dimensions and other physical parameters comparable to those of the Great Lakes is presented to examine the currents induced by prescribed wind and thermal conditions at the boundary. The boundary layer of thickness E½ is identified at the surface and the solid boundary, where the wind and wall effects are most important. The influence of thermal input is restricted in the interior. The upwelling and downwelling phenomena and a relatively strong coastal current are significant in the linear stratified lake. Their relations to the input functions established in the analysis substantiate observations in the Great Lakes. The contribution of TOPEX/POSEIDON to the global monitoring of climatically sensitive lakes Charon Birkett Remote sensing and long-term monitoring of closed and climatically sensitive open lakes can provide useful information for the study of climatic change. Satellite radar altimetry offers the advantages of day/night and all-weather capability in the production of relative lake level changes on a global scale. A simple technique which derives relative lake level changes is described with specific relevance to the TOPEX/POSEIDON geophysical data record data set. An assessment of the coverage and global tracking performance of both the NASA radar altimeter and the solid state altimeter over these lakes is discussed, and results are presented for the first 1.75 years of the mission. Lake level time series were acquired for 12 closed lakes, six open lakes, and three reservoirs, providing information in many cases where ground gauge data are unobtainable or the lake is inaccessible. The results, accurate to ˜4 cm rms, mark the beginning of a very accurate lake level data set, showing that TOPEX/POSEIDON can successfully contribute to the long-term global program. Azimuthal Variations in Tsunami Interactions with Multiple-Island Systems G. T. Hebenstreit E. N. Bernard The purpose of this study is to examine variations in the response of an island system (the Hawaiian Islands, in this case) to an incoming tsunamilike wave pulse approaching the system along various azimuths. Simulations were carried out numerically by using an explicit finite-difference analog for the linearized equations of motion and continuity for long waves in a variable depth ocean. The model topography is based on the submarine topography of the Hawaiian Island region. Island coastlines are fully reflecting, so no attempt to simulate runup was made. 
Qualitative comparisons between model results and historical data from tsunamis approaching along similar azimuths show that the model produces realistic simulations. Azimuths were chosen for waves approaching from four general geographic areas: South America, Alaska, Aleutians-Kuriles-Japan-Philippines, and Southwest Pacific. Nearly all distant tsunamis striking Hawaii have come from one of these areas. Our conclusions are: (1) Tsunami response in the overall system does not vary greatly over small (10°–15°) changes in azimuth but does vary significantly over large changes (>60°). (2) Local response may vary greatly with azimuth, but certain areas seem to respond strongly to tsunamis approaching from almost any direction. (3) Topographic focusing seems to play the dominant role in determining localized response. Wind-Driven Currents in Lake Erie R. T. Gedney Wilbert J. Lick The steady-state wind-driven currents in Lake Erie are investigated. A numerical solution for the mass-transport stream function and the three-dimensional velocities as a function of depth and horizontal position is obtained and compared with measurements. The agreement is good. This report shows that the currents depend strongly on bottom topography and boundary geometry. An inverse method for determining wind stress from water-level fluctuations DYNAM ATMOS OCEANS David J. Schwab The hydrodynamic equations governing the water-level response of a lake to wind stress are inverted to determine wind stress from water-level fluctuations. In order to obtain a unique solution, the wind-stress field is represented in terms of a finite number of spatially dependent basis functions with time-dependent coefficients. The discretized version of the inverse equation is solved by a least-squares procedure to obtain the coefficients, and thereby the stress. The method is tested for several ideal cases with Lake Erie topography. Real water-level data are then used to determine hourly values of vector wind stress over Lake Erie for the period 5 May–31 October, 1979. Results are compared with measurements of wind speed and direction from buoys deployed in the lake. Calculated stress direction agrees with observed wind direction for wind speeds > 7.5 m s⁻¹. Under neutral conditions, calculated drag coefficients increase with the wind speed from 1.53 × 10⁻³ for 7.5–10 m s⁻¹ winds to 2.04 × 10⁻³ for 15–17.5 m s⁻¹ winds. Drag coefficients are lower for stable conditions and higher for unstable conditions. Reconstruction of Some of the Early Storm Surges on the Great Lakes R. J. Polavarapu Some early (pre-Second World War) storm surges on the Great Lakes, produced by storms that were extensive enough to influence the whole Great Lakes system, were reconstructed making use of scant meteorological and oceanographic data and descriptions from popular literature. Port Colborne, on the northeastern shore of Lake Erie, exhibited consistently great storm surges. Negative surges were observed in narrow connecting waters such as the Detroit River. The simple analytical techniques used here showed that the water level changes could be accounted for mostly by the large-scale pressure system. Ice Forecasting Model for Lake Erie Shih-Huang Chieh Akio Wake Ralph R. Rumer A previously developed Great Lakes ice forecasting model for winter navigation aid has been calibrated using observed data from specific Lake Erie ice transport events.
The model is based on the macroscopic continuum hypothesis for the fragmented ice field and the internal ice stress is represented by a viscous-plastic type constitutive law. The external driving force includes the time-dependent wind and water current fields as well as the thermodynamic source/sink terms for the ice mass conservation. By adjusting the model coefficients employed in the constitutive equations and the thermodynamic factors, the computed results are in reasonable agreement with the short-term observations for the 1978-79 ice season in Lake Erie. Major findings include the profound effects of the wind-driven water currents and the ice melt at the icewater interface on the ice regime of Lake Erie during the late stage of the ice season. Analyses of storm surges in the western part of the Seto Inland Sea of Japan caused by Typhoon 9119 CONT SHELF RES Tatsuo Konishi Yoshinobu Tsuji The 19th typhoon of 1991 (hereafter referred to as Typhoon 9119) caused remarkable storm surges more than 3 m in height with great inundation resulting along the coast in the western part of the Seto Inland Sea of Japan. This paper discussed the generation mechanism of storm surges due to Typhoon 9119 using numerical simulations with the conventional two-dimensional model. Some discrepancies between calculated storm surges and those observed were found, the reason for which was assumed to be lack of accuracy in the typhoon model. A method was developed to estimate wind field in a typhoon using sea level data and adopting the D. J. Schwab method (1982) Dynamics of Atmosphere and Oceans, 6, 251–278. Comparison of the estimated wind field with the observed wind field showed good agreement. Using the estimated wind field and newly determined normal modes of the analyzed region, the detailed structure and the generation mechanism of the storm surge resulting from Typhoon 9119 are identified. Effective Wind Stress over the Great Lakes Derived from Long-Term Numerical Model Simulations T.J. Simons Numerical models were used to compute water circulations throughout the 1970 shipping season for Lake Erie and for the 1972 International Field Year on Lake Ontario. Simultaneous computations of surface elevations were compared with observed water levels to adjust the model results after the fact. As a by‐product of these simultations, effective stress coefficients over water can be estimated. The results support earlier evidence that the effective wind stress over water is larger than indicated by atmospheric boundary layer measurements. A transport approach to the convolution method for numerical modelling of linearized 3D circulation in shallow seas INT J NUMER METH FL Zhigang Xu A new method for solving the linearized equations of motion is presented in this paper, which is the implementation of an outstanding idea suggested by Welander: a transport approach to the convolution method. The present work focuses on the case of constant eddy viscosity and constant density but can be easily extended to the case of arbitrary but time-invariant eddy viscosity or density structure. As two of the three equations of motion are solved analytically and the main numerical 'do-loop' only updates the sea level and the transport, the method features succinctness and fast convergence. The method is tested in Heaps' basin and the results are compared with Heaps' results for the transient state and with analytical solutions for the steady state. The comparison yields satisfactory agreement. 
The computational advantage of the method compared with Heaps' spectral method and Jelesnianski's bottom stress method is analysed and illustrated with examples. Attention is also paid to the recent efforts made in the spectral method to accelerate the convergence of the velocity profile. This study suggests that an efficient way to accelerate the convergence is to extract both the windinduced surface Ekman spiral and the pressure-induced bottom Ekman spiral as a prespecified part of the profile. The present work also provides a direct way to find the eigenfunctions for arbitrary eddy viscosity profile. In addition, mode-trucated errors are analysed and tabulated as functions of mode number and the ratio of the Ekman depth to the water depth, which allows a determination of a proper mode number given an error tolerance. Effect of coastal boundary resolution and mixing upon internal wave generation and propagation in coastal regions Philip Hall Alan M. Davies A non-linear two-dimensional vertically stratified cross-sectional model of a constant depth basin without rotation is used to investigate the influence of vertical and horizontal diffusion upon the wind-driven circulation in the basin and the associated temperature field. The influence of horizontal grid resolution, in particular the application of an irregular grid with high resolution in the coastal boundary layer is examined. The calculations show that the initial response to a wind impulse is downwelling at the downwind end of the basin with upwelling and convective mixing at the opposite end. Results from a two-layer analytical model show that the initial response is the excitation of an infinite number of internal seiche modes in order to represent the initial response which is confined to a narrow near coastal region. As time progresses, at the downwind end of the basin a density front propagates away from the boundary, with the intensity of its horizontal gradient and associated vertical velocity determined by both horizontal and vertical viscosity values. Calculations demonstrate the importance of high horizontal grid resolution in resolving this density gradient together with an accurate density advection scheme. The application of an irregular grid in the horizontal with high grid resolution in the nearshore region enables the initial response to be accurately reproduced although physically unrealistic short waves appear as the frontal region propagates onto the coarser grid. Parameterization of horizontal viscosity using a Smagorinsky-type formulation acts as a selective grid size-dependent filter, and removes the short-wave problem although enhanced smoothing can occur if the scaling coefficient in the formulation is too large. Calculations clearly show the advantages of using an irregular grid but also the importance of using a grid size-dependent filter to avoid numerical problems. A dynamical model of storm surges along island coasts Mamdouh Amin Fahmy Muhammad Zaki A time-independent dynamical model of storm surge along island coasts using orthogonal curvilinear coordinates is presented. The curved annulus between an island coast and an arbitrary deep-water boundary is mapped conformally onto a rectangular image. Two configurations of island coasts are investigated; circular and elliptic coasts. The corresponding coordinates are circular polar and elliptic respectively. 
The linearized vertically-integrated equations of motion are used to model storm surges with two assumptions: (i) bottom stress is proportional to horizontal transport, and (ii) storm forces are shear stresses on water surface. Analytical solutions are presented for three dynamical cases: (i) a constant-depth basin acted upon by a uniform storm stress, (ii) variable-depth basin acted upon by a uniform-direction variable-magnitude stress, and. (iii) a basin with closed depth contours acted upon by vortex-shaped storm stress. The obtained solutions clarify the relative importance of the various parameters and variables that affect surge height distribution along island coasts. These solutions may be used to test a time-dependent, numerical dynamical storm model. Wind-driven oscillations of an enclosed basin with bottom friction W. Krauss Oscillations of an enclosed basin due to a suddenly blowing wind which is periodic for several periods and constant otherwise, are studied. If quadratic bottom friction is taken into consideration a forced oscillation and seiches and, in addition, non-linear oscillations given by the higher harmonics and the sum and difference frequencies, are obtained. The results are compared with observations made in the Baltic Sea where the forced oscillation shows a period of 120 hours. The main energy of the non-linear components concentrates on 58.7–60 hours and on about 35 hours.Es werden die Schwingungen eines abgeschlossenen Meeresgebietes als Folge eines Windes betrachtet, der anfangs periodisch, spter konstant ist. Bercksichtigt man quadratische Bodenreibung, so entstehen neben der erzwungenen Schwingung und den Seiches nicht-lineare Schwingungen. Ihre Frequenzen sind durch ganzzahlige Vielfache der Frequenzen der linearen Schwingungen sowie durch Summen- und Differenzfrequenzen gegeben. Die Ergebnisse werden zur Interpretation der in der Ostsee beobachteten Energiekonzentrationen bei 120h, 60h und 35h herangezogen.On tudie les oscillations d'un bassin ferm, dues un vent soufflant brusquement, de faon priodique pendant plusieurs priodes, et, puis d'une manire constante. Si on considre le frottement quadratique sur le fond, on obtient des seiches, une oscillation force et, en outre, des oscillations non linaires provoques par les harmoniques les plus leves ainsi que par la somme et la diffrence des frquences. On compare les rsultats avec les observations faites en mer Baltique o l'oscillation force a une priode de 120 heures. La plus grande partie de l'nergie des composantes non linaires se concentre sur 58.7–70 heures et sur environ 35 heures. Numerical simulation of typhoon surges along the coast of Taiwan Hsien-Wen Li Cheng-Han Tsai Yao-Tsai Lo A numerical model has been designed to study the storm surge induced by typhoon along the coast of Taiwan. The governing equations have been expressed in spherical coordinate system, and a finite difference method has been used to solve them. In the system of hydrodynamical equations, the nonlinear advection and lateral eddy viscosity terms are prominent in shallow coastal waters. Air pressure gradient and wind stresses are the driving forces in the model of typhoon surge. The model has been verified with storm surges induced by Typhoons Herb in 1996, and by typhoons Kai-Tak and Bilis in 2000. Numerical Solution of Flow Problems in Rivers E. Isaacson J. J. Stoker A. Troesch Statistical Prediction by Discriminant Analysis Robert G. 
Miller The limited amount of information contained in a set of meteorological predictors precludes any precise statement concerning which one of a number of possible future events will occur. For purposes of operational decision making the probability distribution over the possible events for given values of the predictors is required. The mathematical exposition of a technique for obtaining this distribution is presented. An objective procedure is proposed for excluding from the analysis any redundant or nonsignificant information. Two numerical examples are provided which illustrate the application of the technique where the predictors are selected using the proposed procedure. On the influence of the Earth's rotation on oceancurrents V.W. Ekman Coast effect upon the ocean current and the sea level. II. Changing State. Memoirs, College of Science, Kyoto Imp T. Nomitsu Non-stationary ocean currents Part I K. Hidaka Wind Profile, Surface Stress and Geostrophic Drag Coefficients in the Atmospheric Surface Layer ADV GEOPHYS Heinz H. LeHau Über Horizontalzirkulation bei winderzeugten Meeresströmungen A numerical computation of the surge of 26 June 1954 on Lake Michigan G.W. Platzman T. E. Pochapsky Albert Defant Dynamical Oceanography J. Proudman Hydrodynamic Effects of Gales on Lake Erie J Res Natl Bur Stand G.H. Keulegan The North Sea problem. IV: Free oscillations of a rotating rectangular sea D. van Dantzig H.A. Lauwerier Time-relations in meteorological effects on the sea P LOND MATH SOC A. T. Doodson The North sea problem. V: Free motions of a rotating rectangular bay Contributions to the theory of stationary drift currents in the ocean Koji Hidaka The equivalence between certain statistical prediction methods and linearized dynamical methods D. LEE HARRIS The linearized hydrodynamic equations for storm surges are solved in analytic form for a very simple model basin and an arbitrary field of wind and pressure to show that a solution can be obtained as an integral of the product of the atmospheric forcing function and an influence function whose value tends to zero with increasing time lags. In practical cases t,his solution can be computed as a weighted sum of the meteorological observations during a short period before the storm surge observation. A finite difference scheme for a slightly more goncral basin is then developed and the solution given formally in terms of a polynomial involving bot'h vectors and matrices. It is shown that this solution is equivalent to the analytic solution and that both are equivalent to a li~lear function of the meteorological measurements of wind and pressure which must be used to obtain a descript,ion of any actual forcing function for storm surges. The technique can be generalized to provide the solution for basins of almost any shape. The difficulties and uncertainties involved in the hydrodynamic solution are discussed, and the advantages of using a statistical method to dc4erminc the solution of the problem when sufficient data are available are shom-n. An Investigation of the Meteorological Conditions Associated with Extreme Wind Tides on Lake Erie Shirley M. Irish The dates of incidence of extreme wind tide on Lake Erie have been determined for the 20-year period 1940 through 1959, for all cases in which the difference in lake level between Buffalo and Toledo exceeded 6 feet. A frequency-intensity analysis shows that a set-up in excess of 10 feet may be expected once every 2 years. 
Extreme wind tides occur mainly in the 6-month period October through March; more than 70 percent of the cases fall in the three months November, December, and January. November is the month of most frequent incidence, having more than one-third of the total number of cases in the period studied. The observed seasonal variation of extreme set-up frequency is interpreted as a reflection of the seasonal variation of storm frequency and of storm-track location. Secondary, but important factors are: seasonal variation of storm intensity and seasonal variation of thermal stability of the atmospheric boundary layer. The tendency for marked temperature stratification to be present in t... Note on Surface Wind Stress over Water at Low and High Wind Speeds Basil W. Wilson In connection with recent work of predicting hurricane storm tides in New York Bay, it became desirable to adopt the most suitable value of surface wind stress over water at high wind velocities that current knowledge will sustain. To this end a review of available literature of field and laboratory experiments on the subject was undertaken and the collated results are presented. To some extent it has been found possible to resolve some of the wide disparities that have existed up to now between field and laboratory results by the quite simple expedient of adapting the laboratory data for wind speeds at 10-cm height (usually) to prototype conditions of wind speeds at 10-m height through use of the Karman-Prandtl equation for verticle velocity distribution. Some 47 authorities are quoted in arriving at Cd values of 0.0024±.0005 for strong winds and 0.0015±.0008 for light winds, in a wind stress relationship of the form in which ρa is the mass density of the air and U the wind velocity at 10-m height. Although there is fairly satisfactory unanimity now regarding the Cd value for high winds, the situation is far less satisfactory for the case of light winds. Ein Problem aus der Windstromtheorie Jonas Ekman Fjeldstad A contribution to the study of storm surges on the Dutch coast W. F. Schalkwijk Generalised Sturm-Liouville Expansions in Series of Pairs of Related Functions Proc Math Phys Eng Sci H. Horrocks Meteorological Perturbations of Tides and Currents in an Unlimited Channel Rotating with the Earth On a Class of Expansions Tidal Oscillations in Gulfs and Rectangular Basins G. I. Taylor Numerical Prediction of Storm Surges Pierre Welander This chapter discusses the problem of predicting the storm surge once the meteorological conditions are given. The existing crude methods can predict the storm-surge amplitudes from given meteorological conditions with accuracy of the order of 10–20%. With the more general numerical and empirical methods to be discussed, it seems possible to predict the storm-surge amplitude from given meteorological conditions with an error of only a few percent. For the prediction of a storm surge in a given meteorological case, one has to introduce initial values, but these play a small role because of the presence of strong forcing functions. If these forcing functions are sufficiently familiar over a past time, one can start from a sea at rest. The forcing functions can continually control the development and prevent the growth of the initial errors. Finally, the basic model equations are in themselves both simpler and more accurate than in the meteorological case. The thermodynamic processes in the sea can be safely neglected and the equations can, with good approximation, be linearized. 
Weather Prediction by Numerical Process Lewis F. Richardson The idea of forecasting the weather by calculation was first dreamt of by Lewis Fry Richardson. The first edition of this book, published in 1922, set out a detailed algorithm for systematic numerical weather prediction. The method of computing atmospheric changes, which he mapped out in great detail in this book, is essentially the method used today. He was greatly ahead of his time because, before his ideas could bear fruit, advances in four critical areas were needed: better understanding of the dynamics of the atmosphere; stable computational algorithms to integrate the equations; regular observations of the free atmosphere; and powerful automatic computer equipment. Over the ensuing years, progress in numerical weather prediction has been dramatic. Weather prediction and climate modelling have now reached a high level of sophistication, and are witness to the influence of Richardson's ideas. This edition contains a new foreword by Peter Lynch that sets the original book in context. © Stephen A. Richardson and Elaine Traylen and Peter Lynch 2007. Wind Action on a Shallow Sea: Some Generalizations of Ekman's Theory In 1923 Ekman developed a theory for the sea-level changes produced in a deep sea by the action of a steady wind. In the present paper the theory has been extended to a shallow sea, for which the Ekman "depth of frictional influence" d is comparable to the actual depth h. It is pointed out that the wind-stress divergence may be of the same importance as the wind-stress curl in this case.The theory is furthermore extended to the transient case. It is demonstrated how the velocity profile and the flow can be expressed in terms of the local time-histories of the wind-stress and the surface slope, and a single integro-differential equation is derived for the sea-level elevation. This equation could possibly be used for prediction of meteorologic tides and storm surges. On the Influence of the Earth's Rotation on Ocean-Currents Vagn Walfrid Ekman Caption title. Arkiv för matematik, astronomi och fysik, Bd. 2, no. 11. Xerographic reproduction of the JHU copy, on double leaves folded once in Japanese style, and bound with leaves inverted. Bibliographical foot-notes. Storms of the Great Lakes E B Garriott Beobachtungen über die Dämpfung der Seiches in Seen A Endrös Winds and water levels on Lake Erie D K A Gillies Détermination des dénivellations et des courants de marée. Proceedings, Seventh Congress on Coastal Engineering, The Hague F Gohin Wind effects on lakes and rivers. Ingeniörsvetenskaps-akademien, Handlingar 158 B M O Hellström A mathematical investigation on the development of wind currents in heterogeneous waters 1956: A procedure for numerical integration of the primitive equations of the two-parameter model of the atmosphere A Eliassen 1902: Wind velocity and fluctuations of water level on Lake Erie Alfred J Henry 1959: Winds, wind set-ups, and seiches on Lake Erie, part 2. U. S. Corps of Engineers, Lake Survey Ira A Hunt Non-stationary ocean currents. Memoirs, Imperial Marine Observatory Coast effect upon the ocean current and the sea level. Memoirs, College of Science, Kyoto Imperial University (Series A). I. (with T. Takegami) Steady state Takaharu Nomitsu 1957: Modification of the quadratic bottom-stress law for turbulent channel flow in the presence of surface wind-stress R O Reid Long and short period oscillations in Lake Erie. 
DEM of triaxial tests on crushable cemented sand J. P. de Bono1, G. R. McDowell1 & D. Wanatowski1 Granular Matter volume 16, pages 563–572 (2014)Cite this article Using the discrete element method, triaxial simulations of cemented sand consisting of crushable particles are presented. The triaxial model used features a flexible membrane, allowing realistic deformation to occur, and cementation is modelled using inter-particle bonds. The effects of particle crushing are explored, as is the influence of cementation on the behaviour of the soil. An insight to the effects that cementation has on the degree of crushing is presented. The behaviour of cemented sand has been given much attention over recent years, and has been the subject of a number of laboratory studies. The presence of cement has a dramatic influence on the triaxial behaviour of sand; for a sand sheared at a given confining pressure, cementation (either natural or artificial) generally causes an increase in stiffness, peak strength, and the amount and rate of dilation; with these effects increasing with cement content [1–3]. The addition of cement introduces well-defined yield points and peak stresses, and reduces the axial strain at the peak stress. Cementation also influences the failure modes of the sand; brittle failure with shear planes are often witnessed in cemented specimens, while barrelling failure is observed for equivalent uncemented samples at the same confining pressure [4–6]. Particle crushing, while a separate phenomenon, also largely affects the stress and strain behaviour of granular soils. In triaxial tests, particle breakage decreases the rate of dilation [7], which in turn has an influence on any peak stress associated with the density. The degree of crushing is influenced by a number of factors, most principally the strength of the grains and the effective stress state; as such the effects of particle crushing are most pronounced at high pressures [8]. The discrete element method (DEM) has proved to be a useful tool for modelling granular soil; however, much of currently available literature using DEM to model cemented sand has been limited to two dimensions, for example [9–13]. Potyondy and Cundall [14] included some three dimensional modelling, although their work didn't feature flexible boundaries which are characteristic of laboratory triaxial tests. Wang and Tonon [15] however, did use flexible boundaries when modelling rock, although the focus of their work was to highlight the advantages and importance of such boundaries, rather than the micro mechanics of inter particle bonding. Particle crushing is often ignored in DEM, due to difficulties in implementing an effective and realistic breakage mechanism. In recent years, crushable particles have been modelled using DEM either by using agglomerates ('grains' consisting of smaller elementary spheres, bonded together), or by replacing 'broken' particles with smaller, self-similar fragments. Using the former method, Bolton et al. [16] showed that crushable particles are necessary for achieving realistic levels of volumetric contraction when modelling triaxial shear tests on granular soils. However, one drawback with using agglomerates is the large number of elementary particles required, which severely limits the overall number of grains that can be used; Bolton et al. 
[16] used just 389 agglomerates, consisting of fewer than 50 spheres each, while Lim and McDowell [17] showed that each agglomerate ideally should comprise at least 500 spheres to correctly capture the size effect on strength. The following work aims to show that it is possible to model three-dimensional triaxial simulations on a crushable, cemented sand using DEM, with a large number of particles. The work presented here features a triaxial model with a flexible membrane allowing realistic deformation [18], with a simple breakage mechanism incorporated that replaces broken sand particles with new, smaller fragments [19], and follows on from the authors' recent work investigating the behaviour of a cemented sand with crushable particles in normal compression [20]. This paper aims to investigate the combined effects of cement bond breakage and particle crushing in triaxial shear, and provide a step towards improving the realism of DEM simulations. Triaxial model The sand particles are modelled using spheres; the triaxial sample used is cylindrical, with a height of 100 mm and a diameter of 50 mm. The sand particles use the Hertz–Mindlin contact model, and are given a Poisson's ratio \(\nu =0.25\) and a shear modulus \(G=28\) GPa, typical values for quartz. The initial specimen is mono-disperse, consisting of 3,350 particles of uniform size \(d_{0}=4\) mm, and is generated using the radii expansion method [21] to give an initial voids ratio \(e_{0}=0.75\). Although this quantity of particles may be considered unrealistic, it is a larger number than used in many of the similar simulations of cemented materials [12, 13] or crushable soils [16, 22]. Furthermore, the particles can break an unlimited number of times, giving a higher 'breakage capacity' than the agglomerates used in the aforementioned studies. This initial diameter (which determines how many specimen particles are created) was chosen for computational efficiency; using a smaller initial diameter would result in a much larger number of both specimen particles and membrane particles (which are required to be smaller). Although this may seem somewhat unrealistic, this work serves as a fundamental investigation into the combined effects of crushing and cementation, rather than a direct physical calibration. The flexible membrane used is the same as that described in de Bono et al. [18], full details of which will not be repeated; the only principal difference is that the Hertz–Mindlin contact model is now used, rather than the linear spring model. To summarise, the cylindrical membrane consists of hexagonally arranged particles, which are created a factor of 1/3 smaller than the smallest specimen particle. These particles are given an artificially high stiffness, both to prevent them from penetrating the specimen and to keep them aligned, with a system in place to remove the additional hoop tension which resulted. In the previous work, the membrane particles were bonded using contact bonds, which were vanishingly small and transmitted no moments. In this work, however, due to the different contact model used (Hertz–Mindlin), membrane particles are ascribed a Poisson's ratio \(\nu =0.5\) (typical for rubber) and given an arbitrarily high shear modulus of \(G=1\) MPa to prevent penetration. The membrane particles are bonded together with parallel bonds [21], the diameters of which are \(10^{-10}\) times smaller than the membrane particles.
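The stated specimen geometry, initial voids ratio and particle size can be checked with a short back-of-the-envelope calculation. The sketch below is only illustrative: it reproduces the particle count implied by those values, not the PFC3D radii-expansion procedure itself, and the variable names are not taken from the study.

```python
import math

# Specimen geometry and packing quoted above: 100 mm x 50 mm cylinder,
# initial voids ratio e0 = 0.75, mono-disperse particles of d0 = 4 mm.
height = 0.100       # m
diameter = 0.050     # m
e0 = 0.75            # initial voids ratio
d0 = 0.004           # initial particle diameter, m

specimen_volume = math.pi * (diameter / 2) ** 2 * height
solids_volume = specimen_volume / (1.0 + e0)      # Vs = V / (1 + e)
particle_volume = math.pi / 6.0 * d0 ** 3

n_particles = solids_volume / particle_volume
print(f"Particles required: {n_particles:.0f}")   # ~3,350, matching the specimen above
```

Running this gives approximately 3,350 particles, consistent with the specimen described above.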
The membrane bonds are given 'stiffnesses' (stress per displacement) of \(1\times 10^{40}\) Pa/m, sufficient to keep the membrane particles aligned. The excess hoop tension is alleviated by allowing the membrane particles to expand, details of which are given in de Bono et al. [18]. Particle crushing Crushing has generally been modelled using DEM via two alternatives: replacing 'breaking' grains with new, smaller fragments, generally in two dimensions [23–27], or by using three-dimensional agglomerates [16, 22, 28, 29]. In the latter method, no consideration was given to the complex distribution of loads on each particle at its multiple contacts; however, McDowell and de Bono [19] allowed three-dimensional particles to fracture without the use of agglomerates by considering the stresses induced in a particle due to the multiple contacts. The same breakage mechanism and criteria are used in the following simulations, in which each particle is allowed to split into two new fragments when the value of induced particle stress is found to be greater than or equal to its strength. The new sphere fragments overlap enough to be contained within the bounding parent sphere, with the axis joining the new spheres aligned in the direction of the minor principal stress (Fig. 1). Although the fragments of broken particles are not, in reality, spheres, realistic particle shape has not been employed in this work; however, Bowman et al. [30] demonstrated (using Fourier descriptor analysis) that crushing a laboratory-grade silica sand resulted in statistically insignificant changes in particle elongation and shape, suggesting that using self-similar fragments in these simulations is acceptable. The total volume of the new spheres is equal to that of the original parent sphere, obeying conservation of mass. This produces local pressure spikes during breakage; however, the fragments move along the direction of the minor principal stress for the original particle, just as would occur for a single particle crushed between platens. Although conservation of energy is not observed in this case, the goal is to achieve a breakage mechanism that is as simple and effective as possible. As several authors (e.g. [23]) have implied, it is not possible to simulate perfectly realistic fracture using self-similar fragments; however, it is not the purpose of this study to resolve this problem, but rather to adopt the best approach to investigate the effects of cementation and particle crushing during shear. Equal diametral splitting mechanism McDowell and de Bono [19] showed that the octahedral shear stress, \(q\), within a particle, given by:
$$\begin{aligned} q=\frac{1}{3}\left[ {\left( {\sigma _1 -\sigma _2 } \right) ^{2}+\left( {\sigma _2 -\sigma _3 } \right) ^{2}+\left( {\sigma _1 -\sigma _3 } \right) ^{2}} \right] ^{1/2} \end{aligned}$$
could be used to determine whether fracture should occur or not. Jaeger [31] proposed that the tensile strength of grains could be measured by diametral compression between platens as \(\sigma = F/d^{2}\). In PFC3D, for a sphere of size \(d\) compressed between two walls exerting force \(F\), the value of induced octahedral shear stress, \(q\), was found to be:
$$\begin{aligned} q=0.9\frac{F}{d^{2}} \end{aligned}$$
and is therefore proportional to the assumed induced stress in particle crushing tests. The octahedral shear stress was deemed an appropriate means by which to determine fracture, as it takes into account multiple contacts and the complex distribution of loads while avoiding the use of agglomerates.
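As a concrete illustration of this criterion and the equal-volume splitting rule, the sketch below evaluates the octahedral shear stress from the principal stresses (Eq. 1) and replaces an overstressed sphere by two equal fragments aligned with the minor principal stress. The function names, the example stress values and the particular offset used to keep both fragments inside the parent's bounding sphere are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def octahedral_shear_stress(s1, s2, s3):
    """Octahedral shear stress q induced in a particle (Eq. 1)."""
    return np.sqrt((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s1 - s3) ** 2) / 3.0

def split_particle(centre, d_parent, minor_axis):
    """Replace a 'broken' sphere by two equal fragments of the same total
    volume, placed along the minor principal stress direction so that both
    remain inside the parent's bounding sphere (offset choice is illustrative)."""
    d_frag = d_parent / 2.0 ** (1.0 / 3.0)      # 2 * d_frag^3 = d_parent^3 (mass conserved)
    axis = np.asarray(minor_axis, dtype=float)
    axis /= np.linalg.norm(axis)
    offset = 0.5 * (d_parent - d_frag)          # fragment surfaces stay within the parent sphere
    return [(centre + offset * axis, d_frag), (centre - offset * axis, d_frag)]

# Example: check one particle against its (randomly assigned) strength
q = octahedral_shear_stress(90e6, 5e6, 2e6)     # Pa; principal stresses are made-up values
strength = 20e6                                  # Pa
if q >= strength:
    fragments = split_particle(np.zeros(3), 0.004, [0.0, 0.0, 1.0])
```

With two equal fragments, each new sphere has a diameter of \(d/2^{1/3}\approx 0.794d\), so the fragments of a 4 mm parent are approximately 3.2 mm.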
By crushing individual sand particles, McDowell et al. [32, 33] showed that the stresses at failure for a given particle size satisfied a Weibull distribution of strengths (with the same variation regardless of size). The mean strength \(\sigma _{m}\) of the particles was related to size by \(\sigma _{m}\propto d^{-b}\) (where \(b\) describes the size-hardening law). Hence McDowell and de Bono [19] assumed a particle would break when the octahedral shear stress was greater than or equal to its strength, where the strengths of the particles satisfy a Weibull distribution of \(q\) values. The strengths were related to size by:
$$\begin{aligned} q_0 \propto d^{-b} \end{aligned}$$
which, assuming that the Weibull size effect is applicable to soil particles [33], leads to:
$$\begin{aligned} q_0 \propto d^{-3/m} \end{aligned}$$
where \(q_{0}\) is the characteristic strength, a value of the distribution such that 37 % (i.e. exp[-1]) of random strengths are greater, and which is related to the mean; \(m\) is the modulus, which describes the variation of the distribution. Particle breakage is determined using the octahedral shear stress according to equation (1). The particles in the simulations have random strengths from a distribution defined by the Weibull parameters obtained for silica sand by McDowell [34], i.e. \(q_{0}=20\) MPa for the initial particles (\(d_{0}=4\) mm) and a Weibull modulus \(m=3.3\); the size-hardening law is governed by equation (4), which is used to attribute random strengths to new fragments. The details of the specimen and membrane used in the following simulations are given in Table 1. Table 1 Summary of DEM parameters for triaxial model Cementation Cement bonds are modelled using the parallel bond feature of the software [21]. These consist of a finite-sized cylindrical piece of material between the two particles, acting in parallel with the standard force-displacement contact model. Parallel bonds have been used in previous studies to model structured soils, e.g. [11, 13, 14], as well as by the authors [20, 35] in modelling cemented sand. The bonds are defined by normal and shear stiffness (in terms of stress/displacement), normal and shear strength (in terms of stress) and size, and are installed before application of the confining pressure. It is somewhat unclear how to simulate the size of cement bonds: one may consider them small relative to the particles, occurring just at the contacts and independent of particle size, or as proportional to the particle diameters, filling much of the void space. Both of these approaches seem acceptable depending on interpretation and analysis of images; in this paper, all bonds are created with the same size, equal to that of the sand particles (\(d_{bond}=d_{0}\)). Because this paper is not specifically concerned with calibration against physical tests, the parallel bond normal and shear stiffnesses are set equal to one another to minimise the number of input variables, as are the mean normal and shear strengths. The parallel bond normal stiffness is defined assuming that cement has an elastic modulus of around 30 GPa [36], and bonds are given random strengths drawn from a distribution. De Bono et al. [20, 35] showed that a Weibull distribution of strengths with a modulus of 0.5 was appropriate for modelling Portland cement; hence the parallel bond strengths here satisfy a Weibull distribution with a modulus of 0.5, with an arbitrary characteristic strength of 10 MPa.
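The two strength distributions just described can be sampled directly. The sketch below draws a random crushing strength for a fragment of a given size using the size-hardening law of equation (4), and a random parallel bond strength from the modulus-0.5 distribution; it assumes standard inverse-transform sampling via NumPy's Weibull generator, and the names used are illustrative rather than taken from the study.

```python
import numpy as np

rng = np.random.default_rng()

Q0_REF, D_REF, M_PARTICLE = 20e6, 0.004, 3.3   # q0 = 20 MPa at d0 = 4 mm, m = 3.3 [34]
BOND_Q0, M_BOND = 10e6, 0.5                    # characteristic bond strength 10 MPa, modulus 0.5

def particle_strength(d):
    """Random q strength for a particle of diameter d (Eq. 4: q0 proportional to d^(-3/m))."""
    q0 = Q0_REF * (d / D_REF) ** (-3.0 / M_PARTICLE)
    return q0 * rng.weibull(M_PARTICLE)        # characteristic value q0, modulus m

def bond_strength():
    """Random parallel bond strength (Weibull distribution with modulus 0.5)."""
    return BOND_Q0 * rng.weibull(M_BOND)

# A 3.2 mm fragment has a higher characteristic strength than its 4 mm parent:
print(Q0_REF * (0.0032 / D_REF) ** (-3.0 / M_PARTICLE) / 1e6)   # ~24.5 MPa
```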
With regard to simulating an increasing degree of cementation, analysis of experimental data could suggest altering the variation or magnitude of bond strengths and stiffness, or various combinations thereof, while analysing high-magnification images would suggest altering the quantity of bonds and/or bond size. It was demonstrated and justified in related work by the authors [20], and will also be shown here, that increasing the quantity of parallel bonds is the single most effective method of capturing the correct qualitative change in behaviour that results from an increase in cement content. Increasing the strength or stiffness of bonds fails to capture all the qualitative effects one would expect with an increase in cement content. By default, when parallel bonds are created, they are installed at existing particle contacts, as well as between particles within very close proximity. By increasing this proximity within which non-touching particles are bonded to one another, an increasing quantity of parallel bonds can be installed. Hence, an increasing degree of cementation (i.e. cement content) is modelled herein by installing a larger number of bonds, and is measured by the average number of parallel bonds per particle. For the particle breakage mechanism described (used previously in [19, 20, 37, 38]), particles are only allowed to break at discrete intervals, with a number of computational timesteps allowed between these breakages to allow the elastic energy from the overlaps to dissipate. The same approach is therefore adopted for bond breakage, i.e. the inter-particle bonds (cement) are only allowed to break at given intervals. This follows the authors' recent work on modelling the behaviour of crushable cemented sand in compression [20]; the complementary work presented here investigates the behaviour during triaxial shear. The simulations are strain controlled, i.e. the top platen is accelerated downwards, then decelerated and stopped after an increase in axial strain of 0.01 %. During application of the confining pressure, and then immediately following each strain increment, the cemented sand particles are checked and, if the normal or shear stress at any contact exceeds the strength of the cement bond (if one is present), the bond is considered broken and removed. After all the contacts have been checked and the bonds allowed to break, the stresses within the particles are checked and the particles themselves allowed to break. A number of timesteps (inversely proportional to the size of the numerical timestep) are then completed, over the course of which no bonds or particles may break; this is to allow the artificial energy from new overlapping particles to dissipate. These two processes are repeated (allowing fragments to break multiple times if necessary) until no further breakages occur, after which the next strain increment is applied. This increment-and-check cycle is sketched schematically below. With regard to the influence that cementation (i.e. particle bonding) has on particle crushing, there is no conclusive evidence available in the literature. Coop and Atkinson [1] suggested that bond breakage precedes or coincides with particle breakage; however, by default in the simulations, particles may break regardless of whether they are bonded or not. Additionally, if a particle breaks, any bonds associated with it are automatically removed when the 'broken' particle is deleted and replaced by new fragments.
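The increment-and-check cycle described above can be laid out as a control-flow sketch. This is only an outline of the procedure as described in the text: the `ops` object stands in for hypothetical solver operations (moving the platen, removing overstressed bonds, splitting overstressed particles, cycling the timestep) and is not a real PFC3D or library API, and the settling-step count is illustrative. The restriction placed on bonded particles in this work is described in the next paragraph.

```python
def shear_to(target_strain, sample, ops, increment=1e-4, settling_steps=10_000):
    """Strain-controlled shearing loop: 0.01 % axial strain per increment,
    with cement bond breakage checked before particle breakage.
    `ops` bundles hypothetical solver operations (not a real API)."""
    strain = 0.0
    while strain < target_strain:
        ops.apply_strain_increment(sample, increment)    # move the top platen
        strain += increment
        while True:
            # 1) remove cement (parallel) bonds whose normal or shear stress
            #    exceeds their strength; returns the bonds that broke
            broken_bonds = ops.remove_overstressed_bonds(sample)
            # 2) replace particles whose octahedral shear stress q >= strength
            #    by two fragments (fragments may themselves break later)
            crushed = ops.split_overstressed_particles(sample)
            if not broken_bonds and not crushed:
                break                                    # no further breakages
            # 3) cycle with breakage suppressed so the artificial energy of
            #    the new overlapping fragments can dissipate
            ops.cycle(sample, settling_steps)
```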
It is generally believed that during normal compression, cementation reduces particle crushing (e.g. [6]), based on the observation that increasing the cement content of sand reduces the compressibility. This proposition seems feasible if one envisages a sand particle bonded by cement to neighbouring particles: the cement will increase the contact area, which would reduce the induced tensile stress [31]. When modelling crushable cemented sand, de Bono and McDowell [20] investigated various configurations of bond and particle breakage, and found that if the presence of cementation prevented particles from fracturing, good qualitative agreement with experiments could be observed, and cementation resulted in a reduction of particle crushing. This method is adopted here, and is achieved by simply not allowing any particle to break if there are one or more parallel bonds attached to it, meaning that bond breakage must precede particle breakage. This approach is reasonable if one considers a particle coated in cement, or heavily bonded: for the particle to be loaded diametrically, which gives the highest octahedral shear stress, the cementation will have to be broken first. The triaxial test variables monitored and recorded during the simulations are the deviatoric stress, \(q\), the axial strain, \(\varepsilon _\mathrm{a}\), and the volumetric strain, \(\varepsilon _\mathrm{v}\). The deviatoric stress is measured as the difference between the axial stress (the major principal stress) and the confining pressure (the minor principal stress), where the axial stress is obtained from the average stress acting on the top and bottom platens. The volumetric strain is calculated using the current and original volumes of the sample, the volume of the sample being calculated from the locations of the membrane particles [18]. Cemented sand Experimental data for triaxial tests on sand bonded with various quantities of Portland cement are shown in Fig. 2, from Marri et al. [6]. The figure shows graphs of deviatoric stress, stress ratio and volumetric strain plotted against axial strain for Portaway sand bonded with various amounts of Portland cement (measured by percentage of dry mass), sheared under a confining pressure of 1 MPa. The figure shows that the addition of cement causes a peak stress to occur and increases the overall dilation. Increasing the amount of cement magnifies these effects, causing the peak stress to increase and become more distinguished, and reducing the strain at which this stress occurs. In general, increasing the cement content causes the behaviour to become more brittle. Experimental triaxial results for sand at 1 MPa confining pressure with a range of Portland cement contents: deviatoric stress (a), stress ratio (b) and volumetric response (c), versus strain [6] Figure 3 shows the equivalent set of results from simulations of a crushable sand, sheared under 1 MPa of confining pressure, with varying levels of bonding. An increasing degree of cementation is modelled by increasing the quantity of bonds within the material, measured by the average number of parallel bonds per particle. The figure shows the deviatoric stress, stress ratio and volumetric strain as a function of axial strain for simulations with an average of 0 (unbonded), 5, 10 and 15 parallel bonds per particle. All four simulations start with an initially mono-disperse material (\(d_{0}=4\) mm) and, as mentioned above, bond breakage must precede particle crushing.
Triaxial results for simulations of crushable sand with an increasing degree of cementation: deviatoric stress (a), stress ratio (b) and volumetric response (c) versus strain The results show the correct trend that one would expect from an increase in cement content: there is an increase in the peak and maximum deviatoric stress, a higher initial stiffness and there is a more dilative response; in general the material displays more brittle stress–strain behaviour. The peak deviatoric stress appears to occur at slightly earlier axial strains with an increasing degree of cementation, in harmony with typical experimental results such as those shown in Fig. 2. The deviatoric stress responses do not appear to completely converge at large strains, displaying agreement with the experimental results, which also do not converge, even at strains as high as 30 %. Both numerical and experimental stress ratio graphs display the same pattern of behaviour, and reveal slightly different final values of \(\eta \) for the various materials. The most heavily cemented samples display the highest final stress ratios, suggesting that there is still active cementation, affecting the macroscopic grading at high strains under such a confining pressure. This is confirmed by the quantity of intact parallel bonds: at the end point of the simulations (i.e. \(\varepsilon _{a}=20\) %), the lightly, medium, and heavily cemented simulations had 1,029, 1,262 and 1,738 parallel bonds remaining respectively. Repeating the tests with a higher confining pressure would be expected to reduce the difference in the final values of stress ratio. Figure 4 shows the corresponding results for simulations using unbreakable particles. The stress–strain behaviour is largely the same as above, with the introduction of cement producing the same results: an increase in peak/maximum deviatoric stress, higher initial stiffness and increased dilation; with these effects increasing with cement content. The unbreakable simulations however appear to display very slightly higher values of peak stress and dilation. This is because as particles are unable to break in these simulations, they will need to rearrange to accommodate the macroscopic strain by sliding and rolling over one another (as opposed to breaking), requiring additional dilation. However, the difference is slight, and if smaller, more realistic initial particles were used, one would expect there to be even less difference between the crushable and non-crushable simulations—due to the smaller particles being stronger and therefore less likely to break. Triaxial results for simulations of non-crushable sand with an increasing degree of cementation: deviatoric stress (a), stress ratio (b) and volumetric response (c) versus strain At the end of these simulations (20 % axial strain), the quantities of intact bonds are 1,048, 1,352, and 1,682, respectively for the lightly, medium and heavily cemented non-crushable materials; similar to the crushable cemented sand simulations above. Increasing the cement content (by increasing the quantity of parallel bonds) causes the material to become more brittle; this is also evident in the deformation. Figure 5 displays the particle rotations on a vertical cutting plane at approximately 4 % axial strain (around the point of maximum dilation). The rotations are given for the unbonded and most heavily bonded simulations, for both crushable and non-crushable simulations. Both unbonded simulations in Fig. 
5a display no clear pattern, while the heavily cemented simulations in Fig. 5b display localised failure with mild shear planes. This indicates that the ability for particles to crush has little effect on the failure mode and the overall deformation of cemented sand. Particle rotations on a vertical cutting plane through the samples, at 4 % axial strain: unbonded simulations (a) and heavily cemented simulations (b), with unbreakable particles (i) and breakable particles (ii). Dark indicates the most rotation Marri et al. [6] analysed particle breakage resulting from their high-pressure triaxial tests on cemented and uncemented sand. They provided photographic images of samples after shearing to approximately 30 % axial strain, key examples of which are given in Fig. 6. The two images compare an uncemented sample of sand (a) and a sample with 15 % content of Portland cement (b); both sheared under a confining pressure of 20 MPa. Marri et al. [6] suggested that the amount of particle crushing appeared less in the cemented soil—although this can be disputed, as both images clearly reveal what appear to be broken particles. Furthermore, in the case of the cemented sand, the cementation obscures the particles, which may conceal further particle breakage. Photographs showing close-ups of uncemented sand (a) and cemented sand (b) after drained triaxial shearing, under 20 MPa confining pressure [6] With regards to the crushable simulations, the unbonded and most heavily cemented samples (with an average of 0 and 15 bonds per particle respectively) sheared under 1 MPa confining pressure are shown in Fig. 7a, after shearing to 20 % axial strain. Although the broken fragments are highlighted, no major differences with regards to the amount of crushing are externally visible. However, the cemented material has experienced a total of 244 breakages. Overall, 140 original particles have undergone fragmentation, meaning 4.18 % by mass has crushed. This is markedly more crushing than the unbonded specimen, in which only 1.13 % of the sample has broken at the same stage. These numbers can be confirmed visually in Fig. 7b, which presents similar images of the samples, but reveal the inner breakages. These latter images also reveal a small number of significantly smaller particles in the heavily cemented material—indicating that some fragments have broken repeatedly; a phenomenon which is not observed in the uncemented sample. External views of the samples with the broken fragments highlighted (a), and inner views showing all broken fragments (b); for the unbonded simulation (i) and heavily cemented (ii) Considering just the heavily cemented sample, the broken fragments are shown again in Fig. 8a, in which a horizontal view at 4 % axial strain is presented (all fragments are displayed, throughout the sample), taken from the same angle as Fig. 5b, ii. The crushing does not appear to occur uniformly throughout the sample, rather it appears very localised. The fragments indicate a shear plane, in harmony with the Fig. 5b, ii, which displays the particle rotations from an identical point of view. Again, from this same point of view, Fig. 8b shows the remaining unbroken parallel bonds at the same axial strain (for clarity, only bonds on a vertical cutting plane through the centre of the sample are displayed). Most bond breakages have occurred in the same area as crushing, which is unsurprising considering bond breakage must precede particle fragmentation. 
Particle crushing, bond breakage and the particle rotations all conform to the same shear plane, showing that the deformation and failure are brittle and highly localised. At the same strain, the uncemented simulation has experienced only 8 breakages. Images of the heavily cemented, crushable sample after 4 % axial strain. Inner view with all broken fragments highlighted (a), and view of the remaining intact parallel bonds on a vertical cutting plane (b) Therefore, by using the intrusive capability of DEM, the simulations suggest that the presence of cement, contrary to what Marri et al. [6] suggested, actually increases the degree of crushing during shear, although the crushing is localised and concentrated around the shear plane. This proposition is supported by the fact that Marri et al. [6] based their suggestion on a perhaps subjective interpretation of images, and by the fact that the images themselves were non-intrusive and did not reveal interior micro-scale behaviour. Increasing the degree of cementation (cement content) in the simulations, by increasing the quantity of parallel bonds, increases not only the overall number of breakages but also the percentage of the mass of the original material that undergoes breakage, with 1.13, 2.36, 2.96 and 4.18 % by mass of the original samples undergoing crushing in the simulations with averages of 0, 5, 10 and 15 bonds per particle respectively. However, the introduction of crushing does not largely affect the overall stress–strain behaviour of the cemented sand, at least at a confining pressure of 1 MPa. The observation here that increasing the degree of cementation increases the amount of crushing during shear may seem somewhat contrary to the conclusions from the authors' work on one-dimensional compression [20], in which cementation was shown to decrease the amount of crushing for a given applied stress. The principal difference, it seems, is that during the stress-controlled one-dimensional compression simulations, the cemented sand could exhibit deformation in only one direction (the z-axis), whereas during the strain-controlled triaxial simulations the specimens could deform freely in all three directions. If one analyses the actual failure of the triaxial specimens, it can be seen that increasing the quantity of bonds rendered the material more brittle and changed the mode of deformation and failure. A high level of cementation resulted in localised failure in the form of a shear plane (regardless of whether the particles could break or not), across which parallel bonds broke as the macroscopic strain was applied. According to classical soil mechanics, a shear plane separates two intact 'blocks' of soil, and after rupture the soil only shears on this plane, which becomes much weaker than the rest of the sample and continues to distort. As shearing only takes place between these two intact blocks, the particles on this plane are subjected to much larger shear stresses than elsewhere in the sample. This can be confirmed by inspecting the internal contact forces between the sand particles, which are displayed in Fig. 9 for both the uncemented and heavily cemented crushable simulations. Figure 9a shows the contact forces on a vertical cutting plane through the uncemented sample at 4 % axial strain, and displays a fairly uniform distribution. Figure 9b shows the equivalent for the heavily cemented sample, and reveals both larger contact forces and a much more irregular distribution.
The localised concentrations of internal forces agree with the shear plane revealed in both Figs. 5b, ii and 8, all of these images giving an identical point of view of the sample. The concentration of shear stresses explains the increased breakage on the shear plane shown in Fig. 8a. If one considers the uncemented crushable simulation, as axial strain is applied to the specimen there is fairly uniform deformation, resulting in barrelling failure, as indicated by Fig. 7a, i. To accommodate the macroscopic strain, all particles are free to move relative to each other, by sliding, rotating and rolling over one another. This means that local shearing takes place throughout the whole sample (on a particle-to-particle scale), and therefore almost all particles are subjected to local shear stresses; however, the individual particle stresses will be relatively uniform, and not as high as those on the shear plane in the heavily cemented, brittle material. In the simulations of one-dimensional normal compression, by comparison, all samples regardless of cement content exhibited the same mode of failure and deformation, during which, as the applied stress increased, so did all the local particle stresses. This is in contrast to the triaxial simulations, in which the bonded particles within the intact 'blocks' were largely not subjected to increasing shear stresses as the test progressed.

Fig. 9 Images of the uncemented (a) and heavily cemented (b) crushable samples at 4 % axial strain, showing the contact forces on the particles on a vertical cutting plane. The thickness of the lines denotes the magnitude of each contact force; the maximum forces are 168 and 386 N for the uncemented and cemented samples, respectively.

Conclusions

Sand has been modelled using crushable particles, which break according to the octahedral shear stress induced from multiple contacts, using the mechanism developed by McDowell and de Bono [19]. Cementation has been modelled by incorporating parallel bonds, the presence of which prevented a bonded particle from fracturing. In general, the presence of parallel bonds resulted in the correct qualitative change in behaviour that is observed in laboratory tests, and increasing the degree of cementation, by increasing the quantity of bonds, magnified these effects. The most heavily cemented material exhibited the most brittle failure, with a clear shear plane visible, which was manifested in the location of broken fragments, broken parallel bonds, and the particle rotations. In the cemented material, an increase in the degree of crushing was observed with increasing cement content; this observation is attributed to the change in deformation and failure from ductile to brittle. Particle breakage appeared localised and concentrated internally on the failure plane, further highlighting the importance of using a flexible boundary.

References

1. Coop, M.R., Atkinson, J.H.: The mechanics of cemented carbonate sands. Géotechnique 43, 53–67 (1993)
2. Huang, J.T., Airey, D.W.: Properties of artificially cemented carbonate sand. J. Geotech. Geoenviron. Eng. 124, 492–499 (1998)
3. Schnaid, F., Prietto, P.D.M., Consoli, N.C.: Characterization of cemented sand in triaxial compression. J. Geotech. Geoenviron. Eng. 127, 857–868 (2001)
4. Asghari, E., Toll, D.G., Haeri, S.M.: Triaxial behaviour of a cemented gravely sand, Tehran alluvium. Geotech. Geol. Eng. 21, 1–28 (2003)
5. Haeri, S.M., Hosseini, S.M., Toll, D.G., Yasrebi, S.S.: The behaviour of an artificially cemented sandy gravel. Geotech. Geol. Eng. 23, 537–560 (2005)
6. Marri, A., Wanatowski, D., Yu, H.S.: Drained behaviour of cemented sand in high pressure triaxial compression tests. Geomech. Geoeng. 7, 159–174 (2012)
7. Hardin, B.O.: Crushing of soil particles. J. Geotech. Eng. 111, 1177–1192 (1985)
8. Yamamuro, J.A., Lade, P.V.: Drained sand behavior in axisymmetric tests at high pressures. J. Geotech. Eng. 122, 109–119 (1996)
9. Jiang, M., Leroueil, S., Konrad, J.: Yielding of microstructured geomaterial by distinct element method analysis. J. Eng. Mech. 131, 1209–1213 (2005)
10. Jiang, M., Yu, H., Leroueil, S.: A simple and efficient approach to capturing bonding effect in naturally microstructured sands by discrete element method. Int. J. Numer. Methods Eng. 69, 1158–1193 (2007)
11. Wang, Y.H., Leung, S.C.: Characterization of cemented sand by experimental and numerical investigations. J. Geotech. Geoenviron. Eng. 134, 992–1004 (2008)
12. Utili, S., Nova, R.: DEM analysis of bonded granular geomaterials. Int. J. Numer. Anal. Methods Geomech. 32, 1997–2031 (2008)
13. Camusso, M., Barla, M.: Microparameters calibration for loose and cemented soil when using particle methods. Int. J. Geomech. 9, 217–229 (2009)
14. Potyondy, D.O., Cundall, P.A.: A bonded-particle model for rock. Int. J. Rock Mech. Min. Sci. 41, 1329–1364 (2004)
15. Wang, Y., Tonon, F.: Modeling triaxial test on intact rock using discrete element method with membrane boundary. J. Eng. Mech. 135, 1029–1037 (2009)
16. Bolton, M.D., Nakata, Y., Cheng, Y.P.: Micro- and macro-mechanical behaviour of DEM crushable materials. Géotechnique 58, 471–480 (2008)
17. Lim, W.L., McDowell, G.R.: The importance of coordination number in using agglomerates to simulate crushable particles in the discrete element method. Géotechnique 57, 701–705 (2007)
18. De Bono, J., McDowell, G., Wanatowski, D.: Discrete element modelling of a flexible membrane for triaxial testing of granular material at high pressures. Géotech. Lett. 2, 199–203 (2012)
19. McDowell, G.R., de Bono, J.P.: On the micro mechanics of one-dimensional normal compression. Géotechnique 63, 895–908 (2013)
20. De Bono, J.P., McDowell, G.R.: Discrete element modelling of one-dimensional compression of cemented sand. Granul. Matter 16, 79–90 (2013)
21. Itasca: Particle Flow Code in 3 Dimensions. Itasca Consulting Group Inc., Minneapolis (2005)
22. Harireche, O., McDowell, G.R.: Discrete element modelling of yielding and normal compression of sand. Géotechnique 52, 299–304 (2002)
23. Åström, J.A., Herrmann, H.J.: Fragmentation of grains in a two-dimensional packing. Eur. Phys. J. B 5, 551–554 (1998)
24. Tsoungui, O., Vallet, D., Charmet, J.: Numerical model of crushing of grains inside two-dimensional granular materials. Powder Technol. 105, 190–198 (1999)
25. Lobo-Guerrero, S., Vallejo, L.E.: Crushing a weak granular material: experimental numerical analyses. Géotechnique 55, 245–249 (2005)
26. Ben-Nun, O., Einav, I.: The role of self-organization during confined comminution of granular materials. Philos. Trans. A Math. Phys. Eng. Sci. 368, 231–247 (2010)
27. Ben-Nun, O., Einav, I., Tordesillas, A.: Force attractor in confined comminution of granular materials. Phys. Rev. Lett. 104, 108001 (2010)
28. McDowell, G.R., Harireche, O.: Discrete element modelling of soil particle fracture. Géotechnique 52, 131–135 (2002)
29. Cheng, Y.P., Bolton, M.D., Nakata, Y.: Crushing and plastic deformation of soils simulated using DEM. Géotechnique 54, 131–141 (2004)
30. Bowman, E.T., Soga, K., Drummond, W.: Particle shape characterisation using Fourier descriptor analysis. Géotechnique 51, 545–554 (2001)
31. Jaeger, J.C.: Failure of rocks under tensile conditions. Int. J. Rock Mech. Min. Sci. Geomech. Abstr. 4, 219–227 (1967)
32. McDowell, G.R., Bolton, M.D.: On the micromechanics of crushable aggregates. Géotechnique 48, 667–679 (1998)
33. McDowell, G.R., Amon, A.: The application of Weibull statistics to the fracture of soil particles. Soils Found. 40, 133–141 (2000)
34. McDowell, G.R.: On the yielding and plastic compression of sand. Soils Found. 42, 139–145 (2002)
35. De Bono, J.P., McDowell, G.R., Wanatowski, D.: Modelling cemented sand using DEM. In: Advances in Transportation Geotechnics II. Taylor and Francis, Sapporo (2012)
36. Ashby, M.F., Jones, D.R.: Engineering Materials 2: An Introduction to Microstructures, Processing and Design. Pergamon Press, New York (1986)
37. McDowell, G.R., de Bono, J.P., Yue, P., Yu, H.-S.: Micro mechanics of isotropic normal compression. Géotech. Lett. 3, 166–172 (2013)
38. McDowell, G.R., de Bono, J.P.: A new creep law for crushable aggregates. Géotech. Lett. 3, 103–107 (2013)

Acknowledgements: The authors are grateful to the Engineering and Physical Sciences Research Council (EPSRC) for their financial support through the Doctoral Training fund.

Author information: J. P. de Bono, G. R. McDowell & D. Wanatowski, University of Nottingham, Nottingham, UK. Correspondence to G. R. McDowell.

Cite this article: de Bono, J.P., McDowell, G.R. & Wanatowski, D.: DEM of triaxial tests on crushable cemented sand. Granul. Matter 16, 563–572 (2014). https://doi.org/10.1007/s10035-014-0502-8. Issue Date: August 2014.
when trees fall...

Learning to Dequantise with Truncated Flows
A pet idea that I've been coming back to time and again is doing autoregressive language modelling with "stochastic embeddings". Each word would have a distribution over the embedding that represented it, instead of a deterministic embedding. The thought would be that modelling word embeddings in this way would better represent the ability of word meanings to overlap without completely subsuming one another, or in some cases to have multi-modal representations because of the distinct word senses in which they are used ('bank' to refer to the 'land alongside a body of water' or 'a financial institution').

Vectorising The Inside Algorithm
This one goes by several names: CYK, Inside, matrix chain ordering problem. Whatever you call it, the "shape" of the algorithm looks the same, and it's ultimately used to enumerate over all possible full binary trees. In the matrix chain ordering problem, the tree defines the pairwise order in which matrices are multiplied, $$(A(BC))(DE)$$ while CYK constructs a tree from the bottom up with Context-Free Grammar rules that would generate the observed sentence.

Smoothing With Backprop
If you've ever implemented forward-backward in an HMM (likely for a class assignment), you know this is an annoying exercise fraught with off-by-one errors or underflow issues. A fun fact that has since been made concrete by Jason Eisner's tutorial paper in 2016 is that backpropagation is forward-backward: if you implemented the forward pass for marginalisation in an HMM, then performing backpropagation will net you the result of forward-backward, or the smoothing result.

Traversing Connectionist Trees
We've just put our paper "Recursive Top-Down Production for Sentence Generation with Latent Trees" up on ArXiv. The code is here. The paper has been accepted to EMNLP Findings (slightly disappointing for me, but such is life). This has been an interesting project to work on. Automata theory has been an interesting topic for me, coming up close behind machine learning. Context-free grammars (CFGs), in particular, come up often when studying language, and grammars are often written as CFG rewriting rules.

On "Better Exploiting Latent Variables in Text Modeling"
I've been working on latent variable language models for some time, and intend to make them the topic of my PhD. So when Google Scholar recommended "Better Exploiting Latent Variables in Text Modeling", I was naturally excited to see that this line of work has continued beyond Bowman's paper on VAE language models. Of course, since then, there have been multiple improvements on the original model. More recently, Yoon Kim from Harvard has been publishing papers on this topic that have been particularly interesting.

Can Chinese Rooms Think?
There's a tendency as a machine learning or CS researcher to get into a philosophical debate about whether machines will ever be able to think like humans. This argument goes so far back that the people who started the field have had to grapple with it. It's also fun to think about, especially with sci-fi always portraying AI vs human world-ending/apocalypse showdowns, and humans always prevailing because of love or friendship or humanity. But there's a tendency for people in such a debate to wind up talking past each other.

Computing Log Normal for Isotropic Gaussians
Consider a matrix $\mathbf{X}$ with rows of datapoints $\mathbf{x_i}$ which are $(n, d)$.
The matrix $\mathbf{M}$ is made up of the $\boldsymbol{\mu}_j$ of $k$ different Gaussian components. The task is to compute the log probability of each of these $k$ components for all $n$ data points.

In [1]: import theano
        import theano.tensor as T
        import numpy as np
        import time
        X = T.matrix('X')
        M = T.matrix('M')

(A NumPy sketch of the full computation is given at the end of this page.)

Details of the Hierarchical VAE (Part 2)
So, as a recap, here's a (badly drawn) diagram of the architecture: The motivation to use a hierarchical architecture for this task was two-fold:
1. Learning a vanilla encoder-decoder type of architecture would be the basic deep learning go-to model for such a task. However, the noise modelled if we perform maximum likelihood is only at the pixel level. This seems inappropriate, as it implies there is one "right answer", with some pixel colour variations, given an outer context. The hierarchical VAE models different uncertainties at different levels of abstraction, so it seems like a good fit.
2. I wanted to investigate how the hierarchical factorisation of the latent variables affects learning in such a model.
It turns out certain layers overfit, or I would diagnose the problem as overfitting, and I'm unsure how to remedy those problems.

Samples from the Hierarchical VAE
Each of the following plots shows samples from the conditional VAE that I'm using for the inpainting task. As expected with results from a VAE, they're blurry. However, the fun thing about having a hierarchy of latent variables is that I can freeze all the layers except for one, and vary that just to see the type of noise it models. The pictures are generated by using the $\mu_{z_l}(z_{l-1})$ for all layers except for the $i$-th layer. $i=1$

© 2022 when trees fall...
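Referring back to the "Computing Log Normal for Isotropic Gaussians" excerpt above: the original post builds the computation in Theano, but the same vectorised idea can be sketched in plain NumPy. This is a minimal illustration of the computation being described, not the post's own code; the function name and the assumption of a single shared variance `sigma2` are mine.

```python
import numpy as np

def log_normal_isotropic(X, M, sigma2):
    """Log-density of k isotropic Gaussians N(mu_j, sigma2 * I), evaluated
    at n data points.

    X      : (n, d) array of data points
    M      : (k, d) array of component means mu_j
    sigma2 : scalar variance shared by all dimensions and components
    Returns an (n, k) array: entry (i, j) is log N(x_i | mu_j, sigma2 * I).
    """
    n, d = X.shape
    # Squared distances ||x_i - mu_j||^2 via ||x||^2 - 2 x.mu + ||mu||^2,
    # computed for all (i, j) pairs without explicit Python loops.
    sq_dist = (
        np.sum(X ** 2, axis=1, keepdims=True)   # (n, 1)
        - 2.0 * X @ M.T                         # (n, k)
        + np.sum(M ** 2, axis=1)                # (k,) broadcast to (n, k)
    )
    log_norm_const = -0.5 * d * np.log(2.0 * np.pi * sigma2)
    return log_norm_const - sq_dist / (2.0 * sigma2)

# Tiny usage example with made-up shapes.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))    # n = 5 points in d = 3 dimensions
M = rng.normal(size=(4, 3))    # k = 4 component means
print(log_normal_isotropic(X, M, sigma2=1.0).shape)   # (5, 4)
```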
When gaming is NP-hard
by @ulaulaman about #candycrush #bejeweled #shariki #nphard #computerscience
Shariki is a puzzle game developed by the Russian programmer Eugene Alemzhin in 1994. The rules are simple: (...) matching three or more balls of the same color in line (vertical or horizontal). These balls then explode and new ones appear in their place. The first clone of Shariki was Tetris Attack, a fusion between Shariki and the famous Tetris, which was also developed in the Soviet Union, by Alexey Pajitnov. But the most famous clone is Bejeweled (2001) by PopCap Games, from which Candy Crush Saga is derived. This March, Toby Walsh and the Italian team of Luciano Gualà, Stefano Leucci and Emanuele Natale proved that Candy Crush and other similar games are NP-hard:
The twentieth century has seen the rise of a new type of video games targeted at a mass audience of "casual" gamers. Many of these games require the player to swap items in order to form matches of three and are collectively known as tile-matching match-three games. Among these, the most influential one is arguably Bejeweled in which the matched items (gems) pop and the above gems fall in their place. Bejeweled has been ported to many different platforms and influenced an incredible number of similar games. Very recently one of them, named Candy Crush Saga enjoyed a huge popularity and quickly went viral on social networks. We generalize this kind of games by only parameterizing the size of the board, while all the other elements (such as the rules or the number of gems) remain unchanged. Then, we prove that answering many natural questions regarding such games is actually NP-Hard. These questions include determining if the player can reach a certain score, play for a certain number of turns, and others.
The Italian team also made a web-based implementation of their technique available.
Toby Walsh (2014). Candy Crush is NP-hard, arXiv: 1403.1911v1
Luciano Gualà, Stefano Leucci & Emanuele Natale (2014). Bejeweled, Candy Crush and other Match-Three Games are (NP-)Hard, arXiv: 1403.5830v1
Labels: abstract, arxiv, bejeweled, candy crush, computer science, mathematics, np hard, np problem, shariki, video game

A blink of an eye
By Gianluigi Filippelli on Monday, March 17, 2014
Almost 14 billion years ago, the universe we inhabit burst into existence in an extraordinary event that initiated the Big Bang. In the first fleeting fraction of a second, the universe expanded exponentially, stretching far beyond the view of our best telescopes. All this, of course, was just theory. Researchers from the BICEP2 collaboration today announced the first direct evidence for this cosmic inflation. Their data also represent the first images of gravitational waves, or ripples in space-time. These waves have been described as the "first tremors of the Big Bang." Finally, the data confirm a deep connection between quantum mechanics and general relativity. (from the press release)
In physics, gravitational waves are ripples in the curvature of spacetime that propagate as a wave, travelling outward from the source. Predicted in 1916 by Albert Einstein to exist on the basis of his theory of general relativity, gravitational waves theoretically transport energy as gravitational radiation. Sources of detectable gravitational waves could possibly include binary star systems composed of white dwarfs, neutron stars, or black holes.
The existence of gravitational waves is a possible consequence of the Lorentz invariance of general relativity since it brings the concept of a limiting speed of propagation of the physical interactions with it. Gravitational waves cannot exist in the Newtonian theory of gravitation, in which physical interactions propagate at infinite speed. (from Wikipedia)
Cosmic inflation was introduced by Alan Guth and Andrei Linde in 1981:
One of the intriguing consequences of inflation is that quantum fluctuations in the early universe can be stretched to astronomical proportions, providing the seeds for the large scale structure of the universe. The predicted spectrum of these fluctuations was calculated by Guth and others in 1982. These fluctuations can be seen today as ripples in the cosmic background radiation, but the amplitude of these faint ripples is only about one part in 100,000. Nonetheless, these ripples were detected by the COBE satellite in 1992, and they have now been measured to much higher precision by the WMAP satellite and other experiments. The properties of the radiation are found to be in excellent agreement with the predictions of the simplest models of inflation. (from MIT)
So our universe was born in a quantum blink of an eye, like a solution of the equation of everything. In the following, a Storify with a collection of links about one of the most important discoveries about the... universe!
Labels: bicep2, cosmic inflation, gravitational wave, universe

A brief history of pi: part 2
by @ulaulaman about #piday #pi #MachinFormula #EulerIdentity
Today is pi day, so I continue the brief history of $\pi$. After the introduction of $\pi$ into mathematics, one of the quests linked with the calculation of its digits is research into its nature, or in other words into what kind of number it is. The classification of numbers is simple at first: we start with the natural numbers and the integers (positive and negative), and from these we can define the rational numbers as the numbers generated by the ratio between two integers. Every rational number can be expressed as $\frac{a}{b}$, with $a$, $b$ integers and $b$ non-zero. Johann Heinrich Lambert was the first to show the irrational nature of $\pi$, in 1761, in his Mémoire sur quelques propriétés remarquables des quantités transcendantes circulaires et logarithmiques. His proof starts from the continued-fraction expansion of the tangent function, which can be written in this way: \[\tan(x) = \cfrac{x}{1 - \cfrac{x^2}{3 - \cfrac{x^2}{5 - \cfrac{x^2}{7 - {}\ddots}}}}\] Lambert proved that if $x$ is non-zero and rational, then this expression must be irrational. So the irrationality of $\pi$ follows from $\tan (\pi /4) = 1$. A good synthesis of Lambert's proof is on The world of $\pi$. In 1997 Laczkovich proposed a simplification of this demonstration, while another variation was proposed in 2009 by Li Zhou, using integral calculus. In particular, the second demonstration is inspired by the proof that Charles Hermite wrote in two letters to Paul Gordan and Carl Borchardt in 1873. According to Harold Jeffreys in Scientific Inference (1973), a simplification of this proof, using a reductio ad absurdum, was proposed by Mary Cartwright. Another one-page proof of the irrationality of $\pi$ is due to Ivan Niven in 1946. On the other hand, the transcendence of $\pi$ is a direct consequence of the Lindemann–Weierstrass theorem: if $\alpha_1, \cdots, \alpha_n$ are algebraic numbers that are linearly independent over the rationals, then $e^{\alpha_1}, \cdots, e^{\alpha_n}$ are algebraically independent over the rationals,
where an algebraic number is a solution of a polynomial equation with rational coefficients. In 1882 Lindemann, using this theorem, showed that $e$ is transcendental and, as a consequence of Euler's identity, that $\pi$ is also transcendental: if $\pi$ were algebraic, then $i\pi$ would be a non-zero algebraic number, so $e^{i\pi}$ would be transcendental; but Euler's identity gives $e^{i\pi} = -1$, which is algebraic, so $\pi$ cannot be algebraic.
Labels: euler identity, machin formula, mathematics, pi day

Stephen Hawking and the (cosmological) Riemann's zeta function
Following Emilio Elizalde (read this presentation in pdf), I found a paper by Stephen Hawking in which he used the Riemann zeta function:
This paper describes a technique for regularizing quadratic path integrals on a curved background spacetime. One forms a generalized zeta function from the eigenvalues of the differential operator that appears in the action integral. The zeta function is a meromorphic function and its gradient at the origin is defined to be the determinant of the operator. This technique agrees with dimensional regularization where one generalises to n dimensions by adding extra flat dimensions. The generalized zeta function can be expressed as a Mellin transform of the kernel of the heat equation which describes diffusion over the four dimensional spacetime manifold in a fifth dimension of parameter time. Using the asymptotic expansion for the heat kernel, one can deduce the behaviour of the path integral under scale transformations of the background metric. This suggests that there may be a natural cut off in the integral over all black hole background metrics. By functionally differentiating the path integral one obtains an energy momentum tensor which is finite even on the horizon of a black hole. This energy momentum tensor has an anomalous trace.
Hawking used the following version of the zeta function: \[\zeta (s) = \sum_n \lambda_n^{-s}\] where $\lambda_n$ are the eigenvalues of a given operator $A$, constructed using the background fields of the spacetime. In four dimensions the function will converge for $\Re (s) > 2$. (A small worked example of this regularisation is sketched at the end of this page.) On the use of the Riemann zeta function in physics, you could also read Effective Lagrangian and energy-momentum tensor in de Sitter space.
Hawking S.W. (1977). Zeta function regularization of path integrals in curved spacetime, Communications in Mathematical Physics, 55 (2) 133-148. DOI: 10.1007/BF01626516 (pdf)
Labels: abstract, cosmology, stephen hawking, zeta function

Extracting energy from a black hole
The Penrose process is a process theorised by Roger Penrose wherein energy can be extracted from a rotating black hole. That extraction is made possible because the rotational energy of the black hole is located, not inside the event horizon of the black hole, but on the outside of it in a region of the Kerr spacetime called the ergosphere, a region in which a particle is necessarily propelled in locomotive concurrence with the rotating spacetime. All objects in the ergosphere become dragged by a rotating spacetime. In the process, a lump of matter enters into the ergosphere of the black hole, and once it enters the ergosphere, it is split into two. The momentum of the two pieces of matter can be arranged so that one piece escapes to infinity, whilst the other falls past the outer event horizon into the hole. The escaping piece of matter can possibly have greater mass-energy than the original infalling piece of matter, whereas the infalling piece has negative mass-energy. In summary, the process results in a decrease in the angular momentum of the black hole, and that reduction corresponds to a transference of energy whereby the momentum lost is converted to energy extracted.
The process obeys the laws of black hole mechanics. A consequence of these laws is that if the process is performed repeatedly, the black hole can eventually lose all of its angular momentum, becoming non-rotating, i.e. a Schwarzschild black hole. Demetrios Christodoulou calculated an upper bound for the amount of energy that can be extracted by the Penrose process. Reva Kay Williams used the Penrose process to explain the collimated and asymmetrical jets from some space objects, such as rotating black holes:
Over the past three decades, since the discovery of quasars, mounting observational evidence has accumulated that black holes indeed exist in nature. In this paper, I present a theoretical and numerical (Monte Carlo) fully relativistic 4-D analysis of Penrose scattering processes (Compton and $\gamma \gamma \rightarrow e^+ e^-$) in the ergosphere of a supermassive Kerr (rotating) black hole. These model calculations surprisingly reveal that the observed high energies and luminosities of quasars and other AGNs, the collimated jets about the polar axis, and the asymmetrical jets (which can be enhanced by relativistic Doppler beaming effects), all, are inherent properties of rotating black holes. That is, from this analysis, it is shown that the Penrose scattered escaping particles exhibit tightly wound coil-like cone distributions (highly collimated jet distributions) about the polar axis, with helical polar angles of escape varying from 0.5° to 30° for the highest energy particles. It is also shown that the gravitomagnetic (GM) field, which causes the dragging of inertial frames, exerts a force acting on the momentum vectors of the incident and scattered particles, causing the particle emission to be asymmetrical above and below the equatorial plane, thus breaking the reflection symmetry of the Kerr metric (above and below the equatorial plane). When the accretion disk is assumed to be a two-temperature bistable thin disk/ion corona, recently referred to as an advection dominated accretion flow (ADAF), energies as high as 54 GeV can be attained by these Penrose processes alone; and when relativistic beaming is included, energies in the TeV range can be achieved, agreeing with observations of some BL Lac objects. When this model is applied specifically to quasars 3C 279 and 3C 273, their observed high energy luminosity spectra can be duplicated and explained. Moreover, this Penrose energy extraction model can be applied to any size black hole, irrespective of the mass, and, thus, suggests a complete theory for the extraction of energy from a black hole.
Williams, R.K., High Energy-Momentum Extraction from Rotating Black Holes Using the Penrose Mechanism. American Astronomical Society, 195th AAS Meeting, #134.02; Bulletin of the American Astronomical Society, Vol. 32, p.881
Reva Kay Williams (2002). The Gravitomagnetic Field and Penrose Processes, arXiv: astro-ph/0203421v2
Penrose, R., Gravitational Collapse: the Role of General Relativity. Rivista del Nuovo Cimento, Numero Speziale I, 252 (1969) (pdf)
Labels: abstract, astrophysics, black hole, reva kay williams, roger penrose

#Bikini #BikiniAtoll #fallout #ColdWar
The United States was in a Cold War nuclear arms race with the Soviet Union to build bigger and better bombs. The next series of tests over Bikini Atoll was code-named Operation Castle. The first test of that series was Castle Bravo, a new design utilizing a dry fuel thermonuclear hydrogen bomb. It was detonated at dawn on March 1, 1954.
The 15 megaton nuclear explosion far exceeded the expected yield of 4 to 8 megatons (6 Mt predicted), and was about 1,000 times more powerful than each of the atomic bombs dropped on Hiroshima and Nagasaki during World War II. The nuclear weapon was the most powerful device ever detonated by the United States (and just under one-third the energy of the Tsar Bomba, the largest ever tested). The scientists and military authorities were shocked by the size of the explosion, and many of the instruments they had put in place to evaluate the effectiveness of the device were destroyed. The Bravo fallout plume spread dangerous levels of radiation over an area over 100 miles (160 km) long, including inhabited islands.
The contour lines show the cumulative radiation dose in roentgens (R) for the first 96 hours after the test.
Map showing points (X) where contaminated fish were caught or where the sea was found to be excessively radioactive. B = original "danger zone" around Bikini announced by the U.S. government. W = "danger zone" extended later. xF = position of the Lucky Dragon fishing boat. NE, EC, and SE are equatorial currents.
Read also: A Short History of the People of Bikini Atoll
Labels: bikini atoll, cold war, nuclear fallout
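Returning to the zeta-function regularisation described in the Stephen Hawking post above, here is a small worked example (my own illustration, not taken from the post). Take the simplest possible spectrum, $\lambda_n = n$ for $n = 1, 2, \dots$, so that the generalised zeta function is just Riemann's:
\[ \zeta_A(s) = \sum_{n=1}^{\infty} n^{-s} = \zeta(s). \]
With the standard convention that the regularised determinant is $\det A = e^{-\zeta_A'(0)}$, and using $\zeta'(0) = -\tfrac{1}{2}\ln 2\pi$, one gets
\[ \det A = e^{\frac{1}{2}\ln 2\pi} = \sqrt{2\pi}, \]
i.e. in this regularised sense the divergent product $1 \cdot 2 \cdot 3 \cdots$ is assigned the finite value $\sqrt{2\pi}$.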
Heterogeneity in Marginal Non-Monetary Returns to Higher Education
Kamhöfer, Daniel A.; Schmitz, Hendrik; Westphal, Matthias (2019-02-01)
Abstract In this paper we estimate the effects of college education on cognitive abilities, health, and wages, exploiting exogenous variation in college availability. By means of semiparametric local instrumental variables techniques we estimate marginal treatment effects in an environment of essential heterogeneity. The results suggest positive average effects on cognitive abilities, wages, and physical health. Yet, there is heterogeneity in the effects, which points toward selection into gains. Although the majority of individuals benefits from more education, the average causal effect for individuals with the lowest unobserved desire to study is zero for all outcomes. Mental health effects, however, are absent for the entire population.
1. Introduction "The whole world is going to university—Is it worth it?" The Economist's headline read in March 2015.1 Although convincing causal evidence on positive labor market returns to higher education is still rare and nearly exclusively available for the United States, even less is known about the non-monetary returns to college education (see Oreopoulos and Petronijevic 2013; Barrow and Malamud 2015). Although non-monetary factors are acknowledged to be important outcomes of education (Oreopoulos and Salvanes 2011), evidence on the effect of college education is so far limited to health behaviors (see in what follows). We estimate the long-lasting marginal returns to college education in Germany decades after leaving college. As a benchmark, we start by looking at wage returns to higher education, but the paper's focus is on the non-monetary returns that might also be seen as mediators of the more often studied effect of education on wages. These non-monetary returns are cognitive abilities and health. Cognitive abilities and health belong to the most important non-monetary determinants of individual well-being. Moreover, the stock of both factors also influences the economy as a whole (see, among many others, Heckman et al. 1999 and Cawley et al. 2001 for cognitive abilities and Acemoglu and Johnson 2007, Cervellati and Sunde 2005, and Costa 2015 for health). Yet, non-monetary returns to college education are not fully understood (Oreopoulos and Salvanes 2011). Psychological research broadly distinguishes between effects of education on the long-term cognitive ability differential that are either due to a change in the cognitive reserve (i.e., the cognitive capacity) or due to an altered age-related decline (see, e.g., Stern 2012). Still, even the compound manifestation of the overall effect has rarely been studied for college education over a short-term horizon2 and—as far as we are aware—it has never been assessed for the long run. Few studies analyze the returns to college education on health behaviors (Currie and Moretti 2003; Grimard and Parent 2007; de Walque 2007). We use a slightly modified version of the marginal treatment effect approach introduced and forwarded by Björklund and Moffitt (1987) and Heckman and Vytlacil (2005). The main feature of this approach is to explicitly model the choice for education, thus turning back from a mere statistical view of exploiting exogenous variation in education to identify causal effects toward a description of the behavior of economic agents.
Translated into our research question, the MTE is the effect of education on different outcomes for individuals at the margin of taking higher education. The MTE can be used to generate all conventional treatment parameters, such as the average treatment effect (ATE). On top of this, comparing the marginal effects along the probability of taking higher education is also informative in its own right: different marginal effects do not just reveal effect heterogeneity but also some of its underlying structure (for instance, selection into gains). This is an important property that the local average treatment effect—LATE, as identified by conventional two stage least squares methods—would miss. The individuals in our sample made their college decision between 1958 and 1990 and graduated in the case of college education between 1963 and 1995. Our outcome variables (wages, standardized measures of cognitive abilities3 and mental and physical health) are assessed between 2010 and 2012, thus, 20–54 years after the college decision. Our instrument is a measure of the relative availability of college spots (operationalized by the number of enrolled students divided by the number of inhabitants) in the area of residence at the time of the secondary school graduation. Using detailed information on the arguably exogenous expansions of college capacities in all 326 West German districts (cities or rural areas) during the so-called "educational expansion" between the 1960s and 1980s generates variation in the availability of higher education. By deriving treatment effects over the entire support of the probability of college attendance, this paper contributes to the literature mainly in two important ways. First, this is the first study that analyzes the long-term effect of college education on cognitive abilities and general health measures (instead of specific health behaviors). Long-run effects on skills are crucial in showing the sustainability of human capital investments after the age of 19. Along this line, this outcome can complement existing evidence in identifying the fundamental value of college education since—unlike studies on monetary returns—effects on cognitive skills do neither directly exhibit signaling (see the debate on discrepancy between private and social returns as in Clark and Martorell 2014) nor adverse general equilibrium effects (as skills are not determined by both, forces of demand and supply). Second, by going beyond the point estimate of the LATE, we provide a more comprehensive picture in an environment of essential heterogeneity. The results suggest positive average returns to college education for wages, cognitive abilities, and physical health. Yet, the returns are heterogeneous—thus, we find evidence for selection into gains—and even close to zero for the around 30% of individuals with the lowest desire to study. Mental health effects are zero throughout the population. Thus, our findings can be interpreted as evidence for remarkable positive average returns for those who took college education in the past. Yet, a further expansion in college education, as sometimes called for, is likely not to pay off as this would mostly affect individuals in the part of the distribution that are not found to be positively affected by education. We also try to substantiate our results by looking at potential mechanisms of the average effects. 
Although we cannot causally differentiate all channels and the data allow us to provide suggestive evidence only, our findings may be interpreted as follows. Mentally more demanding jobs, jobs with a less health deteriorating effects and better health behaviors probably add to the explanation of skill and health returns to education. The paper is organized as follows. Section 2 briefly introduces the German educational system and describes the exogenous variation we exploit. Section 3 outlines the empirical approach. Section 4 presents the data. The main results are reported in Section 5 whereas Section 6 addresses some of its potential underlying pathways. Section 7 concludes. 2. Institutional Background and Exogenous Variation 2.1. The German Higher Educational System After graduating from secondary school, adolescents in Germany either enroll into higher education or start an apprenticeship. The latter is part-time training-on-the-job and part-time schooling. This vocational training usually takes three years and individuals often enter the firm (or another firm in the sector) as a full-time employee afterward. To be eligible for higher education in Germany, individuals need a university entrance degree. In the years under review, only academic secondary schools (Gymnasien) with 13 years of schooling in total award this degree (Abitur). Although the tracking from elementary schools to secondary schools takes place rather early at the age of 10, students can switch secondary school tracks in every grade. It is also possible to enroll into academic schools after graduating from basic or intermediate schools in order to receive a university entrance degree. In Germany, mainly two institutions offer higher education: universities/colleges4 and universities of applied science (Fachhochschulen). The regular time to receive the formerly common Diplom degree (master's equivalent) was 4.5 years at both institutions. Colleges are usually large institutions that offer degrees in various subjects. The other type of higher educational institutions, universities of applied science, are usually smaller than colleges and often specialized in one field of study (e.g., business schools). Moreover, universities of applied science have a less theoretical curriculum and a teaching structure that is similar to schools. Nearly all institutions of higher education in Germany do not charge any tuition fees. However, students have to cover their own costs of living. On the other hand, their peers in apprenticeship training earn a small salary. Possible budget constraints (e.g., transaction costs arising through the need to move to another city in order to go to college) are likely determinants of the decision to enroll into higher education. 2.2. Exogenous Variation in College Education over Time Although the higher educational system as described in Section 2.1 did not change in the years under review, the accessibility (in terms of mere quantity but also distribution within Germany) of tertiary education changed significantly, providing us with a source of exogenous variation. This so called "educational expansion" falls well into the period of study (1958–1990). Within this period, the shrinking transaction costs of studying may have changed incentives and the mere presence of new or growing colleges could also have nudged individuals toward higher education that otherwise would not have studied. In this paper, we consider two processes in order to quantify the educational expansion. 
The first is the openings of new colleges, the second is the extension in capacity of all colleges (we refer to both as college availability).5 College availability as an instrument for higher education was introduced to the literature by Card (1995) and has frequently been employed since then (e.g., Currie and Moretti 2003), also to estimate the MTE (e.g., Carneiro et al. 2011; Nybom 2017). We exploit the rapid increase in the number of new colleges and in the number of available spots to study as exogenous variation in the college decision. Between 1958 (the earliest secondary school graduation year in our sample) and 1990 the number of colleges in Germany doubled from 33 to 66.6 In particular, the opening of new colleges introduced discrete discontinuities in choice sets. As an example, students had to travel 50 km, on average, to the closest college before a college was opened in their district (measured from district centroid to centroid), see Figure 1. Figure A.1 in the Appendix gives an impression of the spatial variation in college availability over time.

Figure 1. Average distance to the closest college over time for districts with a college opening. Own illustration. Information on colleges are taken from the German Statistical Yearbooks 1959–1991 (German Federal Statistical Office various issues, 1959–1991). The distances (in km) between the districts are calculated using district centroids. These distances are weighted by the number of individuals observed in the particular district-year cells in our estimation sample of the NEPS-Starting Cohort 6 data. The resulting average distances are depicted by the black circles. Note that prior to time period 0, the average distance changes over time either due to sample composition or a college opening in a neighboring district. Only districts with a college opening are taken into account.

There was an increase in the size of existing colleges and, therefore, in the number of available spots to study as well. The average number of students per college was 5,013 in 1958 and 15,438 in 1990. Of the 33 colleges in 1958, 30 still existed in 1990 and had an average size of 23,099 students. The total number of students increased from 155,000 in 1958 to 1 million in 1990. Figure 2 shows the trends in college openings and enrolled students (normalized by the number of inhabitants) for the five most-populated German states. Although the actual numbers used in the regressions vary on the much smaller district level, the state level figures simplify the visualization of the pattern.

Figure 2. Number of colleges and students over the time in selected states. Own illustration. College opening and size information are taken from the German Statistical Yearbooks 1959–1991 (German Federal Statistical Office various issues, 1959–1991). Yearly information on the district-specific population size is based on personal correspondence with the statistical offices of the federal states. For sake of lucidity the trends are only plotted for the five most populated states.

Factors that have driven the increase in the number of colleges and their size can briefly be summarized into four groups: (i) The large majority of the population had a low level of education. This did not only result from WWII but also from the "anti-intellectualism" (Picht 1964, p. 66) in the Third Reich, and the notion of education in Imperial Germany before, befitting the social status of certain individuals only. (ii) An increase in the number of academic secondary schools at the same time (as analyzed in Jürges et al. 2011; Kamhöfer and Schmitz 2016 for instance) qualified a larger share of school graduates to enroll into higher education (Bartz 2007). (iii) A change in production technologies led to an increase in firm's demand for high-skilled workers—especially, given the low level of educational participation (Weisser 2005). (iv) Political decision makers were afraid that "without an increase in the number of skilled graduates the West German economy would not be able to compete with communist rivals" (Jürges et al. 2011, p. 846, in reference to Picht 1964). Although these reasons (maybe except for the firm's demand for more educated workers) affected the 10 West German federal states—that are in charge of educational policy—in the same way, the measures taken and the timing of actions differed widely between states. Because of local politics (e.g., the balancing of regional interests and avoiding clusters of colleges) there was also a large amount of variation in college openings within the federal states. See Online Appendix B to the paper for a much more detailed description of the political process involved. A major concern for instrument validity is that, even though the political process did not follow a unified structure and included some randomness in the final choice of locations and timing of openings, regions where colleges were opened differed from those that already had colleges before (or that never established any). Table 1 reports some numbers on the regional level as of the year 1962 (the earliest possible year available to us with representative data).7 Regions that already had colleges before did not differ in terms of sociodemographics (except for population densities, as mostly large cities had colleges before) but were somewhat stronger in terms of socioeconomic indices. The differences were not large however. Given that we include district fixed-effects and a large set of socioeconomic controls (including the socioeconomic environment before the college decision, see Section 4), this should not be a problematic issue. Table 1.
Comparison of regions with and without college openings before college opens using administrative data.

College opening...                                Before 1958          1958–1990            Later than 1990 or never
                                                  (1) Mean  (2) s.d.   (3) Mean  (4) s.d.   (5) Mean  (6) s.d.
Observations
  Number of regions                               27                   30                   190
Sociodemographic characteristics
  Female (in %)                                   53.0      (2.0)      53.0      (1.4)      52.9      (4.3)
  Average age (in years)                          37.2      (1.1)      37.0      (1.1)      36.6      (1.9)
  Singles (in %)                                  38.8      (2.5)      37.7      (2.3)      38.9      (4.6)
  Population density per km2 in 1962              1381.9    (1076.7)   1170.1    (1047.3)   327.1     (479.7)
  Change in population density 1962–1990          1.6       (186.3)    −71.0     (202.8)    31.5      (98.5)
  Migrational background (in %)                   2.7       (3.0)      1.6       (1.5)      2.1       (2.3)
Socioeconomic characteristics
  Share of employees to all individuals (in %)    47.0      (3.6)      45.3      (4.2)      46.2      (5.2)
  Employees with an income > 600 DM (in %)        27.3      (3.8)      24.8      (5.3)      25.9      (6.4)
  Employees by industry (in %)
    – Primary                                     2.1       (5.2)      5.2       (5.2)      2.8       (5.5)
    – Secondary                                   52.9      (8.4)      54.7      (6.2)      54.3      (8.9)
    – Tertiary                                    45.0      (9.3)      40.1      (8.3)      42.9      (9.6)
  Employees in blue collar occup. (in %)          53.6      (9.4)      59.0      (7.9)      56.5      (9.3)
  Employees in academic occup. (in %)             22.0      (4.4)      17.5      (4.3)      20.3      (5.9)

Notes: Own calculations based on Micro Census 1962, see Lengerer et al. (2008). Regions are defined through administrative Regierungsbezirk entries and the degree urbanization (Gemeindegrößenklasse) and may cover more than one district. College information is aggregated at regional level and a region is considered to have a college if at least one of its districts has a college. Calculations for population density and change in population density based on district-level data acquired through personal correspondence with the statistical offices of the federal states. Data are available on request. The variables "employees in blue collar occup." and "employees in academic occup." state the shares of employees in the region in an occupation that is usually conducted by a blue collar worker/a college graduate, respectively. Standard deviations (s.d.) are given in italics in parentheses.

Yet, changes in district characteristics that are potentially related to the outcome variables might be a more important problem. There could, for instance, be changes in the population structure that both induce a higher demand for college education and go along with improved cognitive abilities and health. This could be the case if the regions with college openings were more "dynamic" with a younger and potentially increasing population.
Table 1 shows a decline in the population density by 6% between 1962 and 1990 in the areas that opened colleges, whereas there were no average changes in the areas with preexisting colleges and a 10% increase in the areas that never opened any. This reflects different regional trends in population ageing. As one example, the Ruhr Area in the west, where three colleges were opened, experienced a population decline and comparably stronger population ageing over time. Again, these differences are not dramatically large, but we might be worried about different trends in health and cognitive abilities that are correlated with college expansion. If this was the case—more expansion in areas that have a more ageing population with deteriorating health and cognitive abilities—we might underestimate the effect of college education on these outcomes. We include a district-specific time trend to account for this in the analysis. The expansion in secondary schooling noted previously was unrelated to the college expansion. Although college expansion naturally took place in a small number of districts, expansion in secondary schooling occurred across all regions. In addition, Kamhöfer and Schmitz (2016) do not find any local average treatment effects of school expansion on cognitive abilities and wages. Thus, it seems unlikely that selective increases in cognitive abilities due to secondary school expansion invalidate the instrument. Nevertheless, again, district-specific time trends should capture large parts of this if it was a problem. So essentially, what we do is the following: we look within each district and attribute changes in the college (graduation/enrollment) rate, net of the general trend (by controlling for cohort FE) and the district-specific trend (which might be due to continually increased access to higher secondary education), to either changes in the number of college spots or a new opening of a college nearby. We use discontinuities in college access over time that cannot be exploited using data on individuals that make the college decision at the same point in time (for instance cohort studies), as some of the previous literature that used college availability as an instrument did. Details on how we exploit the variation in college availability in the empirical specification are discussed in Section 4.4 after presenting the data. 3. Empirical Strategy Our estimation framework builds largely on Heckman and Vytlacil (2005) and Carneiro et al. (2011). Derivations and in-depth discussions of most issues can be found there. We start with the potential outcome model, where Y1 and Y0 are the potential outcomes with and without treatment. The observed outcome Y either equals Y1 in case an individual received a treatment—which is college education here—or Y0 in the absence of treatment (the individual identifier i is implied). Obviously, treatment participation is voluntary, rendering a treatment dummy D in a simple linear regression endogenous. In the marginal treatment effect framework, this is explicitly modeled by using a choice equation, that is, we specify the following latent index model: \begin{equation} Y^1 = X^{\prime }\beta _1 + U_1, \end{equation} (1) \begin{equation} Y^0 = X^{\prime }\beta _0 + U_0, \end{equation} (2) \begin{equation} D^* = Z^{\prime }\delta - V, \quad \mbox{where }\, D = \boldsymbol {1}[ D^* \ge 0] = \boldsymbol {1}[ Z^{\prime }\delta \ge V].
\end{equation} (3) The vector X contains observable, and U1, U0 unobservable factors that affect the potential outcomes.8D* is the latent desire to take up college education that depends on observed variables Z and unobservables V. Z includes all variables in X plus the instruments. Whenever D* exceeds a threshold (set to zero without loss of generality), the individual opts for college education, otherwise she does not. U1, U0, V are potentially correlated, inducing the endogeneity problem (as well as heterogenous returns) as we observe Y(=DY1 + (1 − D)Y0), D, X, Z, but not U1, U0, V. Following this model, individuals are indifferent between higher education and directly entering the labor market (e.g., through an apprenticeship) whenever the index of observables Z΄δ is equal to the unobservables V. Thus, if we knew the switching point (point of indifference) and its corresponding value of the observables, we could make sharp restriction on the value of the unobservables. This property is exploited in the estimation. Since for every value of the index Z΄δ one needs individuals with and without higher education, it is important to meaningfully aggregate the index by a monotonous transformation that for example returns the quantiles of Z΄δ and V. One such rank-preserving transformation is done by the cumulative distribution function that returns the propensity score P(Z) (quantiles of Z) and UD (quantiles of V).9 If we vary the excluded instruments in Z΄δ from the lowest to the highest value while holding the covariates X constant, more and more individuals will select into higher education. Those who react to this shift also reveal their rank in the unobservable distribution. Thus, the unobservables are fixed given the propensity score and it is feasible to evaluate any outcome for those who select into treatment at any quantile UD that is identified by the instrument-induced change of the higher education choice. In general, estimating marginal effects by UD does not require stronger assumptions than those required by the LATE since Vytlacil (2002) showed its equivalence.10 Yet, strong instruments are beneficial for robustly identifying effects over the support of P(Z). This, however, is testable. The marginal treatment effect (MTE), then, is the marginal (gross) benefit of taking the treatment for those who are just indifferent between taking and not-taking it and can be expressed as \begin{equation*} {\mathit {MTE}}(x,u_D) = \frac{\partial E(Y|x, p)}{\partial p}. \end{equation*} This is the effect of an incremental increase in the propensity score on the observed outcome. The MTE varies along the line of UD in case of heterogeneous treatment effects that arise if individuals self-select into the treatment based on their expected idiosyncratic gains. This is a situation Heckman et al. (2006) call "essential heterogeneity". This is an important structural property that the MTE can recover: If individuals already react at low values of the instrument, where the observed part of the latent desire of selecting into higher education (P(Z)) is still very low, a prerequisite for yet going to college is that V is marginally lower. These individuals could choose college against all (observed) odds because they are more intrinsically talented or motivated as indicated by a low V. 
If this is translated into higher future gains (U1 − U0), the MTE exhibits a significant negative slope: As P(Z) rises, marginal individuals need less and less compensation in terms of unobserved expected returns to still choose college—this is called selection into gains. As Basu (2011, 2014) notes, essential heterogeneity is not restricted to active sorting into gains but is always an issue if selection is based on factors that are not completely independent of the gains. Thus, in health economic applications, where gains are arguably harder for the individual to predict than, say, monetary returns, essential heterogeneity is also an important phenomenon. In this case the common treatment parameters ATE, ATT, and LATE do not coincide. The MTE can be interpreted as a more fundamental parameter than the usual ones, as it unfolds all local switching effects by intrinsic "willingness" to study and not only some weighted average of those.11 The main component for estimating the MTE is the conditional expectation E(Y | X, p). Heckman and Vytlacil (2007) show that if we plug the counterfactuals (1) and (2) into the observed outcome equation, rearrange, apply the expectation E(. | X, p) to all expressions, and impose an exclusion restriction of p on Y (discussed in what follows), we get an expression that can be estimated:
\begin{eqnarray} E(Y|X, p) & =& X^{\prime }\beta _0 + X^{\prime }(\beta _1 -\beta _0) \cdot p + E(U_1 - U_0 | D=1, X) \cdot p \nonumber \\ & =& X^{\prime }\beta _0 + X^{\prime }(\beta _1 -\beta _0) \cdot p + K(p), \end{eqnarray} (4)
where K(p) is some not further specified function of the propensity score if one wants to avoid distributional assumptions on the error terms. Thus, the estimation of the MTE involves estimating the propensity score in order to estimate equation (4) and, finally, taking its derivative with respect to p. Note that this derivative—and hence the effect of college education—depends on heterogeneity due to observed components X and unobserved components K(p), since this structure was imposed by equations (1) and (2):
\begin{eqnarray} \frac{\partial E(Y|X, p)}{\partial p} & =& X^{\prime }(\beta _1 -\beta _0) + \frac{\partial K(p)}{\partial p}. \end{eqnarray} (5)
To achieve non-parametric identification of the terms in equation (5), the conditional independence assumption has to be imposed on the instrument,
\begin{equation*} (U_1,U_0,V)\!\perp \!\!\!\perp Z|X, \end{equation*}
meaning that the error terms are independent of Z given X. That is, after conditioning on X, a shift in the instruments Z (or in the single index P(Z)) has no effect on the potential outcome distributions. Non-parametrically estimating separate MTEs for every data cell determined by X is hardly ever feasible due to a lack of observations and powerful instruments within each such cell. Yet, in the case of parametric or semiparametric specifications, a conditional independence assumption is not sufficient to decompose the effect into observed and unobserved sources of heterogeneity. To separately identify the right-hand side of equation (5), unconditional independence is required: (U1, U0, V) ⊥⊥ Z, X (Carneiro et al. 2011; for more details consult the Online Appendices).12 In a pragmatic approach, one can follow Brinch et al. (2017) or Cornelissen et al. (forthcoming), who do not aim at causally separating the sources of the effect heterogeneity.
In this case a conventional exclusion restriction on the instruments suffices for estimating the overall level and the curvature of the MTE. Our solution for bringing the empirical framework to the data without overly strong assumptions is to estimate marginal effects that vary only over the unobservables while fixing the X-effects at their mean value. This means deviating from (4) by restricting β1 = β0 = β, except for the intercepts α1, α0 in (1) and (2), such that E(Y | X, p) becomes
\begin{eqnarray} E(Y|X, p) & =& X^{\prime }\beta + (\alpha _1 -\alpha _0) \cdot p + K(p). \end{eqnarray} (6)
Thus, we allow for different levels of the potential outcomes, whereas we keep conditioning on X. This might look like a strong restriction at first sight but is no different from the predominant approach in empirical economics of trying to identify average treatment effects, where the treatment indicator is typically not interacted with other observables. Certainly, this does not rule out that the MTE varies by observable characteristics. Yet even with true population effects that vary over X, note that in the derivative of equation (4) with respect to the propensity score, X enters only through the level term X′(β1 − β0); the part that varies with p does not depend on X. Hence, only the level of the MTE changes for certain subpopulations determined by X, while the curvature remains unaffected. Thus, estimation of equation (6) delivers an MTE whose level is averaged over all subpopulations without changing the curvature. In this way all crucial elements of the MTE are preserved, since we are interested in the average effect and its heterogeneity with respect to the unobservables for the whole population. How this heterogeneity varies for certain subpopulations is of less importance, and the literature has also focused on MTEs where the X-part is averaged out. On the other hand, we gain with this approach by considerably relaxing our identifying assumption from an unconditional to a conditional independence of the instrument. One advantage of not estimating heterogeneity in the observables can arise if X contains many variables that each take many different values; in this case, problems of weak instruments can inflate the results.13 In estimating (6), we follow Carneiro et al. (2010, 2011) again and use semi-parametric techniques as suggested by Robinson (1988).14 Standard errors are clustered at the district level and are generated by bootstrapping the entire procedure using 200 replications.
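To make these estimation steps concrete, the following is a minimal sketch in Python with simulated data; all variable names, the data generating process, and the bandwidth are hypothetical, and this is not the authors' code. It runs through the sequence just described: a logit propensity score, Robinson-type partialling-out of X, and a local linear regression in p whose slope estimates the MTE of equation (6).

```python
# Minimal sketch of the MTE estimation steps (simulated data, hypothetical names).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Simulated data with selection into gains: returns to D decline in the unobservable V
x = rng.normal(size=n)                       # observed covariate (stands in for X)
z = rng.normal(size=n)                       # instrument (stands in for college availability)
v = rng.normal(size=n)                       # unobserved reluctance to study
d = (0.8 * z + 0.4 * x > v).astype(float)    # choice equation, cf. equation (3)
y = 0.5 * x + d * (1.0 - 0.8 * v) + rng.normal(scale=0.5, size=n)

# Step 1: propensity score P(Z) from a logit of D on X and the instrument
design = sm.add_constant(np.column_stack([x, z]))
p = sm.Logit(d, design).fit(disp=0).predict(design)

def local_linear(grid, xx, yy, h):
    """Gaussian-kernel local linear regression of yy on xx evaluated at grid.
    Returns the fit and its slope; the slope is the derivative estimate."""
    fit, slope = np.empty(len(grid)), np.empty(len(grid))
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((xx - g) / h) ** 2)
        X = np.column_stack([np.ones_like(xx), xx - g])
        b = np.linalg.solve((X * w[:, None]).T @ X, (X * w[:, None]).T @ yy)
        fit[i], slope[i] = b
    return fit, slope

# Step 2: Robinson (1988) partialling-out of the nonparametric part in p
h = 0.12                                     # in the spirit of the 0.10-0.16 bandwidths used later
ey, _ = local_linear(p, p, y, h)             # E(Y | p)
ex, _ = local_linear(p, p, x, h)             # E(X | p)
beta = np.linalg.lstsq((x - ex).reshape(-1, 1), y - ey, rcond=None)[0]

# Step 3: fit (alpha1 - alpha0) * p + K(p) on the X-purged outcome and take the
# derivative with respect to p; this slope is the MTE along u_D, cf. equation (6)
grid = np.linspace(0.05, 0.95, 19)
_, mte = local_linear(grid, p, y - x * beta[0], h)
print(np.round(mte, 2))                      # declines in u_D: selection into gains
```

In the actual application the propensity score includes the full set of controls and fixed effects, and, as noted above, the whole procedure is bootstrapped with clustering at the district level to obtain standard errors.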
4. Data

4.1. Sample Selection and College Education

Our main data source is individual-level data from the German National Educational Panel Study (NEPS); see Blossfeld et al. (2011). The NEPS data map the educational trajectories of more than 60,000 individuals in total. The data set follows a multi-cohort sequence design and covers six age groups, called "starting cohorts": newborns and their parents, pre-school children, children in school grades 5 and 9, college freshmen, and adults. Within each starting cohort the data are organized in a longitudinal manner, that is, individuals are interviewed repeatedly. For each starting cohort, the interviews cover extensive information on competence development, learning environments, educational decisions, migration background, and socioeconomic outcomes. We aim at analyzing longer-term effects of college education and, therefore, restrict the analysis to the "adults starting cohort". For this age group six waves are available, with interviews conducted between 2007/2008 (wave 1) and 2013 (wave 6); see LIfBi (2015). Moreover, the NEPS includes detailed retrospective information on the educational and occupational history as well as the living conditions at the age of 15—about three years before individuals decide on higher education. From the original 17,000 respondents in the adults starting cohort, born between 1944 and 1989, we exclude observations for four reasons: First, we focus on individuals from West Germany due to the different educational system in the former German Democratic Republic (GDR), thereby dropping 3,500 individuals who lived in the GDR at the age of the college decision. Second, to allow for long-term effects we make a cut-off at college attendance before 1990 and drop 2,800 individuals who graduated from secondary school in 1990 or later. Third, we drop 1,000 individuals with missing geographic information. An attractive (and for our analysis necessary) feature of the NEPS data is that they include information on the district (German Kreis) of residence during secondary schooling, which is used in assigning the instrument in the selection equation. The fourth reason for losing observations is that the dependent variables are not available for every respondent, as discussed in what follows. Our final sample includes between 2,904 and 4,813 individuals, depending on the outcome variable. The explanatory variable "college degree" takes on the value 1 if an individual has any higher educational degree, and 0 otherwise. Dropouts are treated like all other individuals without college education. More than one fourth of the sample has a college degree, whereas three fourths do not.

4.2. Dependent Variables

Wages. The data set covers a wide range of individual employment information such as monthly income and weekly hours worked. We calculate the hourly gross wage for 2013 (wave 6) by dividing the monthly gross labor market income by the actual weekly working hours (including extra hours) times the average number of weeks per month, 4.3. A similar strategy is, for example, applied by Pischke and von Wachter (2008) to calculate hourly wages using German data. For this outcome variable, we restrict our sample to individuals of working age (up to 65 years) and drop observations with hourly wages below €5 and above the 99th percentile (€77.52), as these might result from misreporting. Table 2 reports descriptive statistics and reveals considerably higher hourly wages for individuals with a college degree. The full distribution of wages (and of the other outcomes) for both groups is shown in Figure A.2 in the Appendix. In the regression analysis we use log gross hourly wages.

Table 2. Descriptive statistics of the dependent variables.

                                  (1)            (2)       (3)       (4)           (5)           (6)
                                  Gross          Health measure      Cognitive ability component
                                  hourly wage    PCS       MCS       Read. speed   Read. comp.   Math liter.
Observations                      3,378          4,813     4,813     3,995         4,576         2,904
  with college degree (in %)      31.0           28.1      28.1      27.8          28.1          28.0
Raw values
  Mean with degree                27.95          53.31     51.15     39.69         29.76         13.37
  Mean without degree             19.35          50.39     50.53     35.99         22.75         9.36
  Maximum possible value          –a             100       100       51            39            22
Transformed values
  Mean with degree                3.25           0.23      0.04      0.32          0.63          0.61
  Mean without degree             2.88           −0.09     −0.02     −0.12         −0.25         −0.24

Notes: Own calculations based on NEPS-Starting Cohort 6 data. Gross hourly wage given in euros. Gross hourly wage is transformed to its log value; the other variables are standardized to mean 0 and standard deviation 1. a. The gross hourly wage is truncated below at €5 and above at the 99th percentile (€77.52).
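For concreteness, the wage construction and the outcome transformations described above amount to something like the following; this is a sketch with hypothetical column names and made-up values, not the actual NEPS variables.

```python
import numpy as np
import pandas as pd

# Hypothetical survey extract (illustrative values only)
df = pd.DataFrame({
    "monthly_gross_income": [3200.0, 2400.0, 5100.0, 900.0],
    "weekly_hours_incl_extra": [40.0, 30.0, 45.0, 38.0],
})

# Hourly gross wage: monthly income / (actual weekly hours x 4.3 weeks per month)
df["hourly_wage"] = df["monthly_gross_income"] / (df["weekly_hours_incl_extra"] * 4.3)

# Wage-sample trimming rule: drop wages below EUR 5 and above the 99th percentile
upper = df["hourly_wage"].quantile(0.99)
df = df[df["hourly_wage"].between(5, upper)]

# Transformations used in the regressions: log wage; other outcomes standardized to mean 0, SD 1
df["log_wage"] = np.log(df["hourly_wage"])

def standardize(s):
    return (s - s.mean()) / s.std()
```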
Health. Two variables from the health domain are used as outcome measures: the Physical Health Component Summary Score (PCS) and the Mental Health Component Summary Score (MCS), both from 2011/2012 (wave 4).15 These summary scores are based on the SF12 questionnaire, an internationally standardized set of 12 items regarding eight dimensions of individual health status. The eight dimensions comprise physical functioning, physical role functioning, bodily pain, general health perceptions, vitality, social role functioning, emotional role functioning, and mental health. A scale ranging from 0 to 100 is calculated for each of these eight dimensions. The eight dimensions or subscales are then aggregated to the two main dimensions, mental and physical health, using explorative factor analysis (Andersen et al. 2007). For our regression analysis, we standardize the aggregated scales (MCS and PCS) to have mean 0 and standard deviation 1, where higher values indicate better health. Columns (2) and (3) of Table 2 report sample means of the health measures by college graduation. Those with a college degree have, on average, a better physical health score. With respect to mental health, both groups differ only marginally.

Cognitive Abilities.
Cognitive abilities summarize the "ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought" (American Psychological Association 1995), where the sum of these abilities is referred to as intelligence. Psychologists distinguish several concepts of intelligence with different cognitive abilities. However, they all include measures of verbal comprehension, memory and recall, as well as processing speed. Although comprehensive cognitive intelligence tests take hours, a growing number of socioeconomic surveys include much shorter proxies that measure specific skill components. The short ability tests are usually designed by psychologists, and their results are highly correlated with the results of more comprehensive intelligence tests (cf. Lang et al. 2007 for a comparison of cognitive skill tests in the German Socio-Economic Panel with larger psychological test batteries). The NEPS includes three kinds of competence tests that cover various domains of cognitive functioning: reading speed, reading competence, and mathematical competence.16 All competence tests were conducted once, in 2010/2011 (wave 3) or 2012/2013 (wave 5), respectively, as paper-and-pencil tests under the supervision of a trained interviewer; the test language was German. The first test measures reading speed.17 The participants receive a booklet consisting of 51 short true-or-false questions, and the test duration is 2 minutes. Each question has between 5 and 18 words. The participants have to answer as many questions as possible in the given time. The test score is the number of correct answers. Since the test aims at answering speed, the questions only deal with general knowledge and use easy language. One statement, for example, reads "There is a bath tub in every garage." The mean number of correct answers in our estimation sample is 39.69 (out of 51) for college graduates and 35.99 for others; see Table 2. For more information, see Zimmermann et al. (2014). The reading competence test measures the understanding of texts. It lasts 28 minutes and covers 32 items. The test consists of three different tasks. First, participants have to answer multiple-choice questions about the content of a text, where only one out of four possible answers is correct. In a decision-making task, the participants are asked whether statements are right or wrong according to the text. In a third task, participants need to assign possible titles from a list to sections of the text. The test includes several types of texts, for example, comments, instructions, and advertising texts (LIfBi 2011). Again, the test score reflects the number of correct answers. Participants with a college degree score on average 29.76 and those without 22.75 (out of 39).18 The mathematical literacy test evaluates the "recognizing and [...] applying [of] mathematics in realistic, mainly extra-mathematical situations" (LIfBi 2011, p. 8). The test has 22 items and takes 28 minutes. It follows the principle of the OECD-PISA tests and consists of the areas quantity, space and shape, change and relations, as well as data and chance, and it measures cognitive competencies in the areas of application of skills, modeling, arguing, communicating, representing, as well as problem solving; see LIfBi (2011). Individuals without a college degree score on average 9.36 (out of 22), and persons who graduated from college score about 4 points more.
Due to the rather long test duration relative to the total interview time, not every respondent had to take all three tests. Similar to the OECD-PISA tests for high school students, individuals were randomly assigned a booklet with either all three or two out of the three tests. 3,995 individuals took the reading speed test, 4,576 the reading competence test, and 2,904 the mathematics test. Since the tests measure different competencies that refer to distinct cognitive abilities, we do not combine the different test scores into an overall score but report the results separately (see Anderson 2007).

4.3. Control Variables

Individuals in our sample made their college decision between 1958 and 1990. The NEPS allows us to consider important socioeconomic characteristics that probably affect both the college education decision and the outcomes today (the variables denoted by X in Section 3). These comprise general demographic information, such as gender, migration background, and family structure, as well as parental characteristics like the parents' educational background. Moreover, we include two blocks of controls that were determined before the educational decision was made. Pre-college living conditions include family structure, parental job situation, and household income at the age of 15, whereas pre-college education includes educational achievements (the number of repeated grades and the secondary school graduation mark). Table A.1 in the Appendix provides more detailed descriptions of all variables and reports the sample means by treatment status. Apart from higher wages, abilities, and a better physical health status (as seen in Table 2), individuals with a college degree are more likely to be males from an urban district without a migration background. Moreover, they are more likely to have healthy parents (in terms of mortality). Other variables seem to differ less between both groups. We also account for cohort effects of the mother and the father, district fixed effects, as well as district-specific time trends (see Mazumder 2008; Stephens and Yang 2014 for the importance of the latter).

4.4. Instrument

The processes of college expansion discussed in Section 2.2 probably also shifted individuals with a lower desire to study into college education. Such powerful exogenous variation is beneficial for our approach, as we try to identify the MTE along the distribution of the desire to study. We assign each individual the college availability as instrument (that is, a variable in Z but not in X). In doing so, we use the information on the district of secondary school graduation and the year of the college decision, which is the year of secondary school graduation. A district—there are 326 districts in West Germany—is either a city or a certain rural area. The question is how to exploit the regional variation in openings and spots most efficiently, as it is almost infeasible to control for all distances to all colleges simultaneously. Our approach to this question is to create an index that best reflects the educational environment in Germany and combines the distance with the number of college spots,
\begin{eqnarray} Z_{it}=\sum _{j}^{326}K( {\mathit {dist}}_{ij}) \times \Bigg (\frac{\# {\mathit {students}}_{jt}}{\# {\mathit {inhabitants}}_{jt}}\Bigg ).
\end{eqnarray} (7)
The college availability instrument Zit basically captures the total number of college spots (measured by the number of students) per inhabitant in district j (out of the 326 districts in total) that individual i faces in year t, weighted by the distance between i's home district and district j. Weighting the number of students by the population of the district takes into account that districts with the same number of inhabitants might have colleges of different size. This local availability is then weighted by the Gaussian kernel distance $K({\mathit{dist}}_j)$ between the centroid of the home district and the centroid of district j. The kernel puts a lot of weight on close colleges and very little weight on distant ones. Since individuals can choose between many districts with colleges, we calculate the sum of all district-specific college availabilities within the kernel bandwidth. Using a bandwidth of 250 km, this basically amounts to $K({\mathit{dist}}_j) = \phi({\mathit{dist}}_j/250)$, where ϕ is the standard normal pdf. Although 250 km sounds like a large bandwidth, it implies that colleges in the same district receive a weight of 0.4, whereas the weight for colleges that are 100 km away is 0.37, and it is reduced to 0.24 at 250 km. Colleges that are 500 km away only get a very low weight of 0.05. A smaller bandwidth of, say, 100 km would mean that colleges 250 km away already receive a weight of only 0.02, which amounts to assuming that individuals basically do not take them into account at all. Most likely this does not reflect actual behavior. As a robustness check, however, we carry out all estimations with bandwidths between 100 and 250 km, and the results are remarkably stable; see Online Appendix Figure C.1. Table 3 presents the descriptive statistics. We also provide background information on some descriptive measures of distance and student density.

Table 3. Descriptive statistics of instruments and background information.

                                                        (1)       (2)       (3)      (4)
Statistics                                              Mean      SD        Min      Max
Instrument: college availability                        0.459     0.262     0.046    1.131
Background information on college availability (implicitly included in the instrument)
  Distance to nearest college (km)                      27.580    26.184    0        172.269
  At least one college in district                      0.130     0.337     0        1
  Colleges within 100 km                                5.860     3.401     0        16
  College spots per inhabitant within 100 km            0.034     0.019     0        0.166

Notes: Own calculations based on NEPS-Starting Cohort 6 data and German Statistical Yearbooks 1959–1991 (German Federal Statistical Office, various issues, 1959–1991). Distances are calculated as the Euclidean distance between the two respective district centroids.
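To illustrate the construction in equation (7), the following sketch computes the availability index for a couple of hypothetical home districts. The distances, enrolment figures, and population counts are made up; only the kernel weights quoted above are reproduced exactly.

```python
import numpy as np
from scipy.stats import norm

# Kernel weights quoted in the text: phi(0), phi(100/250), phi(250/250), phi(500/250)
print(np.round(norm.pdf(np.array([0.0, 100.0, 250.0, 500.0]) / 250.0), 2))  # [0.4 0.37 0.24 0.05]

# Hypothetical inputs: distances (km) from two home districts to four college districts,
# plus enrolment and population in those college districts in a given year
dist = np.array([[0.0, 100.0, 250.0, 500.0],
                 [80.0, 30.0, 400.0, 600.0]])
students = np.array([11_000, 0, 25_000, 40_000])
inhabitants = np.array([700_000, 300_000, 1_200_000, 1_800_000])

def college_availability(dist_row, students, inhabitants, bandwidth=250.0):
    """Equation (7): kernel-weighted sum of college spots per inhabitant over all districts."""
    weights = norm.pdf(dist_row / bandwidth)
    return float(np.sum(weights * students / inhabitants))

Z = [college_availability(row, students, inhabitants) for row in dist]
```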
The instrument jointly uses college openings and increases in size. Size is measured by enrollment, as there is no information available on actual college spots. This might be considered worrisome, as enrollment might reflect demand factors that are potentially endogenous. Although we believe that this is not a major problem, as most study programs in the colleges were used to capacity, we also, as a robustness check, neglect the information on enrollment and merely exploit information on college openings by using
\begin{equation} Z_{it}=\sum _{j}^{326}K( {\mathit {dist}}_{ij}) \times \boldsymbol {1}[\mbox{college available}_{jt}], \end{equation} (8)
where $\boldsymbol{1}[\cdot]$ is the indicator function. The results when using this instrument are comparable, with minor differences, to those from the baseline specification, as shown in Figure A.3 in the Appendix. Certainly, the overall findings and conclusions are not affected by this choice. We prefer the combined instrument as it uses information from both aspects of the educational expansion.

5. Results

5.1. OLS

Although we are primarily interested in analyzing the returns to college education for the marginal individuals, we start with ordinary least squares (OLS) estimations as a benchmark. Column (1) in Table 4, panel A, reports results for hourly wages, columns (2) and (3) for the two health measures, whereas columns (4)–(6) do the same for the three measures of cognitive abilities. Each cell reports the coefficient of college education from a separate regression. After conditioning on observables, individuals with a college degree earn approximately 28% higher wages, on average. Although PCS is higher by around 0.3 of a standard deviation—recall that all outcomes but wages are standardized—there is no significant relation with MCS. Individuals with a college degree read, on average, 0.4 SD faster than those without college education. Moreover, their text understanding and mathematical literacy are better by approximately 0.7 SD. All in all, the results are broadly in line with the differences in standardized means shown in Table 2, slightly attenuated, however, due to the inclusion of control variables.
Table 4. Regression results for OLS and first-stage estimations.

                              (1)           (2)        (3)        (4)           (5)           (6)
                              Gross         Health measure        Cognitive ability component
                              hourly wage   PCS        MCS        Read. speed   Read. comp.   Math liter.
Panel A: OLS results
College degree                0.277***      0.277***   0.003      0.398***      0.729***      0.653***
                              (0.019)       (0.033)    (0.036)    (0.037)       (0.032)       (0.044)
Panel B: 2SLS first-stage results
College availability          2.368***      2.576***   2.576***   2.521***      2.327***      2.454***
                              (0.132)       (0.122)    (0.122)    (0.132)       (0.119)       (0.159)
Observations                  3,378         4,813      4,813      3,995         4,576         2,904

Notes: Own calculations based on NEPS-Starting Cohort 6 data. Regressions also include a full set of control variables as well as year-of-birth and district fixed effects, and district-specific linear trends. District-clustered standard errors in parentheses. ***p < 0.01.

Panel B of Table 4 reports the first-stage results of the 2SLS estimations. The coefficients of the instrument point in the expected direction and are highly significant. As is to be expected, they barely change across the outcome variables (as the first-stage specifications only differ in the number of observations across the columns). In order to get a feeling for the effect size of college availability in the first stage, we consider, as an example, the college opening in the city of Essen in 1972. In 1978, about 11,000 students studied there. To illustrate the effect of the opening, we assume a constant population size of 700,000 inhabitants. The kernel weight of new spots in the same district is 0.4 (=K(0)). According to equation (7), the instrument value increases by 0.006 (rounded). Given the coefficient of college availability of 2.4, an individual who made the college decision in Essen in 1978 had a 1.44 percentage point higher probability of going to college due to the opening of the college in Essen (compared to an individual who made the college decision in 1971). This seems to be a plausible effect.
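This back-of-the-envelope calculation is easy to reproduce with the rounded figures quoted in the text:

```python
from scipy.stats import norm

students, inhabitants = 11_000, 700_000          # Essen, 1978 (population held constant)
kernel_weight = norm.pdf(0 / 250)                # same district: K(0), roughly 0.40
delta_z = round(kernel_weight * students / inhabitants, 3)
first_stage = 2.4                                # rounded first-stage coefficient from Table 4
print(delta_z, first_stage * delta_z)            # 0.006 and 0.0144, i.e. about 1.44 percentage points
```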
The effect of the college opening in Essen on individuals who live in districts other than Essen is smaller, depending on the distance to Essen.

5.2. Marginal Treatment Effects

Figure 3(a) shows the distribution of the propensity scores used in estimating the MTE, by treatment and control group. They are obtained from logit regressions of the college degree on all Z and X variables. Full regression results of the first and the second stage of the 2SLS estimations are reported in the Online Appendices. For both groups, the propensity score varies from 0 to about 1. Moreover, there is common support of the propensity score on almost the entire unit interval. Variation in the propensity score where the effects of the X variables are integrated out is used to identify local effects.

[Figure 3. Distribution of propensity scores. Own illustration based on NEPS-Starting Cohort 6 data. The left panel shows the propensity score (PS) density by treatment status. The right panel illustrates the joint PS density (dashed line); the solid line shows the PS variation solely caused by variation in Z, since the X-effects have been integrated out. In the right panel, both densities are normalized such that they sum to one over the 100 points at which the density is evaluated.]

This variation is presented in Figure 3(b). It shows the conditional support of P when the influence of the linear X-index of observables on the propensity score is integrated out (∫fP(Z, X)dX). Here, the support ranges from nearly 0 to about 0.8 based only on variation in the instrument—the identifying variation. This is important in the semiparametric estimation since it shows the regions in which we can credibly identify (conditional on our assumptions) marginal effects without having to rely on inter- or extrapolation to regions where we do not have identifying variation. We calculate the MTE using a local linear regression with a bandwidth that ranges from 0.10 to 0.16, depending on the outcome variable.19 We calculate the marginal effects along the quantiles UD by evaluating the derivative of the conditional outcome expectation with respect to the propensity score (see equation (6) in Section 3). Figure 4 shows the MTE for all outcome variables. The upper left panel presents the MTE for wages. We find that individuals with low values of UD have the highest monetary returns to college education. Low values of UD mean that these are the individuals who are very likely to study, as already small values of P(z) exceed UD; see the transformed choice equation in Section 3. The returns are close to 80% for the smallest values of UD and then approach 0 at UD ≈ 0.7. Thus, we tend to interpret these findings as clear and strong positive returns for the 70% of individuals with the highest desire to study, whereas there is no clear evidence for any returns for the remaining 30%.
Hence, there is clearly selection into gains with respect to wages, where individuals with higher (realized) returns self-select into more education. This reflects the notion that individuals make choices based on their expected gains.

[Figure 4. Marginal treatment effects for cognitive abilities and health. Own illustration based on NEPS-Starting Cohort 6 data. For gross hourly wage, the log value is taken. Health and cognitive skill outcomes are standardized to mean 0 and standard deviation 1. The MTE (vertical axis) is measured in logs for wage and in units of standard deviations of the health and cognitive skill outcomes. The dashed lines give the 95% confidence intervals based on clustered bootstrapped standard errors with 200 replications. Calculations are based on a local linear regression where the influence of the control variables was isolated using a semiparametric Robinson estimator (Robinson 1988) for each outcome variable. The optimal, exact bandwidths for the local linear regressions are: wage 0.10, PCS 0.13, MCS 0.16, reading competence 0.10, reading speed 0.11, math score 0.12.]

The curve of marginal treatment effects resembles the one found by Carneiro et al. (2011) for the United States, with the main difference that we do not find negative effects (but just zero) for a part of the distribution. The effect sizes are also comparable, although ours are somewhat smaller. For instance, Carneiro et al. (2011) find highest returns of 28% per year of college, whereas we find 80% for the college degree, which, on average, takes 4.5 years to be earned. What could explain these wage returns? Two potential channels for higher earnings could be better cognitive skills and/or better health due to increased education. The findings on skills and health that we discuss in the following could, thus, be read as investigations into mechanisms for the positive wage returns. However, at least for health, this is only one potential interpretation, as health might also be directly affected by income. The right column of Figure 4 plots the results for cognitive skills. The distribution of marginal treatment effects is remarkably similar to the one for wages. We see that, also in terms of cognitive skills, not everybody benefits from more education. Some individuals, again those with a high desire to study, benefit strongly, while the effects approach zero for individuals with UD > 0.6. This holds for reading speed, reading competence, as well as mathematical literacy.
The largest returns are as high as 2–3 standard deviations, again only for the small group with the highest college readiness. Thus, we observe the same selection into gains as with wages, and the findings could be interpreted as returns to cognitive abilities from education being a potential pathway for the positive earnings returns. The findings are somewhat different for health, as seen in the lower left part of Figure 4. First of all, the returns are much more homogeneous than those for wages and skills. Although there is still some heterogeneity in the returns to physical health (though to a smaller degree than before), the returns are completely homogeneous for mental health. Moreover, the returns are zero throughout for mental health. Physical health effects are positive (although not always statistically significant) for around 75% of the individuals, whereas they are close to zero for the 25% with the lowest desire to study. The main findings of this paper can be summarized as follows:
– Education leads to higher wages and cognitive abilities for the same approximately 60% of individuals. This can also be read as suggestive evidence for cognitive abilities being a channel for the effect of education on wages.
– Education does not pay off for everybody. However, in no case are the effects negative. Thus, education never harms in terms of gross wages, skills, and health. (Obviously, this view only considers potential benefits and disregards costs—thus, net benefits might well be negative for some individuals.)
– There are clear signs of selection into gains. Those individuals who realize the highest returns to education are those who are most ready to take it.
With policy initiatives such as the "Higher Education Pact 2020", Germany continuously increases participation in higher education in order to meet OECD standards (see OECD 2015a,b). Our results imply that this might not pay off, at least in terms of productivity (measured by wages), cognitive abilities, and health. Without fully simulating the consequences of further increased student numbers in Germany, it is safe to assume that additional students would be those with higher values of UD, as those with a high desire to study are already largely enrolled. But these additional students are the ones who do not seem to benefit from college education. However, this projection needs to be taken with a grain of salt, as our findings are based on education in the 1960s–1980s and current education might yield different effects. We carry out two kinds of robustness checks with respect to the definition of the instrument (see Section 4.4). Figure A.3 in the Appendix reports the findings when the instrument definition does not consider the increases in college size. The MTE curves do not stay exactly the same as before, but the main conclusions are unchanged. Wage returns are slightly more homogeneous. The results for reading competence and mathematical literacy are virtually the same, whereas for reading speed homogeneously positive effects are found. However, the confidence bands of the curves for both definitions of the instrument overlap widely. This also holds for the health measures. The MTE curve for MCS is slightly shifted upward, and the one for PCS is more homogeneous, but the differences in the curves across the two kinds of instruments are not significant.
Although the likelihood that two valid instruments deliver exactly the same results is fairly low in any application (and basically zero when as many points are evaluated as is the case here), the broad picture that leads to the conclusions stated previously is invariant to the change in the instrument definition. In Online Appendix C, we report the results of a robustness check in which we use different kernel bandwidths to weight the college distance (bandwidths between 100 and 250 km). Here the differences are indeed largely absent. Although the condensation of college availability into equation (7) may seem somewhat arbitrary, these robustness checks show that the specification of the instrument does not affect our conclusions.

5.3. Treatment Parameters

Table 5 reports the conventional treatment parameters estimated using the MTE and the respective weights, as described previously and more formally derived and explained in, for example, Heckman et al. (2006). In particular, we calculate the average treatment effect (ATE), the average treatment effect on the treated (ATT), the average treatment effect on the untreated (ATU), and the local average treatment effect (LATE). The estimated weights applied to the returns for each UD on the MTE curve are shown in Figure 5.

[Figure 5. Treatment parameter weights conditional on the propensity score. Own illustration based on NEPS-Starting Cohort 6 data. Weights were calculated using the entire sample of 8,672 observations for which we have instrument and control variable information, irrespective of the availability of the outcome variable.]

Table 5. Estimated treatment parameters for main results.

                              (1)       (2)       (3)       (4)
Treatment parameter           ATE       ATT       ATU       LATE
Main outcomes
  Log gross wage              0.43      0.59      0.36      0.49
                              (0.06)    (0.07)    (0.07)    (0.05)
  PCS                         0.45      0.86      0.29      0.55
                              (0.13)    (0.13)    (0.16)    (0.09)
  MCS                         0.10      0.09      0.10      0.05
                              (0.10)    (0.12)    (0.13)    (0.08)
  Reading competence          1.10      1.88      0.78      1.18
                              (0.13)    (0.15)    (0.16)    (0.08)
  Reading speed               0.72      1.17      0.54      0.70
                              (0.14)    (0.15)    (0.18)    (0.11)
  Mathematical literacy       1.11      1.56      0.93      1.13
                              (0.17)    (0.21)    (0.19)    (0.14)

Notes: Own calculations based on NEPS-Starting Cohort 6 data. The MTE is estimated with a semiparametric Robinson estimator. The LATE is estimated using the IV weights depicted in Figure 5; therefore, the LATE in this table deviates slightly from the corresponding 2SLS estimates. Standard errors estimated using a clustered bootstrap (at the district level) with 200 replications.
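How such parameters aggregate the MTE can be sketched in a few lines, using the standard unconditional weight formulas summarized in Heckman et al. (2006). The MTE curve and propensity-score draws below are hypothetical stand-ins, so this illustrates the aggregation logic rather than reproducing Table 5.

```python
import numpy as np

u = np.linspace(0.01, 0.99, 99)                       # grid of u_D
mte = 1.0 - 0.8 * u                                   # any estimated MTE curve would do here
p = np.random.default_rng(1).beta(2, 4, size=10_000)  # stand-in for the estimated propensity scores

# ATE weights u_D uniformly; ATT up-weights low u_D (likely treated), ATU up-weights high u_D:
# w_ATT(u) = Pr(P > u) / E[P],  w_ATU(u) = Pr(P < u) / E[1 - P]
w_att = np.array([(p > ui).mean() for ui in u]) / p.mean()
w_atu = np.array([(p < ui).mean() for ui in u]) / (1 - p).mean()

ate = mte.mean()
att = np.sum(mte * w_att) / np.sum(w_att)             # normalizing guards against discretization error
atu = np.sum(mte * w_atu) / np.sum(w_atu)
print(round(ate, 2), round(att, 2), round(atu, 2))    # ATT > ATE > ATU when the MTE slopes downward
```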
Whereas the local average treatment effect is an average effect weighted by the conditional density of the instrument, the ATT, for example, gives more weight to those individuals who already select into higher education at low UD values (indicating low intrinsic reluctance towards higher education); the reverse holds for the ATU. The reason is that their likelihood of being in any "treatment group" is higher compared to individuals with higher values of UD. The ATE places equal weight over the whole support. In all cases but mental health and reading speed, the LATE parameters in column (4) approximately double compared to the OLS estimates. Local average treatment effects that are larger than OLS estimates may seem counterintuitive, as one often expects OLS to overestimate the true effects. Yet, this is not an uncommon finding and, in a world with heterogeneous effects, is often explained by the group of compliers potentially having higher individual treatment effects than the average individual (Card 2001). This becomes directly apparent by comparing the LATE to column (1), which is another indication of selection into gains. Regarding the other treatment parameters, the LATE lies within the range of the ATT and the ATU. Note that these are the "empirical", conditional-on-the-sample parameters as calculated in Basu et al. (2007), that is, the treatment parameters conditional on the common support of the propensity score. The population ATE, however, would require full support on the unit interval.20 As depicted in Figure 3, we do not have full support in the data at hand. Although we observe individuals with and without a college degree for most probabilities to study, we cannot observe an individual with a probability arbitrarily close to 100% without a college degree (or arbitrarily close to 0% with a degree). Instead, the parameters in Table 5 were computed using the marginal treatment effects on the common support only. However, as this reaches from 0.002 to 0.969, it seems fair to say that they probably come very close to the true parameters. Table 5 is informative for two reasons in particular. First, it boils down the MTE to single numbers such that the average effect size immediately becomes clear.
And, second, differences between the parameters again emphasize the role of effect heterogeneity. Together with the bootstrapped standard errors, the table reveals that the ATT and the ATU structurally differ from each other for all outcomes but mental health. Hence, the treatment group of college graduates seems to benefit from higher education in terms of wages, skills, and physical health compared to the non-graduates. One reason is that they might choose to study because of their idiosyncratic skill returns. Yet, these may also be windfall gains that accompany the monetary college premium on which the decision was more likely based. Nonetheless, this too is evidence for selection into gains. The effect sizes for all individuals (ATE), for the university degree subgroup (ATT), and for those without higher education (ATU) in Table 5 capture the overall returns to college education, not the per-year effects. On average, the per-year effect is approximately the overall effect divided by 4.5 years (the regular time it takes to receive a Diplom degree), if we assume linear additivity of the yearly effects. The per-year effects for mathematical literacy and reading competence are about 25% of a standard deviation for all parameters. For reading speed the effects are around 15% of an SD, whereas the wage effects are around 10%. These effects are of considerable size, yet slightly smaller than those found in the previous literature on different treatments and, importantly, different compliers. For instance, ability returns to an additional year of compulsory schooling were found to be up to 0.5 SD (see, e.g., Banks and Mazzonna 2012). To get an idea of the total effect of college education on, say, math skills, the following example might help. Starting at the median of the standardized unconditional math score distribution (Φ(0) = 50%), the average effect of 1.11 standard deviations, all other things equal, moves a person to the 87% quantile of that distribution (Φ(0 + 1.11) ≈ 87%)—in the thought experiment of being the only treated person in the peer group. As suggested by the pattern of the marginal treatment effects in Figure 4, the health returns to higher education are smaller than the skill returns; still, they are around 10% of an SD per year (except for the zero effect on mental health). Given the previous literature, the results seem reasonable. Regarding the statistical significance of the effects, note that we use several outcome variables and thus potentially run into multiple testing problems. Yet, we refrain from addressing this with a complex algorithm that also accounts for the correlation of the six outcome variables and argue as follows: All ATEs and ATTs are highly statistically significant, so multiple testing across six outcomes should not be a major issue. Even with a most conservative Bonferroni correction, critical values for statistical significance at the 5% level would increase from 1.96 to 2.65 and would not change any conclusions regarding significance.21
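The back-of-the-envelope numbers in this paragraph are easy to verify:

```python
from scipy.stats import norm

# Bonferroni with six outcomes: the two-sided 5% critical value rises from 1.96
# to about 2.64 under normality (2.65 as quoted in the text)
print(round(norm.ppf(1 - 0.05 / 2), 2), round(norm.ppf(1 - 0.05 / (2 * 6)), 2))

# Thought experiment for math skills: an effect of 1.11 SD moves the median person
# to roughly the 87th percentile of the standardized score distribution
print(round(norm.cdf(0), 2), round(norm.cdf(1.11), 2))

# Per-year effect: overall college effect divided by the regular 4.5 years to a degree
print(round(1.11 / 4.5, 2))   # mathematical literacy: about 0.25 SD per year
```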
6. Potential Mechanisms for Health and Cognitive Abilities

In this section, we investigate the role of potential mechanisms through which college education may work. College education is likely to affect the observed level of health and cognitive abilities through the attained stock of health capital and the cognitive reserve—the mind's ability to tolerate brain damage (Stern 2012; Meng and D'Arcy 2012). There are probably three channels through which education affects long-run health and cognitive abilities:
– in college: a direct effect from education;
– post-college: a diminished age-related decline in health and skills due to the higher health capital/cognitive reserve attained in college (e.g., the "cognitive reserve hypothesis", Stern et al. 1999);
– post-college: different health behavior or different jobs that are less detrimental to health and more cognitively demanding (Stern 2012).
The post-college mechanisms that compensate for the decline also contain implicit multiplying factors like complementarities and self-productivity; see Cunha et al. (2006) and Cunha and Heckman (2007). The NEPS data include various job characteristics and health behaviors that potentially reduce the age-related skill/health decline. However, the data allow us neither to disentangle these components empirically (i.e., to observe changes in one channel that are exogenous to the other channels) nor to analyze how the effect on the mechanism causally maps into higher skills or better health (as, e.g., in Heckman et al. 2013). Thus, it should be noted that this sub-analysis is merely suggestive and by no means a comprehensive analysis of the mechanisms behind the effects found in the previous section. Moreover, the following analysis focuses on the potential channel of different jobs and health behaviors. It does the same as before (same controls, same estimation strategy and instrument) but replaces the outcome variables with the indicators of potential mechanisms.

Cognitive Abilities. The main driving force behind skill formation after college might lie in activities on the job. When individuals with college education engage in more cognitively demanding activities, for example, more sophisticated jobs, this might mentally exercise their minds (Rohwedder and Willis 2010). This effect of mental training is sometimes referred to as the use-it-or-lose-it hypothesis; see Rohwedder and Willis (2010) or Salthouse (2006). If such an exercise effect leads to alternating brain networks that "may compensate for the pathological disruption of preexisting networks" (Meng and D'Arcy 2012, p. 2), a higher demand for cognitively demanding tasks (as a result of college education) increases the individual's cognitive capacity. In order to investigate whether a more cognitively demanding job might be a potential mechanism (as, e.g., suggested by Fisher et al. 2014), we use information on the individual's activities on the job. All four outcome variables considered in this subsection are binary; their definitions, sample means, and the effects of college education are given in Table 6. For the sake of brevity we focus on the most relevant treatment parameters here and do not discuss the MTE curvatures.

Table 6. Potential mechanisms for cognitive skills.
Parameter              Definition                                               Sample mean   ATE      ATT      ATU
Math: percentages      =1 if job requires calculating with percentages          0.711         0.20     0.23     0.19
                       and fractions                                                           (0.06)   (0.07)   (0.07)
Reading                =1 if respondent often spends more than 2 hours          0.777         0.23     0.30     0.30
                       reading                                                                 (0.03)   (0.03)   (0.04)
Writing                =1 if respondent often writes more than 1 page           0.704         0.39     0.64     0.29
                                                                                               (0.07)   (0.09)   (0.07)
Learning new things    =1 if respondent reports to learn new things often       0.671         0.22     0.31     0.18
                                                                                               (0.07)   (0.09)   (0.07)

Notes: Own calculations based on NEPS-Starting Cohort 6 data. Definitions are taken from the data manual. Standard errors estimated using a clustered bootstrap (at the district level) and reported in parentheses.

College education has strong effects on all four outcomes. It increases the likelihood of being in a job that requires calculating with percentages and fractions, that involves reading or writing, and in which individuals often learn new things. The effect sizes are very large, which is not too surprising, as many of the jobs that entail these mentally demanding tasks require a college diploma as a quasi-formal condition of employment. Moreover, as observed before, there seems to be effect heterogeneity here as well, and selection into gains, as all average treatment effects on the treated are larger than the treatment effects on the untreated (except in the case of reading more than 2 hours). The differences are particularly strong for writing and for learning new things. All in all, the findings suggest that cognitively more demanding jobs due to college education might play a role in explaining the long-run cognitive returns to education. Note again, however, that these findings are only suggestive evidence for a causal mechanism.
It might as well be that it is the other way around and the cognitive abilities attained in college induce a selection into these job types.
Health.
Concerning the health mechanisms, we study job-related effects and effects on health behavior. The NEPS data cover engagement in several physical activities on the job, for example, working in a standing position, working in an uncomfortable position (like bending often), walking or cycling long distances, or carrying heavy loads. Table 7 reports definitions, sample means and effects. In the upper panel of the table, the binary indicators are coded as 1 if the respondent reports engaging in the activity (and 0 otherwise).

Table 7. Potential mechanisms for health.

| Parameter | Definition | Sample mean | ATE | ATT | ATU |
| --- | --- | --- | --- | --- | --- |
| Physically demanding activities on the job | | | | | |
| Standing position | =1 if often working in a standing position for 2 or more hours | 0.302 | −0.37 (0.07) | −0.56 (0.09) | −0.30 (0.08) |
| Uncomfortable pos. | =1 if respondent needs to bend, crawl, lie down, kneel or squat | 0.190 | −0.20 (0.05) | −0.37 (0.06) | −0.13 (0.06) |
| Walking | =1 if job often requires walking, running or cycling | 0.242 | −0.39 (0.06) | −0.56 (0.07) | −0.32 (0.07) |
| Carrying | =1 if often carrying a load of at least 10 kg | 0.182 | −0.40 (0.05) | −0.50 (0.05) | −0.37 (0.05) |
| Health behaviors | | | | | |
| Obesity | =1 if body mass index (= weight in kg / height in m²) > 30 | 0.155 | −0.08 (0.04) | −0.15 (0.05) | −0.05 (0.05) |
| Smoking | =1 if currently smoking | 0.270 | −0.18 (0.06) | −0.23 (0.06) | −0.16 (0.07) |
| Alcohol amount | =1 if three or more drinks when consuming alcohol | 0.187 | −0.14 (0.05) | −0.13 (0.06) | −0.14 (0.06) |
| Sport | =1 if any sporting exercise in the previous 3 months | 0.717 | 0.16 (0.07) | 0.31 (0.07) | 0.10 (0.09) |

Notes: Own calculations based on NEPS-Starting Cohort 6 data. Definitions are taken from the data manual. Standard errors estimated using a clustered bootstrap (at the district level) and reported in parentheses.
We find that college education reduces the probability of engaging in all four physically demanding activities. Again, the estimated effects are very large in size, implying that it is the college diploma that qualifies individuals for a white-collar office job. These effects might explain why we find physical health effects of education and are in line with the absence of mental health effects: white-collar jobs are usually less demanding with respect to physical health but not at all less stressful. Besides physical activities on the job, health behaviors may be considered an important dimension of the general formation of health over the life-cycle, see Cutler and Lleras-Muney (2010). To analyze this, we resort to the following variables in our data set: a binary indicator for obesity (body mass index exceeds 30) as a compound lifestyle measure and more direct behavioral variables such as an indicator for smoking, the amount of alcohol consumption (1 if having three or more drinks when consuming alcohol), as well as physical activity measured by an indicator of having taken any sporting exercise in the previous 3 months. The lower panel in Table 7 reports the sample means and treatment effects. College education leads to a decrease in the probability of being obese and reduces the probability of smoking. This is in line with LATE estimates of the effect of college education in the United States by Grimard and Parent (2007) and de Walque (2007). College education also seems to reduce alcohol consumption and increases the likelihood of engaging in sporting exercise.
Again, the effect sizes are large, if not as large as for the other potential mechanisms. Moreover, some of them are only marginally statistically significant. Taken together, college education affects the potential health mechanisms in the expected direction. Again, there is effect heterogeneity, observable in the different treatment parameters for the same outcome variables. Since health is a high-dimensional measure, the potential mechanisms at hand are of course not able to explain the health returns to college education entirely. Nevertheless, the findings encourage us in our interpretation of the effects of college education on physical health.

7. Conclusion

This paper uses the Marginal Treatment Effect framework introduced and advanced by Björklund and Moffitt (1987) and Heckman and Vytlacil (2005, 2007) to estimate returns to college education under essential heterogeneity. We use representative data from the German National Educational Panel Study (NEPS). Our outcome measures are wages, cognitive abilities, and health. Cognitive abilities are assessed using state-of-the-art cognitive competence tests on individual reading speed, text understanding, and mathematical literacy. As expected, all outcome variables are positively correlated with having a college degree in our data set. Using an instrument that exploits exogenous variation in the supply of colleges, we estimate marginal returns to college education.

The main findings of this paper are as follows: College education improves average wages, cognitive abilities and physical health (but not mental health). There is heterogeneity in the effects and clear signs of selection into gains. Those individuals who realize the highest returns to education are also those most likely to take it. Moreover, education does not pay off for everybody. Although it is never harmful, we find zero causal effects for around 30%–40% of the population. Thus, although college education is beneficial on average, further increasing the number of students—as sometimes called for—is less likely to pay off, as the current marginal students are mostly in the range of zero causal effects. Potential mechanisms of the skill returns are more cognitively demanding jobs that slow down cognitive decline at later ages. Regarding health, we find that higher education reduces obesity, smoking, and problematic alcohol consumption and increases sports participation. All in all, given that the average individual clearly seems to benefit from education and that continuing technological progress makes skills more and more valuable, education should still be an answer to technological change for the average individual. One limitation of this paper is that we are not able to stratify the analysis by study subject. This is left for future work.

Appendix: Additional Figures and Tables

Figure A.1. Spatial variation of colleges across districts and over time. Own illustration based on the German Statistical Yearbooks 1959–1991 (German Federal Statistical Office various issues, 1959–1991). The maps show all 326 West German districts (Kreise, spatial units of 2009) except Berlin in the years 1958 (first year in the sample), 1970, 1980, and 1990 (last year in the sample). Districts usually cover a bigger city or some administratively connected villages. If a district has at least one college, the district is depicted in black. Only a few districts have more than one college.
For those districts the number of students is added up in the calculations, but multiple colleges are not depicted separately in the maps.

Figure A.2. Distribution of dependent variables by college graduation. Own illustration based on NEPS-Starting Cohort 6 data.

Figure A.3. Sensitivity in marginal treatment effects when using only the sum of the kernel-weighted college distances. Own illustration based on NEPS-Starting Cohort 6 data. For gross hourly wage, the log value is taken. Health and cognitive skill outcomes are standardized to mean 0 and standard deviation 1. The MTE (vertical axis) is measured in logs for wage and in units of standard deviations of the health and cognitive skill outcomes. The dashed lines give the 95% confidence intervals. Calculations are based on a local linear regression where the influence of the control variables was isolated using a semiparametric Robinson estimator (Robinson 1988) for each outcome variable.
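Two estimation details referred to above may be worth unpacking as a reading aid. First, the note to Figure A.3 mentions a local linear regression combined with a semiparametric Robinson (1988) step. The following display is our restatement of the standard double-residual procedure for a partially linear model (in the specification without treatment-covariate interactions), not an excerpt from the authors' estimation code:

```latex
E[\,Y \mid X, P(Z) = p\,] = X'\beta + K(p),
\qquad
\mathit{MTE}(x, u_D) = \frac{\partial E[\,Y \mid X = x, P(Z) = p\,]}{\partial p}\bigg|_{p = u_D} = K'(u_D),
```

where $\beta$ is obtained by regressing $Y - \hat{E}[Y \mid p]$ on $X - \hat{E}[X \mid p]$, and $K(\cdot)$ and its derivative are then recovered from a local linear regression of $Y - X'\hat{\beta}$ on $p$.

Second, footnote 14 describes a fully parametric alternative in which $K(p)$ is approximated by a polynomial in the propensity score. A minimal sketch of that alternative is given below; the variable names, the probit first stage, and the cubic default are assumptions made purely for illustration, not the authors' implementation.

```python
# Illustrative sketch of the polynomial MTE approximation described in
# footnote 14 (cf. Basu et al. 2007). Not the authors' estimation code.
import numpy as np
import statsmodels.api as sm

def mte_polynomial(y, d, X, Z, degree=3):
    """Return a grid of propensity scores and the implied MTE estimates."""
    # First stage (assumed probit): propensity score P(Z) for college education.
    first_stage = sm.Probit(d, sm.add_constant(np.column_stack([X, Z]))).fit(disp=0)
    p = first_stage.predict()

    # Outcome equation: E(Y | X, p) = X'beta + phi_1 p + ... + phi_k p^k,
    # where the linear term absorbs (alpha_1 - alpha_0) from footnote 14.
    poly = np.column_stack([p ** j for j in range(1, degree + 1)])
    ols = sm.OLS(y, sm.add_constant(np.column_stack([X, poly]))).fit()
    phi = np.asarray(ols.params)[-degree:]

    # MTE(x, p) = d E(Y | X, p) / d p = sum_j j * phi_j * p^(j - 1),
    # evaluated only on the empirical support of the propensity score.
    grid = np.linspace(p.min(), p.max(), 100)
    mte = np.zeros_like(grid)
    for j in range(1, degree + 1):
        mte += j * phi[j - 1] * grid ** (j - 1)
    return grid, mte
```

Conventional parameters such as the ATE or ATT would then be obtained by integrating this MTE curve against the appropriate weights over the region of common support, as noted in footnotes 14 and 20.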
Table A.1. Control variables and means by college degree.

| Variable | Definition | With college degree | Without college degree |
| --- | --- | --- | --- |
| General information | | | |
| Female | =1 if respondent is female | 40.38 | 54.18 |
| Year of birth (FE) | Year of birth of the respondent | 1959 | 1959 |
| Migrational background | =1 if respondent was born abroad | 0.89 | 0.64 |
| No native speaker | =1 if mother tongue is not German | 0.30 | 0.43 |
| Rural district | =1 if current district is rural | 16.79 | 24.96 |
| Mother still alive | =1 if mother is still alive in 2009/10 | 65.38 | 63.83 |
| Father still alive | =1 if father is still alive in 2009/10 | 45.27 | 42.3 |
| Pre-college living conditions | | | |
| Married before college | =1 if respondent got married before the year of the college decision or in the same year | 0.20 | 0.44 |
| Parent before college | =1 if respondent became a parent before the year of the college decision or in the same year | 0.30 | 0.17 |
| Siblings | Number of siblings | 1.56 | 1.87 |
| First born | =1 if respondent was the first born in the family | 33.73 | 29.01 |
| Age 15: lived by single parent | =1 if respondent was raised by single parent | 5.33 | 5.32 |
| Age 15: lived in patchwork family | =1 if respondent was raised in a patchwork family | 1.11 | 0.27 |
| Age 15: orphan | =1 if respondent was an orphan at the age of 15 | 0.10 | 0.20 |
| Age 15: mother employed | =1 if mother was employed at the respondent's age of 15 | 45.93 | 46.87 |
| Age 15: mother never unemployed | =1 if mother was never unemployed until the respondent's age of 15 | 61.24 | 62.29 |
| Age 15: father employed | =1 if father was employed at the respondent's age of 15 | 92.46 | 90.73 |
| Age 15: father never unemployed | =1 if father was never unemployed until the respondent's age of 15 | 98.45 | 97.14 |
| Pre-college education | | | |
| Final school grade: excellence | =1 if the overall grade of the highest school degree was excellent | 4.59 | 1.79 |
| Final school grade: good | =1 if the overall grade of the highest school degree was good | 31.51 | 25.83 |
| Final school grade: satisfactory | =1 if the overall grade of the highest school degree was satisfactory | 17.97 | 28.03 |
| Final school grade: sufficient or worse | =1 if the overall grade of the highest school degree was sufficient or worse | 1.04 | 1.42 |
| Repeated one grade | =1 if student needed to repeat one grade in elementary or secondary school | 19.97 | 20.51 |
| Repeated two or more grades | =1 if student needed to repeat two or more grades in elementary or secondary school | 2.74 | 1.85 |
| Military service | =1 if respondent was drafted for compulsory military service | 28.03 | 23.89 |
| Parental characteristics (M: mother, F: father) | | | |
| M: year of birth (FE) | Year of birth of the respondent's mother | 1930 | 1932 |
| M: migrational background | =1 if mother was born abroad | 5.47 | 4.85 |
| M: at least inter. edu | =1 if mother has at least an intermediate secondary school degree | 17.97 | 5.95 |
| M: vocational training | =1 if mother's highest degree is vocational training | 20.86 | 16.18 |
| M: further job qualification | =1 if mother has further job qualification (e.g., Meister degree) | 4.29 | 1.73 |
| F: year of birth (FE) | Year of birth of the respondent's father | 1927 | 1929 |
| F: migrational background | =1 if father was born abroad | 6.36 | 5.54 |
| F: at least inter. edu | =1 if father has at least an intermediate secondary school degree | 20.86 | 8.09 |
| F: vocational training | =1 if father's highest degree is vocational training | 19.12 | 21.99 |
| F: further job qualification | =1 if father has further job qualification (e.g., Meister degree) | 11.46 | 6.76 |
| Number of observations (PCS and MCS sample) | | 1,352 | 3,461 |

Notes: Own calculations based on NEPS-Starting Cohort 6 data. Definitions are taken from the data manual. Mean values refer to the MCS and PCS sample. FE = variable values are included as fixed effects in the analysis.
The editor in charge of this paper was Claudio Michelacci.

Acknowledgments

We thank the editor and two anonymous referees for many helpful suggestions which improved the paper considerably. We are grateful to Pedro Carneiro, Arnaud Chevalier, Damon Clark, Eleonora Fichera, Martin Fischer, Hendrik Jürges and Corinna Kleinert for valuable comments and Claudia Fink for excellent research assistance. Furthermore, we would like to thank the participants of several conferences and seminars for helpful discussions. Access to Micro Census data at the GESIS-German Microdata Lab, Mannheim, is gratefully acknowledged. Financial support from the Deutsche Forschungsgemeinschaft (DFG, Grant number SCHM 3140/1-1) is gratefully acknowledged. Matthias Westphal is affiliated with and was also partly funded by the Ruhr Graduate School in Economics. Hendrik Schmitz and Matthias Westphal are furthermore affiliated with the Leibniz Science Campus Ruhr. This paper uses data from the National Educational Panel Study (NEPS): Starting Cohort Adults, 10.5157/NEPS:SC6:5.1.0. From 2008 to 2013, NEPS data was collected as part of the Framework Program for the Promotion of Empirical Educational Research funded by the German Federal Ministry of Education and Research (BMBF). As of 2014, NEPS is carried out by the Leibniz Institute for Educational Trajectories (LIfBi) at the University of Bamberg in cooperation with a nationwide network.

Footnotes

1. The Economist, edition March 28th to April 3rd 2015.
2. Hansen et al. (2004) use a control function approach to adjust for education in the short-term development of cognitive abilities. Carneiro et al. (2001, 2003) analyze the short-term effects of college education. Glymour et al. (2008), Banks and Mazzonna (2012), Schneeweis et al. (2014), and Kamhöfer and Schmitz (2016) analyze the effects of secondary schooling on long-term cognitive skills.
3. See Section 4 for a detailed definition of cognitive abilities. We use the terms "cognitive abilities", "cognitive skills", and "skills" interchangeably.
4. We use the words university and college as synonyms to refer to German Universitäten and closely related institutions like institutes of technology (Technische Universitäten/Technische Hochschulen), an institutional type that combines features of colleges and universities of applied science (Gesamthochschulen), and universities of the armed forces (Bundeswehruniversitäten/Bundeswehrhochschulen).
5. The working paper version Kamhöfer et al.
(2015) also uses the introduction of a student loan program as a further source of exogenous variation. Using this instrument does not affect the findings at all but is not considered here for the sake of legibility of the paper.
6. All data are taken from the German Statistical Yearbooks, 1959–1991, see German Federal Statistical Office (various issues, 1959–1991). We only use colleges and no other higher educational institutes described in Section 2.1 (e.g., universities of applied science). Administrative data on openings and the number of students are not available for institutions other than colleges. However, since other higher educational institutions are small in size and highly specialized, they should be less relevant for the higher education decision and, thus, neglecting them should not affect the results.
7. Table 1 uses a different data source than the main analysis and the local level is slightly broader than districts, see the notes to the table.
8. Note that the general derivation does not require linear indices. However, it is standard to assume linearity when it comes to estimation.
9. By applying, for instance, the standard normal distribution to the left and the right of the equation: $Z'\delta \ge V \Leftrightarrow \Phi(Z'\delta) \ge \Phi(V) \Leftrightarrow P(Z) \ge U_D$, where $P(Z) \equiv P(D = 1 \mid Z) = \Phi(Z'\delta)$.
10. In this model the exclusion restriction is implicit since Z has an effect on D* but not on Y1, Y0. Monotonicity is implied by the choice equation since D* monotonically either increases or decreases in the values of Z.
11. To make this explicit, all treatment parameters $TE_j(x)$ can be decomposed into a weight $h_j(x, u_D)$ and the MTE: $TE_j(x) = \int_{0}^{1} \mathit{MTE}(x, u_D)\, h_j(x, u_D)\, du_D$. See, for example, Heckman and Vytlacil (2007) for the exact expressions of the weights for common parameters.
12. Essentially, this is equivalent to a simple 2SLS case. If one wants to identify observable effect heterogeneity (i.e., interact the treatment indicator with control variables in the regression model), the instrument needs to be unconditionally independent of these controls.
13. On the other hand, estimating with heterogeneity in the observables can lead to an efficiency gain.
14. Semi-parametrically, the MTE can only be identified over the support of P. The greater the variation in Z (conditional on X) and, thus, in P(Z), the larger the range over which the MTE can be identified. This may be considered a drawback of the MTE approach, in particular because treatment parameters that have weight unequal to zero outside the support of the propensity score are not identified using semiparametric techniques. This is sometimes called the "identification at infinity" requirement (see Heckman 1990) of the MTE. However, we argue that the MTE over the support of P is already very informative. We use semiparametric estimates of the MTE and restrict the results to the empirical ATE or ATT that are identified for those individuals who are in the sample (see Basu et al. 2007). Alternatively one might use a flexible approximation of K(p) based on a polynomial of the propensity score as done by Basu et al. (2007). This amounts to estimating $E(Y|X, p) = X'\beta + (\alpha_1 - \alpha_0)\, p + \sum_{j=1}^{k} \phi_j p^j$ by OLS and using the estimated coefficients to calculate $\widehat{\mathit{MTE}}(x, p) = (\widehat{\alpha}_1 - \widehat{\alpha}_0) + \sum_{j=1}^{k} \widehat{\phi}_j\, j\, p^{j-1}$.
15. The working paper version also considers health satisfaction with results very similar to PCS (Kamhöfer et al. 2015).
16. For a general overview of test designs and applications in the NEPS, see Weinert et al. (2011).
17. The test measures the "assessment of automatized reading processes", where a "low degree of automation in decoding [...] will hinder the comprehension process", that is, understanding of texts (Zimmermann et al. 2014, p. 1). The test was newly designed for NEPS but based on the well-established Salzburg reading screening test design principles (LIfBi 2011).
18. The total number of possible points exceeds 32 because some items were worth more than one point.
19. We assess the optimal bandwidth in the local linear regression using Stata's lpoly rule of thumb. Our results are also robust to the inclusion of higher order polynomials in the local (polynomial) regression. The optimal, exact bandwidths are: wage 0.10, PCS 0.13, MCS 0.16, reading competence 0.10, reading speed 0.11, math score 0.12.
20. The ATT would require for every college graduate in the population a non-graduate with the same propensity score (including 0%). For the ATU one would need the opposite: a graduate for every non-graduate with the same propensity score, including 100%.
21. Also taking into account the outcomes from Section 6 and assuming that we test 18 times would increase the critical value to 2.98 in the (overly conservative) Bonferroni correction.

References

Acemoglu Daron, Johnson Simon (2007). "Disease and Development: The Effect of Life Expectancy on Economic Growth." Journal of Political Economy, 115, 925–985.
American Psychological Association (1995). "Intelligence: Knowns and Unknowns." Report of a task force convened by the American Psychological Association.
Andersen Hanfried H., Mühlbacher Axel, Nübling Matthias, Schupp Jürgen, Wagner Gert G. (2007). "Computation of Standard Values for Physical and Mental Health Scale Scores Using the SOEP Version of SF-12v2." Schmollers Jahrbuch: Journal of Applied Social Science Studies/Zeitschrift für Wirtschafts- und Sozialwissenschaften, 127, 171–182.
Anderson John (2007). Cognitive Psychology and its Implications, 7th ed. Worth Publishers, New York.
Banks James, Mazzonna Fabrizio (2012). "The Effect of Education on Old Age Cognitive Abilities: Evidence from a Regression Discontinuity Design." The Economic Journal, 122, 418–448.
Barrow Lisa, Malamud Ofer (2015). "Is College a Worthwhile Investment?" Annual Review of Economics, 7, 519–555.
Bartz Olaf (2007). "Expansion und Umbau–Hochschulreformen in der Bundesrepublik Deutschland zwischen 1964 und 1977." Die Hochschule, 2007, 154–170.
Basu Anirban (2011). "Estimating Decision-Relevant Comparative Effects using Instrumental Variables." Statistics in Biosciences, 3, 6–27.
Basu Anirban (2014). "Person-Centered Treatment (PeT) Effects using Instrumental Variables: An Application to Evaluating Prostate Cancer Treatments." Journal of Applied Econometrics, 29, 671–691.
Basu Anirban, Heckman James J., Navarro-Lozano Salvador, Urzua Sergio (2007). "Use of Instrumental Variables in the Presence of Heterogeneity and Self-selection: An Application to Treatments of Breast Cancer Patients." Health Economics, 16, 1133–1157.
Björklund Anders, Moffitt Robert (1987). "The Estimation of Wage Gains and Welfare Gains in Self-Selection."
The Review of Economics and Statistics, 69, 42–49.
Blossfeld H.-P., Roßbach H.-G., Maurice J. von (2011). "Education as a Lifelong Process—The German National Educational Panel Study (NEPS)." Zeitschrift für Erziehungswissenschaft, 14 [Special Issue 14-2011].
Brinch Christian N., Mogstad Magne, Wiswall Matthew (2017). "Beyond LATE with a Discrete Instrument." Journal of Political Economy, 125, 985–1039.
Card David (1995). "Using Geographic Variation in College Proximity to Estimate the Return to Schooling." In Aspects of Labour Market Behaviour: Essays in Honour of John Vanderkamp, edited by Grant K., Christofides L., Swidinsky R. University of Toronto Press, pp. 201–222.
Card David (2001). "Estimating the Return to Schooling: Progress on Some Persistent Econometric Problems." Econometrica, 69, 1127–1160.
Carneiro Pedro, Hansen Karsten T., Heckman James J. (2003). "2001 Lawrence R. Klein Lecture: Estimating Distributions of Treatment Effects with an Application to the Returns to Schooling and Measurement of the Effects of Uncertainty on College Choice." International Economic Review, 44(2), 361–422.
Carneiro Pedro, Hansen Karsten T., Heckman James J. (2001). "Removing the Veil of Ignorance in Assessing the Distributional Impacts of Social Policies." Swedish Economic Policy Review, 8, 273–301.
Carneiro Pedro, Heckman James J., Vytlacil Edward J. (2010). "Evaluating Marginal Policy Changes and the Average Effect of Treatment for Individuals at the Margin." Econometrica, 78, 377–394.
Carneiro Pedro, Heckman James J., Vytlacil Edward J. (2011). "Estimating Marginal Returns to Education." American Economic Review, 101(6), 2754–2781.
Cawley John, Heckman James J., Vytlacil Edward J. (2001). "Three Observations on Wages and Measured Cognitive Ability." Labour Economics, 8, 419–442.
Cervellati Matteo, Sunde Uwe (2005). "Human Capital Formation, Life Expectancy, and the Process of Development." American Economic Review, 95(5), 1653–1672.
Clark Damon, Martorell Paco (2014). "The Signaling Value of a High School Diploma." Journal of Political Economy, 122, 282–318.
Cornelissen Thomas, Dustmann Christian, Raute Anna, Schönberg Uta (forthcoming). "Who Benefits from Universal Childcare? Estimating Marginal Returns to Early Childcare Attendance." Journal of Political Economy.
Costa Dora L. (2015). "Health and the Economy in the United States from 1750 to the Present." Journal of Economic Literature, 53, 503–570.
Cunha F., Heckman J. J., Lochner L. J., Masterov D. V. (2006). "Interpreting the Evidence on Life Cycle Skill Formation." In Handbook of the Economics of Education, Vol. 1, edited by Hanushek E. A., Welch F. North-Holland.
Cunha Flavio, Heckman James J. (2007). "The Technology of Skill Formation." American Economic Review, 97(2), 31–47.
Currie Janet, Moretti Enrico (2003). "Mother's Education and The Intergenerational Transmission of Human Capital: Evidence From College Openings." The Quarterly Journal of Economics, 118, 1495–1532.
Cutler David M., Lleras-Muney Adriana (2010). "Understanding Differences in Health Behaviors by Education." Journal of Health Economics, 29, 1–28.
de Walque Damien (2007). "Does Education Affect Smoking Behaviors? Evidence using the Vietnam Draft as an Instrument for College Education." Journal of Health Economics, 26, 877–895.
Fisher Gwenith, Stachowski Alicia, Infurna Frank, Faul Jessica, Grosch James, Tetrick Lois (2014). "Mental Work Demands, Retirement, and Longitudinal Trajectories of Cognitive Functioning." Journal of Occupational Health Psychology, 19, 231–242.
German Federal Statistical Office (various issues, 1959–1991). "Statistisches Jahrbuch für die Bundesrepublik Deutschland." Tech. rep., German Federal Statistical Office (Statistisches Bundesamt), Wiesbaden.
Glymour M., Kawachi I., Jencks C., Berkman L. (2008). "Does Childhood Schooling Affect Old Age Memory or Mental Status? Using State Schooling Laws as Natural Experiments." Journal of Epidemiology and Community Health, 62, 532–537.
Grimard Franque, Parent Daniel (2007). "Education and Smoking: Were Vietnam War Draft Avoiders also more Likely to Avoid Smoking?" Journal of Health Economics, 26, 896–926.
Hansen Karsten T., Heckman James J., Mullen Kathleen J. (2004). "The Effect of Schooling and Ability on Achievement Test Scores." Journal of Econometrics, 121, 39–98.
Heckman J. J., Lochner L. J., Todd P. E. (1999). "Earnings Equations and Rates of Return: The Mincer Equation and Beyond." In Handbook of the Economics of Education, Vol. 1, edited by Hanushek E., Welch F. Elsevier.
Heckman James J. (1990). "Varieties of Selection Bias." American Economic Review, 80(2), 313–318.
Heckman James J., Pinto Rodrigo, Savelyev Peter (2013). "Understanding the Mechanisms through Which an Influential Early Childhood Program Boosted Adult Outcomes." American Economic Review, 103(6), 2052–2086.
Heckman James J., Urzua Sergio, Vytlacil Edward J. (2006). "Understanding Instrumental Variables in Models with Essential Heterogeneity." The Review of Economics and Statistics, 88, 389–432.
Heckman James J., Vytlacil Edward J. (2005). "Structural Equations, Treatment Effects, and Econometric Policy Evaluation." Econometrica, 73, 669–738.
Heckman James J., Vytlacil Edward J. (2007). "Econometric Evaluation of Social Programs, Part II: Using the Marginal Treatment Effect to Organize Alternative Econometric Estimators to Evaluate Social Programs, and to Forecast their Effects in New Environments." In Handbook of Econometrics, Vol. 6, edited by Heckman J. J., Leamer E. E. Elsevier.
Jürges Hendrik, Reinhold Steffen, Salm Martin (2011). "Does Schooling Affect Health Behavior? Evidence from the Educational Expansion in Western Germany." Economics of Education Review, 30, 862–872.
Kamhöfer Daniel, Schmitz Hendrik (2016). "Reanalyzing Zero Returns to Education in Germany." Journal of Applied Econometrics, 31, 912–919.
Kamhöfer Daniel, Schmitz Hendrik, Westphal Matthias (2015).
"Heterogeneity in Marginal Non-monetary Returns to Higher Education." Tech. rep. , Ruhr Economic Papers , RWI Essen , No. 591 . Lang Frieder , Weiss David , Stocker Andreas , Rosenbladt Bernhard von ( 2007 ). "The Returns to Cognitive Abilities and Personality Traits in Germany." Schmollers Jahrbuch: Journal of Applied Social Science Studies/Zeitschrift für Wirtschafts- und Sozialwissenschaften , 127 , 183 – 192 . Lengerer Andrea , Schroedter Julia , Boehle Mara , Hubert Tobias , Wolf Christof ( 2008 ). "Harmonisierung der Mikrozensen 1962 bis 2005." GESIS-Methodenbericht 12/2008 . GESIS–Leibniz Institute for the Social Sciences , German Microdata Lab, Mannheim . LIfBi ( 2011 ). "Starting Cohort 6 Main Study 2010/11 (B67) Adults Information on the Competence Test." Tech. rep. , Leibniz Institute for Educational Trajectories (LIfBi) – National Educational Panel Study . LIfBi ( 2015 ). "Startkohorte 6: Erwachsene (SC6) – Studienübersicht Wellen 1 bis 5." Tech. rep. , Leibniz Institute for Educational Trajectories (LIfBi) – National Educational Panel Study . Mazumder Bhashkar ( 2008 ). "Does Education Improve Health? A Reexamination of the Evidence from Compulsory Schooling Laws." Economic Perspectives , 32 , 2 – 16 . Meng Xiangfei , D'Arcy Carl ( 2012 ). "Education and Dementia in the Context of the Cognitive Reserve Hypothesis: A Systematic Review with Meta-Analyses and Qualitative Analyses." PLoS ONE , 7 , e38268 . Google Scholar Crossref Search ADS PubMed Nybom Martin ( 2017 ). "The Distribution of Lifetime Earnings Returns to College." Journal of Labor Economics , 35 , 903 – 952 . Google Scholar Crossref Search ADS OECD ( 2015a ). "Education Policy Outlook 2015: Germany." Report, Organisation for Economic Co-operation and Development (OECD) . OECD ( 2015b ). "Education Policy Outlook 2015: Making Reforms Happen." Report, Organisation for Economic Co-operation and Development (OECD) . Oreopoulos Philip , Petronijevic Uros ( 2013 ). "Making College Worth It: A Review of the Returns to Higher Education." The Future of Children , 23 , 41 – 65 . Google Scholar Crossref Search ADS PubMed Oreopoulos Philip , Salvanes Kjell ( 2011 ). "Priceless: The Nonpecuniary Benefits of Schooling." Journal of Economic Perspectives , 25 ( 3 ), 159 – 184 . Google Scholar Crossref Search ADS Picht Georg ( 1964 ). Die deutsche Bildungskatastrophe: Analyse und Dokumentation . Walter Verlag . Pischke Jörn-Steffen , Wachter Till von ( 2008 ). "Zero Returns to Compulsory Schooling in Germany: Evidence and Interpretation." The Review of Economics and Statistics , 90 , 592 – 598 . Google Scholar Crossref Search ADS Robinson Peter M ( 1988 ). "Root-N-Consistent Semiparametric Regression." Econometrica , 56 , 931 – 954 . Google Scholar Crossref Search ADS Rohwedder Susann , Willis Robert J. ( 2010 ). "Mental Retirement." Journal of Economic Perspectives , 24 , 119 – 138 . Google Scholar Crossref Search ADS PubMed Salthouse Timothy A. ( 2006 ). "Mental Exercise and Mental Aging: Evaluating the Validity of the "Use It or Lose It" Hypothesis." Perspectives on Psychological Science , 1 , 68 – 87 . Google Scholar Crossref Search ADS PubMed Schneeweis Nicole , Skirbekk Vegard , Winter-Ebmer Rudolf ( 2014 ). "Does Education Improve Cognitive Performance Four Decades After School Completion?" Demography , 51 , 619 – 643 . Google Scholar Crossref Search ADS PubMed Stephens Melvin Jr , Yang Dou-Yan ( 2014 ). "Compulsory Education and the Benefits of Schooling." American Economic Review , 104 ( 6 ), 1777 – 1792 . 
Stern Yaakov (2012). "Cognitive Reserve in Ageing and Alzheimer's Disease." The Lancet Neurology, 11, 1006–1012.
Stern Yaakov, Albert Steven, Tang Ming-Xin, Tsai Wei-Yen (1999). "Rate of Memory Decline in AD is Related to Education and Occupation: Cognitive Reserve?" Neurology, 53, 1942–1947.
Vytlacil Edward (2002). "Independence, Monotonicity, and Latent Index Models: An Equivalence Result." Econometrica, 70, 331–341.
Weinert S., Artelt C., Prenzel M., Senkbeil M., Ehmke T., Carstensen C. (2011). "Development of Competencies across the Life Span." Zeitschrift für Erziehungswissenschaft, 14, 67–86.
Weisser Ansgar (2005). "18. Juli 1961 – Entscheidung zur Gründung der Ruhr-Universität Bochum." Tech. rep., Internet-Portal Westfälische Geschichte, http://www.westfaelische-geschichte.de/web495.
Zimmermann Stefan, Artelt Cordula, Weinert Sabine (2014). "The Assessment of Reading Speed in Adults and First-Year Students." Tech. rep., Leibniz Institute for Educational Trajectories (LIfBi) – National Educational Panel Study.

© The Author(s) 2018. Published by Oxford University Press on behalf of European Economic Association. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model).
Kamhöfer, Daniel A.; Schmitz, Hendrik; Westphal, Matthias. "Heterogeneity in Marginal Non-monetary Returns to Higher Education." Journal of the European Economic Association, Volume 17(1), February 2019. DOI: 10.1093/jeea/jvx058.

Abstract

In this paper we estimate the effects of college education on cognitive abilities, health, and wages, exploiting exogenous variation in college availability. By means of semiparametric local instrumental variables techniques we estimate marginal treatment effects in an environment of essential heterogeneity. The results suggest positive average effects on cognitive abilities, wages, and physical health. Yet, there is heterogeneity in the effects, which points toward selection into gains. Although the majority of individuals benefits from more education, the average causal effect for individuals with the lowest unobserved desire to study is zero for all outcomes. Mental health effects, however, are absent for the entire population.

1. Introduction

"The whole world is going to university—Is it worth it?" The Economist's headline read in March 2015.1 Although convincing causal evidence on positive labor market returns to higher education is still rare and nearly exclusively available for the United States, even less is known about the non-monetary returns to college education (see Oreopoulos and Petronijevic 2013; Barrow and Malamud 2015).
Although non-monetary factors are acknowledged to be important outcomes of education (Oreopoulos and Salvanes 2011), evidence on the effect of college education is so far limited to health behaviors (see in what follows).

We estimate the long-lasting marginal returns to college education in Germany decades after leaving college. As a benchmark, we start by looking at wage returns to higher education, but the paper's focus is on the non-monetary returns that might also be seen as mediators of the more often studied effect of education on wages. These non-monetary returns are cognitive abilities and health. Cognitive abilities and health belong to the most important non-monetary determinants of individual well-being. Moreover, the stock of both factors also influences the economy as a whole (see, among many others, Heckman et al. 1999 and Cawley et al. 2001 for cognitive abilities and Acemoglu and Johnson 2007, Cervellati and Sunde 2005, and Costa 2015 for health). Yet, non-monetary returns to college education are not fully understood (Oreopoulos and Salvanes 2011). Psychological research broadly distinguishes between effects of education on the long-term cognitive ability differential that are either due to a change in the cognitive reserve (i.e., the cognitive capacity) or due to an altered age-related decline (see, e.g., Stern 2012). Still, even the compound manifestation of the overall effect has rarely been studied for college education over a short-term horizon2 and—as far as we are aware—it has never been assessed for the long run. Few studies analyze the returns to college education on health behaviors (Currie and Moretti 2003; Grimard and Parent 2007; de Walque 2007).

We use a slightly modified version of the marginal treatment effect approach introduced and advanced by Björklund and Moffitt (1987) and Heckman and Vytlacil (2005). The main feature of this approach is to explicitly model the choice for education, thus turning back from a mere statistical view of exploiting exogenous variation in education to identify causal effects toward a description of the behavior of economic agents. Translated into our research question, the MTE is the effect of education on different outcomes for individuals at the margin of taking higher education. The MTE can be used to generate all conventional treatment parameters, such as the average treatment effect (ATE). On top of this, comparing the marginal effects along the probability of taking higher education is also informative in its own right: different marginal effects do not just reveal effect heterogeneity but also some of its underlying structure (for instance, selection into gains). This is an important property that the local average treatment effect—LATE, as identified by conventional two stage least squares methods—would miss.

The individuals in our sample made their college decision between 1958 and 1990 and graduated (in the case of college education) between 1963 and 1995. Our outcome variables (wages, standardized measures of cognitive abilities3 and mental and physical health) are assessed between 2010 and 2012, that is, 20–54 years after the college decision. Our instrument is a measure of the relative availability of college spots (operationalized by the number of enrolled students divided by the number of inhabitants) in the area of residence at the time of the secondary school graduation.
Using detailed information on the arguably exogenous expansions of college capacities in all 326 West German districts (cities or rural areas) during the so-called "educational expansion" between the 1960s and 1980s generates variation in the availability of higher education. By deriving treatment effects over the entire support of the probability of college attendance, this paper contributes to the literature mainly in two important ways. First, this is the first study that analyzes the long-term effect of college education on cognitive abilities and general health measures (instead of specific health behaviors). Long-run effects on skills are crucial in showing the sustainability of human capital investments after the age of 19. Along this line, this outcome can complement existing evidence in identifying the fundamental value of college education since—unlike studies on monetary returns—effects on cognitive skills neither directly exhibit signaling (see the debate on the discrepancy between private and social returns as in Clark and Martorell 2014) nor adverse general equilibrium effects (as skills are not determined by both forces of demand and supply). Second, by going beyond the point estimate of the LATE, we provide a more comprehensive picture in an environment of essential heterogeneity.

The results suggest positive average returns to college education for wages, cognitive abilities, and physical health. Yet, the returns are heterogeneous—thus, we find evidence for selection into gains—and even close to zero for the around 30% of individuals with the lowest desire to study. Mental health effects are zero throughout the population. Thus, our findings can be interpreted as evidence for remarkable positive average returns for those who took college education in the past. Yet, a further expansion in college education, as sometimes called for, is likely not to pay off, as this would mostly affect individuals in the part of the distribution that is not found to be positively affected by education. We also try to substantiate our results by looking at potential mechanisms of the average effects. Although we cannot causally differentiate all channels and the data allow us to provide suggestive evidence only, our findings may be interpreted as follows. Mentally more demanding jobs, jobs with less health-deteriorating effects and better health behaviors probably add to the explanation of skill and health returns to education.

The paper is organized as follows. Section 2 briefly introduces the German educational system and describes the exogenous variation we exploit. Section 3 outlines the empirical approach. Section 4 presents the data. The main results are reported in Section 5, whereas Section 6 addresses some of its potential underlying pathways. Section 7 concludes.

2. Institutional Background and Exogenous Variation

2.1. The German Higher Educational System

After graduating from secondary school, adolescents in Germany either enroll into higher education or start an apprenticeship. The latter is part-time training-on-the-job and part-time schooling. This vocational training usually takes three years and individuals often enter the firm (or another firm in the sector) as a full-time employee afterward. To be eligible for higher education in Germany, individuals need a university entrance degree. In the years under review, only academic secondary schools (Gymnasien) with 13 years of schooling in total award this degree (Abitur).
Although the tracking from elementary schools to secondary schools takes place rather early, at the age of 10, students can switch secondary school tracks in every grade. It is also possible to enroll into academic schools after graduating from basic or intermediate schools in order to receive a university entrance degree. In Germany, mainly two institutions offer higher education: universities/colleges and universities of applied science (Fachhochschulen). The regular time to receive the formerly common Diplom degree (master's equivalent) was 4.5 years at both institutions. Colleges are usually large institutions that offer degrees in various subjects. The other type of higher educational institution, universities of applied science, are usually smaller than colleges and often specialized in one field of study (e.g., business schools). Moreover, universities of applied science have a less theoretical curriculum and a teaching structure that is similar to that of schools. Nearly all institutions of higher education in Germany charge no tuition fees. However, students have to cover their own costs of living. On the other hand, their peers in apprenticeship training earn a small salary. Possible budget constraints (e.g., transaction costs arising through the need to move to another city in order to go to college) are likely determinants of the decision to enroll into higher education.
2.2. Exogenous Variation in College Education over Time
Although the higher educational system as described in Section 2.1 did not change in the years under review, the accessibility of tertiary education (in terms of mere quantity but also of its distribution within Germany) changed significantly, providing us with a source of exogenous variation. This so-called "educational expansion" falls well within the period of study (1958–1990). Within this period, the shrinking transaction costs of studying may have changed incentives, and the mere presence of new or growing colleges could also have nudged individuals toward higher education who otherwise would not have studied. In this paper, we consider two processes in order to quantify the educational expansion. The first is the opening of new colleges, the second the extension in capacity of existing colleges (we refer to both as college availability). College availability as an instrument for higher education was introduced to the literature by Card (1995) and has frequently been employed since then (e.g., Currie and Moretti 2003), also to estimate the MTE (e.g., Carneiro et al. 2011; Nybom 2017). We exploit the rapid increase in the number of new colleges and in the number of available spots to study as exogenous variation in the college decision. Between 1958 (the earliest secondary school graduation year in our sample) and 1990, the number of colleges in Germany doubled from 33 to 66. In particular, the opening of new colleges introduced discrete discontinuities in choice sets. As an example, students had to travel 50 km, on average, to the closest college before a college was opened in their district (measured from district centroid to centroid), see Figure 1. Figure A.1 in the Appendix gives an impression of the spatial variation in college availability over time.
Figure 1. Average distance to the closest college over time for districts with a college opening. Own illustration. Information on colleges is taken from the German Statistical Yearbooks 1959–1991 (German Federal Statistical Office, various issues, 1959–1991).
The distances (in km) between districts are calculated using district centroids. These distances are weighted by the number of individuals observed in the particular district-year cells in our estimation sample of the NEPS-Starting Cohort 6 data. The resulting average distances are depicted by the black circles. Note that prior to time period 0, the average distance changes over time either due to sample composition or a college opening in a neighboring district. Only districts with a college opening are taken into account.
There was an increase in the size of existing colleges and, therefore, in the number of available spots to study as well. The average number of students per college was 5,013 in 1958 and 15,438 in 1990. Of the 33 colleges in 1958, 30 still existed in 1990 and had an average size of 23,099 students. The total number of students increased from 155,000 in 1958 to 1 million in 1990. Figure 2 shows the trends in college openings and enrolled students (normalized by the number of inhabitants) for the five most-populated German states. Although the actual numbers used in the regressions vary at the much smaller district level, the state-level figures simplify the visualization of the pattern.
Figure 2. Number of colleges and students over time in selected states. Own illustration. College opening and size information are taken from the German Statistical Yearbooks 1959–1991 (German Federal Statistical Office, various issues, 1959–1991). Yearly information on the district-specific population size is based on personal correspondence with the statistical offices of the federal states. For the sake of lucidity, the trends are only plotted for the five most-populated states.
Factors that have driven the increase in the number of colleges and their size can briefly be summarized into four groups: (i) The large majority of the population had a low level of education. This resulted not only from WWII but also from the "anti-intellectualism" (Picht 1964, p. 66) in the Third Reich, and from the notion, prevailing in Imperial Germany before, that education should befit the social status of certain individuals only.
(ii) An increase in the number of academic secondary schools at the same time (as analyzed in Jürges et al. 2011; Kamhöfer and Schmitz 2016, for instance) qualified a larger share of school graduates to enroll into higher education (Bartz 2007). (iii) A change in production technologies led to an increase in firms' demand for high-skilled workers—especially given the low level of educational participation (Weisser 2005). (iv) Political decision makers were afraid that "without an increase in the number of skilled graduates the West German economy would not be able to compete with communist rivals" (Jürges et al. 2011, p. 846, in reference to Picht 1964). Although these reasons (except perhaps for firms' demand for more educated workers) affected the 10 West German federal states—which are in charge of educational policy—in the same way, the measures taken and the timing of actions differed widely between states. Because of local politics (e.g., the balancing of regional interests and the avoidance of clusters of colleges) there was also a large amount of variation in college openings within the federal states. See Online Appendix B to the paper for a much more detailed description of the political process involved.
A major concern for instrument validity is that, even though the political process did not follow a unified structure and included some randomness in the final choice of locations and the timing of openings, regions where colleges were opened differed from those that already had colleges before (or that never established any). Table 1 reports some numbers at the regional level as of the year 1962 (the earliest possible year available to us with representative data). Regions that already had colleges before did not differ in terms of sociodemographics (except for population densities, as mostly large cities had colleges before) but were somewhat stronger in terms of socioeconomic indices. The differences were not large, however. Given that we include district fixed effects and a large set of socioeconomic controls (including the socioeconomic environment before the college decision, see Section 4), this should not be a problematic issue.
Table 1. Comparison of regions with and without college openings, before the college opens, using administrative data.

College opening...                               Before 1958        1958–1990          Later than 1990 or never
                                                 Mean (s.d.)        Mean (s.d.)        Mean (s.d.)
Number of regions                                27                 30                 190
Sociodemographic characteristics
  Female (in %)                                  53.0 (2.0)         53.0 (1.4)         52.9 (4.3)
  Average age (in years)                         37.2 (1.1)         37.0 (1.1)         36.6 (1.9)
  Singles (in %)                                 38.8 (2.5)         37.7 (2.3)         38.9 (4.6)
  Population density per km2 in 1962             1381.9 (1076.7)    1170.1 (1047.3)    327.1 (479.7)
  Change in population density 1962–1990         1.6 (186.3)        −71.0 (202.8)      31.5 (98.5)
  Migrational background (in %)                  2.7 (3.0)          1.6 (1.5)          2.1 (2.3)
Socioeconomic characteristics
  Share of employees to all individuals (in %)   47.0 (3.6)         45.3 (4.2)         46.2 (5.2)
  Employees with an income > 600 DM (in %)       27.3 (3.8)         24.8 (5.3)         25.9 (6.4)
  Employees by industry (in %)
    – Primary                                    2.1 (5.2)          5.2 (5.2)          2.8 (5.5)
    – Secondary                                  52.9 (8.4)         54.7 (6.2)         54.3 (8.9)
    – Tertiary                                   45.0 (9.3)         40.1 (8.3)         42.9 (9.6)
  Employees in blue collar occup. (in %)         53.6 (9.4)         59.0 (7.9)         56.5 (9.3)
  Employees in academic occup. (in %)            22.0 (4.4)         17.5 (4.3)         20.3 (5.9)

Notes: Own calculations based on the Micro Census 1962, see Lengerer et al. (2008). Regions are defined through administrative Regierungsbezirk entries and the degree of urbanization (Gemeindegrößenklasse) and may cover more than one district. College information is aggregated at the regional level, and a region is considered to have a college if at least one of its districts has a college. Calculations for population density and the change in population density are based on district-level data acquired through personal correspondence with the statistical offices of the federal states. Data are available on request. The variables "employees in blue collar occup." and "employees in academic occup." state the shares of employees in the region in an occupation that is usually conducted by a blue collar worker or a college graduate, respectively. Standard deviations (s.d.) are given in parentheses.

Yet, changes in district characteristics that are potentially related to the outcome variables might be a more important problem. There could, for instance, be changes in the population structure that both induce a higher demand for college education and go along with improved cognitive abilities and health. This could be the case if the regions with college openings were more "dynamic", with a younger and potentially increasing population. Table 1 shows a decline in population density by 6% between 1962 and 1990 in the areas that opened colleges, whereas there was no average change in the areas with preexisting colleges and a 10% increase in the areas that never opened any. This reflects different regional trends in population ageing. As one example, the Ruhr Area in the west, where three colleges were opened, experienced a population decline and comparably stronger population ageing over time. Again, these differences are not dramatically large, but we might be worried about different trends in health and cognitive abilities that are correlated with the college expansion. If this were the case—more expansion in areas that have a more rapidly ageing population with deteriorating health and cognitive abilities—we might underestimate the effect of college education on these outcomes. We include a district-specific time trend to account for this in the analysis. The expansion in secondary schooling noted previously was unrelated to the college expansion. Although the college expansion naturally took place in a small number of districts, the expansion in secondary schooling occurred across all regions.
In addition, Kamhöfer and Schmitz (2016) do not find any local average treatment effects of the school expansion on cognitive abilities and wages. Thus, it seems unlikely that selective increases in cognitive abilities due to the secondary school expansion invalidate the instrument. Nevertheless, district-specific time trends should again capture large parts of this if it were a problem. So, essentially, what we do is the following: we look within each district and attribute deviations of the college (graduation/enrollment) rate from the general trend (captured by cohort fixed effects) and the district-specific trend (which might be due to continually increased access to higher secondary education) to either changes in the number of college spots or the opening of a new college nearby. We use discontinuities in college access over time that cannot be exploited using data on individuals who make the college decision at the same point in time (for instance, cohort studies), as some of the previous literature using college availability as an instrument did. Details on how we exploit the variation in college availability in the empirical specification are discussed in Section 4.4, after presenting the data.
3. Empirical Strategy
Our estimation framework builds largely on Heckman and Vytlacil (2005) and Carneiro et al. (2011). Derivations and in-depth discussions of most issues can be found there. We start with the potential outcome model, where Y1 and Y0 are the potential outcomes with and without treatment. The observed outcome Y either equals Y1 in case an individual received the treatment—which is college education here—or Y0 in the absence of treatment (the individual identifier i is implied). Obviously, treatment participation is voluntary, rendering a treatment dummy D in a simple linear regression endogenous. In the marginal treatment effect framework, this is explicitly modeled by using a choice equation, that is, we specify the following latent index model:

\[ Y^1 = X'\beta_1 + U_1, \qquad (1) \]
\[ Y^0 = X'\beta_0 + U_0, \qquad (2) \]
\[ D^* = Z'\delta - V, \quad \text{where } D = \mathbf{1}[D^* \ge 0] = \mathbf{1}[Z'\delta \ge V]. \qquad (3) \]

The vector X contains observable, and U1, U0 unobservable, factors that affect the potential outcomes. D* is the latent desire to take up college education that depends on observed variables Z and unobservables V. Z includes all variables in X plus the instruments. Whenever D* exceeds a threshold (set to zero without loss of generality), the individual opts for college education, otherwise she does not. U1, U0, and V are potentially correlated, inducing the endogeneity problem (as well as heterogeneous returns), as we observe Y (= DY1 + (1 − D)Y0), D, X, and Z, but not U1, U0, and V. Following this model, individuals are indifferent between higher education and directly entering the labor market (e.g., through an apprenticeship) whenever the index of observables Z'δ is equal to the unobservables V. Thus, if we knew the switching point (the point of indifference) and its corresponding value of the observables, we could place sharp restrictions on the value of the unobservables. This property is exploited in the estimation. Since for every value of the index Z'δ one needs individuals with and without higher education, it is important to aggregate the index meaningfully by a monotonic transformation that, for example, returns the quantiles of Z'δ and V.
One such rank-preserving transformation is the cumulative distribution function, which returns the propensity score P(Z) (quantiles of Z) and U_D (quantiles of V). If we vary the excluded instruments in Z'δ from the lowest to the highest value while holding the covariates X constant, more and more individuals will select into higher education. Those who react to this shift also reveal their rank in the distribution of the unobservables. Thus, the unobservables are fixed given the propensity score, and it is feasible to evaluate any outcome for those who select into treatment at any quantile U_D that is identified by the instrument-induced change in the higher-education choice. In general, estimating marginal effects by U_D does not require stronger assumptions than those required by the LATE, since Vytlacil (2002) showed their equivalence. Yet, strong instruments are beneficial for robustly identifying effects over the support of P(Z). This, however, is testable. The marginal treatment effect (MTE), then, is the marginal (gross) benefit of taking the treatment for those who are just indifferent between taking and not taking it and can be expressed as

\[ \mathit{MTE}(x, u_D) = \frac{\partial E(Y \mid x, p)}{\partial p}. \]

This is the effect of an incremental increase in the propensity score on the observed outcome. The MTE varies along the line of U_D in the case of heterogeneous treatment effects, which arise if individuals self-select into the treatment based on their expected idiosyncratic gains. This is a situation Heckman et al. (2006) call "essential heterogeneity". This is an important structural property that the MTE can recover: if individuals already react at low values of the instrument, where the observed part of the latent desire to select into higher education (P(Z)) is still very low, a prerequisite for nevertheless going to college is that V is correspondingly low. These individuals choose college against all (observed) odds because they are more intrinsically talented or motivated, as indicated by a low V. If this translates into higher future gains (U1 − U0), the MTE exhibits a significant negative slope: as P(Z) rises, marginal individuals need less and less compensation in terms of unobserved expected returns to still choose college—this is called selection into gains. As Basu (2011, 2014) notes, essential heterogeneity is not restricted to active sorting into gains but is always an issue if selection is based on factors that are not completely independent of the gains. Thus, in health economic applications, where gains are arguably harder to predict for the individual than, say, monetary returns, essential heterogeneity is also an important phenomenon. In this case the common treatment parameters ATE, ATT, and LATE do not coincide. The MTE can be interpreted as a more fundamental parameter than the usual ones, as it unfolds all local switching effects by the intrinsic "willingness" to study, and not only some weighted average of those.
The main component for estimating the MTE is the conditional expectation E(Y | X, p). Heckman and Vytlacil (2007) show that if we plug the counterfactuals (1) and (2) into the potential outcome equation, rearrange, apply the expectation E(· | X, p) to all expressions, and impose an exclusion restriction of p on Y (discussed in what follows), we get an expression that can be estimated:

\[
\begin{aligned}
E(Y \mid X, p) &= X'\beta_0 + X'(\beta_1 - \beta_0)\, p + E(U_1 - U_0 \mid D = 1, X)\, p \\
               &= X'\beta_0 + X'(\beta_1 - \beta_0)\, p + K(p), \qquad (4)
\end{aligned}
\]

where K(p) is a function of the propensity score that is not further specified if one wants to avoid distributional assumptions on the error terms. Thus, the estimation of the MTE involves estimating the propensity score in order to estimate equation (4) and, finally, taking its derivative with respect to p. Note that this derivative—and hence the effect of college education—depends on heterogeneity due to observed components X and unobserved components K(p), since this structure was imposed by equations (1) and (2):

\[ \frac{\partial E(Y \mid X, p)}{\partial p} = X'(\beta_1 - \beta_0) + \frac{\partial K(p)}{\partial p}. \qquad (5) \]

To achieve non-parametric identification of the terms in equation (5), the conditional independence assumption has to be imposed on the instrument,

\[ (U_1, U_0, V) \perp\!\!\!\perp Z \mid X, \]

meaning that the error terms are independent of Z given X. That is, after conditioning on X, a shift in the instruments Z (or in the single index P(Z)) has no effect on the potential outcome distributions. Non-parametrically estimating separate MTEs for every data cell determined by X is hardly ever feasible due to a lack of observations and powerful instruments within each such cell. Yet, in the case of parametric or semiparametric specifications, a conditional independence assumption is not sufficient to decompose the effect into observed and unobserved sources of heterogeneity. To separately identify the right-hand side of equation (5), unconditional independence is required: (U1, U0, V) ⊥⊥ (Z, X) (Carneiro et al. 2011; for more details consult the Online Appendices). In a pragmatic approach, one can now either follow Brinch et al. (2017) or Cornelissen et al. (forthcoming), who do not aim at causally separating the sources of the effect heterogeneity. In this case a conventional exclusion restriction on the instruments suffices for estimating the overall level and the curvature of the MTE. Our solution for bringing the empirical framework to the data without overly strong assumptions is to estimate marginal effects that vary only over the unobservables while fixing the X-effects at their mean values. This means deviating from (4) by restricting β1 = β0 = β, except for the intercepts α1 and α0 in (1) and (2), such that E(Y | X, p) becomes

\[ E(Y \mid X, p) = X'\beta + (\alpha_1 - \alpha_0)\, p + K(p). \qquad (6) \]

Thus, we allow for different levels of the potential outcomes, whereas we keep conditioning on X. This might look like a strong restriction at first sight but is no different from the predominant approach in empirical economics of trying to identify average treatment effects, where the treatment indicator is typically not interacted with other observables. Certainly, this does not rule out that the MTE varies by observable characteristics. Even if the true population effects vary over X, the p-dependence of the derivative of equation (4) with respect to the propensity score does not involve X. Hence, only the level of the MTE changes for certain subpopulations determined by X; the curvature remains unaffected.
Thus, estimation of equation (6) delivers an MTE whose level is averaged over all subpopulations without changing the curvature. In this way all crucial elements of the MTE are preserved, since we are interested in the average effect and its heterogeneity with respect to the unobservables for the whole population. How this heterogeneity varies for certain subpopulations is of less importance, and the literature has also focused on MTEs where the X-part is averaged out. On the other hand, we gain with this approach by considerably relaxing our identifying assumption from an unconditional to a conditional independence of the instrument. One advantage of not estimating heterogeneity in the observables can arise if X contains many variables that each take many different values; in this case, problems of weak instruments can inflate the results. In estimating (6), we follow Carneiro et al. (2010, 2011) again and use semiparametric techniques as suggested by Robinson (1988). Standard errors are clustered at the district level and were generated by bootstrapping the entire procedure using 200 replications.
4. Data
4.1. Sample Selection and College Education
Our main data source is individual-level data from the German National Educational Panel Study (NEPS), see Blossfeld et al. (2011). The NEPS data map the educational trajectories of more than 60,000 individuals in total. The data set follows a multi-cohort sequence design and covers six age groups, called "starting cohorts": newborns and their parents, pre-school children, children in school grades 5 and 9, college freshmen, and adults. Within each starting cohort the data are organized in a longitudinal manner, that is, individuals are interviewed repeatedly. For each starting cohort, the interviews cover extensive information on competence development, learning environments, educational decisions, migrational background, and socioeconomic outcomes. We aim at analyzing longer-term effects of college education and, therefore, restrict the analysis to the "adults starting cohort". For this age group six waves are available, with interviews conducted between 2007/2008 (wave 1) and 2013 (wave 6), see LIfBi (2015). Moreover, the NEPS includes detailed retrospective information on the educational and occupational history as well as on the living conditions at the age of 15—about three years before individuals decide on higher education. From the originally 17,000 respondents in the adults starting cohort, born between 1944 and 1989, we exclude observations for four reasons. First, we focus on individuals from West Germany due to the different educational system in the former German Democratic Republic (GDR), thereby dropping 3,500 individuals living in the GDR at the age of the college decision. Second, to allow for long-term effects we make a cut-off at college attendance before 1990 and drop 2,800 individuals who graduated from secondary school in 1990 or later. Third, we drop 1,000 individuals with missing geographic information. An attractive (and for our analysis necessary) feature of the NEPS data is that they include information on the district (German Kreis) of residence during secondary schooling, which is used in assigning the instrument in the selection equation. The fourth reason for losing observations is that the dependent variables are not available for every respondent, see in what follows. Our final sample includes between 2,904 and 4,813 individuals, depending on the outcome variable.
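As an illustration only, the sample restrictions described above can be sketched as follows. The NEPS data themselves are available only under a data use agreement, and the column names below are hypothetical placeholders rather than actual NEPS variable names:

```python
import pandas as pd

# Minimal sketch of the sample restrictions in Section 4.1.
# Columns west_germany, grad_year, district_id, highest_degree are hypothetical.
def build_estimation_sample(neps: pd.DataFrame) -> pd.DataFrame:
    df = neps.copy()
    df = df[df["west_germany"] == 1]       # drop GDR residents at the time of the college decision
    df = df[df["grad_year"] <= 1989]       # secondary school graduation before 1990
    df = df[df["district_id"].notna()]     # district of residence must be known
    # Treatment indicator: any higher educational degree; dropouts count as untreated
    df["college_degree"] = (df["highest_degree"] == "college").astype(int)
    return df
```

The college-degree indicator in the last line anticipates the definition discussed next.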
The explanatory variable "college degree" takes on the value 1 if an individual has any higher educational degree, and 0 otherwise. Dropouts are treated like all other individuals without college education. More than one fourth of the sample has a college degree, whereas three fourths do not.
4.2. Dependent Variables
Wages. The data set covers a wide range of individual employment information such as monthly income and weekly hours worked. We calculate the hourly gross wage for 2013 (wave 6) by dividing the monthly gross labor market income by the actual weekly working hours (including extra hours) times the average number of weeks per month, 4.3. A similar strategy is, for example, applied by Pischke and von Wachter (2008) to calculate hourly wages using German data. For this outcome variable, we restrict our sample to individuals of working age up to 65 years and drop observations with hourly wages below €5 and above the 99th percentile (€77.52), as these might result from misreporting. Table 2 reports descriptive statistics and reveals considerably higher hourly wages for individuals with a college degree. The full distribution of wages (and of the other outcomes) for both groups is shown in Figure A.2 in the Appendix. In the regression analysis we use log gross hourly wages.
Table 2. Descriptive statistics of the dependent variables (columns (2)–(3): health measures; columns (4)–(6): cognitive ability components).

                              (1) Gross       (2)      (3)      (4) Read.   (5) Read.   (6) Math
                              hourly wage     PCS      MCS      speed       comp.       liter.
Observations                  3,378           4,813    4,813    3,995       4,576       2,904
with college degree (in %)    31.0            28.1     28.1     27.8        28.1        28.0
Raw values
  Mean with degree            27.95           53.31    51.15    39.69       29.76       13.37
  Mean without degree         19.35           50.39    50.53    35.99       22.75       9.36
  Maximum possible value      –(a)            100      100      51          39          22
Transformed values
  Mean with degree            3.25            0.23     0.04     0.32        0.63        0.61
  Mean without degree         2.88            −0.09    −0.02    −0.12       −0.25       −0.24

Notes: Own calculations based on NEPS-Starting Cohort 6 data. Gross hourly wage given in euros. The gross hourly wage is transformed to its log value; the other variables are transformed into units of standard deviation with mean 0 and standard deviation 1. (a) The gross hourly wage is truncated below at €5 and above at the highest quantile (€77.52).

Health. Two variables from the health domain are used as outcome measures: the Physical Health Component Summary Score (PCS) and the Mental Health Component Summary Score (MCS), both from 2011/2012 (wave 4). These summary scores are based on the SF-12 questionnaire, an internationally standardized set of 12 items regarding eight dimensions of the individual health status. The eight dimensions comprise physical functioning, physical role functioning, bodily pain, general health perceptions, vitality, social role functioning, emotional role functioning, and mental health. A scale ranging from 0 to 100 is calculated for each of these eight dimensions. The eight dimensions or subscales are then aggregated into the two main dimensions, mental and physical health, using explorative factor analysis (Andersen et al. 2007). For our regression analysis, we standardize the aggregated scales (MCS and PCS) to have mean 0 and standard deviation 1, where higher values indicate better health. Columns (2) and (3) of Table 2 report sample means of the health measures across individuals by college graduation. Those with a college degree have, on average, a better physical health score. With respect to mental health, both groups differ only marginally.
Cognitive Abilities. Cognitive abilities summarize the "ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought" (American Psychological Association 1995), where the sum of these abilities is referred to as intelligence. Psychologists distinguish several concepts of intelligence with different cognitive abilities. However, they all include measures of verbal comprehension, memory and recall, as well as processing speed. Although comprehensive cognitive intelligence tests take hours, a growing number of socioeconomic surveys include much shorter proxies that measure specific skill components. These short ability tests are usually designed by psychologists, and the results are highly correlated with the results of more comprehensive intelligence tests (cf. Lang et al. 2007 for a comparison of the cognitive skill tests in the German Socio-Economic Panel with larger psychological test batteries). The NEPS includes three kinds of competence tests that cover various domains of cognitive functioning: reading speed, reading competence, and mathematical competence. All competence tests were conducted once, in 2010/2011 (wave 3) or 2012/2013 (wave 5), respectively, as paper-and-pencil tests under the supervision of a trained interviewer, and the test language was German.
The first test measures reading speed. The participants receive a booklet consisting of 51 short true-or-false questions, and the test duration is 2 minutes. Each question has between 5 and 18 words. The participants have to answer as many questions as possible in the given time window. The test score is the number of correct answers. Since the test aims at answering speed, the questions only deal with general knowledge and use easy language. One question/statement, for example, reads "There is a bath tub in every garage." The mean number of correct answers in our estimation sample is 39.69 (out of 51) for college graduates and 35.99 for others, see Table 2. For more information, see Zimmermann et al. (2014). The reading competence test measures the understanding of texts. It lasts 28 minutes and covers 32 items. The test consists of three different tasks. First, participants have to answer multiple-choice questions about the content of a text, where only one out of four possible answers is correct. In a decision-making task, the participants are asked whether statements are right or wrong according to the text. In a third task, participants need to assign possible titles out of a list to sections of the text. The test includes several types of texts, for example, comments, instructions, and advertising texts (LIfBi 2011). Again, the test score reflects the number of correct answers. Participants with a college degree score 29.76 on average and those without 22.75 (out of 39). The mathematical literacy test evaluates the "recognizing and [...] applying [of] mathematics in realistic, mainly extra-mathematical situations" (LIfBi 2011, p. 8). The test has 22 items and takes 28 minutes. It follows the principle of the OECD-PISA tests and consists of the areas quantity, space and shape, change and relations, as well as data and chance, and measures cognitive competencies in the areas of application of skills, modeling, arguing, communicating, representing, as well as problem solving; see LIfBi (2011). Individuals without a college degree score 9.36 on average (out of 22), and persons who graduated from college receive 4 points more. Due to the rather long test duration relative to the total interview time, not every respondent took all three tests. Similar to the OECD-PISA tests for high school students, individuals were randomly assigned a booklet with either all three or two out of the three tests. In total, 3,995 individuals took the reading speed test, 4,576 the reading competence test, and 2,904 the mathematical literacy test. Since the tests measure different competencies that refer to distinct cognitive abilities, we do not combine the different test scores into an overall score but report the results separately (see Anderson 2007).
4.3. Control Variables
Individuals in our sample made their college decision between 1958 and 1990. The NEPS allows us to consider important socioeconomic characteristics that probably affect both the college education decision and the outcomes today (the variables denoted by X in Section 3). These are general demographic information such as gender, migrational background, and family structure, as well as parental characteristics like the parents' educational background. Moreover, we include two blocks of controls that were determined before the educational decision was made. Pre-college living conditions include family structure, parental job situation, and household income at the age of 15, whereas pre-college education includes educational achievements (number of repeated grades and secondary school graduation mark).
Table A.1 in the Appendix provides more detailed descriptions of all variables and reports the sample means by treatment status. Apart from having higher wages, higher abilities, and a better physical health status (as seen in Table 2), individuals with a college degree are more likely to be males from an urban district without a migrational background. Moreover, they are more likely to have healthy parents (in terms of mortality). Other variables seem to differ less between both groups. We also account for cohort effects of mother and father, district fixed effects, as well as district-specific time trends (see Mazumder 2008; Stephens and Yang 2014 for the importance of the latter).
4.4. Instrument
The processes of college expansion discussed in Section 2.2 probably also shifted individuals with a lower desire to study into college education. Such powerful exogenous variation is beneficial for our approach, as we try to identify the MTE along the distribution of the desire to study. We assign each individual the college availability as instrument (that is, a variable in Z but not in X). In doing so, we use the information on the district of secondary school graduation and the year of the college decision, which is the year of secondary school graduation. The district—there are 326 districts in West Germany—is either a city or a certain rural area. The question is how to exploit the regional variation in openings and spots most efficiently, as it is almost infeasible to control for all distances to all colleges simultaneously. Our approach is to create an index that best reflects the educational environment in Germany and combines the distance with the number of college spots,

\[ Z_{it} = \sum_{j}^{326} K(\mathit{dist}_{ij}) \times \left( \frac{\#\mathit{students}_{jt}}{\#\mathit{inhabitants}_{jt}} \right). \qquad (7) \]

The college availability instrument Z_it basically comprises the total number of college spots (measured by the number of students) per inhabitant in district j (out of the 326 districts in total) that individual i faces in year t, weighted by the distance between i's home district and district j. Weighting the number of students by the population of the district takes into account that districts with the same number of inhabitants might have colleges of a different size. This local availability is then weighted by the Gaussian kernel distance K(dist_ij) between the centroid of the home district and the centroid of district j. The kernel puts a lot of weight on close colleges and very little weight on distant ones. Since individuals can choose between many districts with colleges, we calculate the sum of all district-specific college availabilities within the kernel bandwidth. Using a bandwidth of 250 km, this basically amounts to K(dist_ij) = φ(dist_ij/250), where φ is the standard normal pdf. Although 250 km sounds like a large bandwidth, it implies that colleges in the same district receive a weight of 0.4, whereas the weight for colleges that are 100 km away is 0.37, and it is reduced to 0.24 for 250 km. Colleges that are 500 km away only receive a very low weight of 0.05. A smaller bandwidth of, say, 100 km would mean that colleges that are 250 km away already receive a weight of 0.02, which amounts to assuming that individuals basically do not take them into account at all. Most likely this does not reflect actual behavior.
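To make the construction of the availability index in equation (7) concrete, the following is a minimal sketch under the assumptions above (Gaussian kernel, 250 km bandwidth). The data frames and column names are hypothetical; the actual district data come from the Statistical Yearbooks and correspondence with the statistical offices:

```python
import numpy as np
import pandas as pd
from scipy.stats import norm

# Sketch of the college availability index in equation (7).
# `districts` holds one row per district-year with hypothetical columns
# (district_id, year, students, inhabitants); `centroid_dist(a, b)` returns the
# Euclidean distance in km between the centroids of districts a and b.
def college_availability(home_district: int, year: int,
                         districts: pd.DataFrame,
                         centroid_dist, bandwidth: float = 250.0) -> float:
    cells = districts[districts["year"] == year]
    dists = np.array([centroid_dist(home_district, j) for j in cells["district_id"]])
    weights = norm.pdf(dists / bandwidth)                       # Gaussian kernel weights
    spots_per_inhabitant = cells["students"].to_numpy() / cells["inhabitants"].to_numpy()
    return float(np.sum(weights * spots_per_inhabitant))        # sum over all 326 districts
```

A variant using only college openings would replace the enrollment share by an indicator of whether district j hosts a college in year t (see equation (8) below).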
As a robustness check, however, we carry out all estimations with bandwidths between 100 and 250 km, and the results are remarkably stable, see Online Appendix Figure C.1. Table 3 presents the descriptive statistics. We also provide background information on certain descriptive measures of distance and student density.
Table 3. Descriptive statistics of the instrument and background information.

                                                     (1) Mean   (2) SD    (3) Min   (4) Max
Instrument: college availability                     0.459      0.262     0.046     1.131
Background information on college availability (implicitly included in the instrument)
  Distance to nearest college                        27.580     26.184    0         172.269
  At least one college in district                   0.130      0.337     0         1
  Colleges within 100 km                             5.860      3.401     0         16
  College spots per inhabitant within 100 km         0.034      0.019     0         0.166

Notes: Own calculations based on NEPS-Starting Cohort 6 data and the German Statistical Yearbooks 1959–1991 (German Federal Statistical Office, various issues, 1959–1991). Distances are calculated as the Euclidean distance between the two respective district centroids.

The instrument jointly uses college openings and increases in size. Size is measured by enrollment, as there is no information available on actual college spots. This might be considered worrisome, as enrollment might reflect demand factors that are potentially endogenous. Although we believe that this is not a major problem, as most study programs in the colleges were used to capacity, we also, as a robustness check, neglect the information on enrollment and merely exploit the information on college openings by using

\[ Z_{it} = \sum_{j}^{326} K(\mathit{dist}_{ij}) \times \mathbf{1}[\text{college available}_{jt}], \qquad (8) \]

where 1[·] is the indicator function. The results when using this instrument are comparable, with minor differences, to those from the baseline specification, as shown in Figure A.3 in the Appendix. Certainly, the overall findings and conclusions are not affected by this choice.
We prefer the combined instrument as it uses information from both aspects of the educational expansion.
5. Results
5.1. OLS
Although we are primarily interested in analyzing the returns to college education for the marginal individuals, we start with ordinary least squares (OLS) estimations as a benchmark. Column (1) in Table 4, panel A, reports results for hourly wages, columns (2) and (3) for the two health measures, and columns (4)–(6) for the three measures of cognitive abilities. Each cell reports the coefficient of college education from a separate regression. After conditioning on observables, individuals with a college degree earn approximately 28% higher wages, on average. Although PCS is higher by around 0.3 of a standard deviation—recall that all outcomes but wages are standardized—there is no significant relation with MCS. Individuals with a college degree read, on average, 0.4 SD faster than those without college education. Moreover, their text comprehension and mathematical literacy are better by approximately 0.7 SD. All in all, the results are largely in line with the differences in standardized means shown in Table 2, slightly attenuated, however, due to the inclusion of control variables.
Table 4. Regression results for OLS and first-stage estimations (columns (2)–(3): health measures; columns (4)–(6): cognitive ability components).

                                    (1) Gross      (2)        (3)        (4) Read.   (5) Read.   (6) Math
                                    hourly wage    PCS        MCS        speed       comp.       liter.
Panel A: OLS results
  College degree                    0.277***       0.277***   0.003      0.398***    0.729***    0.653***
                                    (0.019)        (0.033)    (0.036)    (0.037)     (0.032)     (0.044)
Panel B: 2SLS first-stage results
  College availability              2.368***       2.576***   2.576***   2.521***    2.327***    2.454***
                                    (0.132)        (0.122)    (0.122)    (0.132)     (0.119)     (0.159)
Observations                        3,378          4,813      4,813      3,995       4,576       2,904

Notes: Own calculations based on NEPS-Starting Cohort 6 data. Regressions also include a full set of control variables as well as year-of-birth and district fixed effects, and district-specific linear trends. District-clustered standard errors in parentheses. ***p < 0.01.
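A minimal sketch of one such Panel A regression (log wages on the college-degree dummy, controls, year-of-birth and district fixed effects, district-specific linear trends, and district-clustered standard errors) is given below. The column names and the single control shown are hypothetical placeholders; the actual specification uses the full covariate set of Table A.1:

```python
import statsmodels.formula.api as smf

# Sketch of a benchmark OLS regression in the spirit of Table 4, Panel A.
# log_wage, college_degree, female, birth_year, grad_year, district are placeholders.
def ols_benchmark(df):
    formula = ("log_wage ~ college_degree + female + "
               "C(birth_year) + C(district) + C(district):grad_year")
    return smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["district"]}
    )
```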
Panel B of Table 4 reports the first-stage results of the 2SLS estimations. The coefficients of the instrument point in the expected direction and are highly significant. As is to be expected, they barely change across the outcome variables (as the first-stage specifications only differ in the number of observations across the columns). In order to get a feeling for the effect size of college availability in the first stage, we consider, as an example, the college opening in the city of Essen in 1972. In 1978, about 11,000 students studied there. To illustrate the effect of the opening, we assume a constant population size of 700,000 inhabitants. The kernel weight of new spots in the same district is 0.4 (= K(0)). According to equation (7), the instrument value increases by 0.006 (rounded). Given the college availability coefficient of 2.4, an individual who made the college decision in Essen in 1978 had a 1.44 percentage point higher probability of going to college due to the opening of the college in Essen (compared to an individual who made the college decision in 1971). This seems to be a plausible effect. The effect of the college opening in Essen on individuals who live in districts other than Essen is smaller, depending on the distance to Essen.
5.2. Marginal Treatment Effects
Figure 3(a) shows the distribution of the propensity scores used in estimating the MTE, by treatment and control group. They are obtained by logit regressions of the college degree on all Z and X variables. Full regression results of the first and the second stage of the 2SLS estimations are reported in the Online Appendices. For both groups, the propensity score varies from 0 to about 1. Moreover, there is common support of the propensity score almost on the unit interval. Variation in the propensity score where the effects of the X variables are integrated out is used to identify local effects.
Figure 3. Distribution of propensity scores. Own illustration based on NEPS-Starting Cohort 6 data. The left panel shows the propensity score (PS) density by treatment status. The right panel illustrates the joint PS density (dashed line). The solid line shows the PS variation solely caused by variation in Z, since the X-effects have been integrated out. Note that in the right panel both densities are normalized such that they sum to one over the 100 points at which we evaluate the density.
This variation is presented in Figure 3(b). It shows the conditional support of P when the influence of the linear X-index of observables on the propensity score is integrated out (∫ f_P(Z, X) dX). Here, the support ranges nearly from 0 to 0.8 based only on variation in the instrument—the identifying variation. This is important in the semiparametric estimation, since it shows the regions in which we can credibly identify (conditional on our assumptions) marginal effects without having to rely on inter- or extrapolation to regions where we do not have identifying variation. We calculate the MTE using a local linear regression with a bandwidth that ranges from 0.10 to 0.16, depending on the outcome variable. We calculate the marginal effects along the quantiles U_D by evaluating the derivative of the treatment effect with respect to the propensity score (see equation (6) in Section 3). Figure 4 shows the MTE for all outcome variables. The upper left panel presents the MTE for wages. We find that individuals with low values of U_D have the highest monetary returns to college education. Low values of U_D mean that these are the individuals who are very likely to study, as already small values of P(z) exceed U_D; see the transformed choice equation in Section 3. The returns are close to 80% for the smallest values of U_D and then approach 0 at U_D ≈ 0.7. Thus, we tend to interpret these findings as clear and strong positive returns for the 70% of individuals with the highest desire to study, whereas there is no clear evidence for any returns for the remaining 30%. Hence, there is obviously selection into gains with respect to wages, where individuals with higher (realized) returns self-select into more education. This reflects the notion that individuals make choices based on their expected gains.
Figure 4. Marginal treatment effects for cognitive abilities and health. Own illustration based on NEPS-Starting Cohort 6 data. For the gross hourly wage, the log value is taken. Health and cognitive skill outcomes are standardized to mean 0 and standard deviation 1. The MTE (vertical axis) is measured in logs for wages and in units of standard deviations for the health and cognitive skill outcomes. The dashed lines give the 95% confidence intervals based on clustered bootstrapped standard errors with 200 replications. Calculations are based on a local linear regression where the influence of the control variables was isolated using a semiparametric Robinson estimator (Robinson 1988) for each outcome variable. The optimal, exact bandwidths for the local linear regressions are: wage 0.10, PCS 0.13, MCS 0.16, reading competence 0.10, reading speed 0.11, math score 0.12.
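The MTE curves in Figure 4 are obtained with the procedure outlined in Section 3. As an illustration only, the following compact sketch reproduces the general logic (logit propensity score, partialling-out of the controls, local linear regression of the outcome on the propensity score, numerical derivative) on simulated placeholder data rather than the NEPS; the linear residualization is a simple stand-in for the semiparametric Robinson step, and the bandwidth and grid are illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated placeholder data (not the NEPS) with selection into gains.
n = 5000
x = rng.normal(size=n)                       # one observed control
z = rng.normal(size=n)                       # excluded instrument
v = rng.normal(size=n)                       # unobserved resistance to treatment
d = (0.8 * z + 0.3 * x - v > 0).astype(int)  # college choice as in equation (3)
u_v = v.argsort().argsort() / n              # rank of V, i.e., the quantile U_D
gain = 0.6 - 0.5 * u_v                       # larger gains for low-resistance individuals
y = 0.4 * x + d * gain + rng.normal(scale=0.5, size=n)

# Step 1: propensity score from a logit of D on Z and X.
design = sm.add_constant(np.column_stack([z, x]))
p_hat = sm.Logit(d, design).fit(disp=0).predict(design)

# Step 2: partial out the controls from the outcome (linear stand-in for Robinson 1988).
xmat = sm.add_constant(x)
y_res = y - sm.OLS(y, xmat).fit().predict(xmat)

# Step 3: local linear regression of the residualized outcome on the propensity score.
def local_linear(p0, p, yv, bw=0.12):
    k = np.exp(-0.5 * ((p - p0) / bw) ** 2)   # Gaussian kernel
    sw = np.sqrt(k)                           # square-root weights for weighted least squares
    xd = np.column_stack([np.ones_like(p), p - p0])
    beta = np.linalg.lstsq(xd * sw[:, None], yv * sw, rcond=None)[0]
    return beta[0]                            # fitted E[Y_res | P = p0]

grid = np.linspace(0.05, 0.95, 91)
fit = np.array([local_linear(g, p_hat, y_res) for g in grid])

# Step 4: the MTE is the derivative of E[Y | p] with respect to p.
mte = np.gradient(fit, grid)
```

In the paper, confidence bands additionally come from bootstrapping this entire procedure with clustering at the district level.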
The curve of marginal treatment effects resembles the one found by Carneiro et al. (2011) for the United States, with the main difference that we do not find negative effects (but merely zero effects) for part of the distribution. The effect sizes are also comparable, although ours are somewhat smaller. For instance, Carneiro et al. (2011) find peak returns of 28% per year of college, whereas we find 80% for a college degree that, on average, takes 4.5 years to be earned (roughly 0.80/4.5 ≈ 18% per year under the linear-additivity assumption used in Section 5.3). What could explain these wage returns? Two potential channels of higher earnings could be better cognitive skills and/or better health due to increased education. The findings on skills and health that we discuss in the following could, thus, be read as investigations into mechanisms for the positive wage returns. However, at least for health, this would only be one potential interpretation, as health might also be directly affected by income. The right column of Figure 4 plots the results for cognitive skills. The distribution of marginal treatment effects is remarkably similar to the one for wages. We see that, also in terms of cognitive skills, not everybody benefits from more education. Some individuals, again those with a high desire to study, benefit strongly, while the effects approach zero for individuals with UD > 0.6. This holds for reading speed and reading competence as well as mathematical literacy. The largest returns are as high as 2–3 standard deviations, again only for the small group with the highest college readiness. Thus, we observe the same selection into gains as for wages, and the findings can be read as suggesting that cognitive-ability returns to education are a potential pathway for the positive earnings returns. The findings are somewhat different for health, as seen in the lower left part of Figure 4. First of all, the returns are much more homogeneous than those for wages and skills. Although there is still some heterogeneity in returns to physical health (though to a smaller degree than before), returns are completely homogeneous for mental health. Moreover, for mental health the returns are zero throughout. Physical health effects are positive (although not always statistically significant) for around 75% of the individuals, whereas they are close to zero for the 25% with the lowest desire to study. The main findings of this paper can be summarized as follows:
– Education leads to higher wages and cognitive abilities for the same approximately 60% of individuals. This can also be read as suggestive evidence for cognitive abilities being a channel for the effect of education on wages.
– Education does not pay off for everybody. However, in no case are the effects negative. Thus, education never harms in terms of gross wages, skills, and health. (Obviously, this view only considers potential benefits and disregards costs; net benefits might well be negative for some individuals.)
– There are clear signs of selection into gains. Those individuals who realize the highest returns to education are those who are most ready to take it.
With policy initiatives such as the "Higher Education Pact 2020", Germany continuously increases participation in higher education in order to meet OECD standards (see OECD 2015a,b). Our results imply that this might not pay off, at least in terms of productivity (measured by wages), cognitive abilities, and health. Without fully simulating the results of further increased numbers of students in Germany, it is safe to assume that additional students would be those with higher values of UD, as those with a high desire to study are largely already enrolled. But these additional students are the ones who do not seem to benefit from college education. However, this projection needs to be taken with a grain of salt, as our findings are based on education in the 1960s–1980s and current education might yield different effects. We carry out two kinds of robustness checks with respect to the definition of the instrument (see Section 4.4). Figure A.3 in the Appendix reports the findings when the instrument definition does not consider the increases in college size. The MTE curves are not exactly the same as before, but the main conclusions are unchanged. Wage returns are slightly more homogeneous. The results for reading competence and mathematical literacy are virtually the same, whereas for reading speed homogeneously positive effects are found. However, the confidence bands of the curves for both definitions of the instrument widely overlap. This also holds for the health measures. The MTE curve for MCS is slightly shifted upward and the one for PCS is more homogeneous, but the differences in the curves across the two instruments are not significant. Although the likelihood that two valid instruments deliver exactly the same results is fairly low in any application (and basically zero when as many points are evaluated as is the case here), the broad picture that leads to the conclusions stated previously is invariant to the change in the instrument definition. In Online Appendix C, we report the results of a robustness check in which we use different kernel bandwidths to weight the college distance (bandwidths between 100 and 250 km). Here the differences are largely absent. Although the condensation of college availability in equation (7) seems somewhat arbitrary, these robustness checks show that the specification of the instrument does not affect our conclusions.
5.3. Treatment Parameters
Table 5 reports the conventional treatment parameters estimated using the MTE and the respective weights, as described previously and more formally derived and explained in, for example, Heckman et al. (2006). In particular, we calculate the average treatment effect (ATE), the average treatment effect on the treated (ATT), the average treatment effect on the untreated (ATU), and the local average treatment effect (LATE). The estimated weights applied to the returns for each UD on the MTE curve are shown in Figure 5.
Figure 5. Treatment parameter weights conditional on the propensity score. Own illustration based on NEPS-Starting Cohort 6 data. Weights were calculated using the entire sample of 8,672 observations for which we have instrument and control variable information, irrespective of the availability of the outcome variable.
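For orientation, the parameters in Table 5 aggregate the MTE with parameter-specific weights (the standard decomposition, cf. footnote 11 and Heckman et al. 2006); stated in our own notation, the ATE applies a flat weight, while the ATT weight is proportional to the probability that the propensity score exceeds the evaluation point:

$TE_j(x) = \int_0^1 \mathit{MTE}(x, u_D)\, h_j(x, u_D)\, du_D, \qquad h_{\mathrm{ATE}}(x, u_D) = 1, \qquad h_{\mathrm{ATT}}(x, u_D) = \dfrac{\Pr\!\left(P(Z) \ge u_D \mid X = x\right)}{E\!\left[P(Z) \mid X = x\right]}.$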
Table 5. Estimated treatment parameters for main results.
(Columns: (1) ATE, (2) ATT, (3) ATU, (4) LATE; standard errors in parentheses.)
Log gross wage: 0.43 (0.06); 0.59 (0.07); 0.36 (0.07); 0.49 (0.05).
PCS: 0.45 (0.13); 0.86 (0.13); 0.29 (0.16); 0.55 (0.09).
MCS: 0.10 (0.10); 0.09 (0.12); 0.10 (0.13); 0.05 (0.08).
Reading competence: 1.10 (0.13); 1.88 (0.15); 0.78 (0.16); 1.18 (0.08).
Reading speed: 0.72 (0.14); 1.17 (0.15); 0.54 (0.18); 0.70 (0.11).
Mathematical literacy: 1.11 (0.17); 1.56 (0.21); 0.93 (0.19); 1.13 (0.14).
Notes: Own calculations based on NEPS-Starting Cohort 6 data. The MTE is estimated with a semiparametric Robinson estimator. The LATE is estimated using the IV weights depicted in Figure 5. Therefore, the LATE in this table deviates slightly from corresponding 2SLS estimates. Standard errors estimated using a clustered bootstrap (at district level) with 200 replications.
Whereas the local average treatment effect is an average effect weighted by the conditional density of the instrument, the ATT (and, conversely, the ATU), for example, gives more weight to individuals who select into higher education already at low UD values (indicating low intrinsic reluctance towards higher education). The reason is that their likelihood of being in any "treatment group" is higher than for individuals with higher values of UD. The ATE places equal weight over the whole support. In all cases but mental health and reading speed, the LATE parameters in column (4) are roughly double the OLS estimates.
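Computationally, this aggregation can be reproduced in a few lines. The sketch below is purely illustrative (not the authors' code; the MTE grid, the propensity-score file, and the trapezoidal integration are assumptions):

```python
# Illustrative aggregation of an estimated MTE curve into ATE/ATT via
# TE_j = integral of MTE(u) * h_j(u) du, evaluated numerically.
import numpy as np

u = np.linspace(0.005, 0.995, 100)       # evaluation points on (0, 1)
mte = np.loadtxt("mte_wage.txt")         # hypothetical: estimated MTE on this grid
pscores = np.loadtxt("pscores.txt")      # hypothetical: estimated P(Z) in the sample

h_ate = np.ones_like(u)                                   # ATE weight: flat over u
h_att = np.array([(pscores > ui).mean() for ui in u])     # ATT weight ~ Pr(P(Z) > u)
h_att /= np.trapz(h_att, u)                               # normalize to integrate to 1

ate = np.trapz(mte * h_ate, u)
att = np.trapz(mte * h_att, u)
print(f"ATE = {ate:.3f}, ATT = {att:.3f}")
```

The LATE in Table 5 additionally uses the IV weights shown in Figure 5, which is why it can deviate slightly from the corresponding 2SLS estimates.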
Local average treatment effects that exceed the OLS estimates may seem counterintuitive, as one often expects OLS to overestimate the true effects. Yet, this is not an uncommon finding and, in a world with heterogeneous effects, it is often explained by the group of compliers potentially having higher individual treatment effects than the average individual (Card 2001). This is directly visible by comparing the LATE to column (1) and is another indication of selection into gains. Regarding the other treatment parameters, the LATE lies within the range of the ATT and the ATU. Note that these are the "empirical", conditional-on-the-sample parameters as calculated in Basu et al. (2007), that is, the treatment parameters conditional on the common support of the propensity score. The population ATE, however, would require full support on the unit interval.20 As depicted in Figure 3, we do not have full support in the data at hand. Although we observe individuals with and without a college degree for most probabilities of studying, we cannot observe an individual with a probability arbitrarily close to 100% without a college degree (or arbitrarily close to 0% with a degree). Instead, the parameters in Table 5 were computed using the marginal treatment effects on the common support only. However, as this support reaches from 0.002 to 0.969, it seems fair to say that these probably come very close to the true parameters. Table 5 is informative in particular for two reasons. First, it boils the MTE down to single numbers such that the average effect size immediately becomes clear. And, second, differences between the parameters again emphasize the role of effect heterogeneity. Together with the bootstrapped standard errors, the table reveals that the ATT and the ATU structurally differ from each other for all outcomes but mental health. Hence, the treatment group of college graduates seems to benefit from higher education in terms of wages, skills, and physical health compared with the non-graduates. One reason is that they might choose to study because of their idiosyncratic skill returns. Yet, the skill gains may also be windfall gains that accompany the monetary college premiums on which the decision was more likely based. Nonetheless, this too is evidence of selection into gains. The effect sizes for all (ATE), for the university-degree subgroup (ATT), and for those without higher education (ATU) in Table 5 capture the overall returns to college education, not the per-year effects. On average, the per-year effect is approximately the overall effect divided by 4.5 years (the regular time it takes to receive a Diplom degree), if we assume linear additivity of the yearly effects. The per-year effects for mathematical literacy and reading competence are about 25% of a standard deviation for all parameters. For reading speed the effects are around 15% of an SD, whereas the wage effects are around 10%. These effects are of considerable size, yet slightly smaller than those found in the previous literature on different treatments and, importantly, different compliers. For instance, ability returns to an additional year of compulsory schooling were found to be up to 0.5 SD (see, e.g., Banks and Mazzonna 2012). To get an idea of the total effect of college education on, say, math skills, the following example might help.
If you start at the median of the standardized unconditional math score distribution (Φ(0) = 50%), the average effect of 1.11 of a standard deviation, all other things equal, will make you end up at the 87% quantile of that distribution (Φ(0 + 1.11) ≈ 87%), in the thought experiment of being the only treated individual in the peer group. As suggested by the pattern of the marginal treatment effects in Figure 4, the health returns to higher education are smaller than the skill returns; still, they are around 10% of an SD per year (except for the zero effect on mental health). Given the previous literature, the results seem reasonable. Regarding statistical significance of the effects, note that we use several outcome variables and potentially run into multiple testing problems. Yet, rather than addressing this with a complex algorithm that also accounts for the correlation of the six outcome variables, we argue as follows: all ATEs and ATTs are highly statistically significant, so multiple testing across six outcomes should not be a major issue. Even with the most conservative Bonferroni correction, critical values for statistical significance at the 5% level would increase from 1.96 to 2.65 and would not change any conclusions regarding significance.21
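The two back-of-the-envelope calculations above (the quantile shift and the Bonferroni-adjusted critical values) can be checked in a few lines. This is only a verification sketch under a standard-normal approximation, not part of the original analysis:

```python
# Verify the quantile-shift example, the per-year effect, and the Bonferroni
# critical values under a standard-normal approximation.
from scipy.stats import norm

# Starting at the median, an effect of 1.11 SD moves you to about the 87th percentile.
print(norm.cdf(0 + 1.11))              # ~0.867

# Per-year effect for mathematical literacy under linear additivity (cf. Section 5.3).
print(1.11 / 4.5)                       # ~0.25 SD per year

# Two-sided Bonferroni critical values at the 5% level for 6 and 18 tests.
print(norm.ppf(1 - 0.05 / (2 * 6)))     # ~2.64, close to the 2.65 quoted above
print(norm.ppf(1 - 0.05 / (2 * 18)))    # ~2.99, close to the 2.98 quoted in footnote 21
```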
6. Potential Mechanisms for Health and Cognitive Abilities
In this section, we investigate the role of potential mechanisms through which college education may work. It is likely to affect the observed level of health and cognitive abilities through the attained stock of health capital and the cognitive reserve, the mind's ability to tolerate brain damage (Stern 2012; Meng and D'Arcy 2012). There are probably three channels through which education affects long-run health and cognitive abilities:
– in college: a direct effect from education;
– post-college: a diminished age-related decline in health and skills due to the higher health capital/cognitive reserve attained in college (e.g., the "cognitive reserve hypothesis", Stern et al. 1999);
– post-college: different health behavior or different jobs that are less detrimental to health and more cognitively demanding (Stern 2012).
The post-college mechanisms that compensate for the decline also contain implicit multiplying factors like complementarities and self-productivity, see Cunha et al. (2006) and Cunha and Heckman (2007). The NEPS data include various job characteristics and health behaviors that potentially reduce the age-related skill/health decline. However, the data neither allow us to disentangle these components empirically (i.e., observing changes in one channel that are exogenous to the other channels) nor to analyze how the effect on a mechanism causally maps into higher skills or better health (as, e.g., in Heckman et al. 2013). Thus, it should be noted that this sub-analysis is merely suggestive and by no means a comprehensive analysis of the mechanisms behind the effects found in the previous section. Moreover, the following analysis focuses on the potential channel of different jobs and health behaviors. It does the same as before (same controls, same estimation strategy and instrument) but replaces the outcome variables with the indicators of potential mechanisms.
Cognitive Abilities. The main driving force behind skill formation after college might lie in activities on the job. When individuals with college education engage in more cognitively demanding activities, for example, more sophisticated jobs, this might mentally exercise their minds (Rohwedder and Willis 2010). This effect of mental training is sometimes referred to as the use-it-or-lose-it hypothesis, see Rohwedder and Willis (2010) or Salthouse (2006). If such an exercise effect leads to alternative brain networks that "may compensate for the pathological disruption of preexisting networks" (Meng and D'Arcy 2012, p. 2), a higher demand for cognitively demanding tasks (as a result of college education) increases the individual's cognitive capacity. In order to investigate whether a more cognitively demanding job might be a potential mechanism (as, e.g., suggested by Fisher et al. 2014), we use information on the individual's activities on the job. All four outcome variables considered in this subsection are binary; their definitions, sample means, and the effects of college education are given in Table 6. For the sake of brevity, we focus on the most relevant treatment parameters here and do not discuss the MTE curvatures.
Table 6. Potential mechanisms for cognitive skills.
(Standard errors in parentheses.)
Math: percentages (=1 if job requires calculating with percentages and fractions): sample mean 0.711; ATE 0.20 (0.06); ATT 0.23 (0.07); ATU 0.19 (0.07).
Reading (=1 if respondent often spends more than 2 hours reading): sample mean 0.777; ATE 0.23 (0.03); ATT 0.30 (0.03); ATU 0.30 (0.04).
Writing (=1 if respondent often writes more than 1 page): sample mean 0.704; ATE 0.39 (0.07); ATT 0.64 (0.09); ATU 0.29 (0.07).
Learning new things (=1 if respondent reports often learning new things): sample mean 0.671; ATE 0.22 (0.07); ATT 0.31 (0.09); ATU 0.18 (0.07).
Notes: Own calculations based on NEPS-Starting Cohort 6 data. Definitions are taken from the data manual. Standard errors estimated using a clustered bootstrap (district level) and reported in parentheses.
College education has strong effects on all four outcomes. It increases the likelihood of being in a job that requires calculating with percentages and fractions, that involves reading or writing, and in which individuals often learn new things. The effect sizes are very large, which is not too surprising as many of the jobs that entail these mentally demanding tasks require a college diploma as a quasi-formal condition of employment. Moreover, as observed before, there seems to be effect heterogeneity here as well, and selection into gains, as all average treatment effects on the treated are larger than the treatment effects on the untreated (except for reading more than 2 hours). The differences are particularly strong for writing and for learning new things. All in all, the findings suggest that cognitively more demanding jobs due to college education might play a role in explaining long-run cognitive returns to education. Note again, however, that these findings are only suggestive evidence for a causal mechanism. It might as well be the other way around, with the cognitive abilities attained in college inducing selection into these job types.
Health. Concerning the health mechanisms, we study job-related effects and effects on health behavior. The NEPS data cover engagement in several physical activities on the job, for example, working in a standing position, working in an uncomfortable position (like bending often), walking or cycling long distances, or carrying heavy loads. Table 7 reports definitions, sample means, and effects. In the upper panel of the table, the binary indicators are coded as 1 if the respondent reports engaging in the activity (and 0 otherwise).
Table 7. Potential mechanisms for health.
(Standard errors in parentheses.)
Physically demanding activities on the job:
Standing position (=1 if often working in a standing position for 2 or more hours): sample mean 0.302; ATE −0.37 (0.07); ATT −0.56 (0.09); ATU −0.30 (0.08).
Uncomfortable position (=1 if respondent needs to bend, crawl, lie down, kneel or squat): sample mean 0.190; ATE −0.20 (0.05); ATT −0.37 (0.06); ATU −0.13 (0.06).
Walking (=1 if job often requires walking, running or cycling): sample mean 0.242; ATE −0.39 (0.06); ATT −0.56 (0.07); ATU −0.32 (0.07).
Carrying (=1 if often carrying a load of at least 10 kg): sample mean 0.182; ATE −0.40 (0.05); ATT −0.50 (0.05); ATU −0.37 (0.05).
Health behaviors:
Obesity (=1 if body mass index (= weight in kg / height in m²) > 30): sample mean 0.155; ATE −0.08 (0.04); ATT −0.15 (0.05); ATU −0.05 (0.05).
Smoking (=1 if currently smoking): sample mean 0.270; ATE −0.18 (0.06); ATT −0.23 (0.06); ATU −0.16 (0.07).
Alcohol amount (=1 if three or more drinks when consuming alcohol): sample mean 0.187; ATE −0.14 (0.05); ATT −0.13 (0.06); ATU −0.14 (0.06).
Sport (=1 if any sporting exercise in the previous 3 months): sample mean 0.717; ATE 0.16 (0.07); ATT 0.31 (0.07); ATU 0.10 (0.09).
Notes: Own calculations based on NEPS-Starting Cohort 6 data. Definitions are taken from the data manual. Standard errors estimated using a clustered bootstrap (at district level) and reported in parentheses.
We find that college education reduces the probability of engaging in all four physically demanding activities. Again, the estimated effects are very large in size, implying that it is the college diploma that qualifies individuals for white-collar office positions. These effects might explain why we find physical health effects of education and are in line with the absence of mental health effects.
White-collar jobs are usually less demanding with respect to physical health but not necessarily less stressful. Besides physical activities on the job, health behaviors may be considered an important dimension of the general formation of health over the life cycle, see Cutler and Lleras-Muney (2010). To analyze this, we resort to the following variables in our data set: a binary indicator for obesity (body mass index exceeds 30) as a compound lifestyle measure, and more direct behavioral variables: an indicator for smoking, the amount of alcohol consumption (1 if having three or more drinks when consuming alcohol), and physical activity, measured by an indicator of having taken any sporting exercise in the previous 3 months. The lower panel in Table 7 reports the sample means and treatment effects. College education decreases the probability of being obese and the probability of smoking. This is in line with LATE estimates of the effect of college education in the United States by Grimard and Parent (2007) and de Walque (2007). College education also seems to reduce alcohol consumption and to increase the likelihood of engaging in sporting exercise. Again, the effect sizes are large, though not as large as for the other potential mechanisms, and some of them are only marginally statistically significant. Taken together, college education affects potential health mechanisms in the expected direction. Again, there is effect heterogeneity, observable in different treatment parameters for the same outcome variables. Since health is a high-dimensional measure, the potential mechanisms at hand are of course not able to explain the health returns to college education entirely. Nevertheless, the findings encourage us in our interpretation of the effects of college education on physical health.
7. Conclusion
This paper uses the Marginal Treatment Effect framework introduced and advanced by Björklund and Moffitt (1987) and Heckman and Vytlacil (2005, 2007) to estimate returns to college education under essential heterogeneity. We use representative data from the German National Educational Panel Study (NEPS). Our outcome measures are wages, cognitive abilities, and health. Cognitive abilities are assessed using state-of-the-art cognitive competence tests on individual reading speed, text understanding, and mathematical literacy. As expected, all outcome variables are positively correlated with having a college degree in our data set. Using an instrument that exploits exogenous variation in the supply of colleges, we estimate marginal returns to college education. The main findings of this paper are as follows: College education improves average wages, cognitive abilities, and physical health (but not mental health). There is heterogeneity in the effects and clear signs of selection into gains. Those individuals who realize the highest returns to education are those who are most ready to take it. Moreover, education does not pay off for everybody. Although it is never harmful, we find zero causal effects for around 30%–40% of the population. Thus, although college education is beneficial on average, further increasing the number of students, as is sometimes called for, is less likely to pay off, as the current marginal students are mostly in the range of zero causal effects. Potential mechanisms of the skill returns are more demanding jobs that slow down the cognitive decline in later ages.
Regarding health, we find favorable effects of higher education on BMI, smoking, sports participation, and alcohol consumption. All in all, given that the average individual clearly seems to benefit from education, and provided that continuing technological progress makes skills more and more valuable, education should still be an answer to technological change for the average individual. One limitation of this paper is that we are not able to stratify the analysis by study subject. This is left for future work.
Appendix: Additional Figures and Tables
Figure A.1. Spatial variation of colleges across districts and over time. Own illustration based on the German Statistical Yearbooks 1959–1991 (German Federal Statistical Office, various issues, 1959–1991). The maps show all 326 West German districts (Kreise, spatial units of 2009) except Berlin in the years 1958 (first year in the sample), 1970, 1980, and 1990 (last year in the sample). Districts usually cover a bigger city or some administratively connected villages. If a district has at least one college, the district is depicted in black. Only a few districts have more than one college. For those districts the number of students is added up in the calculations, but multiple colleges are not depicted separately in the maps.
Figure A.2. Distribution of dependent variables by college graduation. Own illustration based on NEPS-Starting Cohort 6 data.
Figure A.3. Sensitivity of marginal treatment effects when using only the sum of the kernel-weighted college distances. Own illustration based on NEPS-Starting Cohort 6 data. For gross hourly wage, the log value is taken. Health and cognitive skill outcomes are standardized to mean 0 and standard deviation 1. The MTE (vertical axis) is measured in logs for wage and in units of standard deviations of the health and cognitive skill outcomes. The dashed lines give the 95% confidence intervals. Calculations based on a local linear regression where the influence of the control variables was isolated using a semiparametric Robinson estimator (Robinson 1988) for each outcome variable.
Table A.1. Control variables and means by college degree.
(Mean values for respondents with a college degree / without a college degree.)
General information:
Female (=1 if respondent is female): 40.38 / 54.18
Year of birth (FE) (year of birth of the respondent): 1959 / 1959
Migrational background (=1 if respondent was born abroad): 0.89 / 0.64
No native speaker (=1 if mother tongue is not German): 0.30 / 0.43
Rural district (=1 if current district is rural): 16.79 / 24.96
Mother still alive (=1 if mother is still alive in 2009/10): 65.38 / 63.83
Father still alive (=1 if father is still alive in 2009/10): 45.27 / 42.3
Pre-college living conditions:
Married before college (=1 if respondent got married before the year of the college decision or in the same year): 0.20 / 0.44
Parent before college (=1 if respondent became a parent before the year of the college decision or in the same year): 0.30 / 0.17
Siblings (number of siblings): 1.56 / 1.87
First born (=1 if respondent was the first born in the family): 33.73 / 29.01
Age 15: lived by single parent (=1 if respondent was raised by a single parent): 5.33 / 5.32
Age 15: lived in patchwork family (=1 if respondent was raised in a patchwork family): 1.11 / 0.27
Age 15: orphan (=1 if respondent was an orphan at the age of 15): 0.10 / 0.20
Age 15: mother employed (=1 if mother was employed at the respondent's age of 15): 45.93 / 46.87
Age 15: mother never unemployed (=1 if mother was never unemployed until the respondent's age of 15): 61.24 / 62.29
Age 15: father employed (=1 if father was employed at the respondent's age of 15): 92.46 / 90.73
Age 15: father never unemployed (=1 if father was never unemployed until the respondent's age of 15): 98.45 / 97.14
Pre-college education:
Final school grade: excellent (=1 if the overall grade of the highest school degree was excellent): 4.59 / 1.79
Final school grade: good (=1 if the overall grade of the highest school degree was good): 31.51 / 25.83
Final school grade: satisfactory (=1 if the overall grade of the highest school degree was satisfactory): 17.97 / 28.03
Final school grade: sufficient or worse (=1 if the overall grade of the highest school degree was sufficient or worse): 1.04 / 1.42
Repeated one grade (=1 if student needed to repeat one grade in elementary or secondary school): 19.97 / 20.51
Repeated two or more grades (=1 if student needed to repeat two or more grades in elementary or secondary school): 2.74 / 1.85
Military service (=1 if respondent was drafted for compulsory military service): 28.03 / 23.89
Parental characteristics (M: mother, F: father):
M: year of birth (FE) (year of birth of the respondent's mother): 1930 / 1932
M: migrational background (=1 if mother was born abroad): 5.47 / 4.85
M: at least inter. edu (=1 if mother has at least an intermediate secondary school degree): 17.97 / 5.95
M: vocational training (=1 if mother's highest degree is vocational training): 20.86 / 16.18
M: further job qualification (=1 if mother has further job qualification, e.g., Meister degree): 4.29 / 1.73
F: year of birth (FE) (year of birth of the respondent's father): 1927 / 1929
F: migrational background (=1 if father was born abroad): 6.36 / 5.54
F: at least inter. edu (=1 if father has at least an intermediate secondary school degree): 20.86 / 8.09
F: vocational training (=1 if father's highest degree is vocational training): 19.12 / 21.99
F: further job qualification (=1 if father has further job qualification, e.g., Meister degree): 11.46 / 6.76
Number of observations (PCS and MCS sample): 1,352 / 3,461
Notes: Own calculations based on NEPS-Starting Cohort 6 data. Definitions are taken from the data manual. Mean values refer to the MCS and PCS sample. FE = variable values are included as fixed effects in the analysis.
The editor in charge of this paper was Claudio Michelacci.
Acknowledgments
We thank the editor and two anonymous referees for many helpful suggestions which improved the paper considerably. We are grateful to Pedro Carneiro, Arnaud Chevalier, Damon Clark, Eleonora Fichera, Martin Fischer, Hendrik Jürges and Corinna Kleinert for valuable comments and to Claudia Fink for excellent research assistance. Furthermore, we would like to thank the participants of several conferences and seminars for helpful discussions. Access to Micro Census data at the GESIS-German Microdata Lab, Mannheim, is gratefully acknowledged. Financial support from the Deutsche Forschungsgemeinschaft (DFG, Grant number SCHM 3140/1-1) is gratefully acknowledged. Matthias Westphal is affiliated with and was also partly funded by the Ruhr Graduate School in Economics. Hendrik Schmitz and Matthias Westphal are furthermore affiliated with the Leibniz Science Campus Ruhr. This paper uses data from the National Educational Panel Study (NEPS): Starting Cohort Adults, 10.5157/NEPS:SC6:5.1.0. From 2008 to 2013, NEPS data was collected as part of the Framework Program for the Promotion of Empirical Educational Research funded by the German Federal Ministry of Education and Research (BMBF). As of 2014, NEPS is carried out by the Leibniz Institute for Educational Trajectories (LIfBi) at the University of Bamberg in cooperation with a nationwide network.
Footnotes
1 The Economist, edition March 28th to April 3rd, 2015.
2 Hansen et al. (2004) use a control function approach to adjust for education in the short-term development of cognitive abilities. Carneiro et al. (2001, 2003) analyze the short-term effects of college education. Glymour et al. (2008), Banks and Mazzonna (2012), Schneeweis et al. (2014), and Kamhöfer and Schmitz (2016) analyze the effects of secondary schooling on long-term cognitive skills.
3 See Section 4 for a detailed definition of cognitive abilities. We use the terms "cognitive abilities", "cognitive skills", and "skills" interchangeably.
4 We use the words university and college as synonyms to refer to German Universitäten and closely related institutions like institutes of technology (Technische Universitäten/Technische Hochschulen), an institutional type that combines features of colleges and universities of applied science (Gesamthochschulen), and universities of the armed forces (Bundeswehruniversitäten/Bundeswehrhochschulen).
5 The working paper version Kamhöfer et al.
(2015) also uses the introduction of a student loan program as a further source of exogenous variation. Using this instrument does not affect the findings at all, but it is not considered here for the sake of legibility of the paper.
6 All data are taken from the German Statistical Yearbooks, 1959–1991, see German Federal Statistical Office (various issues, 1959–1991). We only use colleges and no other higher educational institutes described in Section 2.1 (e.g., universities of applied science). Administrative data on openings and the number of students are not available for institutions other than colleges. However, since other higher educational institutions are small in size and highly specialized, they should be less relevant for the higher education decision and, thus, neglecting them should not affect the results.
7 Table 1 uses a different data source than the main analysis and the local level is slightly broader than districts, see the notes to the table.
8 Note that the general derivation does not require linear indices. However, it is standard to assume linearity when it comes to estimation.
9 By applying, for instance, the standard normal distribution to the left and the right of the equation: Z′δ ≥ V ⇔ Φ(Z′δ) ≥ Φ(V) ⇔ P(Z) ≥ UD, where P(Z) ≡ P(D = 1|Z) = Φ(Z′δ).
10 In this model the exclusion restriction is implicit since Z has an effect on D* but not on Y1, Y0. Monotonicity is implied by the choice equation since D* monotonically either increases or decreases in Z.
11 To make this explicit, all treatment parameters (TEj(x)) can be decomposed into a weight (hj(x, uD)) and the MTE: $TE_j(x)=\int_{0}^{1} \mathit{MTE}(x,u_D)\,h_j(x,u_D)\,du_D$. See, for example, Heckman and Vytlacil (2007) for the exact expressions of the weights for common parameters.
12 Essentially, this is equivalent to a simple 2SLS case. If one wants to identify observable effect heterogeneity (i.e., interact the treatment indicator with control variables in the regression model), the instrument needs to be independent of these controls unconditionally.
13 On the other hand, estimating with heterogeneity in the observables can lead to an efficiency gain.
14 Semi-parametrically, the MTE can only be identified over the support of P. The greater the variation in Z (conditional on X) and, thus, in P(Z), the larger the range over which the MTE can be identified. This may be considered a drawback of the MTE approach, in particular because treatment parameters that have weight unequal to zero outside the support of the propensity score are not identified using semi-parametric techniques. This is sometimes called the "identification at infinity" requirement (see Heckman 1990) of the MTE. However, we argue that the MTE over the support of P is already very informative. We use semi-parametric estimates of the MTE and restrict the results to the empirical ATE or ATT that are identified for those individuals who are in the sample (see Basu et al. 2007). Alternatively, one might use a flexible approximation of K(p) based on a polynomial of the propensity score, as done by Basu et al. (2007). This amounts to estimating $E(Y\mid X, p) = X^{\prime}\beta + (\alpha_1 -\alpha_0)\, p + \sum_{j=1}^k \phi_j p^j$ by OLS and using the estimated coefficients to calculate $\widehat{\mathit{MTE}}(x,p) = (\widehat{\alpha}_1 - \widehat{\alpha}_0) + \sum_{j=1}^k \widehat{\phi}_j\, j\, p^{j-1}$.
15 The working paper version also considers health satisfaction, with results very similar to PCS (Kamhöfer et al. 2015).
16 For a general overview of test designs and applications in the NEPS, see Weinert et al. (2011).
17 The test measures the "assessment of automatized reading processes", where a "low degree of automation in decoding [...] will hinder the comprehension process", that is, understanding of texts (Zimmermann et al. 2014, p. 1). The test was newly designed for NEPS but based on the well-established Salzburg reading screening test design principles (LIfBi 2011).
18 The total number of possible points exceeds 32 because some items were worth more than one point.
19 We assess the optimal bandwidth in the local linear regression using Stata's lpoly rule of thumb. Our results are also robust to the inclusion of higher order polynomials in the local (polynomial) regression. The optimal, exact bandwidths are: wage 0.10, PCS 0.13, MCS 0.16, reading competence 0.10, reading speed 0.11, math score 0.12.
20 The ATT would require, for every college graduate in the population, a non-graduate with the same propensity score (including 0%). For the ATU one would need the opposite: a graduate for every non-graduate with the same propensity score, including 100%.
21 Also taking into account the outcomes from Section 6 and assuming that we test 18 times would increase the critical value to 2.98 in the (overly conservative) Bonferroni correction.
References
Acemoglu Daron, Johnson Simon (2007). "Disease and Development: The Effect of Life Expectancy on Economic Growth." Journal of Political Economy, 115, 925–985.
American Psychological Association (1995). "Intelligence: Knowns and Unknowns." Report of a task force convened by the American Psychological Association.
Andersen Hanfried H., Mühlbacher Axel, Nübling Matthias, Schupp Jürgen, Wagner Gert G. (2007). "Computation of Standard Values for Physical and Mental Health Scale Scores Using the SOEP Version of SF-12v2." Schmollers Jahrbuch: Journal of Applied Social Science Studies/Zeitschrift für Wirtschafts- und Sozialwissenschaften, 127, 171–182.
Anderson John (2007). Cognitive Psychology and its Implications, 7th ed. Worth Publishers, New York.
Banks James, Mazzonna Fabrizio (2012). "The Effect of Education on Old Age Cognitive Abilities: Evidence from a Regression Discontinuity Design." The Economic Journal, 122, 418–448.
Barrow Lisa, Malamud Ofer (2015). "Is College a Worthwhile Investment?" Annual Review of Economics, 7, 519–555.
Bartz Olaf (2007). "Expansion und Umbau–Hochschulreformen in der Bundesrepublik Deutschland zwischen 1964 und 1977." Die Hochschule, 2007, 154–170.
Basu Anirban (2011). "Estimating Decision-Relevant Comparative Effects using Instrumental Variables." Statistics in Biosciences, 3, 6–27.
Basu Anirban (2014). "Person-Centered Treatment (PeT) Effects using Instrumental Variables: An Application to Evaluating Prostate Cancer Treatments." Journal of Applied Econometrics, 29, 671–691.
Basu Anirban, Heckman James J., Navarro-Lozano Salvador, Urzua Sergio (2007). "Use of Instrumental Variables in the Presence of Heterogeneity and Self-selection: An Application to Treatments of Breast Cancer Patients." Health Economics, 16, 1133–1157.
Björklund Anders, Moffitt Robert (1987). "The Estimation of Wage Gains and Welfare Gains in Self-Selection."
Blossfeld H.-P., Roßbach H.-G., Maurice J. von (2011). "Education as a Lifelong Process—The German National Educational Panel Study (NEPS)." Zeitschrift für Erziehungswissenschaft, 14 [Special Issue 14-2011].
Brinch Christian N., Mogstad Magne, Wiswall Matthew (2017). "Beyond LATE with a Discrete Instrument." Journal of Political Economy, 125, 985–1039.
Card David (1995). "Using Geographic Variation in College Proximity to Estimate the Return to Schooling." In Aspects of Labour Market Behaviour: Essays in Honour of John Vanderkamp, edited by Grant K., Christofides L., Swidinsky R. University of Toronto Press, pp. 201–222.
Card David (2001). "Estimating the Return to Schooling: Progress on Some Persistent Econometric Problems." Econometrica, 69, 1127–1160.
Carneiro Pedro, Hansen Karsten T., Heckman James J. (2003). "2001 Lawrence R. Klein Lecture: Estimating Distributions of Treatment Effects with an Application to the Returns to Schooling and Measurement of the Effects of Uncertainty on College Choice." International Economic Review, 44(2), 361–422.
Carneiro Pedro, Hansen Karsten T., Heckman James J. (2001). "Removing the Veil of Ignorance in Assessing the Distributional Impacts of Social Policies." Swedish Economic Policy Review, 8, 273–301.
Carneiro Pedro, Heckman James J., Vytlacil Edward J. (2010). "Evaluating Marginal Policy Changes and the Average Effect of Treatment for Individuals at the Margin." Econometrica, 78, 377–394.
Carneiro Pedro, Heckman James J., Vytlacil Edward J. (2011). "Estimating Marginal Returns to Education." American Economic Review, 101(6), 2754–2781.
Cawley John, Heckman James J., Vytlacil Edward J. (2001). "Three Observations on Wages and Measured Cognitive Ability." Labour Economics, 8, 419–442.
Cervellati Matteo, Sunde Uwe (2005). "Human Capital Formation, Life Expectancy, and the Process of Development." American Economic Review, 95(5), 1653–1672.
Clark Damon, Martorell Paco (2014). "The Signaling Value of a High School Diploma." Journal of Political Economy, 122, 282–318.
Cornelissen Thomas, Dustmann Christian, Raute Anna, Schönberg Uta (forthcoming). "Who Benefits from Universal Childcare? Estimating Marginal Returns to Early Childcare Attendance." Journal of Political Economy.
Costa Dora L. (2015). "Health and the Economy in the United States from 1750 to the Present." Journal of Economic Literature, 53, 503–570.
Cunha F., Heckman J. J., Lochner L. J., Masterov D. V. (2006). "Interpreting the Evidence on Life Cycle Skill Formation." In Handbook of the Economics of Education, Vol. 1, edited by Hanushek E. A., Welch F. North-Holland.
Cunha Flavio, Heckman James J. (2007). "The Technology of Skill Formation." American Economic Review, 97(2), 31–47.
Currie Janet, Moretti Enrico (2003). "Mother's Education and The Intergenerational Transmission of Human Capital: Evidence From College Openings." The Quarterly Journal of Economics, 118, 1495–1532.
Cutler David M., Lleras-Muney Adriana (2010). "Understanding Differences in Health Behaviors by Education." Journal of Health Economics, 29, 1–28.
de Walque Damien (2007). "Does Education Affect Smoking Behaviors?: Evidence using the Vietnam Draft as an Instrument for College Education." Journal of Health Economics, 26, 877–895.
Fisher Gwenith, Stachowski Alicia, Infurna Frank, Faul Jessica, Grosch James, Tetrick Lois (2014). "Mental Work Demands, Retirement, and Longitudinal Trajectories of Cognitive Functioning." Journal of Occupational Health Psychology, 19, 231–242.
German Federal Statistical Office (various issues, 1959–1991). "Statistisches Jahrbuch für die Bundesrepublik Deutschland." Tech. rep., German Federal Statistical Office (Statistisches Bundesamt), Wiesbaden.
Glymour M., Kawachi I., Jencks C., Berkman L. (2008). "Does Childhood Schooling Affect Old Age Memory or Mental Status? Using State Schooling Laws as Natural Experiments." Journal of Epidemiology and Community Health, 62, 532–537.
Grimard Franque, Parent Daniel (2007). "Education and Smoking: Were Vietnam War Draft Avoiders also more Likely to Avoid Smoking?" Journal of Health Economics, 26, 896–926.
Hansen Karsten T., Heckman James J., Mullen Kathleen J. (2004). "The Effect of Schooling and Ability on Achievement Test Scores." Journal of Econometrics, 121, 39–98.
Heckman J. J., Lochner L. J., Todd P. E. (1999). "Earnings Equations and Rates of Return: The Mincer Equation and Beyond." In Handbook of the Economics of Education, Vol. 1, edited by Hanushek E., Welch F. Elsevier.
Heckman James J. (1990). "Varieties of Selection Bias." American Economic Review, 80(2), 313–318.
Heckman James J., Pinto Rodrigo, Savelyev Peter (2013). "Understanding the Mechanisms through Which an Influential Early Childhood Program Boosted Adult Outcomes." American Economic Review, 103(6), 2052–2086.
Heckman James J., Urzua Sergio, Vytlacil Edward J. (2006). "Understanding Instrumental Variables in Models with Essential Heterogeneity." The Review of Economics and Statistics, 88, 389–432.
Heckman James J., Vytlacil Edward J. (2005). "Structural Equations, Treatment Effects, and Econometric Policy Evaluation." Econometrica, 73, 669–738.
Heckman James J., Vytlacil Edward J. (2007). "Econometric Evaluation of Social Programs, Part II: Using the Marginal Treatment Effect to Organize Alternative Econometric Estimators to Evaluate Social Programs, and to Forecast their Effects in New Environments." In Handbook of Econometrics, Vol. 6, edited by Heckman J. J., Leamer E. E. Elsevier.
Jürges Hendrik, Reinhold Steffen, Salm Martin (2011). "Does Schooling Affect Health Behavior? Evidence from the Educational Expansion in Western Germany." Economics of Education Review, 30, 862–872.
Kamhöfer Daniel, Schmitz Hendrik (2016). "Reanalyzing Zero Returns to Education in Germany." Journal of Applied Econometrics, 31, 912–919.
Kamhöfer Daniel, Schmitz Hendrik, Westphal Matthias (2015). "Heterogeneity in Marginal Non-monetary Returns to Higher Education." Tech. rep., Ruhr Economic Papers No. 591, RWI Essen.
"Heterogeneity in Marginal Non-monetary Returns to Higher Education." Tech. rep. , Ruhr Economic Papers , RWI Essen , No. 591 . Lang Frieder , Weiss David , Stocker Andreas , Rosenbladt Bernhard von ( 2007 ). "The Returns to Cognitive Abilities and Personality Traits in Germany." Schmollers Jahrbuch: Journal of Applied Social Science Studies/Zeitschrift für Wirtschafts- und Sozialwissenschaften , 127 , 183 – 192 . Lengerer Andrea , Schroedter Julia , Boehle Mara , Hubert Tobias , Wolf Christof ( 2008 ). "Harmonisierung der Mikrozensen 1962 bis 2005." GESIS-Methodenbericht 12/2008 . GESIS–Leibniz Institute for the Social Sciences , German Microdata Lab, Mannheim . LIfBi ( 2011 ). "Starting Cohort 6 Main Study 2010/11 (B67) Adults Information on the Competence Test." Tech. rep. , Leibniz Institute for Educational Trajectories (LIfBi) – National Educational Panel Study . LIfBi ( 2015 ). "Startkohorte 6: Erwachsene (SC6) – Studienübersicht Wellen 1 bis 5." Tech. rep. , Leibniz Institute for Educational Trajectories (LIfBi) – National Educational Panel Study . Mazumder Bhashkar ( 2008 ). "Does Education Improve Health? A Reexamination of the Evidence from Compulsory Schooling Laws." Economic Perspectives , 32 , 2 – 16 . Meng Xiangfei , D'Arcy Carl ( 2012 ). "Education and Dementia in the Context of the Cognitive Reserve Hypothesis: A Systematic Review with Meta-Analyses and Qualitative Analyses." PLoS ONE , 7 , e38268 . Google Scholar Crossref Search ADS PubMed Nybom Martin ( 2017 ). "The Distribution of Lifetime Earnings Returns to College." Journal of Labor Economics , 35 , 903 – 952 . Google Scholar Crossref Search ADS OECD ( 2015a ). "Education Policy Outlook 2015: Germany." Report, Organisation for Economic Co-operation and Development (OECD) . OECD ( 2015b ). "Education Policy Outlook 2015: Making Reforms Happen." Report, Organisation for Economic Co-operation and Development (OECD) . Oreopoulos Philip , Petronijevic Uros ( 2013 ). "Making College Worth It: A Review of the Returns to Higher Education." The Future of Children , 23 , 41 – 65 . Google Scholar Crossref Search ADS PubMed Oreopoulos Philip , Salvanes Kjell ( 2011 ). "Priceless: The Nonpecuniary Benefits of Schooling." Journal of Economic Perspectives , 25 ( 3 ), 159 – 184 . Google Scholar Crossref Search ADS Picht Georg ( 1964 ). Die deutsche Bildungskatastrophe: Analyse und Dokumentation . Walter Verlag . Pischke Jörn-Steffen , Wachter Till von ( 2008 ). "Zero Returns to Compulsory Schooling in Germany: Evidence and Interpretation." The Review of Economics and Statistics , 90 , 592 – 598 . Google Scholar Crossref Search ADS Robinson Peter M ( 1988 ). "Root-N-Consistent Semiparametric Regression." Econometrica , 56 , 931 – 954 . Google Scholar Crossref Search ADS Rohwedder Susann , Willis Robert J. ( 2010 ). "Mental Retirement." Journal of Economic Perspectives , 24 , 119 – 138 . Google Scholar Crossref Search ADS PubMed Salthouse Timothy A. ( 2006 ). "Mental Exercise and Mental Aging: Evaluating the Validity of the "Use It or Lose It" Hypothesis." Perspectives on Psychological Science , 1 , 68 – 87 . Google Scholar Crossref Search ADS PubMed Schneeweis Nicole , Skirbekk Vegard , Winter-Ebmer Rudolf ( 2014 ). "Does Education Improve Cognitive Performance Four Decades After School Completion?" Demography , 51 , 619 – 643 . Google Scholar Crossref Search ADS PubMed Stephens Melvin Jr , Yang Dou-Yan ( 2014 ). "Compulsory Education and the Benefits of Schooling." American Economic Review , 104 ( 6 ), 1777 – 1792 . 
Stern Yaakov (2012). "Cognitive Reserve in Ageing and Alzheimer's Disease." The Lancet Neurology, 11, 1006–1012.
Stern Yaakov, Albert Steven, Tang Ming-Xin, Tsai Wei-Yen (1999). "Rate of Memory Decline in AD is Related to Education and Occupation: Cognitive Reserve?" Neurology, 53, 1942–1947.
Vytlacil Edward (2002). "Independence, Monotonicity, and Latent Index Models: An Equivalence Result." Econometrica, 70, 331–341.
Weinert S., Artelt C., Prenzel M., Senkbeil M., Ehmke T., Carstensen C. (2011). "Development of Competencies across the Life Span." Zeitschrift für Erziehungswissenschaft, 14, 67–86.
Weisser Ansgar (2005). "18. Juli 1961 – Entscheidung zur Gründung der Ruhr-Universität Bochum." Tech. rep., Internet-Portal Westfälische Geschichte, http://www.westfaelische-geschichte.de/web495.
Zimmermann Stefan, Artelt Cordula, Weinert Sabine (2014). "The Assessment of Reading Speed in Adults and First-Year Students." Tech. rep., Leibniz Institute for Educational Trajectories (LIfBi) – National Educational Panel Study.

© The Author(s) 2018. Published by Oxford University Press on behalf of the European Economic Association. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model). Journal of the European Economic Association, Oxford University Press. Published: Feb 1, 2019.
"The Signaling Value of a High School Diploma." "Who Benefits from Universal Childcare? Estimating Marginal Returns to Early Childcare Attendance." "Health and the Economy in the United States from 1750 to the Present." "Interpreting the Evidence on Life Cycle Skill Formation." "The Technology of Skill Formation." "Mother's Education and The Intergenerational Transmission of Human Capital: Evidence From College Openings." "Understanding Differences in Health Behaviors by Education." "Does Education Affect Smoking Behaviors?: Evidence using the Vietnam Draft as an Instrument for College Education." "Mental Work Demands, Retirement, and Longitudinal Trajectories of Cognitive Functioning." "Statistisches Jahrbuch für die Bundesrepublik Deutschland." "Does Childhood Schooling Affect Old Age Memory or Mental Status? Using State Schooling Laws as Natural Experiments." "Education and Smoking: Were Vietnam War Draft Avoiders also more Likely to Avoid Smoking?" "The Effect of Schooling and Ability on Achievement Test Scores." "Earnings Equations and Rates of Return: The Mincer Equation and Beyond." "Varieties of Selection Bias." "Understanding the Mechanisms through Which an Influential Early Childhood Program Boosted Adult Outcomes." "Understanding Instrumental Variables in Models with Essential Heterogeneity." "Structural Equations, Treatment Effects, and Econometric Policy Evaluation." "Econometric Evaluation of Social Programs, Part II: Using the Marginal Treatment Effect to Organize Alternative Econometric Estimators to Evaluate Social Programs, and to Forecast their Effects in New." "Does Schooling Affect Health Behavior? Evidence from the Educational Expansion in Western Germany." "Reanalyzing Zero Returns to Education in Germany." "Heterogeneity in Marginal Non-monetary Returns to Higher Education." "The Returns to Cognitive Abilities and Personality Traits in Germany." "Harmonisierung der Mikrozensen 1962 bis 2005." "Starting Cohort 6 Main Study 2010/11 (B67) Adults Information on the Competence Test." "Startkohorte 6: Erwachsene (SC6) – Studienübersicht Wellen 1 bis 5." "Does Education Improve Health? A Reexamination of the Evidence from Compulsory Schooling Laws." "Education and Dementia in the Context of the Cognitive Reserve Hypothesis: A Systematic Review with Meta-Analyses and Qualitative Analyses." "The Distribution of Lifetime Earnings Returns to College." "Education Policy Outlook 2015: Germany." "Education Policy Outlook 2015: Making Reforms Happen." "Making College Worth It: A Review of the Returns to Higher Education." "Priceless: The Nonpecuniary Benefits of Schooling." citation_publisher=Walter Verlag, ; Die deutsche Bildungskatastrophe: Analyse und Dokumentation "Zero Returns to Compulsory Schooling in Germany: Evidence and Interpretation." "Root-N-Consistent Semiparametric Regression." "Mental Retirement." "Mental Exercise and Mental Aging: Evaluating the Validity of the "Use It or Lose It" Hypothesis." "Does Education Improve Cognitive Performance Four Decades After School Completion?" "Compulsory Education and the Benefits of Schooling." "Cognitive Reserve in Ageing and Alzheimer's Disease." "Rate of Memory Decline in AD is Related to Education and Occupation: Cognitive Reserve?" "Independence, Monotonicity, and Latent Index Models: An Equivalence Result." "Development of Competencies across the Life Span." "18. Juli 1961 – Entscheidung zur Gründung der Ruhr-Universität Bochum." "The Assessment of Reading Speed in Adults and First-Year Students." 
Kamhöfer, Daniel A., Hendrik Schmitz, and Matthias Westphal (2019). "Heterogeneity in Marginal Non-Monetary Returns to Higher Education." Journal of the European Economic Association, 17(1).
Communications on Pure & Applied Analysis, February 2022, 21(2): 393-418. doi: 10.3934/cpaa.2021182

A convergent finite difference method for computing minimal Lagrangian graphs

Brittany Froese Hamfeldt and Jacob Lesniewski
Department of Mathematical Sciences, New Jersey Institute of Technology, University Heights, Newark, NJ 07102
* Corresponding Author
Received February 2021. Revised June 2021. Published February 2022. Early access November 2021.
Fund Project: The first author was partially supported by NSF DMS-1619807 and NSF DMS-1751996. The second author was partially supported by NSF DMS-1619807.

We consider the numerical construction of minimal Lagrangian graphs, which is related to recent applications in materials science, molecular engineering, and theoretical physics. It is known that this problem can be formulated as an additive eigenvalue problem for a fully nonlinear elliptic partial differential equation. We introduce and implement a two-step generalized finite difference method, which we prove converges to the solution of the eigenvalue problem. Numerical experiments validate this approach in a range of challenging settings. We further discuss the generalization of this new framework to Monge-Ampère type equations arising in optimal transport. This approach holds great promise for applications where the data does not naturally satisfy the mass balance condition, and for the design of numerical methods with improved stability properties.

Keywords: finite difference methods, minimal Lagrangian graphs, second boundary value problem, fully nonlinear elliptic equations, eigenvalue problems.
Mathematics Subject Classification: Primary: 65N06, 65N12, 65N25; Secondary: 35J15, 35J25, 35J60, 35J66.
Citation: Brittany Froese Hamfeldt, Jacob Lesniewski. A convergent finite difference method for computing minimal Lagrangian graphs. Communications on Pure & Applied Analysis, 2022, 21(2): 393-418. doi: 10.3934/cpaa.2021182

References
G. Barles and P. E. Souganidis, Convergence of approximation schemes for fully nonlinear second order equations, Asym. Anal., 4 (1991), 271-283.
P. W. Bates, G. W. Wei and S. Zhao, Minimal molecular surfaces and their applications, J. Comp. Chem., 29 (2008), 380-391.
J. D. Benamou, B. D. Froese and A. M. Oberman, Numerical solution of the optimal transportation problem using the Monge-Ampère equation, J. Comput. Phys., 260 (2014), 107-126. doi: 10.1016/j.jcp.2013.12.015.
J. D. Benamou, A. Oberman and B. Froese, Numerical solution of the second boundary value problem for the elliptic Monge-Ampère equation, Inst. Nation. Recherche Inform. Automat., 2012, 37 pp.
D. P. Bertsekas, Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003.
S. Brendle and M. Warren, A boundary value problem for minimal Lagrangian graphs, J. Differ. Geom., 84 (2010), 267-287.
S. C. Brenner, T. Gudi, M. Neilan and L. Y. Sung, C0 penalty methods for the fully nonlinear Monge-Ampère equation, Math. Comp., 80 (2011), 1979-1995. doi: 10.1090/S0025-5718-2011-02487-7.
C. Budd and J. Williams, Moving mesh generation using the parabolic Monge-Ampère equation, SIAM J. Sci. Comput., 31 (2009), 3438-3465. doi: 10.1137/080716773.
M. G. Crandall, H. Ishii and P. L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.), 27 (1992), 1-67. doi: 10.1090/S0273-0979-1992-00266-5.
E. J. Dean and R. Glowinski, Numerical methods for fully nonlinear elliptic equations of the Monge-Ampère type, Comput. Meth. Appl. Mech. Engrg., 195 (2006), 1344-1386. doi: 10.1016/j.cma.2005.05.023.
P. Delanoë, Classical solvability in dimension two of the second boundary-value problem associated with the Monge-Ampère operator, Ann. Inst. H. Poincaré Anal. Non Linéaire, 8 (1991), 443-457. doi: 10.1016/j.anihpc.2007.03.001.
B. Engquist and B. D. Froese, Application of the Wasserstein metric to seismic signals, Commun. Math. Sci., 12 (2014), 979-988. doi: 10.4310/CMS.2014.v12.n5.a7.
X. Feng and M. Neilan, Vanishing moment method and moment solutions for fully nonlinear second order partial differential equations, J. Sci. Comput., 38 (2009), 74-98. doi: 10.1007/s10915-008-9221-9.
B. D. Froese, A numerical method for the elliptic Monge-Ampère equation with transport boundary conditions, SIAM J. Sci. Comput., 34 (2012), A1432-A1459. doi: 10.1137/110822372.
B. D. Froese, Meshfree finite difference approximations for functions of the eigenvalues of the Hessian, Numer. Math., 138 (2018), 75-99. doi: 10.1007/s00211-017-0898-2.
S. Haker, L. Zhu, A. Tannenbaum and S. Angenent, Optimal mass transport for registration and warping, Int. J. Comp. Vis., 60 (2004), 225-240.
B. Hamfeldt, Convergent approximation of non-continuous surfaces of prescribed Gaussian curvature, Commun. Pure Appl. Anal., 17 (2018), 671-707. doi: 10.3934/cpaa.2018036.
B. Hamfeldt, Convergence framework for the second boundary value problem for the Monge-Ampère equation, SIAM J. Numer. Anal., 57 (2019), 945-971. doi: 10.1137/18M1201913.
B. F. Hamfeldt and T. Salvador, Higher-order adaptive finite difference methods for fully nonlinear elliptic equations, J. Sci. Comput., 75 (2018), 1282-1306. doi: 10.1007/s10915-017-0586-5.
R. Harvey and H. B. Lawson, Calibrated geometries, Acta Math., 148 (1982), 47-157. doi: 10.1007/BF02392726.
R. Jensen, The maximum principle for viscosity solutions of fully nonlinear second order partial differential equations, Arch. Rat. Mech. Anal., 101 (1988), 1-27. doi: 10.1007/BF00281780.
C. Y. Kao, S. Osher and J. Qian, Lax-Friedrichs sweeping scheme for static Hamilton-Jacobi equations, J. Comput. Phys., 196 (2004), 367-391. doi: 10.1016/j.jcp.2003.11.007.
R. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems (Classics in Applied Mathematics), SIAM, Philadelphia, PA, USA, 2007. doi: 10.1137/1.9780898717839.
Y. Lian and K. Zhang, Boundary Lipschitz regularity and the Hopf lemma for fully nonlinear elliptic equations, arXiv: 1812.11357.
A. Oberman, The convex envelope is the solution of a nonlinear obstacle problem, Proc. Amer. Math. Soc., 135 (2007), 1689-1694. doi: 10.1090/S0002-9939-07-08887-9.
A. M. Oberman, Convergent difference schemes for degenerate elliptic and parabolic equations: Hamilton-Jacobi equations and free boundary problems, SIAM J. Numer. Anal., 44 (2006), 879-895. doi: 10.1137/S0036142903435235.
A. M. Oberman, Wide stencil finite difference schemes for the elliptic Monge-Ampère equation and functions of the eigenvalues of the Hessian, Disc. Cont. Dynam. Syst. Ser. B, 10 (2008), 221-238. doi: 10.3934/dcdsb.2008.10.221.
C. R. Prins, R. Beltman, J. H. M. ten Thije Boonkkamp, W. L. IJzerman and T. W. Tukker, A least-squares method for optimal transport using the Monge-Ampère equation, SIAM J. Sci. Comp., 37 (2015), B937-B961. doi: 10.1137/140986414.
L. Qi and J. Sun, A nonsmooth version of Newton's method, Math. Program., 58 (1993), 353-367. doi: 10.1007/BF01581275.
J. Qian, Y. T. Zhang and H. K. Zhao, A fast sweeping method for static convex Hamilton-Jacobi equations, J. Sci. Comput., 31 (2007), 237-271. doi: 10.1007/s10915-006-9124-6.
K. Smoczyk and M. T. Wang, Mean curvature flows of Lagrangian submanifolds with convex potentials, J. Differ. Geom., 62 (2002), 243-257.
E. L. Thomas, D. M. Anderson, C. S. Henkee and D. Hoffman, Periodic area-minimizing surfaces in block copolymers, Nature, 334 (1988), 598.
R. P. Thomas and S. T. Yau, Special Lagrangians, stable bundles and mean curvature flow, Commun. Anal. Geom., 10 (2002), 1075-1113. doi: 10.4310/CAG.2002.v10.n5.a8.
J. Urbas, On the second boundary value problem for equations of Monge-Ampère type, J. Reine Angew. Math., 487 (1997), 115-124. doi: 10.1515/crll.1997.487.115.
H. Zhao, A fast sweeping method for eikonal equations, Math. Comput., 74 (2005), 603-627. doi: 10.1090/S0025-5718-04-01678-3.

Figure 1. Discrete solution to Poisson's equation when viewed as an eigenvalue problem
Figure 2. Examples of quadtree meshes. White squares are inside the domain, while gray squares intersect the boundary [19]
Figure 3. Potential neighbors are circled in gray. Examples of selected neighbors are circled in black [19]
Figure 4. Examples of neighbors $ x_1, x_2 $ needed to construct a monotone approximation of the directional derivative in the direction $ n $ at the boundary point $ x_0 $
Figure 5. Domain and computed target ellipse
Figure 6. Computed maps from a square $ X $ to various targets $ Y $
Figure 7. Circular domain $ X $ and square target $ Y $
Figure 8. Circular domain $ X $ and degenerate target $ Y $

Table 1. Error in mapping an ellipse to an ellipse
$ h $ | $ \|u^h - u_{\text{ex}}\|_\infty $ | Ratio | Observed Order
$ 2.625\times 10^{-1} $ | $ 1.304\times 10^{-1} $ | |
$ 1.313\times 10^{-1} $ | $ 5.703\times 10^{-2} $ | 2.287 | 1.194

Table 2. Error in mapping a circle to a line segment
$ h $ | $ \|u^h - u_{\text{ex}}\|_\infty $ | Ratio | Observed Order
$ 6.875\times 10^{-2} $ | $ 3.812\times 10^{-2} $ | 2.396 | 1.261
$ 8.59\times 10^{-3} $ | $ 4.636\times 10^{-3} $ | 2.333 | 1.222
This is a transcript of the talk that I gave to the RIOT science club on 1st October 2020. The video of the talk is on YouTube. The transcript was very kindly made by Chris F Carroll, but I have modified it a bit here to increase clarity. Links to the original talk appear throughout.

My title slide is a picture of UCL's front quad, taken on the day that it was the starting point for the second huge march that attempted to stop the Iraq war. That's a good example of the folly of believing things that aren't true.

"Today I speak to you of war. A war that has pitted statistician against statistician for nearly 100 years. A mathematical conflict that has recently come to the attention of the normal people and these normal people look on in fear, in horror, but mostly in confusion because they have no idea why we're fighting."

Kristin Lennox (Director of Statistical Consulting, Lawrence Livermore National Laboratory)

That sums up a lot of what's been going on. The problem is that there is near unanimity among statisticians that p values don't tell you what you need to know, but statisticians themselves haven't been able to agree on a better way of doing things.

This talk is about the probability that, if we claim to have made a discovery, we'll be wrong. This is what people very frequently want to know. And that is not the p value. You want to know the probability that you'll make a fool of yourself by claiming that an effect is real when in fact it's nothing but chance.

Just to be clear, what I'm talking about is how you interpret the results of a single unbiased experiment. Unbiased in the sense that the experiment is randomized, and all the assumptions made in the analysis are exactly true. Of course in real life false positives can arise in any number of other ways: faults in the randomization and blinding, incorrect assumptions in the analysis, multiple comparisons, p hacking and so on, and all of these things are going to make the risk of false positives even worse. So in a sense what I'm talking about is your minimum risk of making a false positive, even if everything else were perfect.

The conclusion of this talk will be: if you observe a p value close to 0.05 and conclude that you've discovered something, then the chance that you'll be wrong is not 5%, but is somewhere between 20% and 30%, depending on the exact assumptions you make. If the hypothesis was an implausible one to start with, the false positive risk will be much higher.

There's nothing new about this at all. This was written by a psychologist in 1966.

The major point of this paper is that the test of significance does not provide the information concerning phenomena characteristically attributed to it, and that a great deal of mischief has been associated with its use.

Bakan, D. (1966) Psychological Bulletin, 66(6), 423–437

Bakan went on to say this is already well known, but if so it's certainly not well known, even today, by many journal editors or indeed many users.

The p value

Let's start by defining the p value. An awful lot of people can't do this, but even if you can recite it, it's surprisingly difficult to interpret it. I'll consider it in the context of comparing two independent samples to make it a bit more concrete.
So the p value is defined thus:

If there were actually no effect (for example, if the true means of the two samples were equal, so the difference was zero) then the probability of observing a value for the difference between means which is equal to or greater than that actually observed is called the p value.

Now there are at least five things that are dodgy with that, when you think about it. It sounds very plausible, but it's not.

"If there were actually no effect …": first of all, this implies that the denominator for the probability is the number of cases in which there is no effect, and this is not known.

"… or greater than …": why on earth should we be interested in values that haven't been observed? We know what the effect size that was observed was, so why should we be interested in values that are greater than that, which haven't been observed?

It doesn't compare the hypothesis of no effect with anything else. This is put well by Sellke et al in 2001: "knowing that the data are rare when there is no true difference [that's what the p value tells you] is of little use unless one determines whether or not they are also rare when there is a true difference". In order to understand things properly, you've got to have not only the null hypothesis but also an alternative hypothesis.

Since the definition assumes that the null hypothesis is true, it's obvious that it can't tell us about the probability that the null hypothesis is true.

The definition invites users to make the error of the transposed conditional. That sounds a bit fancy, but it's very easy to say what it is. The probability that you have four legs given that you're a cow is high, but the probability that you're a cow given that you've got four legs is quite low: many animals that have four legs aren't cows.

Take a legal example. The probability of getting the evidence given that you're guilty may be known. (It often isn't, of course, but that's the sort of thing you can hope to get.) But it's not what you want. What you want is the probability that you're guilty given the evidence. The probability that you're Catholic given that you're the Pope is probably very high, but the probability that you're the Pope given that you're Catholic is very low.

So now to the nub of the matter. The probability of the observations given that the null hypothesis is true is the p value. But it's not what you want. What you want is the probability that the null hypothesis is true given the observations. The first statement is a deductive process; the second process is inductive, and that's where the problems lie. These probabilities can be hugely different, and transposing the conditional simply doesn't work.

The False Positive Risk

The false positive risk avoids these problems. Define the false positive risk as follows. If you declare a result to be "significant" based on a p value after doing a single unbiased experiment, the False Positive Risk is the probability that your result is in fact a false positive. That, I maintain, is what you need to know. The problem is that in order to get it, you need Bayes' theorem, and as soon as that's mentioned, contention immediately follows.

Suppose we call the null hypothesis H0, and the alternative hypothesis H1. For example, H0 can be that the true effect size is zero and H1 can be the hypothesis that there's a real effect, not just chance.
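For concreteness, this is the sort of single two-independent-sample comparison being discussed, written in R (the language of the scripts that accompany the papers mentioned later). The sample size, group means and standard deviation here are arbitrary values chosen only for illustration.

```r
set.seed(1)
n <- 16                                   # observations per group (arbitrary)
control <- rnorm(n, mean = 10, sd = 1)
treated <- rnorm(n, mean = 11, sd = 1)    # true effect of 1 standard deviation
# Student's t-test for two independent samples (equal variances assumed)
result <- t.test(treated, control, var.equal = TRUE)
result$p.value    # the p value: chance of a difference at least this big if there were no effect
result$estimate   # the two observed group means
```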
Bayes' theorem states that the odds on H1 being true, rather than H0, after you've done the experiment are equal to the likelihood ratio times the odds on there being a real effect before the experiment:

posterior odds of H1 = likelihood ratio × prior odds of H1

In general we would want a Bayes' factor here, rather than the likelihood ratio, but under my assumptions we can use the likelihood ratio, which is a much simpler thing [explanation here]. The likelihood ratio represents the evidence supplied by the experiment. It's what converts the prior odds to the posterior odds, in the language of Bayes' theorem. The likelihood ratio is a purely deductive quantity and therefore uncontentious. It's the probability of the observations if there's a real effect divided by the probability of the observations if there's no effect.

Notice a simplification you can make: if the prior odds equal 1, then the posterior odds are simply equal to the likelihood ratio. "Prior odds of 1" means that it's equally probable before the experiment that there was an effect or that there's no effect. Put another way, prior odds of 1 means that the prior probability of H0 and of H1 are equal: both are 0.5. That's probably the nearest you can get to declaring equipoise.

Comparison: Consider Screening Tests

I wrote a statistics textbook in 1971 [download it here] which by and large stood the test of time, but the one thing I got completely wrong was the limitations of p values. Like many other people I came to see my errors through thinking about screening tests. These are very much in the news at the moment because of the COVID-19 pandemic. The illustration of the problems they pose which follows is now quite commonplace.

Suppose you test 10,000 people, and that 1 in 100 of those people have the condition, e.g. Covid-19, and 99 in 100 don't have it. The prevalence in the population you're testing is 1 in 100. So you have 100 people with the condition and 9,900 who don't. If the specificity of the test is 95%, you get 5% false positives. This is very much like a null-hypothesis test of significance. But you can't get the answer without considering the alternative hypothesis, which null-hypothesis significance tests don't do.

So now add the upper arm to the figure above. You've got 1% (that's 100 people) who have the condition, so if the sensitivity of the test is 80% (that's like the power of a significance test) then the total number of positive tests is 80 plus 495, and the proportion of tests that are false is 495 false positives divided by the total number of positives, which is 86%. A test that gives 86% false positives is pretty disastrous. It is not 5%! Most people are quite surprised by that when they first come across it.

Now look at significance tests in a similar way

Now we can do something similar for significance tests (though the parallel is not exact, as I'll explain). Suppose we do 1,000 tests, and in 10% of them there's a real effect, and in 90% of them there is no effect. If the significance level, so-called, is 0.05 then we get 5% false positive tests, which is 45 false positives. But that's as far as you can go with a null-hypothesis significance test. You can't tell what's going on unless you consider the other arm. If the power is 80% then we get 80 true positive tests and 20 false negative tests, so the total number of positive tests is 80 plus 45, and the false positive risk is the number of false positives divided by the total number of positives, which is 36 percent. So the p value is not the false positive risk.
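The two tree-diagram calculations above take only a few lines of R to reproduce. The numbers (prevalence, specificity, sensitivity or power, and significance level) are the ones used in the talk.

```r
# Screening example: 10,000 people, prevalence 1%, specificity 95%, sensitivity 80%
n <- 10000; prevalence <- 0.01
false_pos <- 0.05 * n * (1 - prevalence)   # 495 false positives
true_pos  <- 0.80 * n * prevalence         # 80 true positives
false_pos / (false_pos + true_pos)         # 0.861: 86% of positive tests are false

# Significance-test analogue: 1,000 tests, 10% with a real effect,
# significance level 0.05, power 80%
n_tests <- 1000; prior <- 0.10; alpha <- 0.05; power <- 0.80
false_pos <- alpha * n_tests * (1 - prior)   # 45 false positives
true_pos  <- power * n_tests * prior         # 80 true positives
false_pos / (false_pos + true_pos)           # 0.36: the false positive risk is 36%
```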
And the type 1 error rate is not the false positive risk. The difference between them lies not in the numerator, it lies in the denominator. In the example above, of the 900 tests in which the null hypothesis was true, there were 45 false positives. So looking at it from the classical point of view, the type 1 error rate would turn out to be 45 over 900, which is 0.05, but that's not what you want. What you want is the total number of false positives, 45, divided by the total number of positives (45 + 80), which is 0.36.

The p value is NOT the probability that your results occurred by chance. The false positive risk is.

A complication: "p-equals" vs "p-less-than"

But now we have to come to a slightly subtle complication. It's been around since the 1930s and it was made very explicit by Dennis Lindley in the 1950s. Yet it is unknown to most people, which is very weird. The point is that there are two different ways in which we can calculate the likelihood ratio and therefore two different ways of getting the false positive risk. A lot of writers, including Ioannidis and Wacholder and many others, use the "p less than" approach. That's what that tree diagram gives you. But it is not what is appropriate for interpretation of a single experiment. It underestimates the false positive risk. What we need is the "p equals" approach, and I'll try and explain that now.

Suppose we do a test and we observe p = 0.047; then all we are interested in is how tests behave that come out with p = 0.047. We aren't interested in any other p value. That p value is now part of the data. The tree diagram approach we've just been through gave a false positive risk of only 6%, if you assume that the prevalence of true effects was 0.5 (prior odds of 1). 6% isn't much different from 5%, so it might seem okay.

But the tree diagram approach, although it is very simple, still asks the wrong question. It looks at all tests that give p ≤ 0.05, the "p-less-than" case. If we observe p = 0.047 then we should look only at tests that give p = 0.047, rather than looking at all tests which come out with p ≤ 0.05. If you're doing it with simulations, of course, as in my 2014 paper, then you can't expect any tests to give exactly 0.047; what you can do is look at all the tests that come out with p in a narrow band around there, say 0.045 ≤ p ≤ 0.05.

This approach gives a different answer from the tree diagram approach. If you look at only tests that give p values between 0.045 and 0.05, the false positive risk turns out to be not 6% but at least 26%. I say at least, because that assumes a prior probability of there being a real effect of 50:50. If only 10% of the experiments had a real effect (a prior of 0.1 in the tree diagram), this rises to 76% of false positives. That really is pretty disastrous. Now of course the problem is you don't know this prior probability.

The problem with Bayes' theorem is that there exists an infinite number of answers. Not everyone agrees with my approach, but it is one of the simplest.

The likelihood-ratio approach to comparing two hypotheses

The likelihood ratio, that is to say the relative probabilities of observing the data given two different hypotheses, is the natural way to compare two hypotheses. For example, in our case one hypothesis is the zero effect (that's the null hypothesis) and the other hypothesis is that there's a real effect of the observed size. That's the maximum likelihood estimate of the real effect size.
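The difference between the two approaches is easy to check by simulation, along the lines of the 2014 paper. The sketch below is my own minimal version rather than the published script: it simulates two-sample t-tests with a 50:50 chance of a real effect of one standard deviation (n = 16 per group), then compares the false positive risk among all tests with p ≤ 0.05 with that among tests falling in a narrow band just below 0.05.

```r
set.seed(1)
nsim <- 50000; n <- 16     # simulated tests per arm; observations per group
real_effect <- rep(c(TRUE, FALSE), each = nsim)   # prior probability 0.5
pvals <- vapply(real_effect, function(eff) {
  x <- rnorm(n, mean = 0, sd = 1)
  y <- rnorm(n, mean = if (eff) 1 else 0, sd = 1)  # effect size 1 SD when real
  t.test(x, y, var.equal = TRUE)$p.value
}, numeric(1))

# "p-less-than": all tests that reach p <= 0.05
sig <- pvals <= 0.05
mean(!real_effect[sig])            # about 0.06 with these settings

# "p-equals": only tests with p in a narrow band near 0.05
band <- pvals >= 0.045 & pvals <= 0.05
mean(!real_effect[band])           # about 0.26 with these settings
```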
Notice that we are not saying that the effect size is exactly zero, but rather we are asking whether a zero effect explains the observations better than a real effect. Now this amounts to putting a "lump" of probability on there being a zero effect. If you put a prior probability of 0.5 for there being a zero effect, you're saying the prior odds are 1. If you are willing to put a lump of probability on the null hypothesis, then there are several methods of doing that. They all give similar results to mine, within a factor of two or so.

Putting a lump of probability on there being a zero effect, for example a prior probability of 0.5 of there being zero effect, is regarded by some people as being over-sceptical (though others might regard 0.5 as high, given that most bright ideas are wrong). E.J. Wagenmakers summed it up in a tweet: "at least Bayesians attempt to find an approximate answer to the right question instead of struggling to interpret an exact answer to the wrong question [that's the p value]".

Some results

The 2014 paper used simulations, and that's a good way to see what's happening in particular cases. But to plot curves of the sort shown in the next three slides we need exact calculations of FPR, and how to do this was shown in the 2017 paper (see Appendix for details).

Comparison of p-equals and p-less-than approaches

The slide at 26:05 is designed to show the difference between the "p-equals" and the "p-less-than" cases. On each diagram the dashed red line is the "line of equality": that's where the points would lie if the p value were the same as the false positive risk. You can see that in every case the blue lines (the false positive risk) are greater than the p value. And for any given observed p value, the p-equals approach gives a bigger false positive risk than the p-less-than approach. For a prior probability of 0.5, the false positive risk is about 26% when you've observed p = 0.05. So from now on I shall use only the "p-equals" calculation, which is clearly what's relevant to a test of significance.

The false positive risk as a function of the observed p value for different sample sizes

Now another set of graphs (slide at 27:46), for the false positive risk as a function of the observed p value, but this time we'll vary the number in each sample. These are all for comparing two independent samples. The curves are red for n = 4; green for n = 8; blue for n = 16. The top row is for an implausible hypothesis with a prior of 0.1, the bottom row for a plausible hypothesis with a prior of 0.5. The left column shows arithmetic plots; the right column shows the same curves in log-log plots. The power these lines correspond to is:

n = 4 (red) has power 22%
n = 8 (green) has power 46%
n = 16 (blue) has power 78%

Now you can see these behave in a slightly curious way. For most of the range it's what you'd expect: n = 4 gives you a higher false positive risk than n = 8, and that is still higher than n = 16 (the blue line). The curves behave in an odd way around 0.05; they actually begin to cross, so the false positive risk for p values around 0.05 is not strongly dependent on sample size. But the important point is that in every case they're above the line of equality, so the false positive risk is much bigger than the p value in any circumstance.

False positive risk as a function of sample size (i.e. of power)

Now the really interesting one (slide at 29:34).
When I first did the simulation study I was challenged by the fact that the false positive risk actually becomes 1 if the experiment is a very powerful one. That seemed a bit odd. The plot here is the false positive risk FPR50, which I define as "the false positive risk for prior odds of 1, i.e. a 50:50 chance of being a real effect or not a real effect". Let's just concentrate on the p = 0.05 curve (blue). Notice that, because the number per sample is changing, the power changes throughout the curve. For example on the p = 0.05 curve for n = 4 (that's the lowest sample size plotted), power is 0.22, but if we go to the other end of the curve, n = 64 (the biggest sample size plotted), the power is 0.9999. That's something not achieved very often in practice.

But how is it that p = 0.05 can give you a false positive risk which approaches 100%? Even with p = 0.001 the false positive risk will eventually approach 100%, though it does so later and more slowly. In fact this has been known for donkey's years. It's called the Jeffreys-Lindley paradox, though there's nothing paradoxical about it. In fact it's exactly what you'd expect. If the power is 99.99% then you expect almost every p value to be very low. Everything is detected if we have a high power like that. So it would be very rare, with that very high power, to get a p value as big as 0.05. Almost every p value will be much less than 0.05, and that's why observing a p value as big as 0.05 would, in that case, provide strong evidence for the null hypothesis. Even p = 0.01 would provide strong evidence for the null hypothesis when the power is very high, because almost every p value would be much less than 0.01. This is a direct consequence of using the p-equals definition, which I think is what's relevant for testing hypotheses. So the Jeffreys-Lindley phenomenon makes absolute sense.

In contrast, if you use the p-less-than approach, the false positive risk would decrease continuously with the observed p value. That's why, if you have a big enough sample (high enough power), even the smallest effect becomes "statistically significant", despite the fact that the odds may strongly favour the null hypothesis. [Here, 'the odds' means the likelihood ratio calculated by the p-equals method.]

A real-life example

Now let's consider an actual practical example. The slide shows a study of transcranial electromagnetic stimulation published in Science magazine (so a bit suspect to begin with). The study concluded (among other things) that an improved associative memory performance was produced by transcranial electromagnetic stimulation, p = 0.043. In order to find out how big the sample sizes were I had to dig right into the supplementary material. It was only 8. Nonetheless let's assume that they had adequate power and see what we make of it. In fact it wasn't done in a proper parallel-group way; it was done as 'before and after' the stimulation, and sham stimulation, and it produced one lousy asterisk. In fact most of the paper was about functional magnetic resonance imaging; memory was mentioned only as a subsection of Figure 1, but this is what was tweeted out because it sounds more dramatic than other things, and it got a vast number of retweets.

Now according to my calculations p = 0.043 means there's at least an 18% chance that it's a false positive. How better might we express the result of this experiment?
We should say, conventionally, that the increase in memory performance was 1.88 ± 0.85 (SEM) with confidence interval 0.055 to 3.7 (extra words recalled on a baseline of about 10). Thus p = 0.043. But then supplement this conventional statement with:

This implies a false positive risk, FPR50 (i.e. the probability that the results occurred by chance only), of at least 18%, so the result is no more than suggestive.

There are several other ways you can put the same idea. I don't like them as much because they all suggest that it would be helpful to create a new magic threshold at FPR50 = 0.05, and that's as undesirable as defining a magic threshold at p = 0.05.

For example you could say that the increase in performance gave p = 0.043, and in order to reduce the false positive risk to 0.05 it would be necessary to assume that the prior probability of there being a real effect was 81%. In other words, you'd have to be almost certain that there was a real effect before you did the experiment in order for that result to be convincing. Since there's no independent evidence that that's true, the result is no more than suggestive.

Or you could put it this way: the increase in performance gave p = 0.043. In order to reduce the false positive risk to 0.05 it would have been necessary to observe p = 0.0043, so the result is no more than suggestive.

The reason I now prefer the first of these possibilities is because the other two involve an implicit threshold of 0.05 for the false positive risk, and that's just as daft as assuming a threshold of 0.05 for the p value.

The web calculator

Scripts in R are provided with all my papers. For those who can't master R Studio, you can do many of the calculations very easily with our web calculator [for latest links please go to http://www.onemol.org.uk/?page_id=456]. There are three options: if you want to calculate the false positive risk for a specified p value and prior, you enter the observed p value (e.g. 0.049), the prior probability that there's a real effect (e.g. 0.5), the normalized effect size (e.g. 1 standard deviation) and the number in each sample. All the numbers cited here are based on an effect size of 1 standard deviation, but you can enter any value in the calculator. The output panel updates itself automatically. We see that the false positive risk for the p-equals case is 0.26 and the likelihood ratio is 2.8 (I'll come back to that in a minute).

Using the web calculator, or using the R programs which are provided with the papers, this sort of table can be very quickly calculated. The top row shows the results if we observe p = 0.05. The prior probability that you need to postulate to get a 5% false positive risk would be 87%. You'd have to be almost ninety percent sure there was a real effect before the experiment in order to get a 5% false positive risk. The likelihood ratio comes out to be about 3; what that means is that your observations will be about 3 times more likely if there was a real effect than if there was no effect. 3:1 is very low odds compared with the 19:1 odds which you might incorrectly infer from p = 0.05.

The false positive risk for a prior of 0.5 (the default value), which I call the FPR50, would be 27% when you observe p = 0.05. In fact these are just directly related to each other. Since the likelihood ratio is a purely deductive quantity, we can regard FPR50 as just being a transformation of the likelihood ratio and regard this as also a purely deductive quantity. For example, 1 / (1 + 2.8) = 0.263, the FPR50.
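The conversion between a likelihood ratio, a prior and a false positive risk is just Bayes' theorem in its odds form, so the entries in this sort of table are easy to reproduce. A minimal helper function (mine, not the web calculator's code):

```r
# Posterior odds of a real effect = likelihood ratio x prior odds;
# the false positive risk is the posterior probability that the null is true.
fpr_from_lr <- function(likelihood_ratio, prior = 0.5) {
  prior_odds <- prior / (1 - prior)
  posterior_odds <- likelihood_ratio * prior_odds
  1 / (1 + posterior_odds)
}

fpr_from_lr(2.8, prior = 0.5)    # ~0.26: FPR50 when p = 0.05 is observed
fpr_from_lr(2.8, prior = 0.1)    # ~0.76: same p value, implausible hypothesis
fpr_from_lr(100, prior = 0.5)    # ~0.01: FPR50 when p = 0.001 is observed
fpr_from_lr(100, prior = 0.1)    # ~0.08: p = 0.001, implausible hypothesis
```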
But in order to interpret it as a posterior probability, you do have to go into Bayes' theorem. If the prior probability of a real effect was only 0.1, then that would correspond to a 76% false positive risk when you've observed p = 0.05.

If we go to the other extreme, when we observe p = 0.001 (bottom row of the table) the likelihood ratio is 100 (notice: not 1000, but 100) and the false positive risk, FPR50, would be 1%. That sounds okay, but if it was an implausible hypothesis with only a 10% prior chance of being true (last column of the table), then the false positive risk would be 8% even when you observe p = 0.001: even in that case it would still be above 5%. In fact, to get the FPR down to 0.05 you'd have to observe p = 0.00043, and that's good food for thought.

So what do you do to prevent making a fool of yourself?

Never use the words significant or non-significant, and don't use those pesky asterisks please; it makes no sense to have a magic cut-off. Just give a p value. Don't use bar graphs. Show the data as a series of dots. Always remember, it's a fundamental assumption of all significance tests that the treatments are randomized. When this isn't the case, you can still calculate a test but you can't expect an accurate result. This is well illustrated by thinking about randomisation tests.

So I think you should still state the p value and an estimate of the effect size with confidence intervals, but be aware that this tells you nothing very direct about the false positive risk. The p value should be accompanied by an indication of the likely false positive risk. It won't be exact, but it doesn't really need to be; it does answer the right question. You can, for example, specify the FPR50, the false positive risk based on a prior probability of 0.5. That's really just a more comprehensible way of specifying the likelihood ratio. You can use other methods, but they all involve an implicit threshold of 0.05 for the false positive risk. That isn't desirable.

So p = 0.04 doesn't mean you discovered something, it means it might be worth another look. In fact even p = 0.005 can under some circumstances be more compatible with the null hypothesis than with there being a real effect. We must conclude, however reluctantly, that Ronald Fisher didn't get it right. Matthews (1998) said,

"the plain fact is that 70 years ago Ronald Fisher gave scientists a mathematical machine for turning boloney into breakthroughs and flukes into funding".

Robert Matthews, Sunday Telegraph, 13 September 1998.

But it's not quite fair to blame R. A. Fisher, because he himself described the 5% point as "quite a low standard of significance".

Q: There are lots of competing ideas about how best to deal with the issue of statistical testing. For the non-statistician it is very hard to evaluate them and decide on what is the best approach. Is there any empirical evidence about what works best in practice? For example, training people to do analysis in different ways, and then getting them to analyze data with known characteristics. If not, why not? It feels like we wouldn't rely so heavily on theory in e.g. drug development, so why do we in stats?

A: The gist: why do we rely on theory in statistics? Well, we might as well say, why do we rely on theory in mathematics? That's what it is! You have concrete theories and concrete postulates, which you don't have in drug testing; that's just empirical.
Q: Is there any empirical evidence about what works best in practice? For example, training people to do analysis in different ways and then getting them to analyze data with known characteristics? And if not, why not?

A: Why not: because you never actually know, unless you're doing simulations, what the answer should be. So no, it's not known which works best in practice. That being said, simulation is a great way to test out ideas. My 2014 paper used simulation, and it was only in the 2017 paper that the maths behind the 2014 results was worked out. I think you can rely on the fact that a lot of the alternative methods give similar answers. That's why I felt justified in using rather simple assumptions for mine, because they're easier to understand and the answers you get don't differ greatly from much more complicated methods. In my 2019 paper there's a comparison of three different methods, all of which assume that it's reasonable to test a point (or small interval) null-hypothesis (one that says that the treatment effect is exactly zero), but given that assumption, all the alternative methods give similar answers within a factor of two or so. A factor of two is all you need: it doesn't matter if it's 26% or 52% or 13%, the conclusions in real life are much the same. So I think you might as well use a simple method. There is an even simpler one than mine, proposed by Sellke et al. (2001), which gives a very simple calculation from the p value alone and yields a false positive risk of 29 percent when you observe p = 0.05. My method gives 26%, so there's no essential difference between them. It doesn't matter which you use really.

Q: The last question gave an example of training people, so maybe he was touching on how we teach people to analyze their data and interpret it accurately. Reporting effect sizes and confidence intervals alongside p values has been shown to improve interpretation in teaching contexts. I wonder whether in your own experience you have found that this helps as well? Or can you suggest any ways to help educators, teachers and lecturers to train the next generation of researchers properly?

A: Yes, I think you should always report the observed effect size and confidence limits for it. But be aware that confidence intervals tell you exactly the same thing as p values, and therefore they too are very suspect. There's a simple one-to-one correspondence between p values and confidence limits. So if you use the criterion "the confidence limits exclude zero difference" to judge whether there's a real effect, you're making exactly the same mistake as if you use p ≤ 0.05 to make the judgment. So they should be given for sure, because they're sort of familiar, but you do need, separately, some sort of a rough estimate of the false positive risk too.
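That one-to-one correspondence is easy to demonstrate. A minimal sketch (my own illustration, with an arbitrary effect size and group size):

```r
# For a two-sided t test, the 95% confidence interval excludes zero
# exactly when p < 0.05: the two criteria are the same judgement.
set.seed(3)
agree <- replicate(1000, {
  tt <- t.test(rnorm(16, mean = 0.5), rnorm(16))   # two groups of 16
  (tt$p.value < 0.05) == (tt$conf.int[1] > 0 || tt$conf.int[2] < 0)
})
all(agree)   # TRUE: the CI criterion and the p < 0.05 criterion always agree
```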
Q: I'm struggling a bit with the "p equals" intuition. How do you decide the band around 0.047 to use for the simulations? Presumably the results are very sensitive to this band. If you are using an exact p value in a calculation rather than a simulation, the probability of exactly that p value to many decimal places will presumably become infinitely small. Any clarification would be appreciated.

A: Yes, that's not too difficult to deal with: you've got to use a band which is wide enough to get a decent number in. But the result is not at all sensitive to that: if you make it wider, you'll get larger numbers in both numerator and denominator, so the result will be much the same. In fact, that's only a problem if you do it by simulation; if you do it by exact calculation it's easier. Doing 100,000 or a million t-tests in simulation with my R script doesn't take long. But the answer doesn't depend at all critically on the width of the interval, and in any case it's not necessary to do simulations: you can do the exact calculation.

Q: Even if an exact calculation can't be done (it probably can), you can get a better and better approximation by doing more simulations and using narrower and narrower bands around 0.047?

A: Yes, the larger the number of simulated tests that you do, the more accurate the answer. I did check it with a million occasionally. But once you've done the maths you can get exact answers much faster. The slide at 53:17 shows how you do the exact calculation.

• The Student's t value is along the bottom, and probability density is at the side.
• The blue line is the distribution you get under the null-hypothesis, with a mean of 0 and a standard deviation of 1 in this case.
• The red areas are the rejection areas for a t-test.
• The green curve is the t distribution for the alternative hypothesis (a non-central t-distribution, which is what you need in this case).
• The yellow area is the power of the test, which here is 78%.
• The orange area is (1 - power), so it's 22%.

The p-less-than calculation considers all values in the red area or in the yellow area as being positives. The p-equals calculation uses not the areas but the ordinates, the probability densities. The probability (density) of getting a t value of 2.04 under the null hypothesis is y0 = 0.053, and the probability (density) under the alternative hypothesis is y1 = 0.29. It's true that the probability of getting t = 2.04 exactly is infinitesimally small (the area of an infinitesimally narrow band around t = 2.04), but the ratio of the two infinitesimally small probabilities is perfectly well-defined. So for the p-equals approach, the likelihood ratio in favour of the alternative hypothesis would be L10 = y1 / (2 y0) (the factor of 2 arises because of the two red tails), and that gives you a likelihood ratio of 2.8. That corresponds to an FPR50 of 26%, as we explained, and it's exactly what you get from simulation. I hope that was reasonably clear. It may not have been if you aren't familiar with looking at those sorts of things.

Q: To calculate FPR50 (the false positive risk for a 50:50 prior) I need to assume an effect size. Which one do you use in the calculator? Would it make sense to calculate FPR50 for a range of effect sizes?

A: Yes, if you use the web calculator or the R scripts then you need to specify what the normalized effect size is. You can use your observed one. If you're trying to interpret real data, you've got an estimated effect size and you can use that. For example, when you've observed p = 0.05 that corresponds to a likelihood ratio of 2.8 when you use the true effect size (which is known when you do simulations). All you've got is the observed effect size, so they're not the same, of course. But you can easily show with simulations that if you use the observed effect size in place of the true effect size (which you don't generally know), then that likelihood ratio goes up from about 2.8 to 3.6; it's around 3 either way. You can plug your observed normalised effect size into the calculator and you won't be led far astray. This is shown in section 5 of the 2017 paper (especially section 5.1).
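The exact calculation just described is easy to reproduce. Here is a minimal sketch in R (my own code, not the scripts distributed with the papers). It assumes, as in the talk, a two-sample t test with a true effect size of 1 standard deviation; the group size of n = 16 isn't stated explicitly above, but it is the value that gives the 78% power shown on the slide:

```r
# Exact "p-equals" false positive risk for a two-sample t test.
fpr_exact <- function(p, prior = 0.5, effect_sd = 1, n = 16) {
  df   <- 2 * (n - 1)
  tval <- qt(1 - p / 2, df)         # observed t value for a two-sided p
  ncp  <- effect_sd * sqrt(n / 2)   # non-centrality parameter under H1
  y0   <- dt(tval, df)              # density under the null at that t
  y1   <- dt(tval, df, ncp)         # density under the alternative
  L10  <- y1 / (2 * y0)             # likelihood ratio (two tails under H0)
  prior_odds <- prior / (1 - prior)
  list(likelihood_ratio = L10, FPR = 1 / (1 + prior_odds * L10))
}

fpr_exact(0.05)                  # L10 close to 2.8, FPR50 close to 0.26
fpr_exact(0.05,  prior = 0.1)    # FPR close to 0.76
fpr_exact(0.001, prior = 0.1)    # FPR close to 0.08

# The inverse problem: the p value needed for a 5% false positive risk
# when the prior is only 0.1 (should come out at roughly p = 0.0004).
uniroot(function(p) fpr_exact(p, prior = 0.1)$FPR - 0.05,
        interval = c(1e-6, 0.05))$root
```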
Q: Consider hypothesis H1 versus H2: which is the interpretation to go with?

A: Well, I'm still not quite clear what the two interpretations the questioner is alluding to are, but I wouldn't rely on the p value. The most natural way to compare two hypotheses is to calculate the likelihood ratio. You can do a full Bayesian analysis. Some forms of Bayesian analysis can give results that are quite similar to the p values, but that can't possibly be generally true because the two are defined differently. Stephen Senn produced an example where there was essentially no problem with the p value, but that was for a one-sided test with a fairly bizarre prior distribution. In general in Bayes, you specify a prior distribution of effect sizes: what you believe before the experiment. Now, unless you have empirical data for what that distribution is, which is very rare indeed, I just can't see the justification for that. It's bad enough making up the probability that there's a real effect compared with there being no real effect; to make up a whole distribution just seems to be a bit like fantasy. Mine is simpler because, by considering a point null-hypothesis and a point alternative hypothesis, what in general would be called Bayes' factors become likelihood ratios. Likelihood ratios are much easier to understand than Bayes' factors because they just give you the relative probability of observing your data under two different hypotheses. This is a special case of Bayes' theorem. But as I mentioned, any approach to Bayes' theorem which assumes a point null hypothesis gives pretty similar answers, so it doesn't really matter which you use.

There was an edition of the American Statistician last year which had 44 different contributions about "the world beyond p = 0.05". I found it a pretty disappointing edition because there was no agreement among people, and a lot of people didn't get around to making any recommendation. They said what was wrong, but didn't say what you should do in response. The one paper that I did like was the one by Benjamin & Berger. They recommended their false positive risk estimate (as I would call it; they called it something different, but that's what it amounts to), and that's even simpler to calculate than mine. It's a little more pessimistic (it can give a bigger false positive risk for a given p value), but apart from that detail their recommendations are much the same as mine. It doesn't really matter which you choose.

Q: If people want a procedure that does not too often lead them to draw wrong conclusions, is it fine if they use a p value?

A: No, that maximises your wrong conclusions, among the available methods! The whole point is that the false positive risk is a lot bigger than the p value under almost all circumstances. Some people refer to this as the p value exaggerating the evidence; but it only does so if you incorrectly interpret the p value as being the probability that you're wrong. It certainly is not that.

Q: Your thoughts on practical alternatives to p values? There are lots of recommendations, most notably the Nature piece published last year (with something like 400 signatories) that said we should retire the p value. Their alternative was to just report effect sizes and confidence intervals. Now you've said you're not against anything that should be standard practice, but I wonder whether this alternative, to retire the p value, is actually useful?

A: I don't think the 400-author piece in Nature recommended ditching p values at all. It recommended ditching the 0.05 threshold, and just stating a p value.
That would mean abandoning the term "statistically significant", which is so shockingly misleading for the reasons I've been talking about. But it didn't say that you shouldn't give p values, and I don't think it really recommended an alternative. I would be against not giving p values, because it's the p value which enables you to calculate the equivalent false positive risk, and that would be much harder work if people didn't give the p value.

If you use the false positive risk, you'll inevitably get a larger false negative rate. So, if you're using it to make a decision, other things come into it than the false positive risk and the p value: namely, the cost of missing an effect which is real (a false negative), and the cost of getting a false positive. They both matter. If you can estimate the costs associated with either of them, then you can draw some sort of optimal conclusion. Certainly the costs of getting false positives are rather low for most people. In fact, there may be a great advantage to your career in publishing a lot of false positives, unfortunately. This is the problem that the RIOT science club is dealing with, I guess.

Q: What about changing the alpha level? Tinkering with the alpha level has been popular in the light of the replication crisis, to make the test even more difficult to pass when testing your hypothesis. Some people have said that 0.005 should be the threshold.

A: Daniel Benjamin said that, along with a lot of other authors. I wrote to them about it and they said that they didn't really think it was very satisfactory, but that it would be better than the present practice. They regarded it as a sort of interim thing. It's true that you would have fewer false positives if you did that, but it's a very crude way of treating the false positive risk problem. I would much prefer to make a direct estimate, even though it's rough, of the false positive risk rather than just crudely reducing the threshold to p = 0.005. I do have a long paragraph in one of the papers discussing this particular point (towards the end of the Conclusions in the 2017 paper). If you were willing to assume a 50:50 prior chance of there being a real effect, then p = 0.005 would correspond to FPR50 = 0.034, which sounds satisfactory (from the Table above, or the web calculator). But if, for example, you are testing a hypothesis about teleportation or mind-reading or homeopathy, then you probably wouldn't be willing to give a prior of 50% to that being right before the experiment. If the prior probability of there being a real effect were 0.1, rather than 0.5, the Table above shows that observation of p = 0.005 would suggest, in my example, FPR = 0.24, and a 24% risk of a false positive would still be disastrous. In this case you would have to have observed p = 0.00043 in order to reduce the false positive risk to 0.05. So no fixed p value threshold will cope adequately with every problem.
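As a footnote to the Sellke et al. (2001) and Benjamin & Berger methods mentioned in the answers above, here is a minimal sketch of that calibration. It needs nothing but the observed p value; the function name is mine, and the formula is the familiar -e·p·ln(p) bound on the Bayes factor, converted to a false positive risk for a 50:50 prior:

```r
# Sellke-Bayarri-Berger calibration: a lower bound on the Bayes factor in
# favour of the null is B0 = -e * p * log(p), valid for p < 1/e. With a
# 50:50 prior the corresponding false positive risk is B0 / (1 + B0).
fpr_sellke <- function(p) {
  stopifnot(p < exp(-1))
  b0 <- -exp(1) * p * log(p)
  b0 / (1 + b0)
}

fpr_sellke(0.05)    # about 0.29, the figure quoted above
fpr_sellke(0.005)   # about 0.067
```

As noted above, this is a little more pessimistic than the p-equals calculation (29% rather than 26% at p = 0.05), but the practical conclusions are the same.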
For up-to-date links to the web calculator, and to the papers, start at http://www.onemol.org.uk/?page_id=456

Colquhoun (2014) An investigation of the false discovery rate and the misinterpretation of p-values. https://royalsocietypublishing.org/doi/full/10.1098/rsos.140216

Colquhoun (2017) The reproducibility of research and the misinterpretation of p-values. https://royalsocietypublishing.org/doi/10.1098/rsos.171085

Colquhoun (2019) The False Positive Risk: A Proposal Concerning What to Do About p-Values. https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1529622

Benjamin, D. J. and Berger, J. O. (2019) Three Recommendations for Improving the Use of p-Values. The American Statistician, 73 (sup1), 186–191.

Sellke, T., Bayarri, M. J., and Berger, J. O. (2001) Calibration of p Values for Testing Precise Null Hypotheses. The American Statistician, 55, 62–71. DOI: 10.1198/000313001300339950

https://twitter.com/david_colquhoun
https://twitter.com/riotscienceclub

Tagged P values, seminars, statistics

This piece is almost identical with today's Spectator Health article.

This week there has been enormously wide coverage in the press for one of the worst papers on acupuncture that I've come across. As so often, the paper showed the opposite of what its title and press release claimed. (For another stunning example of this sleight of hand, try Acupuncturists show that acupuncture doesn't work, but conclude the opposite: journal fails, about a paper published in the British Journal of General Practice.)

Presumably the wide coverage was a result of the hyped-up press release issued by the journal, BMJ Acupuncture in Medicine. That is not the British Medical Journal, of course, but it is, bafflingly, published by the BMJ Press group, and if you subscribe to press releases from the real BMJ you also get them from Acupuncture in Medicine. The BMJ group should not be mixing up press releases about real medicine with press releases about quackery. There seems to be something about quackery that's clickbait for the mainstream media.

As so often, the press release was shockingly misleading. It said:

"Acupuncture may alleviate babies' excessive crying"

"Needling twice weekly for 2 weeks reduced crying time significantly"

This is totally untrue. Here's why. Luckily the Science Media Centre was on the case quickly: read their assessment.

The paper made the most elementary of all statistical mistakes: it failed to make allowance for the jelly bean problem. The paper lists 24 different tests of statistical significance and focusses attention on three that happen to give a P value (just) less than 0.05, and so were declared to be "statistically significant". If you do enough tests, some are bound to come out "statistically significant" by chance. They are false positives, and the conclusions are as meaningless as "green jelly beans cause acne" in the cartoon. This is called P-hacking and it's a well known cause of problems. It was evidently beyond the wit of the referees to notice this naive mistake. It's very doubtful whether there is anything happening but random variability.

And that's before you even get to the problem of the weakness of the evidence provided by P values close to 0.05. There's at least a 30% chance of such values being false positives, even if it were not for the jelly bean problem, and a lot more than 30% if the hypothesis being tested is implausible. I leave it to the reader to assess the plausibility of the hypothesis that a good way to stop a baby crying is to stick needles into the poor baby.
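To see how easily the jelly bean problem produces results like this, here is a minimal sketch (my own illustrative simulation, with arbitrary group sizes; it is not a re-analysis of the paper). If 24 independent outcomes are tested at the 5% level when no real effects exist at all, the chance that at least one comes out "statistically significant" is about 71%:

```r
# Testing 24 independent outcomes at the 5% level when every null is true.
n_tests <- 24
1 - 0.95^n_tests   # probability of at least one false positive: about 0.71
n_tests * 0.05     # expected number of false positives: 1.2

# The same thing by simulation: two groups of 15, no real difference,
# 24 outcomes per "study", repeated 2000 times.
set.seed(42)
n_sig <- replicate(2000, {
  p <- replicate(n_tests, t.test(rnorm(15), rnorm(15))$p.value)
  sum(p < 0.05)
})
mean(n_sig >= 1)   # close to 0.71
mean(n_sig)        # close to 1.2
```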
If you want to know more about P values try Youtube or here, or here.

One of the people asked for an opinion on the paper was George Lewith, the well-known apologist for all things quackish. He described the work as being a "good sized fastidious well conducted study ….. The outcome is clear". Thus showing an ignorance of statistics that would shame an undergraduate.

On the Today Programme, I was interviewed by the formidable John Humphrys, along with the mandatory member of the flat-earth society whom the BBC seems to feel obliged to invite along for "balance". In this case it was a professional acupuncturist, Mike Cummings, who is an associate editor of the journal in which the paper appeared. Perhaps he'd read the Science Media Centre's assessment before he came on, because he said, quite rightly, that "in technical terms the study is negative" and that "the primary outcome did not turn out to be statistically significant", to which Humphrys retorted, reasonably enough, "So it doesn't work". Cummings' response to this was a lot of bluster about how unfair it was for NICE to expect a treatment to perform better than placebo. It was fascinating to hear Cummings admit that the press release by his own journal was simply wrong.

Listen to the interview here

Another obvious flaw of the study is the nature of the control group. It is not stated very clearly, but it seems that the baby was left alone with the acupuncturist for 10 minutes. A far better control would have been to have the baby cuddled by its mother, or by a nurse. That's what was used by Olafsdottir et al. (2001) in a study that showed cuddling worked just as well as another form of quackery, chiropractic, to stop babies crying.

Manufactured doubt is a potent weapon of the alternative medicine industry. It's the same tactic as was used by the tobacco industry. You scrape together a few lousy papers like this one and use them to pretend that there's a controversy. For years the tobacco industry used this tactic to try to persuade people that cigarettes didn't give you cancer, and that nicotine wasn't addictive. The mainstream media obligingly invite the representatives of the industry, who convey to the reader or listener that there is a controversy when there isn't.

Acupuncture is no longer controversial. It just doesn't work: see Acupuncture is a theatrical placebo: the end of a myth. Try to imagine a pill that had been subjected to well over 3000 trials without anyone producing convincing evidence for a clinically useful effect. It would have been abandoned years ago. But by manufacturing doubt, the acupuncture industry has managed to keep its product in the news. Every paper on the subject ends with the words "more research is needed". No it isn't. Acupuncture is a pre-scientific idea that was moribund everywhere, even in China, until it was revived by Mao Zedong as part of the appalling Great Proletarian Cultural Revolution. Now it is big business in China, and 100 percent of the clinical trials that come from China are positive. If you believe them, you'll truly believe anything.

Soon after the Today programme in which we both appeared, the acupuncturist, Mike Cummings, posted his reaction to the programme. I thought it worth posting the original version in full. Its petulance and abusiveness are quite remarkable. I thank Cummings for giving publicity to the video of our appearance, and for referring to my Wikipedia page. I leave it to the reader to judge my competence, and his, in the statistics of clinical trials.
And it's odd to be described as a "professional blogger" when the 400+ posts on dcscience.net don't make a penny; in fact they cost me money. In contrast, he is the salaried medical director of the British Medical Acupuncture Society. It's very clear that he has no understanding of the error of the transposed conditional, nor even the multiple comparison problem (and neither, it seems, does he know the meaning of the word 'protagonist').

I ignored his piece, but several friends complained to the BMJ for allowing such abusive material on their blog site. As a result a few changes were made. The "baying mob" is still there, but the Wikipedia link has gone. I thought that readers might be interested to read the original unexpurgated version. It shows, better than I ever could, the weakness of the arguments of the alternative medicine community. To quote Upton Sinclair: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it." It also shows that the BBC still hasn't learned the lessons in Steve Jones' excellent "Review of impartiality and accuracy of the BBC's coverage of science". Every time I appear in such a programme, they feel obliged to invite a member of the flat earth society to propagate their make-believe.

Acupuncture for infantile colic – misdirection in the media or over-reaction from a sceptic blogger?
26 Jan, 17 | by Dr Mike Cummings

So there has been a big response to this paper press released by BMJ on behalf of the journal Acupuncture in Medicine. The response has been influenced by the usual characters – retired professors who are professional bloggers and vocal critics of anything in the realm of complementary medicine. They thrive on oiling up and flexing their EBM muscles for a baying mob of fellow sceptics (see my 'stereotypical mental image' here). Their target in this instant is a relatively small trial on acupuncture for infantile colic.[1] Deserving of being press released by virtue of being the largest to date in the field, but by no means because it gave a definitive answer to the question of the efficacy of acupuncture in the condition. We need to wait for an SR where the data from the 4 trials to date can be combined.

On this occasion I had the pleasure of joining a short segment on the Today programme on BBC Radio 4 led by John Humphreys. My protagonist was the ever-amusing David Colquhoun (DC), who spent his short air-time complaining that the journal was even allowed to be published in the first place. You can learn all about DC care of Wikipedia – he seems to have a surprisingly long write up for someone whose profession career was devoted to single ion channels, perhaps because a significant section of the page is devoted to his activities as a quack-busting blogger. So why would BBC Radio 4 invite a retired basic scientist and professional sceptic blogger to be interviewed alongside one of the journal editors – a clinician with expertise in acupuncture (WMA)? At no point was it made manifest that only one of the two had ever been in a position to try to help parents with a baby that they think cries excessively. Of course there are a lot of potential causes of excessive crying, but I am sure DC would agree that it is unlikely to be attributable to a single ion channel.

So what about the research itself? I have already said that the trial was not definitive, but it was not a bad trial. It suffered from under-recruiting, which meant that it was underpowered in terms of the statistical analysis.
But it was prospectively registered, had ethical approval and the protocol was published. Primary and secondary outcomes were clearly defined, and the only change from the published protocol was to combine the two acupuncture groups in an attempt to improve the statistical power because of under recruitment. The fact that this decision was made after the trial had begun means that the results would have to be considered speculative. For this reason the editors of Acupuncture in Medicine insisted on alteration of the language in which the conclusions were framed to reflect this level of uncertainty. DC has focussed on multiple statistical testing and p values. These are important considerations, and we could have insisted on more clarity in the paper. P values are a guide and the 0.05 level commonly adopted must be interpreted appropriately in the circumstances. In this paper there are no definitive conclusions, so the p values recorded are there to guide future hypothesis generation and trial design. There were over 50 p values reported in this paper, so by chance alone you must expect some to be below 0.05. If one is to claim statistical significance of an outcome at the 0.05 level, ie a 1:20 likelihood of the event happening by chance alone, you can only perform the test once. If you perform the test twice you must reduce the p value to 0.025 if you want to claim statistical significance of one or other of the tests. So now we must come to the predefined outcomes. They were clearly stated, and the results of these are the only ones relevant to the conclusions of the paper. The primary outcome was the relative reduction in total crying time (TC) at 2 weeks. There were two significance tests at this point for relative TC. For a statistically significant result, the p values would need to be less than or equal to 0.025 – neither was this low, hence my comment on the Radio 4 Today programme that this was technically a negative trial (more correctly 'not a positive trial' – it failed to disprove the null hypothesis ie that the samples were drawn from the same population and the acupuncture intervention did not change the population treated). Finally to the secondary outcome – this was the number of infants in each group who continued to fulfil the criteria for colic at the end of each intervention week. There were four tests of significance so we need to divide 0.05 by 4 to maintain the 1:20 chance of a random event ie only draw conclusions regarding statistical significance if any of the tests resulted in a p value at or below 0.0125. Two of the 4 tests were below this figure, so we say that the result is unlikely to have been chance alone in this case. With hindsight it might have been good to include this explanation in the paper itself, but as editors we must constantly balance how much we push authors to adjust their papers, and in this case the editor focussed on reducing the conclusions to being speculative rather than definitive. A significant result in a secondary outcome leads to a speculative conclusion that acupuncture 'may' be an effective treatment option… but further research will be needed etc… Now a final word on the 3000 plus acupuncture trials that DC loves to mention. His point is that there is no consistent evidence for acupuncture after over 3000 RCTs, so it clearly doesn't work. 
He first quoted this figure in an editorial after discussing the largest, most statistically reliable meta-analysis to date – the Vickers et al IPDM.[2] DC admits that there is a small effect of acupuncture over sham, but follows the standard EBM mantra that it is too small to be clinically meaningful without ever considering the possibility that sham (gentle acupuncture plus context of acupuncture) can have clinically relevant effects when compared with conventional treatments. Perhaps now the best example of this is a network meta-analysis (NMA) using individual patient data (IPD), which clearly demonstrates benefits of sham acupuncture over usual care (a variety of best standard or usual care) in terms of health-related quality of life (HRQoL).[3]

I got an email from the BMJ asking me to take part in a BMJ Head-to-Head debate about acupuncture. I did one of these before, in 2007, but it generated more heat than light (the only good thing to come out of it was the joke about leprechauns). So here is my polite refusal.

"Thanks for the invitation. Perhaps you should read the piece that I wrote after the Today programme: http://www.dcscience.net/2017/01/20/if-your-baby-is-crying-what-do-you-do-stick-pins-in-it/#follow

Why don't you do these Head to Heads about genuine controversies? To do them about homeopathy or acupuncture is to fall for the "manufactured doubt" stratagem that was used so effectively by the tobacco industry to promote smoking. It's the favourite tool of snake oil salesmen too, and the BMJ should see that and not fall for their tricks. Such pieces might be good clickbait, but they are bad medicine and bad ethics."

Tagged acupuncture, alternative medicine, badscience, CAM, false discovery rate, false positives, George Lewith, Michael Cummings, P hacking, statistics, TCM, Traditional Chinese medicine

Statistics and the law: the prosecutor's fallacy

This post arose from a recent meeting at the Royal Society. It was organised by Julie Maxton to discuss the application of statistical methods to legal problems. I found myself sitting next to an Appeal Court Judge who wanted more explanation of the ideas. Here it is.

Some preliminaries

The papers that I wrote recently were about the problems associated with the interpretation of screening tests and tests of significance. They don't allude to legal problems explicitly, though the problems are the same in principle. They are all open access. The first appeared in 2014: http://rsos.royalsocietypublishing.org/content/1/3/140216

Since the first version of this post, in March 2016, I've written two more papers and some popular pieces on the same topic. There's a list of them at http://www.onemol.org.uk/?page_id=456. I also made a video for YouTube of a recent talk.

In these papers I was interested in the false positive risk (also known as the false discovery rate) in tests of significance. It turned out to be alarmingly large. That has serious consequences for the credibility of the scientific literature. In legal terms, the false positive risk means the proportion of cases in which, on the basis of the evidence, a suspect is found guilty when in fact they are innocent. That has even more serious consequences.

Although most of what I want to say can be said without much algebra, it would perhaps be worth getting two things clear before we start.

(1) The rules of probability. To get any understanding, it's essential to understand the rules of probabilities and, in particular, the idea of conditional probabilities.
One source would be my old book, Lectures on Biostatistics (now free). The account on pages 19 to 24 gives a pretty simple (I hope) description of what's needed. Briefly, a vertical line is read as "given", so Prob(evidence | not guilty) means the probability that the evidence would be observed given that the suspect was not guilty.

(2) Another potential confusion in this area is the relationship between odds and probability. The relationship between the probability of an event occurring and the odds on the event can be illustrated by an example. If the probability of being right-handed is 0.9, then the probability of not being right-handed is 0.1. That means that 9 people out of 10 are right-handed, and one person in 10 is not. In other words, for every person who is not right-handed there are 9 who are right-handed. Thus the odds that a randomly-selected person is right-handed are 9 to 1. In symbols this can be written

\[ \text{probability} = \frac{\text{odds}}{1 + \text{odds}} \]

In the example, the odds on being right-handed are 9 to 1, so the probability of being right-handed is 9 / (1 + 9) = 0.9. Conversely,

\[ \text{odds} = \frac{\text{probability}}{1 - \text{probability}} \]

In the example, the probability of being right-handed is 0.9, so the odds of being right-handed are 0.9 / (1 - 0.9) = 0.9 / 0.1 = 9 (to 1).

With these preliminaries out of the way, we can proceed to the problem.

The legal problem

The first problem lies in the fact that the answer depends on Bayes' theorem. Although that was published in 1763, statisticians are still arguing about how it should be used to this day. In fact whenever it's mentioned, statisticians tend to revert to internecine warfare, and forget about the user. Bayes' theorem can be stated in words as follows:

\[ \text{posterior odds ratio} = \text{prior odds ratio} \times \text{likelihood ratio} \]

"Posterior odds ratio" means the odds that the person is guilty, relative to the odds that they are innocent, in the light of the evidence, and that's clearly what one wants to know. The "prior odds" are the odds that the person was guilty before any evidence was produced, and that is the really contentious bit. Sometimes the need to specify the prior odds has been circumvented by using the likelihood ratio alone, but, as shown below, that isn't a good solution.

The analogy with the use of screening tests to detect disease is illuminating.

Screening tests

A particularly straightforward application of Bayes' theorem is in screening people to see whether or not they have a disease. It turns out, in many cases, that screening gives a lot more wrong results (false positives) than right ones. That's especially true when the condition is rare (the prior odds that an individual suffers from the condition are small). The process of screening for disease has a lot in common with the screening of suspects for guilt. It matters because false positives in court are disastrous. The screening problem is dealt with in sections 1 and 2 of my paper, or on this blog (and here). A bit of animation helps the slides, so you may prefer the Youtube version.
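A minimal sketch of the screening arithmetic, with illustrative numbers of my own choosing (a rare condition and a test with apparently respectable sensitivity and specificity), shows how most positive tests can nevertheless be false positives:

```r
# Screening a population for a rare condition (illustrative numbers only).
prevalence  <- 0.01   # 1% of those screened actually have the condition
sensitivity <- 0.80   # probability of a positive test if you have it
specificity <- 0.95   # probability of a negative test if you don't

true_pos  <- prevalence * sensitivity              # 0.008 of those screened
false_pos <- (1 - prevalence) * (1 - specificity)  # 0.0495 of those screened

false_pos / (true_pos + false_pos)   # about 0.86: 86% of positives are false
```

Exactly the same arithmetic, with "guilty" in place of "has the condition", is why the prior matters so much in court.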
The rest of my paper applies similar ideas to tests of significance. In that case the prior probability is the probability that there is in fact a real effect, or, in the legal case, the probability that the suspect is guilty before any evidence has been presented. This is the slippery bit of the problem, both conceptually and because it's hard to put a number on it. But the examples below show that to ignore it, and to use the likelihood ratio alone, could result in many miscarriages of justice.

In the discussion of tests of significance, I took the view that it is not legitimate (in the absence of good data to the contrary) to assume any prior probability greater than 0.5. To do so would presume you know the answer before any evidence was presented. In the legal case a prior probability of 0.5 would mean assuming that there was a 50:50 chance that the suspect was guilty before any evidence was presented. A 50:50 probability of guilt before the evidence is known corresponds to a prior odds ratio of 1 (to 1). If that were true, the likelihood ratio would be a good way to represent the evidence, because the posterior odds ratio would be equal to the likelihood ratio. It could be argued that 50:50 represents some sort of equipoise, but in the example below it is clearly too high, and if it is less than 50:50, use of the likelihood ratio runs a real risk of convicting an innocent person.

The following example is modified slightly from section 3 of a book chapter by Mortera and Dawid (2008). Philip Dawid is an eminent statistician who has written a lot about probability and the law, and he's a member of the legal group of the Royal Statistical Society. My version of the example removes most of the algebra, and uses different numbers.

Example: The island problem

The "island problem" (Eggleston 1983, Appendix 3) is an imaginary example that provides a good illustration of the uses and misuses of statistical logic in forensic identification. A murder has been committed on an island, cut off from the outside world, on which 1001 (= N + 1) inhabitants remain. The forensic evidence at the scene consists of a measurement, x, on a "crime trace" characteristic, which can be assumed to come from the criminal. It might, for example, be a bit of the DNA sequence from the crime scene. Say, for the sake of example, that the probability of a random member of the population having characteristic x is P = 0.004 (i.e. 0.4%), so the probability that a random member of the population does not have the characteristic is 1 - P = 0.996. The mainland police arrive and arrest a random islander, Jack. It is found that Jack matches the crime trace. There is no other relevant evidence.

How should this match evidence be used to assess the claim that Jack is the murderer? We shall consider three arguments that have been used to address this question. The first is wrong. The second and third are right. (For illustration, we have taken N = 1000, P = 0.004.)

(1) Prosecutor's fallacy

Prosecuting counsel, arguing according to his favourite fallacy, asserts that the probability that Jack is guilty is 1 - P, or 0.996, and that this proves guilt "beyond a reasonable doubt". The reasoning is that the probability that Jack would show characteristic x if he were not guilty is only 0.4%, i.e. Prob(Jack has x | not guilty) = 0.004, so (the argument goes) the probability of his innocence is 0.004 and the probability of his guilt is 1 - 0.004 = 0.996. But Prob(evidence | not guilty) is not what we want. What we need is the probability that Jack is guilty, given the evidence, Prob(Jack is guilty | Jack has characteristic x). To mistake the former for the latter is the prosecutor's fallacy, or the error of the transposed conditional.

Dawid gives an example that makes the distinction clear.
"As an analogy to help clarify and escape this common and seductive confusion, consider the difference between "the probability of having spots, if you have measles" -which is close to 1 and "the probability of having measles, if you have spots" -which, in the light of the many alternative possible explanations for spots, is much smaller." (2) Defence counter-argument Counsel for the defence points out that, while the guilty party must have characteristic x, he isn't the only person on the island to have this characteristic. Among the remaining N = 1000 innocent islanders, 0.4% have characteristic x, so the number who have it will be NP = 1000 x 0.004 = 4 . Hence the total number of islanders that have this characteristic must be 1 + NP = 5 . The match evidence means that Jack must be one of these 5 people, but does not otherwise distinguish him from any of the other members of it. Since just one of these is guilty, the probability that this is Jack is thus 1/5, or 0.2— very far from being "beyond all reasonable doubt". (3) Bayesian argument The probability of the having characteristic x (the evidence) would be Prob(evidence | guilty) = 1 if Jack were guilty, but if Jack were not guilty it would be 0.4%, i.e. Prob(evidence | not guilty) = P. Hence the likelihood ratio in favour of guilt, on the basis of the evidence, is \[ LR=\frac{\text{Prob(evidence } | \text{ guilty})}{\text{Prob(evidence }|\text{ not guilty})} = \frac{1}{P}=250 \] In words, the evidence would be 250 times more probable if Jack were guilty than if he were innocent. While this seems strong evidence in favour of guilt, it still does not tell us what we want to know, namely the probability that Jack is guilty in the light of the evidence: Prob(guilty | evidence), or, equivalently, the odds ratio -the odds of guilt relative to odds of innocence, given the evidence, To get that we must multiply the likelihood ratio by the prior odds on guilt, i.e. the odds on guilt before any evidence is presented. It's often hard to get a numerical value for this. But in our artificial example, it is possible. We can argue that, in the absence of any other evidence, Jack is no more nor less likely to be the culprit than any other islander, so that the prior probability of guilt is 1/(N + 1), corresponding to prior odds on guilt of 1/N. We can now apply Bayes's theorem to obtain the posterior odds on guilt: \[ \text {posterior odds} = \text{prior odds} \times LR = \left ( \frac{1}{N}\right ) \times \left ( \frac{1}{P} \right )= 0.25 \] Thus the odds of guilt in the light of the evidence are 4 to 1 against. The corresponding posterior probability of guilt is \[ Prob( \text{guilty } | \text{ evidence})= \frac{1}{1+NP}= \frac{1}{1+4}=0.2 \] This is quite small –certainly no basis for a conviction. This result is exactly the same as that given by the Defence Counter-argument', (see above). That argument was simpler than the Bayesian argument. It didn't explicitly use Bayes' theorem, though it was implicit in the argument. The advantage of using the former is that it looks simpler. The advantage of the explicitly Bayesian argument is that it makes the assumptions more clear. In summary The prosecutor's fallacy suggested, quite wrongly, that the probability that Jack was guilty was 0.996. The likelihood ratio was 250, which also seems to suggest guilt, but it doesn't give us the probability that we need. 
In summary

The prosecutor's fallacy suggested, quite wrongly, that the probability that Jack was guilty was 0.996. The likelihood ratio was 250, which also seems to suggest guilt, but it doesn't give us the probability that we need. In stark contrast, the defence counsel's argument, and, equivalently, the Bayesian argument, suggested that the probability of Jack's guilt was 0.2, or odds of 4 to 1 against guilt. The potential for wrong conviction is obvious.

Although this argument uses an artificial example that is simpler than most real cases, it illustrates some important principles.

(1) The likelihood ratio is not a good way to evaluate evidence, unless there is good reason to believe that there is a 50:50 chance that the suspect is guilty before any evidence is presented.

(2) In order to calculate what we need, Prob(guilty | evidence), you need to give numerical values for how common the possession of characteristic x (the evidence) is in the whole population of possible suspects (a reasonable value might be estimated in the case of DNA evidence). We also need to know the size of the population. In the case of the island example this was 1000, but in general that would be hard to answer, and any answer might well be contested by an advocate who understood the problem.

These arguments lead to four conclusions.

(1) If a lawyer uses the prosecutor's fallacy, (s)he should be told that it's nonsense.

(2) If a lawyer advocates conviction on the basis of the likelihood ratio alone, (s)he should be asked to justify the implicit assumption that there was a 50:50 chance that the suspect was guilty before any evidence was presented.

(3) If a lawyer uses the defence counter-argument, or, equivalently, the version of the Bayesian argument given here, (s)he should be asked to justify the estimates of the numerical value given to the prevalence of x in the population (P) and the numerical value of the size of this population (N). A range of values of P and N could be used, to provide a range of possible values of the final result, the probability that the suspect is guilty in the light of the evidence.

(4) The example that was used is the simplest possible case. For more complex cases it would be advisable to ask a professional statistician. Some reliable people can be found at the Royal Statistical Society's section on Statistics and the Law.

If you do ask a professional statistician, and they present you with a lot of mathematics, you should still ask these questions about precisely what assumptions were made, and ask for an estimate of the range of uncertainty in the value of Prob(guilty | evidence) which they produce.

Postscript: real cases

Another paper by Philip Dawid, Statistics and the Law, is interesting because it discusses some recent real cases: for example the wrongful conviction of Sally Clark because of the wrong calculation of the statistics for Sudden Infant Death Syndrome.

On Monday 21 March 2016, Dr Waney Squier was struck off the medical register by the General Medical Council because they claimed that she misrepresented the evidence in cases of Shaken Baby Syndrome (SBS). This verdict was questioned by many lawyers, including Michael Mansfield QC and Clive Stafford Smith, in a letter, "General Medical Council behaving like a modern inquisition". The latter has already written "This shaken baby syndrome case is a dark day for science – and for justice".

The evidence for SBS is based on the existence of a triad of signs (retinal bleeding, subdural bleeding and encephalopathy). It seems likely that these signs will be present if a baby has been shaken, i.e. Prob(triad | shaken) is high. But this is irrelevant to the question of guilt. For that we need Prob(shaken | triad).
As far as I know, the data to calculate what matters are just not available. It seems that the GMC may have fallen for the prosecutor's fallacy. Or perhaps the establishment won't tolerate arguments. One is reminded, once again, of the definition of clinical experience: "Making the same mistakes with increasing confidence over an impressive number of years" (from A Sceptic's Medical Dictionary by Michael O'Donnell, BMJ Publishing, 1997).

Appendix (for nerds). Two forms of Bayes' theorem

The form of Bayes' theorem given at the start is expressed in terms of odds ratios. The same rule can be written in terms of probabilities. (This was the form used in the appendix of my paper.) For those interested in the details, it may help to define explicitly these two forms. In terms of probabilities, the probability of guilt in the light of the evidence (what we want) is

\[ \text{Prob(guilty } | \text{ evidence}) = \text{Prob(evidence } | \text{ guilty}) \, \frac{\text{Prob(guilty)}}{\text{Prob(evidence)}} \]

In terms of odds ratios, the odds on guilt, given the evidence (which is what we want), are

\[ \frac{\text{Prob(guilty } | \text{ evidence})}{\text{Prob(not guilty } | \text{ evidence})} = \left( \frac{\text{Prob(guilty)}}{\text{Prob(not guilty)}} \right) \left( \frac{\text{Prob(evidence } | \text{ guilty})}{\text{Prob(evidence } | \text{ not guilty})} \right) \]

or, in words,

\[ \text{posterior odds of guilt} = \text{prior odds of guilt} \times \text{likelihood ratio} \]

This is the precise form of the equation that was given in words at the beginning. A derivation of the equivalence of these two forms is sketched in a document which you can download.

It's worth pointing out the following connection between the legal argument (above) and tests of significance.

(1) The likelihood ratio works only when there is a 50:50 chance that the suspect is guilty before any evidence is presented (so the prior probability of guilt is 0.5, or, equivalently, the prior odds ratio is 1).

(2) The false positive rate in significance testing is close to the P value only when the prior probability of a real effect is 0.5, as shown in section 6 of the P value paper.

However there is another twist in the significance testing argument. The statement above is right if we take as a positive result any P < 0.05. If we want to interpret a value of P = 0.047 in a single test, then, as explained in section 10 of the P value paper, we should restrict attention to only those tests that give P close to 0.047. When that is done the false positive rate is 26% even when the prior is 0.5 (and much bigger than 30% if the prior is smaller; see the extra Figure). That justifies the assertion that if you claim to have discovered something because you have observed P = 0.047 in a single test, then there is a chance of at least 30% that you'll be wrong. Is there, I wonder, any legal equivalent of this argument?

Tagged Clive Stafford Smith, false conviction, false discovery rate, False positive risk, false positives, FPR, Law, lawyers, Michael Mansfield, Philip Dawid, Squier, statistics, Waney Squier

Placebo effects are weak: regression to the mean is the main reason ineffective treatments appear to work

"Statistical regression to the mean predicts that patients selected for abnormalcy will, on the average, tend to improve. We argue that most improvements attributed to the placebo effect are actually instances of statistical regression."
"Thus, we urge caution in interpreting patient improvements as causal effects of our actions and should avoid the conceit of assuming that our personal presence has strong healing powers." McDonald et al., (1983) In 1955, Henry Beecher published "The Powerful Placebo". I was in my second undergraduate year when it appeared. And for many decades after that I took it literally, They looked at 15 studies and found that an average 35% of them got "satisfactory relief" when given a placebo. This number got embedded in pharmacological folk-lore. He also mentioned that the relief provided by placebo was greatest in patients who were most ill. Consider the common experiment in which a new treatment is compared with a placebo, in a double-blind randomised controlled trial (RCT). It's common to call the responses measured in the placebo group the placebo response. But that is very misleading, and here's why. The responses seen in the group of patients that are treated with placebo arise from two quite different processes. One is the genuine psychosomatic placebo effect. This effect gives genuine (though small) benefit to the patient. The other contribution comes from the get-better-anyway effect. This is a statistical artefact and it provides no benefit whatsoever to patients. There is now increasing evidence that the latter effect is much bigger than the former. How can you distinguish between real placebo effects and get-better-anyway effect? The only way to measure the size of genuine placebo effects is to compare in an RCT the effect of a dummy treatment with the effect of no treatment at all. Most trials don't have a no-treatment arm, but enough do that estimates can be made. For example, a Cochrane review by Hróbjartsson & Gøtzsche (2010) looked at a wide variety of clinical conditions. Their conclusion was: "We did not find that placebo interventions have important clinical effects in general. However, in certain settings placebo interventions can influence patient-reported outcomes, especially pain and nausea, though it is difficult to distinguish patient-reported effects of placebo from biased reporting." In some cases, the placebo effect is barely there at all. In a non-blind comparison of acupuncture and no acupuncture, the responses were essentially indistinguishable (despite what the authors and the journal said). See "Acupuncturists show that acupuncture doesn't work, but conclude the opposite" So the placebo effect, though a real phenomenon, seems to be quite small. In most cases it is so small that it would be barely perceptible to most patients. Most of the reason why so many people think that medicines work when they don't isn't a result of the placebo response, but it's the result of a statistical artefact. Regression to the mean is a potent source of deception The get-better-anyway effect has a technical name, regression to the mean. It has been understood since Francis Galton described it in 1886 (see Senn, 2011 for the history). It is a statistical phenomenon, and it can be treated mathematically (see references, below). But when you think about it, it's simply common sense. You tend to go for treatment when your condition is bad, and when you are at your worst, then a bit later you're likely to be better, The great biologist, Peter Medawar comments thus. 
"If a person is (a) poorly, (b) receives treatment intended to make him better, and (c) gets better, then no power of reasoning known to medical science can convince him that it may not have been the treatment that restored his health" (Medawar, P.B. (1969:19). The Art of the Soluble: Creativity and originality in science. Penguin Books: Harmondsworth). This is illustrated beautifully by measurements made by McGorry et al., (2001). Patients with low back pain recorded their pain (on a 10 point scale) every day for 5 months (they were allowed to take analgesics ad lib). The results for four patients are shown in their Figure 2. On average they stay fairly constant over five months, but they fluctuate enormously, with different patterns for each patient. Painful episodes that last for 2 to 9 days are interspersed with periods of lower pain or none at all. It is very obvious that if these patients had gone for treatment at the peak of their pain, then a while later they would feel better, even if they were not actually treated. And if they had been treated, the treatment would have been declared a success, despite the fact that the patient derived no benefit whatsoever from it. This entirely artefactual benefit would be the biggest for the patients that fluctuate the most (e.g this in panels a and d of the Figure). Figure 2 from McGorry et al, 2000. Examples of daily pain scores over a 6-month period for four participants. Note: Dashes of different lengths at the top of a figure designate an episode and its duration. The effect is illustrated well by an analysis of 118 trials of treatments for non-specific low back pain (NSLBP), by Artus et al., (2010). The time course of pain (rated on a 100 point visual analogue pain scale) is shown in their Figure 2. There is a modest improvement in pain over a few weeks, but this happens regardless of what treatment is given, including no treatment whatsoever. FIG. 2 Overall responses (VAS for pain) up to 52-week follow-up in each treatment arm of included trials. Each line represents a response line within each trial arm. Red: index treatment arm; Blue: active treatment arm; Green: usual care/waiting list/placebo arms. ____: pharmacological treatment; – – – -: non-pharmacological treatment; . . .. . .: mixed/other. The authors comment "symptoms seem to improve in a similar pattern in clinical trials following a wide variety of active as well as inactive treatments.", and "The common pattern of responses could, for a large part, be explained by the natural history of NSLBP". In other words, none of the treatments work. This paper was brought to my attention through the blog run by the excellent physiotherapist, Neil O'Connell. He comments "If this finding is supported by future studies it might suggest that we can't even claim victory through the non-specific effects of our interventions such as care, attention and placebo. People enrolled in trials for back pain may improve whatever you do. This is probably explained by the fact that patients enrol in a trial when their pain is at its worst which raises the murky spectre of regression to the mean and the beautiful phenomenon of natural recovery." O'Connell has discussed the matter in recent paper, O'Connell (2015), from the point of view of manipulative therapies. That's an area where there has been resistance to doing proper RCTs, with many people saying that it's better to look at "real world" outcomes. This usually means that you look at how a patient changes after treatment. 
The hazards of this procedure are obvious from Artus et al., Fig 2, above. It maximises the risk of being deceived by regression to the mean. As O'Connell commented:

"Within-patient change in outcome might tell us how much an individual's condition improved, but it does not tell us how much of this improvement was due to treatment."

In order to eliminate this effect it's essential to do a proper RCT with control and treatment groups tested in parallel. When that's done, the control group shows the same regression to the mean as the treatment group, and any additional response in the latter can confidently be attributed to the treatment. Anything short of that is whistling in the wind.

Needless to say, the suboptimal methods are most popular in areas where real effectiveness is small or non-existent. This, sad to say, includes low back pain. It also includes just about every treatment that comes under the heading of alternative medicine. Although these problems have been understood for over a century, it remains true that

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it."
Upton Sinclair (1935)

Responders and non-responders?

One excuse that's commonly used when a treatment shows only a small effect in proper RCTs is to assert that the treatment actually has a good effect, but only in a subgroup of patients ("responders"), while others don't respond at all ("non-responders"). For example, this argument is often used in studies of anti-depressants and of manipulative therapies. And it's universal in alternative medicine.

There's a striking similarity between the narrative used by homeopaths and those who are struggling to treat depression. The pill may not work for many weeks. If the first sort of pill doesn't work, try another sort. You may get worse before you get better. One is reminded, inexorably, of Voltaire's aphorism "The art of medicine consists in amusing the patient while nature cures the disease".

There is only a handful of cases in which a clear distinction can be made between responders and non-responders. Most often what's observed is a smear of different responses to the same treatment, and the greater the variability, the greater the chance of being deceived by regression to the mean. For example, Thase et al. (2011) looked at responses to escitalopram, an SSRI antidepressant. They attempted to divide patients into responders and non-responders. An example is shown in Fig 1a of their paper. The evidence for such a bimodal distribution is certainly very far from obvious: the observations are just smeared out. Nonetheless, the authors conclude "Our findings indicate that what appears to be a modest effect in the grouped data – on the boundary of clinical significance, as suggested above – is actually a very large effect for a subset of patients who benefited more from escitalopram than from placebo treatment." I guess that interpretation could be right, but it seems more likely to be a marketing tool. Before you read the paper, check the authors' conflicts of interest.

The bottom line is that analyses that divide patients into responders and non-responders are reliable only if that can be done before the trial starts. Retrospective analyses are unreliable and unconvincing.

Some more reading

Senn (2011) provides an excellent introduction (and some interesting history).
The subtitle is "Here Stephen Senn examines one of Galton's most important statistical legacies – one that is at once so trivial that it is blindingly obvious, and so deep that many scientists spend their whole career being fooled by it." The examples in this paper are extended in Senn (2009), "Three things that every medical writer should know about statistics". The three things are regression to the mean, the error of the transposed conditional and individual response. You can read slightly more technical accounts of regression to the mean in McDonald & Mazzuca (1983) "How much of the placebo effect is statistical regression" (two quotations from this paper opened this post), and in Stephen Senn (2015) "Mastering variation: variance components and personalised medicine". In 1988 Senn published some corrections to the maths in McDonald (1983). The trials that were used by Hróbjartsson & Gøtzsche (2010) to investigate the comparison between placebo and no treatment were looked at again by Howick et al., (2013), who found that in many of them the difference between treatment and placebo was also small. Most of the treatments did not work very well. Regression to the mean is not just a medical deceiver: it's everywhere Although this post has concentrated on deception in medicine, it's worth noting that the phenomenon of regression to the mean can cause wrong inferences in almost any area where you look at change from baseline. A classical example concern concerns the effectiveness of speed cameras. They tend to be installed after a spate of accidents, and if the accident rate is particularly high in one year it is likely to be lower the next year, regardless of whether a camera had been installed or not. To find the true reduction in accidents caused by installation of speed cameras, you would need to choose several similar sites and allocate them at random to have a camera or no camera. As in clinical trials. looking at the change from baseline can be very deceptive. Statistical postscript Lastly, remember that it you avoid all of these hazards of interpretation, and your test of significance gives P = 0.047. that does not mean you have discovered something. There is still a risk of at least 30% that your 'positive' result is a false positive. This is explained in Colquhoun (2014),"An investigation of the false discovery rate and the misinterpretation of p-values". I've suggested that one way to solve this problem is to use different words to describe P values: something like this. P > 0.05 very weak evidence P = 0.05 weak evidence: worth another look P = 0.01 moderate evidence for a real effect P = 0.001 strong evidence for real effect But notice that if your hypothesis is implausible, even these criteria are too weak. For example, if the treatment and placebo are identical (as would be the case if the treatment were a homeopathic pill) then it follows that 100% of positive tests are false positives. It's worth mentioning that the question of responders versus non-responders is closely-related to the classical topic of bioassays that use quantal responses. In that field it was assumed that each participant had an individual effective dose (IED). That's reasonable for the old-fashioned LD50 toxicity test: every animal will die after a sufficiently big dose. It's less obviously right for ED50 (effective dose in 50% of individuals). The distribution of IEDs is critical, but it has very rarely been determined. 
The cumulative form of this distribution is what determines the shape of the dose-response curve for fraction of responders as a function of dose. Linearisation of this curve, by means of the probit transformation, used to be a staple of biological assay. This topic is discussed in Chapter 10 of Lectures on Biostatistics. And you can read some of the history on my blog about Some pharmacological history: an exam from 1959.

Tagged acupuncture, alternative medicine, CAM, chiropractic, osteopathy, physiotherapy, placebo, placebo effect, regression to the mean, statistics | 31 Comments

How long until the next bomb? Why there's no reason to think that nuclear deterrence works

Every day one sees politicians on TV assuring us that nuclear deterrence works because no nuclear weapon has been exploded in anger since 1945. They clearly have no understanding of statistics. With a few plausible assumptions, we can easily calculate that the time until the next bomb explodes could be as little as 20 years. Be scared, very scared.

The first assumption is that bombs go off at random intervals. Since we have had only one so far (counting Hiroshima and Nagasaki as a single event), this can't be verified. But given the large number of small influences that control when a bomb explodes (whether in war or by accident), it is the natural assumption to make. The assumption is given some credence by the observation that the intervals between wars are random [download pdf]. If the intervals between bombs are random, that implies that the distribution of the length of the intervals is exponential in shape. The nature of this distribution has already been explained in an earlier post about the random lengths of time for which a patient stays in an intensive care unit. If you haven't come across an exponential distribution before, please look at that post before moving on.

All that we know is that 70 years have elapsed since the last bomb, so the interval until the next one must be greater than 70 years. The probability that a random interval is longer than 70 years can be found from the cumulative form of the exponential distribution. If we denote the true mean interval between bombs as $\mu$ then the probability that an interval is longer than 70 years is \[ \text{Prob}\left( \text{interval} > 70\right)=\exp{\left(\frac{-70}{\mu}\right)} \] We can get a lower 95% confidence limit (call it $\mu_\mathrm{lo}$) for the mean interval between bombs by the argument used in Lectures on Biostatistics, section 7.8 (page 108). If we imagine that $\mu_\mathrm{lo}$ were the true mean, we want it to be such that there is a 2.5% chance that we observe an interval that is greater than 70 years. That is, we want to solve \[ \exp{\left(\frac{-70}{\mu_\mathrm{lo}}\right)} = 0.025\] That's easily solved by taking natural logs of both sides, giving \[ \mu_\mathrm{lo} = \frac{-70}{\ln{\left(0.025\right)}}= 19.0\text{ years}\] A similar argument leads to an upper confidence limit, $\mu_\mathrm{hi}$, for the mean interval between bombs, by solving \[ \exp{\left(\frac{-70}{\mu_\mathrm{hi}}\right)} = 0.975\] which gives \[ \mu_\mathrm{hi} = \frac{-70}{\ln{\left(0.975\right)}}= 2765\text{ years}\]
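That arithmetic is easy to check. Here is a minimal Python sketch that does nothing more than solve the two equations above (the variable names are mine):

```python
from math import log

t_observed = 70.0   # years since the last explosion: the only observation we have

# Lower and upper 95% confidence limits for the mean interval between bombs,
# found by solving exp(-70/mu) = 0.025 and exp(-70/mu) = 0.975 for mu.
mu_lo = -t_observed / log(0.025)
mu_hi = -t_observed / log(0.975)

print(f"lower 95% limit for mean interval: {mu_lo:6.1f} years")   # about 19.0
print(f"upper 95% limit for mean interval: {mu_hi:6.0f} years")   # about 2765
```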
If the worst case were true, and the mean interval between bombs were 19 years, then the distribution of the time to the next bomb would have an exponential probability density function, $f(t)$, \[ f(t) = \frac{1}{19} \exp{\left(\frac{-t}{19}\right)} \] There would be a 50% chance that the waiting time until the next bomb would be less than the median of this distribution, $19 \ln(2) = 13.2$ years.

In summary, the observation that there has been no explosion for 70 years implies that the mean time until the next explosion lies (with 95% confidence) between 19 years and 2765 years. If it were 19 years, there would be a 50% chance that the waiting time to the next bomb could be less than 13.2 years. Thus there is no reason at all to think that nuclear deterrence works well enough to protect the world from incineration.

Another approach

My statistical colleague, the ace probabilist Alan Hawkes, suggested a slightly different approach to the problem, via likelihood. The likelihood of a particular value of the interval between bombs is defined as the probability of making the observation(s), given a particular value of $\mu$. In this case, there is one observation, that the interval between bombs is more than 70 years. The likelihood, $L\left(\mu\right)$, of any specified value of $\mu$ is thus \[L\left(\mu\right)=\text{Prob}\left( \text{interval} > 70 \mid \mu\right) = \exp{\left(\frac{-70}{\mu}\right)} \] A plot of this function (graph on right) shows that it increases continuously with $\mu$, so the maximum likelihood estimate of $\mu$ is infinity. An infinite wait until the next bomb is perfect deterrence. But again we need confidence limits for this. Since the upper limit is infinite, the appropriate thing to calculate is a one-sided lower 95% confidence limit. This is found by solving \[ \exp{\left(\frac{-70}{\mu_\mathrm{lo}}\right)} = 0.05\] which gives \[ \mu_\mathrm{lo} = \frac{-70}{\ln{\left(0.05\right)}}= 23.4\text{ years}\]

The first approach gives 95% confidence limits for the average time until we get incinerated as 19 years to 2765 years. The second approach gives the lower limit as 23.4 years. There is no important difference between the two methods of calculation. This shows that the bland assurances of politicians that "nuclear deterrence works" are not justified.

It is not the purpose of this post to predict when the next bomb will explode, but rather to point out that the available information tells us very little about that question. This seems important to me because it contradicts directly the frequent assurances that deterrence works. The only consolation is that, since I'm now 79, it's unlikely that I'll live long enough to see the conflagration. Anyone younger than me would be advised to get off their backsides and do something about it, before you are destroyed by innumerate politicians.

While talking about politicians and war it seems relevant to reproduce Peter Kennard's powerful image of the Iraq war, and with that, to quote the comment made by Tony Blair's aide, Lance Price: It's a bit like my feeling about priests doing the twelve stations of the cross. Politicians and priests masturbating at the expense of kids getting slaughtered (at a safe distance, of course).

Tagged bomb, deterrant, exponential distribution, Markov, nuclear, politics, statistics, Trident | 4 Comments

Two more cases of hype in glamour journals: magnets, cocoa and memory

In the course of thinking about metrics, I keep coming across cases of over-promoted research. An early case was "Why honey isn't a wonder cough cure: more academic spin".
More recently, I noticed these examples: "Effect of Vitamin E and Memantine on Functional Decline in Alzheimer Disease" (spoiler – very little), published in the Journal of the American Medical Association; "Primary Prevention of Cardiovascular Disease with a Mediterranean Diet", in the New England Journal of Medicine (which had the second highest altmetric score in 2013); and "Sleep Drives Metabolite Clearance from the Adult Brain", published in Science. In all these cases, misleading press releases were issued by the journals themselves and by the universities. These were copied out by hard-pressed journalists and made headlines that were certainly not merited by the work. In the last three cases, hyped up tweets came from the journals. The responsibility for this hype must eventually rest with the authors. The last two papers came second and fourth in the list of highest altmetric scores for 2013.

Here are two more very recent examples. It seems that every time I check a highly tweeted paper, it turns out that it is very second rate. Both papers involve fMRI imaging, and since the infamous dead salmon paper, I've been a bit sceptical about them. But that is irrelevant to what follows.

Boost your memory with electricity

That was a popular headline at the end of August. It referred to a paper in Science magazine: "Targeted enhancement of cortical-hippocampal brain networks and associative memory" (Wang, JX et al., Science, 29 August 2014). This study was promoted by Northwestern University under the headline "Electric current to brain boosts memory". And Science tweeted along the same lines. Science's link did not lead to the paper, but rather to a puff piece, "Rebooting memory with magnets". Again all the emphasis was on memory, with the usual entirely speculative stuff about helping Alzheimer's disease. But the paper itself was behind Science's paywall. You couldn't read it unless your employer subscribed to Science. All the publicity led to much retweeting and a big altmetrics score. Given that the paper was not open access, it's likely that most of the retweeters had not actually read the paper. When you read the paper, you found that it was mostly not about memory at all. It was mostly about fMRI. In fact the only reference to memory was in a subsection of Figure 4. This is the evidence. That looks desperately unconvincing to me. The test of significance gives P = 0.043. In an underpowered study like this, the chance of this being a false discovery is probably at least 50%. A result like this means, at most, "worth another look". It does not begin to justify all the hype that surrounded the paper. The journal, the university's PR department, and ultimately the authors, must bear the responsibility for the unjustified claims. Science does not allow online comments following the paper, but there are now plenty of sites that do. NHS Choices did a fairly good job of putting the paper into perspective, though they failed to notice the statistical weakness. A commenter on PubPeer noted that Science had recently announced that it would tighten statistical standards. In this case, they failed. The age of post-publication peer review is already reaching maturity.

Boost your memory with cocoa

Another glamour journal, Nature Neuroscience, hit the headlines on October 26, 2014, with a paper that was publicised in a Nature podcast and a rather uninformative press release. The paper is "Enhancing dentate gyrus function with dietary flavanols improves cognition in older adults", Brickman et al., Nat Neurosci. 2014,
doi: 10.1038/nn.3850. The journal helpfully lists no fewer than 89 news items related to this study. Mostly they were something like "Drinking cocoa could improve your memory" (Kat Lay, in The Times). Only a handful of the 89 reports spotted the many problems. A puff piece from Columbia University's PR department quoted the senior author, Dr Small, making the dramatic claim that "If a participant had the memory of a typical 60-year-old at the beginning of the study, after three months that person on average had the memory of a typical 30- or 40-year-old."

Like anything to do with diet, the paper immediately got circulated on Twitter. No doubt most of the people who retweeted the message had not read the (paywalled) paper. The links almost all led to inaccurate press accounts, not to the paper itself. But some people actually read the paywalled paper and post-publication review soon kicked in. PubMed Commons is a good site for that, because PubMed is where a lot of people go for references. Hilda Bastian kicked off the comments there (her comment was picked out by Retraction Watch). Her conclusion was this: "It's good to see claims about dietary supplements tested. However, the results here rely on a chain of yet-to-be-validated assumptions that are still weakly supported at each point. In my opinion, the immodest title of this paper is not supported by its contents." (Hilda Bastian runs the Statistically Funny blog – "The comedic possibilities of clinical epidemiology are known to be limitless" – and also a Scientific American blog about risk, Absolutely Maybe.) NHS Choices spotted most of the problems too, in "A mug of cocoa is not a cure for memory problems". And so did Ian Musgrave of the University of Adelaide, who wrote "Most Disappointing Headline Ever (No, Chocolate Will Not Improve Your Memory)".

Here are some of the many problems.

The paper was not about cocoa. Drinks containing 900 mg cocoa flavanols (as much as in about 25 chocolate bars) and 138 mg of (−)-epicatechin were compared with much lower amounts of these compounds.

The abstract, all that most people could read, said that subjects were given a "high or low cocoa–containing diet for 3 months". But it wasn't a test of cocoa: it was a test of a dietary "supplement".

The sample was small (37 people altogether, split between four groups), and therefore under-powered for detection of the small effect that was expected (and observed).

The authors declared the result to be "significant", but you had to hunt through the paper to discover that this meant P = 0.04 (hint: it's 6 lines above Table 1). That means that there is around a 50% chance that it's a false discovery.

The test was short – only three months.

The test didn't measure memory anyway. It measured reaction speed. They did test memory retention too, and there was no detectable improvement. This was not mentioned in the abstract. Neither was the fact that exercise had no detectable effect.

The study was funded by the Mars bar company. They, like many others, are clearly looking for a niche in the huge "supplement" market.

The claims by the senior author, in a Columbia promotional video, that the drink produced "an improvement in memory" and "an improvement in memory performance by two or three decades" seem to have a very thin basis indeed. As has the statement that "we don't need a pharmaceutical agent" to ameliorate a natural process (aging). High doses of supplements are pharmaceutical agents.
To be fair, the senior author did say, in the Columbia press release, that "the findings need to be replicated in a larger study—which he and his team plan to do". But there is no hint of this in the paper itself, or in the title of the press release "Dietary Flavanols Reverse Age-Related Memory Decline". The time for all the publicity is surely after a well-powered study, not before it. The high altmetrics score for this paper is yet another blow to the reputation of altmetrics. One may well ask why Nature Neuroscience and the Columbia press office allowed such extravagant claims to be made on such a flimsy basis.

What's going wrong?

These two papers have much in common. Elaborate imaging studies are accompanied by poor functional tests. All the hype focusses on the latter. This led me to the speculation (in PubMed Commons) that what actually happens is as follows. Authors do a big imaging (fMRI) study. The glamour journal says coloured blobs are no longer enough and refuses to publish without functional information. The authors tag on a small human study. The paper gets published. Hyped up press releases are issued that refer mostly to the add-on. Journal and authors are happy. But science is not advanced. It's no wonder that Dorothy Bishop wrote "High-impact journals: where newsworthiness trumps methodology".

It's time we forgot glamour journals. Publish open access on the web with open comments. Post-publication peer review is working. But boycott commercial publishers who charge large amounts for open access. It shouldn't cost more than about £200, and more and more are essentially free (my latest will appear shortly in Royal Society Open Science).

Hilda Bastian has an excellent post about the dangers of reading only the abstract, "Science in the Abstract: Don't Judge a Study by its Cover".

I was upbraided on Twitter by Euan Adie, founder of Altmetric.com, because I didn't click through the altmetric symbol to look at the citations: "shouldn't have to tell you to look at the underlying data David" and "you could have saved a lot of Google time". But when I did do that, all I found was a list of media reports and blogs – pretty much the same as Nature Neuroscience provides itself. More interesting, I found that my blog wasn't listed and neither was PubMed Commons. When I asked why, I was told "needs to regularly cite primary research. PubMed, PMC or repository links". But this paper is behind a paywall. So I provide (possibly illegally) a copy of it, so anyone can verify my comments. The result is that altmetric's dumb algorithms ignore it. In order to get counted you have to provide links that lead nowhere. So here's a link to the abstract (only) in PubMed for the Science paper http://www.ncbi.nlm.nih.gov/pubmed/25170153 and here's the link for the Nature Neuroscience paper http://www.ncbi.nlm.nih.gov/pubmed/25344629. It seems that altmetrics doesn't even do the job that it claims to do very efficiently.

It worked. By later in the day, this blog was listed in both Nature's metrics section and by altmetrics.com. But comments on PubMed Commons were still missing. That's bad because it's an excellent place for post-publication peer review.

Tagged Academia, Alzheimer's, badscience, cocoa, false discovery rate, false positives, hype, Journalism, magnetic, memory, perverse incentives, statistics | 5 Comments

What is meant by the "accuracy" of screening tests?

The two posts on this blog about the hazards of significance testing have proved quite popular.
See Part 1: the screening problem, and Part 2: the false discovery rate. They've had over 20,000 hits already (though I still have to find a journal that will print the paper based on them). Yet another Alzheimer's screening story hit the headlines recently and the facts got sorted out in the follow-up section of the screening post. If you haven't read that already, it might be helpful to do so before going on to this post.

This post has already appeared on the Sense about Science web site. They asked me to explain exactly what was meant by the claim that the screening test had an "accuracy of 87%". That was mentioned in all the media reports, no doubt because it was the only specification of the quality of the test in the press release. Here is my attempt to explain what it means.

The "accuracy" of screening tests

Anything about Alzheimer's disease is front line news in the media. No doubt that had not escaped the notice of Kings College London when they issued a press release about a recent study of a test for development of dementia based on blood tests. It was widely hailed in the media as a breakthrough in dementia research. (For example, the BBC report is far from accurate.) The main reason for the inaccurate reports is, as so often, the press release. It said "They identified a combination of 10 proteins capable of predicting whether individuals with MCI would develop Alzheimer's disease within a year, with an accuracy of 87 percent". The original paper says "Sixteen proteins correlated with disease severity and cognitive decline. Strongest associations were in the MCI group with a panel of 10 proteins predicting progression to AD (accuracy 87%, sensitivity 85% and specificity 88%)."

What matters to the patient is the probability that, if they come out positive when tested, they will actually get dementia. The Guardian quoted Dr James Pickett, head of research at the Alzheimer's Society, as saying "These 10 proteins can predict conversion to dementia with less than 90% accuracy, meaning one in 10 people would get an incorrect result." That statement simply isn't right (or, at least, it's very misleading). The proper way to work out the relevant number has been explained in many places – I did it recently on my blog. The easiest way to work it out is to make a tree diagram. The diagram is like that previously discussed here, but with a sensitivity of 85% and a specificity of 88%, as specified in the paper. In order to work out the number we need, we have to specify the true prevalence of people who will develop dementia in the population being tested. In the tree diagram, this has been taken as 10%. The diagram shows that, out of 1000 people tested, there are 85 + 108 = 193 with a positive test result. Of these 193, rather more than half (108) are false positives, so if you test positive there is a 56% chance that it's a false alarm (108/193 = 0.56). A false discovery rate of 56% is far too high for a good test. This figure of 56% seems to be the basis for a rather good post by NHS Choices with the title "Blood test for Alzheimer's 'no better than coin toss'". If the prevalence were taken as 5% (a value that's been given for the over-60 age group) that fraction of false alarms would rise to a disastrous 73%.

How are these numbers related to the claim that the test is "87% accurate"? That claim was parroted in most of the media reports, and it is why Dr Pickett said "one in 10 people would get an incorrect result".
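For anyone who wants to check the arithmetic, here is a minimal Python sketch of the tree diagram (the function and its layout are mine; the sensitivity, specificity and prevalence figures are the ones used above). It reproduces the 56% and 73% false-alarm rates, and also an "accuracy" of roughly 88%, the flattering figure whose definition is discussed below.

```python
def test_summary(sens, spec, prev, n=1000):
    """Work through the tree diagram for a diagnostic test applied to n people."""
    true_pos = n * prev * sens                 # ill and correctly detected
    false_pos = n * (1 - prev) * (1 - spec)    # healthy but flagged anyway
    true_neg = n * (1 - prev) * spec           # healthy and correctly cleared
    false_alarm_rate = false_pos / (true_pos + false_pos)  # wrong fraction of positives
    accuracy = (true_pos + true_neg) / n       # fraction of all tests that are right
    return false_alarm_rate, accuracy

# Sensitivity 85% and specificity 88%, as given in the paper.
for prev in (0.10, 0.05):
    false_alarms, acc = test_summary(0.85, 0.88, prev)
    print(f"prevalence {prev:.0%}: {false_alarms:.0%} of positive tests are false alarms; "
          f"'accuracy' = {acc:.0%}")
```

Notice how the "accuracy" barely moves while the fraction of false alarms climbs from 56% to 73%: that is exactly why accuracy is such a misleading number to put in a press release.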
The paper itself didn't define "accuracy" anywhere, and I wasn't familiar with the term in this context (though Stephen Senn pointed out that it is mentioned briefly in the Wikipedia entry for Sensitivity and Specificity). The senior author confirmed that "accuracy" means the total fraction of tests, positive or negative, that give the right result. We see from the tree diagram that, out of 1000 tests, there are 85 correct positive tests and 792 correct negative tests, so the accuracy (with a prevalence of 0.1) is (85 + 792)/1000 = 88%, close to the value that's cited in the paper.

Accuracy, defined in this way, seems to me not to be a useful measure at all. It conflates positive and negative results and they need to be kept separate to understand the problem. Inspection of the tree diagram shows that it can be expressed algebraically as

accuracy = (sensitivity × prevalence) + (specificity × (1 − prevalence))

It is therefore merely a weighted mean of sensitivity and specificity (weighted by the prevalence). With the numbers in this case, it varies from 0.88 (when prevalence = 0) to 0.85 (when prevalence = 1). Thus it will inevitably give a much more flattering view of the test than the false discovery rate.

No doubt it is too much to expect that a hard-pressed journalist would have time to figure this out, though it isn't clear that they wouldn't have time to contact someone who understands it. But it is clear that it should have been explained in the press release. It wasn't.

In fact, reading the paper shows that the test was not being proposed as a screening test for dementia at all. It was proposed as a way to select patients for entry into clinical trials. The population that was being tested was very different from the general population of old people, being patients who come to memory clinics in trials centres (the potential trials population). How best to select patients for entry into clinical trials is a matter of great interest to people who are running trials. It is of very little interest to the public. So all this confusion could have been avoided if Kings had refrained from issuing a press release at all, for a paper like this. I guess universities think that PR is more important than accuracy. That's a bad mistake in an age when pretensions get quickly punctured on the web.

This post first appeared on the Sense about Science web site.

Tagged accuracy, Alzheimer's, epidemiology, screening, sensitivity, specificity, statistics | 6 Comments

On the hazards of significance testing. Part 2: the false discovery rate, or how not to make a fool of yourself with P values

This post is now a bit out of date: a summary of my more recent efforts (papers, videos and pop stuff) can be found on Prof Sivilotti's OneMol pages. What follows is a simplified version of part of a paper that appeared as a preprint on arXiv in July. It appeared as a peer-reviewed paper on 19th November 2014, in the new Royal Society Open Science journal. If you find anything wrong, or obscure, please email me. Be vicious. There is also a simplified version, given as a talk on YouTube. It's a follow-up to my very first paper, which was written in 1959 – 60, while I was a fourth year undergraduate (the history is in a recent blog). I hope this one is better.

". . .
before anything was known of Lydgate's skill, the judgements on it had naturally been divided, depending on a sense of likelihood, situated perhaps in the pit of the stomach, or in the pineal gland, and differing in its verdicts, but not less valuable as a guide in the total deficit of evidence" 'George Eliot (Middlemarch, Chap. 45) "The standard approach in teaching, of stressing the formal definition of a p-value while warning against its misinterpretation, has simply been an abysmal failure" Sellke et al. (2001) `The American Statistician' (55), 62–71 The last post was about screening. It showed that most screening tests are useless, in the sense that a large proportion of people who test positive do not have the condition. This proportion can be called the false discovery rate. You think you've discovered the condition, but you were wrong. Very similar ideas can be applied to tests of significance. If you read almost any scientific paper you'll find statements like "this result was statistically significant (P = 0.047)". Tests of significance were designed to prevent you from making a fool of yourself by claiming to have discovered something, when in fact all you are seeing is the effect of random chance. In this case we define the false discovery rate as the probability that, when a test comes out as 'statistically significant', there is actually no real effect. You can also make a fool of yourself by failing to detect a real effect, but this is less harmful to your reputation. It's very common for people to claim that an effect is real, not just chance, whenever the test produces a P value of less than 0.05, and when asked, it's common for people to think that this procedure gives them a chance of 1 in 20 of making a fool of themselves. Leaving aside that this seems rather too often to make a fool of yourself, this interpretation is simply wrong. The purpose of this post is to justify the following proposition. If you observe a P value close to 0.05, your false discovery rate will not be 5%. It will be at least 30% and it could easily be 80% for small studies. This makes slightly less startling the assertion in John Ioannidis' (2005) article, Why Most Published Research Findings Are False. That paper caused quite a stir. It's a serious allegation. In fairness, the title was a bit misleading. Ioannidis wasn't talking about all science. But it has become apparent that an alarming number of published works in some fields can't be reproduced by others. The worst offenders seem to be clinical trials, experimental psychology and neuroscience, some parts of cancer research and some attempts to associate genes with disease (genome-wide association studies). Of course the self-correcting nature of science means that the false discoveries get revealed as such in the end, but it would obviously be a lot better if false results weren't published in the first place. How can tests of significance be so misleading? Tests of statistical significance have been around for well over 100 years now. One of the most widely used is Student's t test. It was published in 1908. 'Student' was the pseudonym for William Sealy Gosset, who worked at the Guinness brewery in Dublin. He visited Karl Pearson's statistics department at UCL because he wanted statistical methods that were valid for testing small samples. The example that he used in his paper was based on data from Arthur Cushny, the first holder of the chair of pharmacology at UCL (subsequently named the A.J. 
Clark chair, after its second holder) The outcome of a significance test is a probability, referred to as a P value. First, let's be clear what the P value means. It will be simpler to do that in the context of a particular example. Suppose we wish to know whether treatment A is better (or worse) than treatment B (A might be a new drug, and B a placebo). We'd take a group of people and allocate each person to take either A or B and the choice would be random. Each person would have an equal chance of getting A or B. We'd observe the responses and then take the average (mean) response for those who had received A and the average for those who had received B. If the treatment (A) was no better than placebo (B), the difference between means should be zero on average. But the variability of the responses means that the observed difference will never be exactly zero. So how big does it have to be before you discount the possibility that random chance is all you were seeing. You do the test and get a P value. Given the ubiquity of P values in scientific papers, it's surprisingly rare for people to be able to give an accurate definition. Here it is. The P value is the probability that you would find a difference as big as that observed, or a still bigger value, if in fact A and B were identical. If this probability is low enough, the conclusion would be that it's unlikely that the observed difference (or a still bigger one) would have occurred if A and B were identical, so we conclude that they are not identical, i.e. that there is a genuine difference between treatment and placebo. This is the classical way to avoid making a fool of yourself by claiming to have made a discovery when you haven't. It was developed and popularised by the greatest statistician of the 20th century, Ronald Fisher, during the 1920s and 1930s. It does exactly what it says on the tin. It sounds entirely plausible. Another way to look at significance tests One way to look at the problem is to notice that the classical approach considers only what would happen if there were no real effect or, as a statistician would put it, what would happen if the null hypothesis were true. But there isn't much point in knowing that an event is unlikely when the null hypothesis is true unless you know how likely it is when there is a real effect. We can look at the problem a bit more realistically by means of a tree diagram, very like that used to analyse screening tests, in the previous post. In order to do this, we need to specify a couple more things. First we need to specify the power of the significance test. This is the probability that we'll detect a difference when there really is one. By 'detect a difference' we mean that the test comes out with P < 0.05 (or whatever level we set). So it's analogous with the sensitivity of a screening test. In order to calculate sample sizes, it's common to set the power to 0.8 (obviously 0.99 would be better, but that would often require impracticably large samples). The second thing that we need to specify is a bit trickier, the proportion of tests that we do in which there is a real difference. This is analogous to the prevalence of the disease in the population being tested in the screening example. There is nothing mysterious about it. It's an ordinary probability that can be thought of as a long-term frequency. But it is a probability that's much harder to get a value for than the prevalence of a disease. 
If we were testing a series of 30C homeopathic pills, all of the pills, regardless of what it says on the label, would be identical with the placebo controls, so the prevalence of genuine effects, call it P(real), would be zero. So every positive test would be a false positive: the false discovery rate would be 100%. But in real science we want to predict the false discovery rate in less extreme cases. Suppose, for example, that we test a large number of candidate drugs. Life being what it is, most of them will be inactive, but some will have a genuine effect. In this example we'd be lucky if 10% had a real effect, i.e. were really more effective than the inactive controls. So in this case we'd set the prevalence to P(real) = 0.1.

We can now construct a tree diagram exactly as we did for screening tests. Suppose that we do 1000 tests. In 90% of them (900 tests) there is no real effect: the null hypothesis is true. If we use P = 0.05 as a criterion for significance then, according to the classical theory, 5% of them (45 tests) will give false positives, as shown in the lower limb of the tree diagram. If the power of the test was 0.8 then we'll detect 80% of the real differences so there will be 80 correct positive tests. The total number of positive tests is 45 + 80 = 125, and the proportion of these that are false positives is 45/125 = 36 percent. Our false discovery rate is far bigger than the 5% that many people still believe they are attaining. In contrast, 98% of negative tests are right (though this is less surprising because 90% of experiments really have no effect).

The equation

You can skip this section without losing much. As in the case of screening tests, this result can be calculated from an equation. The same equation works if we substitute power for sensitivity, P(real) for prevalence, and siglev for (1 – specificity), where siglev is the cut-off value for "significance", 0.05 in our examples. The false discovery rate (the probability that, if a "significant" result is found, there is actually no real effect) is given by \[FDR = \frac{siglev\left(1-P(real)\right)}{power \times P(real) + siglev\left(1-P(real)\right) }\; \] In the example above, power = 0.8, siglev = 0.05 and P(real) = 0.1, so the false discovery rate is \[\frac{0.05 (1-0.1)}{0.8 \times 0.1 + 0.05 (1-0.1) }\; = 0.36 \] So 36% of "significant" results are wrong, as found in the tree diagram.

Some subtleties

The argument just presented should be quite enough to convince you that significance testing, as commonly practised, will lead to disastrous numbers of false positives. But the basis of how to make inferences is still a matter that's the subject of intense controversy among statisticians, so what is an experimenter to do?

"It is difficult to give a consensus of informed opinion because, although there is much informed opinion, there is rather little consensus. A personal view follows." Colquhoun (1970), Lectures on Biostatistics, pp 94–95.

This is almost as true now as it was when I wrote it in the late 1960s, but there are some areas of broad agreement. There are two subtleties that cause the approach outlined above to be a bit contentious. The first lies in the problem of deciding the prevalence, P(real). You may have noticed that if the frequency of real effects were 50% rather than 10%, the approach shown in the diagram would give a false discovery rate of only 6%, little different from the 5% that's embedded in the consciousness of most experimentalists.
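If you would rather play with the numbers than read the algebra, the equation takes only a few lines of Python (the function is simply my own wording of it, nothing more). It reproduces the 36% from the tree diagram, and the 6% just mentioned for the case where half of the tested effects are real.

```python
def false_discovery_rate(power, p_real, siglev=0.05):
    """Fraction of 'significant' results that are actually false positives."""
    false_pos = siglev * (1 - p_real)    # null hypothesis true, but test comes out positive
    true_pos = power * p_real            # real effect, and the test detects it
    return false_pos / (true_pos + false_pos)

print(false_discovery_rate(power=0.8, p_real=0.1))   # 0.36, as in the tree diagram
print(false_discovery_rate(power=0.8, p_real=0.5))   # 0.059, even when half the effects are real
```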
But this doesn't get us off the hook, for two reasons. For a start, there is no reason at all to think that there will be a real effect in half of the tests that we do. Of course if P(real) were even bigger than 0.5, the false discovery rate would fall towards zero, because when P(real) = 1, all effects are real and therefore all positive tests are correct.

There is also a more subtle point. If we are trying to interpret the result of a single test that comes out with a P value of, say, P = 0.047, then we should not be looking at all significant results (those with P < 0.05), but only at those tests that come out with P = 0.047. This can be done quite easily by simulating a long series of t tests, and then restricting attention to those that come out with P values between, say, 0.045 and 0.05. When this is done we find that the false discovery rate is at least 26%. That's for the best possible case where the sample size is good (power of the test is 0.8) and the prevalence of real effects is 0.5. When, as in the tree diagram, the prevalence of real effects is 0.1, the false discovery rate is 76%. That's enough to justify Ioannidis' statement that most published results are wrong.

One problem with all of the approaches mentioned above was the need to guess at the prevalence of real effects (that's what a Bayesian would call the prior probability). James Berger and colleagues (Sellke et al., 2001) have proposed a way round this problem by looking at all possible prior distributions and so coming up with a minimum false discovery rate that holds universally. The conclusions are much the same as before. If you claim to have found an effect whenever you observe a P value just less than 0.05, you will come to the wrong conclusion in at least 29% of the tests that you do. If, on the other hand, you use P = 0.001, you'll be wrong in only 1.8% of cases. Valen Johnson (2013) has reached similar conclusions by a related argument.

A three-sigma rule

As an alternative to insisting on P < 0.001 before claiming you've discovered something, you could use a 3-sigma rule. In other words, insist that an effect is at least three standard deviations away from the control value (as opposed to the two standard deviations that correspond to P = 0.05). The three-sigma rule means using P = 0.0027 as your cut-off. This, according to Berger's rule, implies a false discovery rate of (at least) 4.5%, not far from the value that many people mistakenly think is achieved by using P = 0.05 as a criterion. Particle physicists go a lot further than this. They use a 5-sigma rule before announcing a new discovery. That corresponds to a P value of less than one in a million (about $0.57 \times 10^{-6}$). According to Berger's rule this corresponds to a false discovery rate of (at least) around 20 per million. Of course their experiments can't usually be randomised, so it's as well to be on the safe side.

Underpowered experiments

All of the problems discussed so far concern the near-ideal case. They assume that your sample size is big enough (power about 0.8 say) and that all of the assumptions made in the test are true, that there is no bias or cheating and that no negative results are suppressed. The real-life problems can only be worse. One way in which it is often worse is that sample sizes are too small, so the statistical power of the tests is low. The problem of underpowered experiments has been known since 1962, but it has been ignored. Recently it has come back into prominence, thanks in large part to John Ioannidis and the crisis of reproducibility in some areas of science. Button et al.
(2013) said "We optimistically estimate the median statistical power of studies in the neuroscience field to be between about 8% and about 31%" This is disastrously low. Running simulated t tests shows that with a power of 0.2, not only do you have only a 20% chance of detecting a real effect, but that when you do manage to get a "significant" result there is a 76% chance that it's a false discovery. And furthermore, when you do find a "significant" result, the size of the effect will be over-estimated by a factor of nearly 2. This "inflation effect" happens because only those experiments that happen, by chance, to have a larger-than-average effect size will be deemed to be "significant". What should you do to prevent making a fool of yourself? The simulated t test results, and some other subtleties, will be described in a paper, and/or in a future post. But I hope that enough has been said here to convince you that there are real problems in the sort of statistical tests that are universal in the literature. The blame for the crisis in reproducibility has several sources. One of them is the self-imposed publish-or-perish culture, which values quantity over quality, and which has done enormous harm to science. The mis-assessment of individuals by silly bibliometric methods has contributed to this harm. Of all the proposed methods, altmetrics is demonstrably the most idiotic. Yet some vice-chancellors have failed to understand that. Another is scientists' own vanity, which leads to the PR department issuing disgracefully hyped up press releases. In some cases, the abstract of a paper states that a discovery has been made when the data say the opposite. This sort of spin is common in the quack world. Yet referees and editors get taken in by the ruse (e.g see this study of acupuncture). The reluctance of many journals (and many authors) to publish negative results biases the whole literature in favour of positive results. This is so disastrous in clinical work that a pressure group has been started; altrials.net "All Trials Registered | All Results Reported". Yet another problem is that it has become very hard to get grants without putting your name on publications to which you have made little contribution. This leads to exploitation of young scientists by older ones (who fail to set a good example). Peter Lawrence has set out the problems. And, most pertinent to this post, a widespread failure to understand properly what a significance test means must contribute to the problem. Young scientists are under such intense pressure to publish, they have no time to learn about statistics. Here are some things that can be done. Notice that all statistical tests of significance assume that the treatments have been allocated at random. This means that application of significance tests to observational data, e.g. epidemiological surveys of diet and health, is not valid. You can't expect to get the right answer. The easiest way to understand this assumption is to think about randomisation tests (which should have replaced t tests decades ago, but which are still rare). There is a simple introduction in Lectures on Biostatistics (chapters 8 and 9). There are other assumptions too, about the distribution of observations, independence of measurements), but randomisation is the most important. Never, ever, use the word "significant" in a paper. It is arbitrary, and, as we have seen, deeply misleading. 
Still less should you use "almost significant", "tendency to significant" or any of the hundreds of similar circumlocutions listed by Matthew Hankins on his Still not Significant blog.

If you do a significance test, just state the P value and give the effect size and confidence intervals (but be aware that this is just another way of expressing the P value approach: it tells you nothing whatsoever about the false discovery rate).

Observation of a P value close to 0.05 means nothing more than 'worth another look'. In practice, one's attitude will depend on weighing the losses that ensue if you miss a real effect against the loss to your reputation if you claim falsely to have made a discovery.

If you want to avoid making a fool of yourself most of the time, don't regard anything bigger than P < 0.001 as a demonstration that you've discovered something. Or, slightly less stringently, use a three-sigma rule.

Despite the gigantic contributions that Ronald Fisher made to statistics, his work has been widely misinterpreted. We must, however reluctantly, concede that there is some truth in the comment made by an astute journalist: "The plain fact is that 70 years ago Ronald Fisher gave scientists a mathematical machine for turning baloney into breakthroughs, and flukes into funding. It is time to pull the plug." Robert Matthews, Sunday Telegraph, 13 September 1998.

There is now a video on YouTube that attempts to explain simply the essential ideas. The video has now been updated. The new version has better volume and it uses the term 'false positive risk', rather than the earlier term 'false discovery rate', to avoid confusion with the use of the latter term in the context of multiple comparisons.

The false positive risk: a proposal concerning what to do about p-values (version 2)

31 March 2014. I liked Stephen Senn's first comment on twitter (the twitter stream is storified here). He said "I may have to write a paper 'You may believe you are NOT a Bayesian but you're wrong'". I maintain that the analysis here is merely an exercise in conditional probabilities. It bears a formal similarity to a Bayesian argument, but is free of more contentious parts of the Bayesian approach. This is amplified in a comment, below.

I just noticed that my first boss, Heinz Otto Schild, in his 1942 paper about the statistical analysis of 2+2 dose biological assays (written while he was interned at the beginning of the war), chose to use 99% confidence limits, rather than the now universal 95% limits. The latter are more flattering to your results, but Schild was more concerned with precision than self-promotion.

Tagged Bayesian, false discovery rate, P values, significance, statistics | 29 Comments

On the hazards of significance testing. Part 1: the screening problem

This post is about why screening healthy people is generally a bad idea. It is the first in a series of posts on the hazards of statistics. There is nothing new about it: Graeme Archer recently wrote a similar piece in his Telegraph blog. But the problems are consistently ignored by people who suggest screening tests, and by journals that promote their work. It seems that it can't be said often enough. The reason is that most screening tests give a large number of false positives. If your test comes out positive, your chance of actually having the disease is almost always quite small. False positive tests cause alarm, and they may do real harm, when they lead to unnecessary surgery or other treatments.
Tests for Alzheimer's disease have been in the news a lot recently. They make a good example, if only because it's hard to see what good comes of being told early on that you might get Alzheimer's later when there are no good treatments that can help with that news. But worse still, the news you are given is usually wrong anyway.

Consider a recent paper that described a test for "mild cognitive impairment" (MCI), a condition that may be, but often isn't, a precursor of Alzheimer's disease. The 15-minute test was published in the Journal of Neuropsychiatry and Clinical Neurosciences by Scharre et al (2014). The test sounded pretty good. It had a specificity of 95% and a sensitivity of 80%. Specificity (95%) means that 95% of people who are healthy will get the correct diagnosis: the test will be negative. Sensitivity (80%) means that 80% of people who have MCI will get the correct diagnosis: the test will be positive.

To understand the implication of these numbers we need to know also the prevalence of MCI in the population that's being tested. That was estimated as 1% of people having MCI or, for over-60s only, 5%. Now the calculation is easy. Suppose 10,000 people are tested. 1% (100 people) will have MCI, of which 80% (80 people) will be diagnosed correctly. And 9,900 do not have MCI, of which 95% will test negative (correctly). The numbers can be laid out in a tree diagram. The total number of positive tests is 80 + 495 = 575, of which 495 are false positives. The fraction of positive tests that are false positives is 495/575 = 86%. Thus there is only a 14% chance that if you test positive, you actually have MCI: 86% of people will be alarmed unnecessarily. Even for people over 60, among whom 5% of the population have MCI, the test gives the wrong result (54%) more often than it gives the right result (46%). The test is clearly worse than useless. That was not made clear by the authors, or by the journal. It was not even made clear by NHS Choices. It should have been.

It's easy to put the tree diagram in the form of an equation. Denote sensitivity as sens, specificity as spec and prevalence as prev. The probability that a positive test means that you actually have the condition is given by \[\frac{sens \times prev}{sens \times prev + \left(1-spec\right)\left(1-prev\right) }\; \] In the example above, sens = 0.8, spec = 0.95 and prev = 0.01, so the fraction of positive tests that give the right result is \[\frac{0.8 \times 0.01}{0.8 \times 0.01 + \left(1 - 0.95 \right)\left(1 - 0.01\right) }\; = 0.139 \] So 13.9% of positive tests are right, and 86% are wrong, as found in the tree diagram.

The lipid test for Alzheimer's

Another Alzheimer's test has been in the headlines very recently. It performs even worse than the 15-minute test, but nobody seems to have noticed. It was published in Nature Medicine, by Mapstone et al. (2014). According to the paper, the sensitivity is 90% and the specificity is 90%, so, by constructing a tree, or by using the equation, the probability that you are ill, given that you test positive, is a mere 8% (for a prevalence of 1%). And even for over-60s (prevalence 5%), the value is only 32%, so two-thirds of positive tests are still wrong. Again this was not pointed out by the authors. Nor was it mentioned by Nature Medicine in its commentary on the paper. And once again, NHS Choices missed the point.
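The equation takes only a few lines of Python, if you want to check the numbers for both of these tests (the function name is mine; the sensitivities, specificities and prevalences are the ones quoted above). It reproduces the 14% and 46% for the 15-minute test, and the 8% and 32% for the lipid test.

```python
def prob_ill_given_positive(sens, spec, prev):
    """The equation above: the chance that a positive test is a true positive."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

# The 15-minute MCI test: sensitivity 80%, specificity 95%.
# The lipid test: sensitivity 90%, specificity 90%.
for name, sens, spec in (("15-minute test", 0.80, 0.95), ("lipid test", 0.90, 0.90)):
    for prev in (0.01, 0.05):
        p = prob_ill_given_positive(sens, spec, prev)
        print(f"{name}, prevalence {prev:.0%}: only {p:.0%} of positive tests are right")
```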
Why does there seem to be a conspiracy of silence about the deficiencies of screening tests? It has been explained very clearly by people like Margaret McCartney, who understand the problems very well. Is it that people are incapable of doing the calculations? Surely not. Is it that it's better for funding to pretend you've invented a good test, when you haven't? Do journals know that anything to do with Alzheimer's will get into the headlines, and don't want to pour cold water on a good story? Whatever the explanation, it's bad science that can harm people.

March 12 2014. This post was quickly picked up by the ampp3d blog, run by the Daily Mirror. Conrad Quilty-Harper showed some nice animations under the heading How a "90% accurate" Alzheimer's test can be wrong 92% of the time.

March 12 2014. As so often, the journal promoted the paper in a way that wasn't totally accurate. Hype is more important than accuracy, I guess.

June 12 2014. The empirical evidence shows that "general health checks" (a euphemism for mass screening of the healthy) simply don't help. See the review by Gøtzsche, Jørgensen & Krogsbøll (2014) in the BMJ. They conclude "Doctors should not offer general health checks to their patients, and governments should abstain from introducing health check programmes, as the Danish minister of health did when she learnt about the results of the Cochrane review and the Inter99 trial. Current programmes, like the one in the United Kingdom, should be abandoned."

Yet another over-hyped screening test for Alzheimer's in the media. And once again, the hype originated in the press release, from Kings College London this time. The press release quoted an "accuracy of 87 percent", but the term "accuracy" is not defined in the press release. And it isn't defined in the original paper either. I've written to the senior author, Simon Lovestone, to try to find out what it means. The original paper gives a sensitivity of 85% and a specificity of 88%. A simple calculation, as shown above, tells us that with sensitivity 85% and specificity 88%, the fraction of people who have a positive test who are diagnosed correctly is 44%. So 56% of positive results are false alarms. These numbers assume that the prevalence of the condition in the population being tested is 10%, a higher value than assumed in other studies. If the prevalence were only 5% the results would be still worse: 73% of positive tests would be wrong. Either way, that's not good enough to be useful as a diagnostic method.

In one of the other recent cases of Alzheimer's tests, six months ago, NHS Choices fell into the same trap. They changed it a bit after I pointed out the problem in the comments. They seem to have learned their lesson because their post on this study was titled "Blood test for Alzheimer's 'no better than coin toss'". That's based on the 56% of false alarms mentioned above. The reports on BBC News and other media totally missed the point. But, as so often, their misleading reports were based on a misleading press release. That means that the university, and ultimately the authors, are to blame. I do hope that the hype has no connection with the fact that the Conflicts of Interest section of the paper says "SL has patents filed jointly with Proteome Sciences plc related to these findings". What it doesn't mention is that, according to Google patents, Kings College London is also a patent holder, and so has a vested interest in promoting the product.

Is it really too much to expect that hard-pressed journalists might do a simple calculation, or phone someone who can do it for them? Until that happens, misleading reports will persist.
It was disappointing to see that the usually excellent Sarah Boseley in the Guardian didn't spot the problem either. And still more worrying that she quotes Dr James Pickett, head of research at the Alzheimer's Society, as saying "These 10 proteins can predict conversion to dementia with less than 90% accuracy, meaning one in 10 people would get an incorrect result." That number is quite wrong. It isn't 1 in 10, it's rather more than 1 in 2.

A resolution

After corresponding with the author, I now see what is going on more clearly. The word "accuracy" was not defined in the paper, but was used in the press release and widely cited in the media. What it means is the ratio of the total number of true results (true positives + true negatives) to the total number of all results. This doesn't seem to me to be a useful number to give at all, because it conflates false negatives and false positives into a single number. If a condition is rare, the number of true negatives will be large (as shown above), but this does not make it a good test. What matters most to patients is not accuracy, defined in this way, but the false discovery rate.

The author makes it clear that the results are not intended to be a screening test for Alzheimer's. It's obvious from what's been said that it would be a lousy test. Rather, the paper was intended to identify patients who would eventually (well, within only 18 months) get dementia. The denominator (always the key to statistical problems) in this case is the highly atypical patients who come to memory clinics in trials centres (the potential trials population). The prevalence in this very restricted population may indeed be higher than the 10 percent that I used above.

Reading between the lines of the press release, you might have been able to infer some of this (though not the meaning of "accuracy"). The fact that the media almost universally wrote up the story as a "breakthrough" in Alzheimer's detection is a consequence of the press release and of not reading the original paper. I wonder whether it is proper for press releases to be issued at all for papers like this, which address a narrow technical question (selection of patients for trials). That is not a topic of great public interest. It's asking for misinterpretation and that's what it got. I don't suppose that it escaped the attention of the PR people at Kings that anything that refers to dementia is front page news, whether it's of public interest or not. When we had an article in Nature in 2008, I remember long discussions about a press release with the arts graduate who wrote it (not at our request). In the end we decided that the topic was not of sufficient public interest to merit a press release and insisted that none was issued. Perhaps that's what should have happened in this case too.

This discussion has certainly illustrated the value of post-publication peer review. See, especially, the perceptive comments, below, from Humphrey Rang and from Dr Aston and from Dr Kline.

14 July 2014. Sense about Science asked me to write a guest blog to explain more fully the meaning of "accuracy", as used in the paper and press release. It's appeared on their site and will be reposted on this blog soon.

Tagged false positives, screening, statistics | 43 Comments

Some pharmacological history: an exam from 1959

Last year, I was sent my answer paper for one of my final exams, taken in 1959. This has triggered a bout of shamelessly autobiographical nostalgia.
The answer sheets that I wrote had been kept by one of my teachers at Leeds, Dr George Mogey. After he died in 2003, aged 86, his widow, Audrey, found them and sent them to me. And after a hunt through the junk piled high in my office, I found the exam papers from that year too. George Mogey was an excellent teacher and a kind man. He gave most of the lectures to medical students, which we, as pharmacy/pharmacology students attended. His lectures were inspirational. Photo from his daughter, Nora Mogey Today, 56 years on, I can still recall vividly his lecture on anti-malarial drugs. At the end he paused dramatically and said "Since I started speaking, 100 people have died from malaria" (I don't recall the exact number). He was the perfect antidote to people who say you learn nothing from lectures. Straight after the war (when he had seen the problem of malaria at first hand) he went to work at the Wellcome Research Labs in Beckenham, Kent. The first head of the Wellcome Lab was Henry Dale. It had a distinguished record of basic research as well as playing a crucial role in vaccine production and in development of the safe use of digitalis. In the 1930s it had an important role in the development of proper methods for biological standardisation. This was crucial for ensuring that, for example, each batch of tincture ot digitalis had the same potency (it has been described previously on this blog in Plants as Medicines. When George Mogey joined the Wellcome lab, its head was J.W. Trevan (1887 – 1956) (read his Biographical Memoir, written by J.H. Gaddum). Trevan's most memorable contributions were in improving the statistics of biological assays. The ideas of individual effective dose and median effective dose were developed by him. His 1927 paper The Error of Determination of Toxicity is a classic of pharmacology. His advocacy of the well-defined quantity, median effective dose as a replacement for the ill-defined minimum effective dose was influential in the development of proper statistical analysis of biological assays in the 1930s. Trevan is something of hero to me. And he was said to be very forgetful. Gaddum, in his biographical memoir, recounts this story "One day when he had lost something and suspected that it had been tidied away by his secretary, he went round muttering 'It's all due to this confounded tidiness. It always leads to trouble. I won't have it in my lab.' " Trevan coined the abbreviation LD50 for the median lethal dose of a drug. George Mogey later acquired the car number plate LD50, in honour of Trevan, and his widow, Audrey, still has it (picture on right). Mogey wrote several papers with Trevan. In 1948 he presented one at a meeting of the Physiological Society. The programme included also A.V. Hill. E.J Denton, Bernhard [sic] Katz, J.Z. Young and Richard Keynes (Keynes was George Henry Lewes Student at Cambridge: Lewes was the Victorian polymath with whom the novelist George Eliot lived, openly unmarried, and a founder of the Physiological Society. He probably inspired the medical content of Eliot's best known novel, Middlemarch). Mogey may not have written many papers, but he was the sort of inspiring teacher that universities need. He had a letter in Nature on Constituents of Amanita Muscaria, the fly agaric toadstool, which appeared in 1965. That might explain why we went on a toadstool-hunting field trip. 
Amanita muscaria DC picture, 2005 The tradition of interest in statistics and biological assay must have rubbed off on me, because the answers I gave in the exam were very much in that tradition. Here is a snippet (click to download the whole answer sheet). A later answer was about probit analysis, an idea introduced by statistician Chester Bliss (1899–1979) in 1934, as an direct extension of Trevan's work. (I met Bliss in 1970 or 1971 when I was in Yale -we had dinner, went to a theatre -then back to his apartment where he insisted on showing me his collection of erotic magazines!) This paper was a pharmacology paper in my first final exam at the end of my third year. The external examiner was Walter Perry, head of pharmacology in Edinburgh (he went on to found the Open University). He had previously been head of Biological Standards at the National Institute for Medical Research, a job in which he had to know some statistics. In the oral exam he asked me a killer question "What is the difference between confidence limits and fiducial limits?". I had no real idea (and, as I discovered later, neither did he). After that, I went on to do the 4th year where we specialised in pharmacology, and I spent quite a lot of time trying to answer that question. The result was my first ever paper, published in the University of Leeds Medical Journal. I hinted, obliquely, that the idea of fiducial inference was probably Ronald Fisher's only real mistake. I think that is the general view now, but Fisher was such a towering figure in statistics that nobody said that straight out (he was still alive when this was written -he died in 1962). It is well-worth looking at a paper that Fisher gave to the Royal Statistical Society in 1935, The Logic of Inductive Inference. Then, as now, it was the custom for a paper to be followed by a vote of thanks, and a seconder. These, and the subsequent discussion, are all printed, and they could be quite vicious in a polite way. Giving the vote of thanks, Professor A.L. Bowley said "It is not the custom, when the Council invites a member to propose a vote of thanks on a paper, to instruct him to bless it. If to some extent I play the inverse role of Balaam, it is not without precedent;" And the seconder, Dr Isserlis, said "There is no doubt in my mind at all about that, but Professor Fisher, like other fond parents, may perhaps see in his offspring qualities which to his mind no other children possess; others, however, may consider that the offspring are not unique." Post-publication peer review was already alive and well in 1935. I was helped enormously in writing this paper by Dr B.L.Welch (1911 – 1989), whose first year course in statistics for biologists was a compulsory part of the course. Welch was famous particularly for having extended Student's t distribution to the case where the variances in two samples being compared are unequal (Welch, 1947). He gave his whole lecture with his back to the class while writing what he said on a set of blackboards that occupied the whole side of the room. No doubt he would have failed any course about how to give a lecture. I found him riveting. He went slowly, and you could always check your notes because it was all there on the blackboards. Walter Perry seemed to like my attempt to answer his question, despite the fact that it failed. After the 4th year final (a single 3 hour essay on drugs that affect protein synthesis) he offered me a PhD place in Edinburgh. 
He was one of my supervisors, though I never saw him except when he dropped into the lab for a cigarette between committee meetings. While in Edinburgh I met the famous statistician. David Finney, whose definitive book on the Statistics of Biological Assay was an enormous help when I later wrote Lectures on Biostatistics and a great help in getting my first job at UCL in 1964. Heinz Otto Schild. then the famous head of department, had written a paper in 1942 about the statistical analysis of 2+2 dose biological assays, while interned at the beginning of the war. He wanted someone to teach it to students, so he gave me a job. That wouldn't happen now, because that sort of statistics would be considered too difficult Incidentally, I notice that Schild uses 99% confidence limits in his paper, not the usual 95% limits which make your results look better It was clear even then, that the basis of statistical inference was an exceedingly contentious matter among statisticians. It still is, but the matter has renewed importance in view of the crisis of reproducibility in science. The question still fascinates me, and I'm planning to update my first paper soon. This time I hope it will be a bit better. Postscript: some old pictures While in nostalgic mood, here are a few old pictures. First, the only picture I have from undergraduate days. It was taken on a visit to May and Baker (of sulphonamide fame) in February 1957 (so I must have been in my first year). There were 15 or so in the class for the first three years (now, you can get 15 in a tutorial group). I'm in the middle of the back row (with hair!). The only names that I recall are those of the other two who went into the 4th year with me, Ed Abbs (rightmost on back row) and Stella Gregory (2nd from right, front row). Ed died young and Stella went to Australia. Just in front of me are James Dare (with bow tie) and Mr Nelson (who taught old fashioned pharmacognosy). James Dare taught pharmaceutics, but he also had a considerable interest in statistics and we did lots of calculations with electromechanical calculators -the best of them was a Monroe (here's a picture of one with the case removed to show the amazingly intricate mechanism). Monroe 8N-213 from http://www.science.uva.nl/museum/calclist.php The history of UCL's pharmacology goes back to 1905. For most of that time, it's been a pretty good department. It got top scores in all the research assessments until it was abolished by Malcolm Grant in 2007. That act of vandalism is documented in my diary section. For most of its history, there was one professor who was head of the department. That tradition ended in 1983,when Humphrey Rang left for Novartis. The established chair was then empty for two years, until Donald Jenkinson, then head of department, insisted with characteristic modesty, that I rather than he should take the chair. Some time during the subsequent reign of David Brown, it was decided to name the chairs, and mine became the A.J. Clark chair. It was decided that the headship of the department would rotate, between Donald, David Brown and me. But when it came to my turn, I decided I was much too interested in single ion channels to spend time pushing paper, and David Brown nobly extended his term. The A.J. Clark chair was vacant after I 'retired' in 2004, but in 2014, Lucia Sivilotti was appointed to the chair, a worthy successor in its quantitative tradition. The first group picture of UCL's Pharmacology department was from 1972. 
Heinz Schild is in the middle of the front row, with Desmond Laurence on his left. Between them they dominated the textbook market: Schild edited A.J. Clark's Pharmacology (now known as Rang and Dale). Laurence wrote a very successful text, Clinical Pharmacology. Click on the picture for a bigger version, with names, as recalled by Donald Jenkinson (DHJ). I doubt whether many people now remember Ada Corbett (the tea lady) or Frank Ballhatchet from the mechanical workshop. He could do superb work, though the price was to spend 10 minutes chatting about his Land Rover, or listening to reminiscences of his time working on Thames barges. I still have a beautiful 8-way tap that he made, with a jerk-free indexing mechanism. The second departmental picture was taken in June 1980. Humphrey Rang was head of department then. My colleagues David Ogden and Steven Siegelbaum are there. In those days we had a tea lady too, Joyce Mancini. (Click pictures to enlarge)

Tagged B.L. Welch, George Mogey, H.O. Schild, inference, J.W. Trevan, pharmacology, statistics, UCL, University College London, University of Leeds | 4 Comments

How big is the risk from eating red meat now? An update.

A new study of the effects of eating red meat and processed meat got a lot of publicity. When I wrote about this in 2009, I concluded that the evidence for harm was pretty shaky. Is there reason to change my mind? The BBC's first report on 12 March was uncritical (though at least it did link to the original paper - a big improvement). On 16th March, Ruth Alexander did a lot better, after asking the renowned risk expert, David Spiegelhalter. You can hear him talk about it on Tim Harford's excellent More or Less programme. [Listen to the podcast]. David Spiegelhalter has already written an excellent article about the new work. Here's my perspective. We'll see how you can do the calculations yourself. The new paper was published in Archives of Internal Medicine [get reprint: pdf]. It looked at the association between red meat intake and mortality in two very large cohort studies, the Health Professionals Follow-up Study and the Nurses' Health Study.

How big are the risks from red meat?

First, it cannot be said too often that studies such as these observe a cohort of people and see what happens to those people who have chosen to eat red meat. If it were the high calorie intake rather than eating meat that causes the risk, then stopping eating meat won't help you in the slightest. The evidence for causality is reconsidered below. The new study reported that the relative risk of death from any cause was 1.13 for one 85 g serving of red meat per day, and 1.20 for processed meat. For death from cardiovascular disease the risks were slightly higher, 1.18 for red meat and 1.21 for processed meat. For dying from cancer the relative risks were a bit lower, 1.10 for red meat and 1.16 for processed meat. A relative risk of 1.13 means that if you eat 85 g of red meat every day, roughly a hamburger for lunch, your risk of dying in some specified period, e.g. during the next year, is 13 percent higher than that for a similar person who doesn't eat the hamburgers. Let's suppose, for the sake of argument, that the relationship is entirely causal. This is the worst case (or perhaps the best case, because there would be something you could do). How big are your risks? Here are several ways of looking at the risk of eating a hamburger every day (thanks to David Spiegelhalter for most of these).
Later we'll see how you can calculate results like these for yourself.

- If you eat a hamburger every day, your risk of dying, e.g. in the coming year, is 13 percent higher than for a similar person who doesn't eat hamburgers.
- In the UK there were around 50 cases of colorectal cancer per 100,000 population in 2008, so a 10% increase, even if it were real and genuinely causative, would result in 55 rather than 50 cases per 100,000 people, annually. But if we look at the whole population, there were 22,000 cases of colorectal cancer in the UK in 2009. A 10% increase would mean, if the association were causal, about 2200 extra cases per year as a result of eating hamburgers (but no extra cases if the association were not causal).
- Eating a hamburger a day shortens your life expectancy from 80 years to 79 years (sounds rather small).
- Eating a hamburger a day shortens your life by about half an hour a day, if you start at age 40 (sounds quite serious).
- The effect on life expectancy is similar to that of smoking 2 cigarettes a day or of being 5 kg overweight (sounds less serious).
- The chance that the hamburger eater will die before the similar non-hamburger eater is 53 percent (compared with 50 percent for two similar people) (sounds quite trivial).

Clearly it isn't easy to appreciate the size of the risk. Some ways of expressing the same thing sound much more serious than others. Only the first was given in the paper, and in the newspaper reports.

The results. Is there any improvement in the evidence for causality?

The risks reported in the new study are a bit lower than in the WCRF report (2007), which estimated a relative risk of dying from colorectal cancer as 1.21 (95% confidence interval 1.04–1.42) with 50 g of red or processed meat per day, whereas in the new study the relative risk for cancer was only 1.10 (1.06–1.14) for a larger 'dose', 85 g of red meat, or 1.16 (1.09–1.23) for processed meat. A 2010 update on the 2007 WCRF report also reported a similar, lower relative risk for colorectal cancer of 1.16 for 100 g/day of red or processed meat. This reduction in the size of the effect as samples get bigger is exactly what's expected for spurious correlations, as described by Ioannidis and others. One reason that I was so sceptical about causality in the earlier reports was that there was very little evidence of a relationship between the amount of meat eaten (the dose) and the risk of dying (the response), though the reports suggested otherwise. The new study does seem to show a shallow relationship between dose and response, the response being the relative risk (or hazard ratio) for dying (from any cause). The Figure shows the relationship in the Nurses' Health Study (it was similar for the other study). The dotted lines are 95% confidence limits (see here), and the lower limit seems to rule out a horizontal line, so the evidence for a relationship between dose and response is better than before. But that doesn't mean that there is necessarily a causal relationship. There are two important problems to consider. The legend to the figure says (omitting some details) that "The results were adjusted for age; body mass index; alcohol consumption, physical activity level, smoking status; race (white or nonwhite); menopausal status and hormone use in women, family history of diabetes mellitus, myocardial infarction, or cancer; history of diabetes mellitus, hypertension, or hypercholesterolemia; and intakes of total energy, whole grains, fruits, and vegetables."
So the data in the graph aren't raw observations but have been through a mathematical mill. The corrections are absolutely essential. For example, Table 1 in the paper shows that the 20% of people who eat the most red meat had almost twice the total calorie intake of those who eat the least red meat. So without a correction there would be no way to tell whether it was high calorie intake or eating red meat that was causing an increased risk of death. Likewise, those who eat more red meat also smoke more, drink more alcohol, are heavier (higher body mass index) and take less exercise. Clearly everything depends on the accuracy of the corrections for total calorie intake etc. The correction was done by a method called Cox proportional hazards regression. Like any other method, it makes assumptions, and there is no easy way to check on how accurate the corrections are. But it is known "that studies on colon cancer that adjusted for a larger number of covariates reported weaker risk ratios than studies which adjusted for a smaller number". The main problem is that there may be some other factor that has not been included in the corrections. Spiegelhalter points out "Maybe there's some other factor that both encourages Mike to eat more meat, and leads to a shorter life. It is quite plausible that income could be such a factor – lower income in the US is both associated with eating red meat and reduced life-expectancy, even allowing for measurable risk factors."

How to do the calculations of risk

Now we come to the nerdy bit. Calculations of risk of the sort listed above can be done for any relative risk, and here's how. The effect on life expectancy is the hardest thing to calculate. To do that you need the actuaries' life table. You can download the tables from the Office for National Statistics. The calculations were based on the "England and Wales, Interim Life Tables, 1980-82 to 2008-10". Click on the worksheet for 2008–10 (tabs at the bottom). There are two tables there, one for males, one for females. I copied the data for males into a new spreadsheet, which, unlike that from ONS, is live [download live spreadsheet]. There is an explanation of what each column means at the bottom of the worksheet, with a description of the calculation method. In the spreadsheet, the data pasted from the full table are on the left, and lower down we see that life expectancy, $ e_x $, from age 40 is 39.8 years. On the right is an extended life table which is live. You enter into cell H5 any relative risk (hazard ratio), and the table recalculates itself. Lower down, we see that life expectancy from age 40 with risk ratio 1.13 is 38.6 years. If you enter 1.00 in cell H5 (highlighted), the table on the right is the same as that on the left (there are some trivial differences because of the way that ONS does the calculations). The life expectancy of a 40 year old man is 39.8 years, so the average age of death is 79.8 years. If you enter 1.13 in cell H5, the relative risk of dying (from any cause) for a hamburger per day, the table is recalculated and the life expectancy for a 40 year old man falls to 38.6, so the mean age of death is 78.6 years (these numbers are rounded to 80 and 79 in the summary at the top of this page). The Excel sheet copies the relative risk that you enter in cell H5 into column O and uses the value in column O to multiply the risk of death in the next year, $ q_x $.
So, for example, with a hazard ratio of 1.13, the risk of dying between 40 and 41 is increased from $ q_{40} = 0.00162 $ to $ q_{40} = 0.00162 \times 1.13 = 0.00183 $, and similarly for each year. If you want the relative risk to vary from year to year, you can enter whatever values you want in column O. Loss of one year from your life expectancy when you are 40 implies loss of about half an hour per day of remaining life: (365 x 24 x 60)/(40 x 365) = 36 minutes per day. This is close to one microlife per day. A microlife is defined as 30 minutes of life expectancy. A man of 22 in the UK has 1,000,000 half-hours (57 years) ahead of him, the same as a 26 year-old woman. David Spiegelhalter explains that loss of one microlife per day is about the same risk as smoking two cigarettes per day. This is put into perspective when you realise that a single chest X-ray will cost you two microlives and a whole-body CT scan (which involves much larger exposure to X-rays) would cost you 180 microlives. The last way of expressing the risk is perhaps the most surprising. The chance that someone who has a hamburger for lunch every day will die before a similar non-hamburger eater is 53 percent (compared with 50 percent for two similar people). This calculation depends on a beautifully simple mathematical result. The result can be stated very simply, though its derivation (given by Spiegelhalter here, at the end) needs some maths. The probability that the hamburger eater will live longer than the non-hamburger eater is \[ \frac{1}{h+1}. \] When there is no extra risk, $ h = 1 $, and this is 0.5, a 50:50 chance of one person dying before the other. When the relative risk (hazard ratio) is $ h = 1.13 $ it is \[ \frac{1}{1.13+1} = 0.47, \] so there is a 100 – 47 = 53% chance that the hamburger eater dies first. Another way to put the same result is to say that when a hazard ratio, $ h $, is kept up throughout their lives, the odds that the hamburger eater dies before the non-hamburger eater are precisely $ h $. The odds of an event happening are defined as the ratio of the probability of it happening, $ p $, to the probability of it not happening, $ (1-p) $, i.e. \[ h = \frac{p}{1-p}. \] Rearranging this gives \[ p = \frac{h}{1+h}. \] When the risk ratio is $ h=1.13 $ this gives $ p = 0.53 $, as before. Based largely on the fact that the new study shows risks that are smaller than those in previous, smaller studies, it seems to me that the evidence for the reality of the association is somewhat weaker than before. Although the new study, unlike the earlier ones, shows signs of a relationship between the amount of red meat eaten and the risk of death, the confounding factors (total calories eaten, weight, smoking etc.) are so strong that the evidence for causality is critically dependent on the accuracy of the corrections for these factors, and even more dependent on there not being another factor that has not been included. It can't be said too often that if the association is not causal then refraining from eating red meat won't have the slightest benefit. If it were, for example, the high calorie intake rather than eating meat that causes the risk, then stopping eating meat won't help you in the slightest. Even if the increased risk was entirely caused by eating meat, the worst consequence of eating red meat every day is to have a 53% chance of dying before someone who doesn't eat much meat, rather than a 50% chance.
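For readers who would rather use code than a spreadsheet, here is a rough Python sketch of the same calculations. It is not the ONS method (the handling of deaths within each year is only approximate), the $q_x$ values have to be pasted in from the ONS life table, and the hazard ratio h plays the part of cell H5.

```python
# Rough sketch of the spreadsheet calculation described above (approximate, not the ONS method).
# qx: annual probabilities of death q_x from the ONS life table, starting at the age of interest.
# h:  the hazard ratio, i.e. the number you would type into cell H5.

def life_expectancy(qx, h=1.0):
    alive, years = 1.0, 0.0
    for q in qx:
        q = min(1.0, h * q)
        years += alive * (1 - q) + alive * q * 0.5   # those who die are counted as living half the year
        alive *= 1 - q
    return years

# Losing one year of life expectancy at age 40, spread over roughly 40 remaining years:
minutes_per_day = (365 * 24 * 60) / (40 * 365)       # = 36 minutes per day, about one microlife

# Chance that the person with hazard ratio h is the first of the pair to die:
h = 1.13
p_dies_first = h / (1 + h)                           # ≈ 0.53, i.e. 53% rather than 50%
```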
I won't become a vegetarian just yet (or, at least if I do it will be on ethical grounds, not because of health risk). Acknowledgment. I'm very grateful to Professor David Spiegelhalter for help and discussions about this post. Tagged bacon, David Spiegelhalter, ham, Life table, processed meat, red meat, risk, statistics, stochastic | 17 Comments Why philosophy is largely ignored by science I have in the past, taken an occasional interest in the philosophy of science. But in a lifetime doing science, I have hardly ever heard a scientist mention the subject. It is, on the whole, a subject that is of interest only to philosophers. It's true that some philosophers have had interesting things to say about the nature of inductive inference, but during the 20th century the real advances in that area came from statisticians, not from philosophers. So I long since decided that it would be more profitable to spend my time trying to understand R.A Fisher, rather than read even Karl Popper. It is harder work to do that, but it seemed the way to go. This post is based on the last part of chapter titled "In Praise of Randomisation. The importance of causality in medicine and its subversion by philosophers of science". A talk was given at the meeting at the British Academy in December 2007, and the book will be launched on November 28th 2011 (good job it wasn't essential for my CV with delays like that). The book is published by OUP for the British Academy, under the title Evidence, Inference and Enquiry (edited by Philip Dawid, William Twining, and Mimi Vasilaki, 504 pages, £85.00). The bulk of my contribution has already appeared here, in May 2009, under the heading Diet and health. What can you believe: or does bacon kill you?. It is one of the posts that has given me the most satisfaction, if only because Ben Goldacre seemed to like it, and he has done more than anyone to explain the critical importance of randomisation for assessing treatments and for assessing social interventions. Having long since decided that it was Fisher, rather than philosophers, who had the answers to my questions, why bother to write about philosophers at all? It was precipitated by joining the London Evidence Group. Through that group I became aware that there is a group of philosophers of science who could, if anyone took any notice of them, do real harm to research. It seems surprising that the value of randomisation should still be disputed at this stage, and of course it is not disputed by anybody in the business. It was thoroughly established after the start of small sample statistics at the beginning of the 20th century. Fisher's work on randomisation and the likelihood principle put inference on a firm footing by the mid-1930s. His popular book, The Design of Experiments made the importance of randomisation clear to a wide audience, partly via his famous example of the lady tasting tea. The development of randomisation tests made it transparently clear (perhaps I should do a blog post on their beauty). By the 1950s. the message got through to medicine, in large part through Austin Bradford Hill. Despite this, there is a body of philosophers who dispute it. And of course it is disputed by almost all practitioners of alternative medicine (because their treatments usually fail the tests). Here are some examples. "Why there's no cause to randomise" is the rather surprising title of a report by Worrall (2004; see also Worral, 2010), from the London School of Economics. 
The conclusion of this paper is "don't believe the bad press that 'observational studies' or 'historically controlled trials' get – so long as they are properly done (that is, serious thought has gone in to the possibility of alternative explanations of the outcome), then there is no reason to think of them as any less compelling than an RCT." In my view this conclusion is seriously, and dangerously, wrong –it ignores the enormous difficulty of getting evidence for causality in real life, and it ignores the fact that historically controlled trials have very often given misleading results in the past, as illustrated by the diet problem.. Worrall's fellow philosopher, Nancy Cartwright (Are RCTs the Gold Standard?, 2007), has made arguments that in some ways resemble those of Worrall. Many words are spent on defining causality but, at least in the clinical setting the meaning is perfectly simple. If the association between eating bacon and colorectal cancer is causal then if you stop eating bacon you'll reduce the risk of cancer. If the relationship is not causal then if you stop eating bacon it won't help at all. No amount of Worrall's "serious thought" will substitute for the real evidence for causality that can come only from an RCT: Worrall seems to claim that sufficient brain power can fill in missing bits of information. It can't. I'm reminded inexorably of the definition of "Clinical experience. Making the same mistakes with increasing confidence over an impressive number of years." In Michael O'Donnell's A Sceptic's Medical Dictionary. At the other philosophical extreme, there are still a few remnants of post-modernist rhetoric to be found in obscure corners of the literature. Two extreme examples are the papers by Holmes et al. and by Christine Barry. Apart from the fact that they weren't spoofs, both of these papers bear a close resemblance to Alan Sokal's famous spoof paper, Transgressing the boundaries: towards a transformative hermeneutics of quantum gravity (Sokal, 1996). The acceptance of this spoof by a journal, Social Text, and the subsequent book, Intellectual Impostures, by Sokal & Bricmont (Sokal & Bricmont, 1998), exposed the astonishing intellectual fraud if postmodernism (for those for whom it was not already obvious). A couple of quotations will serve to give a taste of the amazing material that can appear in peer-reviewed journals. Barry (2006) wrote "I wish to problematise the call from within biomedicine for more evidence of alternative medicine's effectiveness via the medium of the randomised clinical trial (RCT)." "Ethnographic research in alternative medicine is coming to be used politically as a challenge to the hegemony of a scientific biomedical construction of evidence." "The science of biomedicine was perceived as old fashioned and rejected in favour of the quantum and chaos theories of modern physics." "In this paper, I have deconstructed the powerful notion of evidence within biomedicine, . . ." The aim of this paper, in my view, is not obtain some subtle insight into the process of inference but to try to give some credibility to snake-oil salesmen who peddle quack cures. The latter at least make their unjustified claims in plain English. The similar paper by Holmes, Murray, Perron & Rail (Holmes et al., 2006) is even more bizarre. 
"Objective The philosophical work of Deleuze and Guattari proves to be useful in showing how health sciences are colonised (territorialised) by an all-encompassing scientific research paradigm "that of post-positivism " but also and foremost in showing the process by which a dominant ideology comes to exclude alternative forms of knowledge, therefore acting as a fascist structure. ", It uses the word fascism, or some derivative thereof, 26 times. And Holmes, Perron & Rail (Murray et al., 2007)) end a similar tirade with "We shall continue to transgress the diktats of State Science." It may be asked why it is even worth spending time on these remnants of the utterly discredited postmodernist movement. One reason is that rather less extreme examples of similar thinking still exist in some philosophical circles. Take, for example, the views expressed papers such as Miles, Polychronis and Grey (2006), Miles & Loughlin (2006), Miles, Loughlin & Polychronis (Miles et al., 2007) and Loughlin (2007).. These papers form part of the authors' campaign against evidence-based medicine, which they seem to regard as some sort of ideological crusade, or government conspiracy. Bizarrely they seem to think that evidence-based medicine has something in common with the managerial culture that has been the bane of not only medicine but of almost every occupation (and which is noted particularly for its disregard for evidence). Although couched in the sort of pretentious language favoured by postmodernists, in fact it ends up defending the most simple-minded forms of quackery. Unlike Barry (2006), they don't mention alternative medicine explicitly, but the agenda is clear from their attacks on Ben Goldacre. For example, Miles, Loughlin & Polychronis (Miles et al., 2007) say this. "Loughlin identifies Goldacre [2006] as a particularly luminous example of a commentator who is able not only to combine audacity with outrage, but who in a very real way succeeds in manufacturing a sense of having been personally offended by the article in question. Such moralistic posturing acts as a defence mechanism to protect cherished assumptions from rational scrutiny and indeed to enable adherents to appropriate the 'moral high ground', as well as the language of 'reason' and 'science' as the exclusive property of their own favoured approaches. Loughlin brings out the Orwellian nature of this manoeuvre and identifies a significant implication." If Goldacre and others really are engaged in posturing then their primary offence, at least according to the Sartrean perspective adopted by Murray et al. is not primarily intellectual, but rather it is moral. Far from there being a moral requirement to 'bend a knee' at the EBM altar, to do so is to violate one's primary duty as an autonomous being." This ferocious attack seems to have been triggered because Goldacre has explained in simple words what constitutes evidence and what doesn't. He has explained in a simple way how to do a proper randomised controlled trial of homeopathy. And he he dismantled a fraudulent Qlink pendant, purported to shield you from electromagnetic radiation but which turned out to have no functional components (Goldacre, 2007). This is described as being "Orwellian", a description that seems to me to be downright bizarre. In fact, when faced with real-life examples of what happens when you ignore evidence, those who write theoretical papers that are critical about evidence-based medicine may behave perfectly sensibly. 
Although Andrew Miles edits a journal, (Journal of Evaluation in Clinical Practice), that has been critical of EBM for years. Yet when faced with a course in alternative medicine run by people who can only be described as quacks, he rapidly shut down the course (A full account has appeared on this blog). It is hard to decide whether the language used in these papers is Marxist or neoconservative libertarian. Whatever it is, it clearly isn't science. It may seem odd that postmodernists (who believe nothing) end up as allies of quacks (who'll believe anything). The relationship has been explained with customary clarity by Alan Sokal, in his essay Pseudoscience and Postmodernism: Antagonists or Fellow-Travelers? (Sokal, 2006). Of course RCTs are not the only way to get knowledge. Often they have not been done, and sometimes it is hard to imagine how they could be done (though not nearly as often as some people would like to say). It is true that RCTs tell you only about an average effect in a large population. But the same is true of observational epidemiology. That limitation is nothing to do with randomisation, it is a result of the crude and inadequate way in which diseases are classified (as discussed above). It is also true that randomisation doesn't guarantee lack of bias in an individual case, but only in the long run. But it is the best that can be done. The fact remains that randomization is the only way to be sure of causality, and making mistakes about causality can harm patients, as it did in the case of HRT. Raymond Tallis (1999), in his review of Sokal & Bricmont, summed it up nicely "Academics intending to continue as postmodern theorists in the interdisciplinary humanities after S & B should first read Intellectual Impostures and ask themselves whether adding to the quantity of confusion and untruth in the world is a good use of the gift of life or an ethical way to earn a living. After S & B, they may feel less comfortable with the glamorous life that can be forged in the wake of the founding charlatans of postmodern Theory. Alternatively, they might follow my friend Roger into estate agency — though they should check out in advance that they are up to the moral rigours of such a profession." The conclusions that I have drawn were obvious to people in the business a half a century ago. (Doll & Peto, 1980) said "If we are to recognize those important yet moderate real advances in therapy which can save thousands of lives, then we need more large randomised trials than at present, not fewer. Until we have them treatment of future patients will continue to be determined by unreliable evidence." The towering figures are R.A. Fisher, and his followers who developed the ideas of randomisation and maximum likelihood estimation. In the medical area, Bradford Hill, Archie Cochrane, Iain Chalmers had the important ideas worked out a long time ago. In contrast, philosophers like Worral, Cartwright, Holmes, Barry, Loughlin and Polychronis seem to me to make no contribution to the accumulation of useful knowledge, and in some cases to hinder it. It's true that the harm they do is limited, but that is because they talk largely to each other. Very few working scientists are even aware of their existence. Perhaps that is just as well. Cartwright N (2007). Are RCTs the Gold Standard? Biosocieties (2007), 2: 11-20 Colquhoun, D (2010) University of Buckingham does the right thing. The Faculty of Integrated Medicine has been fired. http://www.dcscience.net/?p=2881 Miles A & Loughlin M (2006). 
Continuing the evidence-based health care debate in 2006. The progress and price of EBM. J Eval Clin Pract 12, 385-398.
Miles A, Loughlin M, & Polychronis A (2007). Medicine and evidence: knowledge and action in clinical practice. J Eval Clin Pract 13, 481-503.
Miles A, Polychronis A, & Grey JE (2006). The evidence-based health care debate – 2006. Where are we now? J Eval Clin Pract 12, 239-247.
Murray SJ, Holmes D, Perron A, & Rail G (2007). Deconstructing the evidence-based discourse in health sciences: truth, power and fascism. Int J Evid Based Healthc 4, 180-186.
Sokal AD (1996). Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity. Social Text 46/47, Science Wars, 217-252.
Sokal AD (2006). Pseudoscience and Postmodernism: Antagonists or Fellow-Travelers? In Archaeological Fantasies, ed. Fagan GG. Routledge, an imprint of Taylor & Francis Books Ltd.
Sokal AD & Bricmont J (1998). Intellectual Impostures. New edition, 2003, Economist Books ed. Profile Books.
Tallis R (1999). Sokal and Bricmont: Is this the beginning of the end of the dark ages in the humanities?
Worrall J (2004). Why There's No Cause to Randomize. Causality: Metaphysics and Methods. Technical Report 24/04.
Worrall J (2010). Evidence: philosophy of science meets medicine. J Eval Clin Pract 16, 356-362.

Iain Chalmers has drawn my attention to some really interesting papers in the James Lind Library. An account of early trials is given by Chalmers I, Dukan E, Podolsky S, Davey Smith G (2011), The adoption of unbiased treatment allocation schedules in clinical trials during the 19th and early 20th centuries. Fisher was not the first person to propose randomised trials, but he is the person who put them on a sound mathematical basis. Another fascinating paper is Chalmers I (2010), Why the 1948 MRC trial of streptomycin used treatment allocation based on random numbers. The distinguished statistician David Cox contributed Cox DR (2009), Randomization for concealment. Incidentally, if anyone still thinks there are ethical objections to random allocation, they should read the account of the retrolental fibroplasia outbreak in the 1950s: Silverman WA (2003), Personal reflections on lessons learned from randomized trials involving newborn infants, 1951 to 1967. Chalmers also pointed out that Antony Eagle of Exeter College Oxford had written about Goldacre's epistemology. He describes himself as a "formal epistemologist". I fear that his criticisms seem to me to be carping and trivial. Once again, a philosopher has failed to make a contribution to the progress of knowledge.

Tagged acupuncture, CAM, inference, philosophy, randomisation, randomization, RCT, Ronald Fisher, statistics | 30 Comments
Mensuration in 3D - Surface Area

A company packages its milk powder in a cylindrical container whose base has a diameter of $14\text{ cm}$ and height $20\text{ cm}$. The company places a label around the surface of the container (as shown in the figure). If the label is placed $2\text{ cm}$ from the top and bottom, what is the area of the label?

Hint: Since the label is also cylindrical in shape, its area equals the curved surface area of a cylinder, which is $2\pi rh$; the height and diameter are given in the question. We will calculate the radius using the relation $\text{Radius}=\dfrac{\text{Diameter}}{2}$ and substitute the values into the formula to obtain the desired result.

Complete step-by-step solution: We are given that a label is placed on the surface of a cylindrical container whose base has a diameter of $14\text{ cm}$ and height $20\text{ cm}$. The label is placed $2\text{ cm}$ from the top and bottom. We have to find the area of the label. Since the diameter of the container is $14\text{ cm}$ and $\text{Radius}=\dfrac{\text{Diameter}}{2}$, the radius is $\dfrac{14\text{ cm}}{2}=7\text{ cm}$. The height of the container is $20\text{ cm}$ and the label is placed $2\text{ cm}$ from the top and bottom, so the height of the label is $20-\left( 2+2 \right)=16\text{ cm}$. Now, the label is also cylindrical in shape, so the area of the label is equal to the curved surface area of the cylinder, which is given by $2\pi rh$. Substituting the values, we get
$\begin{align} \text{Area of label} &= 2\pi rh \\ &= 2\times \dfrac{22}{7}\times 7\times 16 \\ &= 44\times 16 \\ &= 704\text{ cm}^{2} \end{align}$
So, the area of the label is $704\text{ cm}^{2}$.

Note: The key concept in this question is the curved surface area of a cylinder. We consider only the curved surface because the label is placed only around the side of the container. When using the height of the label, we must subtract $4\text{ cm}$ from the height of the cylinder because the label is placed $2\text{ cm}$ from both the top and the bottom.
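As a quick sanity check, the same arithmetic can be done in a few lines of Python, using the same approximation $\pi \approx \dfrac{22}{7}$ as the worked solution.

```python
# Quick check of the worked solution: curved surface area of the label.
diameter, container_height, margin = 14, 20, 2   # all in cm
r = diameter / 2                                  # 7 cm
h = container_height - 2 * margin                 # 16 cm: label sits 2 cm from top and bottom
area = 2 * (22 / 7) * r * h                       # 2*pi*r*h with pi taken as 22/7
print(area)                                       # 704.0 (cm^2)
```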
arXiv:2007.02193 [physics.atom-ph]

Title: Lifetime-Limited Interrogation of Two Independent ${}^{27}\textrm{Al}^{+}$ Clocks Using Correlation Spectroscopy

Authors: E. R. Clements (1 and 2), M. E. Kim (1), K. Cui (1 and 3), A. M. Hankin (1 and 2), S. M. Brewer (1), J. Valencia (1 and 2), J.-S. Chen (1 and 2), C. W. Chou (1), D. R. Leibrandt (1 and 2), D. B. Hume (1) ((1) National Institute of Standards and Technology, Boulder, CO, (2) Department of Physics, University of Colorado, Boulder, CO, (3) HEP Division, Argonne National Laboratory, Lemont, IL)

[Submitted on 4 Jul 2020]

Abstract: Laser decoherence limits the stability of optical clocks by broadening the observable resonance linewidths and adding noise during the dead time between clock probes. Correlation spectroscopy avoids these limitations by measuring correlated atomic transitions between two ensembles, which provides a frequency difference measurement independent of laser noise. Here, we apply this technique to perform stability measurements between two independent clocks based on the $^1S_0\leftrightarrow{}^3P_0$ transition in $^{27}$Al$^+$. By stabilizing the dominant sources of differential phase noise between the two clocks, we observe coherence between them during synchronous Ramsey interrogations as long as 8 s at a frequency of $1.12\times10^{15}$ Hz. The observed contrast in the correlation spectroscopy signal is consistent with the 20.6 s $^3P_0$ state lifetime and supports a measurement instability of $(1.8\pm0.5)\times 10^{-16}/\sqrt{\tau/\textrm{s}}$ for averaging periods longer than the probe duration when deadtime is negligible.

Comments: 6 pages, 3 figures + supplemental material (7 pages, 4 figures, 1 table)
Subjects: Atomic Physics (physics.atom-ph)
Journal reference: Phys. Rev. Lett. 125, 243602 (2020)
DOI: https://doi.org/10.1103/PhysRevLett.125.243602
Submission history: From Ethan Clements, [v1] Sat, 4 Jul 2020 21:55:07 UTC (5,704 KB)
Histone modifications facilitate the coexpression of bidirectional promoters in rice Yuan Fang1, Lei Wang1, Ximeng Wang1, Qi You2, Xiucai Pan1, Jin Xiao1, Xiu-e Wang1, Yufeng Wu1, Zhen Su2 & Wenli Zhang1,3 Bidirectional gene pairs are highly abundant and mostly co-regulated in eukaryotic genomes. The structural features of bidirectional promoters (BDPs) have been well studied in yeast, humans and plants. However, the underlying mechanisms responsible for the coexpression of BDPs remain understudied, especially in plants. Here, we characterized chromatin features associated with rice BDPs. Several unique chromatin features were present in rice BDPs but were missing from unidirectional promoters (UDPs), including overrepresented active histone marks, canonical nucleosomes and underrepresented H3K27me3. In particular, overrepresented active marks (H3K4ac, H4K12ac, H4K16ac, H3K4me2 and H3K36me3) were truly overrepresented in type I BDPs but not in the other two BDPs, based on a Kolmogorov-Smirnov test. Our analyses indicate that active marks (H3K4ac, H4K12ac, H4K16ac, H3K4me3, H3K9ac and H3K27ac) may coordinate with repressive marks (H3K27me3 and H3K9me1/3) to build a unique chromatin structure that favors the coregulation of bidirectional gene pairs. Thus, our findings help to enhance the understanding of unique epigenetic mechanisms that regulate bidirectional gene pairs and may improve the manipulation of gene pairs for crop bioengineering. Bidirectional promoters (BDPs) regulate the bidirectional transcription of protein-coding gene pairs with head-to-head orientation, which means that the transcription of each gene occurs on a different DNA strand and in opposite directions. These promoters have been well characterized in the eukaryotic genomes of yeast [1, 2], Drosophila [3], humans [4, 5] and some plants [6, 7]. Investigations of BDPs in yeast and humans have shown that BDPs possess unique features compared to unidirectional promoters (UDPs). The sequences of BDPs have higher GC contents and fewer TATA boxes than those of UDPs [4, 5, 8]. The presence of overrepresented motifs, such as GABPA and YY1, has already been recognized as a characteristic of human BDPs [9–11]. Compared to UDPs, human BDPs have more epigenetic marks and chromatin related features, including RNA PolII binding sites, acetylation at H3, H3K9 and H3K27 and methylation at H3K4me2/3 [9, 12]. By contrast, H4 acetylation is underrepresented in human BDPs [11]. The majority of bidirectional gene pair products function in the same cellular pathway, and their involvement has been implicated in diverse processes, including DNA repair, the cell cycle, housekeeping, various metabolic pathways and human diseases [4, 10, 13–19]. Although the coexpression of bidirectional gene pairs is common in eukaryotic genomes [5, 20–23], the detailed underlying mechanisms that regulate coexpression are not well characterized. Thus, uncovering the unique regulatory mechanisms associated with BDPs will provide new insights for understanding eukaryotic gene regulation, especially co-regulation. Progress has been made in characterizing plant BDPs in Arabidopsis [6, 24, 25], rice [6], maize [7] and Populus [6] due to the recent availability of whole plant genome sequences and transcriptome data. Similar to BDPs in yeast and humans, plant BDPs have higher GC contents and fewer TATA boxes than UDPs [6, 20, 24, 26]. Moreover, plant BDPs are involved in the regulation of important agricultural traits [27–31]. 
However, information on the chromatin related features of plant BDPs is still lacking. In this study, we continued to perform a comprehensive analysis of chromatin-based epigenetic features in rice BDPs. BDPs were classified into three types (I, II and III with sizes of 0–250 bp, 250–500 bp and 500–1000 bp, respectively) as described previously [32]. The BDP size was defined as the intergenic distance between the transcription start sites (TSSs) of the corresponding gene pairs. We observed that type I BDPs (BDPs I) showed the highest percentage and strongest level of coexpression, which was in agreement with the highest level of coexpression from gene pairs with 200 bp separating their TSSs. We also found several unique chromatin features present in rice BDPs that are not found in UDPs, including the overrepresentation of active histone marks, canonical nucleosomes and the underrepresentation of H3K27me3. Strikingly, we found that overrepresented H3K4ac,H4K12ac, H4K16ac, H3K9ac and H3K27ac marks may play a significant role in the regulation of coexpressed gene pairs, indicating that histone acetylation functions in the co-regulation of gene pairs. Thus, our findings help to enhance the understanding of a unique epigenetic mechanism used in the regulation of BDPs, which could be used to improve the manipulation of gene pairs in crop bioengineering. DNA sequence features of rice bidirectional promoters To comprehensively characterize the DNA sequence profiles of the BDPs in rice, we first identified bidirectional gene pairs with head-to-head orientations using the updated version of the rice genome (The Institute for Genomic Research (TIGR), rice subsp.Japonica version 7.0) as described previously [32], which contains a total of 55,801 annotated genes. We identified a total of 290 type I BDPs, 294 type II BDPs (BDPs II) and 627 type III BDPs (BDPs III), with TSS intergenic distances of 0–250 bp (BDPs I), 250–500 bp (BDPs II) and 500–1000 bp (BDPs III), respectively. Our results were similar to the previously reported number of rice BDPs [24]. We then calculated the GC contents and observed approximately 54 %, 50 % and 45 % GC contents in types I, II and III BDPs, respectively (Additional file 1: Table S1). The GC contents in BDPs was significantly higher than from randomly selected UDPs (Fig. 1).This result confirmed the presence of GC-enriched sequences in eukaryotic BDPs [5, 9, 24]. In addition, the TATA box content was analyzed using the PLACE database [33]; TATA boxes were found in approximately 18 %, 52 % and 82 % of type I, II and III BDPs, respectively (Additional file 1: Table S1). The ratio of genes containing TATA boxes in type I BDPs was 30 % less compared to randomly selected type I UDPs (random I) (Additional file 1: Table S1). In general, our analysis showed that GC content is inversely related to BDP size. By contrast, TATA content is positively associated with BDP size. Our results are the first to demonstrate that many rice type I BDPs are GC-rich sequences lacking TATA boxes. After comparing the expression of gene pairs among the three BDP types, we found that type I BDPs had the highest expression level; whereas type III BDPs had the lowest expression level. In addition, we observed that the expression of one of the gene pairs was significantly higher (higher FPKM, p < 0.01)than its counterpart (lower FPKM, p < 0.01) (data not shown). This result indicated that the GC or TATA content may affect the expression level of the corresponding genes. 
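As an illustration of the kind of sequence summary reported here, the short sketch below computes GC content and checks for a simple TATA-box pattern in promoter sequences. The sequences and the simplified 'TATAWA' pattern are placeholders, not the PLACE-database definitions actually used in this study.

```python
# Illustrative sketch only: GC content and a naive TATA-box check for promoter sequences.
# The study itself used annotated TSS coordinates and PLACE/PlantCARE motif definitions;
# the sequences and the simplified TATA pattern below are placeholders.
import re

def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_tata_box(seq, pattern=r"TATA[AT]A"):   # simplified consensus (TATAWA), an assumption
    return re.search(pattern, seq.upper()) is not None

promoters = {
    "example_type_I_BDP": "GCGCGGCCGCATGCGCCGGCCC",   # placeholder sequence
    "example_UDP":        "ATTATAAATTGCATATTAATCG",   # placeholder sequence
}
for name, seq in promoters.items():
    print(name, f"GC = {gc_content(seq):.0%}", "TATA box" if has_tata_box(seq) else "no TATA box")
```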
Comparison of GC contents between BDPs and randomly selected UDPs. Type I, II and III BDPs: bidirectional promoters with intergenic sizes ranging from 0 to 250 bp; from 250 to 500 bp and from 500 to 1000 bp, respectively. R I, RII and RIII: randomly selected unidirectional promoters (UDPs) with sizes as 250 bp, 500 bp and 1000 bp starting from upstream of TSS of the downstream genes, respectively, were used as controls for type I,II and III BDPs, respectively. Statistical analysis was conducted with a two-sample K-S test Overrepresented motifs in rice BDPs involve in stress responses To determine the occurrence of conserved motifs within rice BDPs, which are potential binding sites for trans-factors involved in the regulation of bidirectional gene expression, we first classified BDPs into constitutive and tissue-specific categories according to the expression profiles of the bidirectional gene pairs in three rice tissues under normal conditions (leaf, callus and root) (Additional file 2: Table S2). We then identified the presence of overrepresented motifs with p-value cut-off of 0.05 using the PLACE and PlantCare databases [34]. When 1000 randomly selected UDPs were used as a control (Additional file 3: Table S3), we identified three overrepresented constitutive motifs (SORLIP2AT (GGGCC), SITEIIATCYTC (TGGGCY) and UP1ATMSD(GGCCCAWWW) in BDPs from the three rice tissues tested(Additional file 4: Table S4). This result was similar to previously reported findings [6]. These motifs are possibly involved in regulating phyA-responsive transcripts, the expression of PCNA (proliferating cell nuclear antigen) genes and the regulation of genes in auxiliary buds. In addition, we observed that TBF1HSF (GAAGAAGAA) was overrepresented in leaf tissue, whereas the ACGTABREMOTIFA2OSEM (ACGTGKC) and BOXIIPCCHS (ACGTGGC) motifs were dominant in callus tissue (Additional file 4: Table S4). The TBF1HSF motif (GAAGAAGAA) is associated with the expression of genes related to diverse defense responsive [35] and the regulation of thermo-tolerance in Arabidopsis [36]. The ACGTABREMOTIFA2OSEM motif (ACGTGKC) has been implicated in the regulation of genes associated with different metabolic pathways during drought stress in soybean [37], and the regulation of genes associated with ABA-responsive in Arabidopsis [38]. To investigate whether BDP-related gene pairs are involved in stress responses in rice, we analyzed differentially expressed bidirectional gene pairs under drought stress using publicly available RNA-seq data (GSE65022). When compared to control genes, 62 up-regulated gene pairs and 70 down-regulated gene pairs with fold change greater than 2 were identified under drought stress. We then identified 14 overrepresented motifs in the promoter regions of gene pairs that were both up-and down-regulated during drought stress (Additional file 5: Table S5). However, when compared with non-drought inducible BDPs (Additional file 3: Table S3), we found that 8 motifs (highlighted in red) were present in both drought-inducible and non-drought-inducible BDPs. Only the six remaining motifs were truly related to stress response, indicating that the gene pairs with promoters contain these motifs play diverse roles in plant development and stress responses. 
In addition, some of the well-characterized motifs involved in plant stress responses, such as ACGTABREMOTIFA2OSEM (ACGTGKC) [37], CACGTGMOTIF (CACGTG) [39], CAMTA1(CCGCGT) [40] and ABRERATCAL(MACGYGB) [41, 42] (Additional file 5: Table S5) were overrepresented in the promoters of both drought-inducible rice gene pairs and unidirectional genes that were upregulated under drought stress (Additional file 3: Table S3), This result was consistent with prior reports of overrepresented motifs in humans and plant BDPs compared to UDPs [5, 9, 24]. The presence of tissue-specific overrepresented motifs may play an important role in regulating plant development and stress-responses. The binding of various trans-factors to these motifs may be a unique mechanism responsible for the constitutive, tissue-specific and stress responsive expression of bidirectional genes. Coexpression of rice bidirectional gene pairs Bidirectional gene pairs in animals and Arabidopsis are usually highly coexpressed [20, 43]. However, the effect of the intergenic distance between the TSSs of a gene pair on the coexpression of the corresponding genes was unclear. We calculated the Pearson correlation coefficients for all bidirectional gene pairs using eleven total gene expression datasets extracted from the Rice Genome Annotation Project (http://rice.plantbiology.msu.edu/expression.shtml) as described previously [32]. We observed that the median coexpression values of bidirectional gene pairs were significantly higher than those of randomly selected two adjacent unidirectional genes (Fig. 2a). This result suggested that bidirectional gene pairs driven by BDPs tend to be more coexpressed than randomly selected two adjacent unidirectional genes. Based on the strength of the correlation (Pearson correlation coefficient) between the expression levels of the gene pairs, we divided the expression mode for bidirectional gene pairs into four categories: coexpression, anti-expression, independent expression and no expression (Additional file 6: Table S6). The coexpression rather than anti-expression were significantly different between BDPs and UDPs (Fig. 2b and c). In addition, the percentage of coexpressed gene pairs decreased with increasing BDP size (Additional file 6: Table S6). The highest frequency of coexpression was previously found in gene pairs separated by 200 bp [32], here we further observed that gene pairs were generally more frequently coexpressed when the intergenic distance was less than 500 bp. (Additional file 7: Figure S1). The high frequency of coexpression from BDPs with a 200 bp intergenic distance between the TSSs of each gene may be explained by the 200 bp spacing of nucleosomes; a similar finding was previously reported in Arabidopsis [20]. By contrast, no significant difference was observed between BDPs with more than 700 bp of TSS intergenic space and UDPs. We speculated that 200 bp is probably the optimal space for sharing regulatory elements and recruiting transcriptional machinery to enhance the coexpression of bidirectional gene pairs. Coexpression analysis of bidirectional gene pairs. a. Comparison of expression correlation between gene pairs from type I, II and III BDPs, and randomly selected unidirectional genes. The Pearson correlation coefficients were calculated from all gene pairs using the absolute expression values. A statistical analysis was performed using a two-sample K-S test, where ** p < 0.001. b. 
Comparison of coexpression correlation between coexpressed gene pairs and randomly selected unidirectional genes. The expression mode of each gene pair was classified into two categories, coexpression or anti-expression, based on the Pearson correlation coefficients. A positive Pearson correlation coefficient indicated coexpression (Fig. 2b), and a negative Pearson correlation coefficient indicated anti-expression (Fig. 2c). All gene pairs with positive Pearson correlation coefficients were selected for analysis. Significant differences were determined using a two-sample K-S test, where ** p < 0.001. c. Comparison of the expression correlations between anti-expressed gene pairs and randomly selected unidirectional genes. All gene pairs with negative Pearson correlation coefficients were selected for analysis. A statistical analysis was performed using a two-sample K-S test, where ** p < 0.001.
Taken together, the above analyses indicated that bidirectional gene pairs, especially those in type I BDPs, are highly coexpressed in rice. However, the underlying mechanisms need to be further investigated.
Overrepresented histone marks associated with rice BDPs
Histone modifications play fundamental roles in controlling the chromatin-based regulation of gene expression in eukaryotic genomes. To profile the histone marks around BDPs, we performed chromatin immunoprecipitation (ChIP) followed by high-throughput sequencing (ChIP-seq) for six histone marks as described previously [32], which included three active marks (H3K27ac, H3K4ac and H3K9ac) and three repressive marks (H3K9me1, H3K9me3 and H3K27me3). In addition, we also included six active marks (H4K12ac, H3K4me3, H3K36me3, H3K4me2, H4K16ac and H3K23ac) previously characterized in rice [44, 45]. We selected rice unidirectional genes with comparable expression levels (FPKM values) as controls. We observed that profiling of all marks was possible regardless of the number of control genes used, because the distribution of each mark was similar when between one (1×) and five (5×) times the number of bidirectional genes was analyzed (Additional file 8: Table S7). Thus, we decided to use 1× control genes for the following analysis. To confirm the accuracy of the ChIP-seq analysis, a qPCR assay was performed following a ChIP experiment using antibodies against H3K27ac and H4K12ac. In general, we found that the ChIP-qPCR enrichment (% of input) for an individual BDP locus was consistent with the ChIP-seq result for that locus (normalized read counts) (Additional file 9: Table S8). We then plotted the normalized reads across bidirectional gene pairs. Strikingly, we observed that the peak levels of each active mark (acetylation at H4K12, H3K27, H3K4 and H3K9; methylation at H3K4 and H3K36) were higher in type I BDPs than in UDPs (Fig. 3a and c). A similar trend was observed for type II and type III BDPs compared to the corresponding UDPs (Additional file 10: Figure S2a and c; Additional file 11: Figure S3a and c), but the marks were more enriched in the genes of type I BDPs than in those of type II and III BDPs. This result, which demonstrated that active marks are more enriched in rice BDPs compared to UDPs, is similar to findings in humans [9, 12]. Although the occupancy of repressive marks (methylation at H3K9 and H3K27) was 10-fold lower than that of active marks, the amplitude of the oscillating peaks for H3K27me3 in type I BDPs was lower than in UDPs; this finding is contrary to the H3K9me3 enrichment observed in type I BDPs as compared to UDPs (Fig. 3e).
Similarly, when compared to UDPs, less H3K27me3 and more H3K9me1/3 were also observed in type II and III BDPs (Additional file 10: Figure S2e; Additional file 11: Figure S3e).
Profiling of histone marks across type I BDPs and UDP controls with the same gene number and expression level as the bidirectional gene pairs. Unidirectional genes with higher and lower FPKM values were aligned on the right and left side, respectively (Fig. 3b, d and f). Bidirectional gene pairs with higher and lower FPKM values were aligned on the right and left sides of the BDPs, respectively (Fig. 3a, c and e). Normalized read counts indicating the enrichment of each mark were calculated as the number of reads per bp of genomic region per million reads. X-axes show the relative distances from the BDPs (bp) in Fig. 3a, c and e and the positions relative to the TSSs in Fig. 3b, d and f; Y-axes show normalized read counts (read number per bp of genome per million reads) within 1 kb upstream and downstream of the TSS. a. Profiles of active marks: H4K12ac, H3K27ac, H3K4ac and H3K9ac in type I BDPs (Fig. 3a) and UDPs (Fig. 3b). b. Profiles of active marks: H3K4me2, H3K4me3 and H3K36me3 in type I BDPs (Fig. 3c) and UDPs (Fig. 3d). c. Profiles of repressive marks: H3K9me1, H3K9me3 and H3K27me3 in type I BDPs (Fig. 3e) and UDPs (Fig. 3f).
To confirm that all of the histone marks analyzed were truly overrepresented in BDPs, we performed a K-S test on the normalized read counts of all histone marks distributed in the gene bodies of BDPs and UDPs (Additional file 12: Table S9). We found that significant changes in occupancy were detected only for H4K12ac, H4K16ac, H3K4ac, H3K4me2, H3K36me3 and canonical nucleosomes. Intriguingly, histone marks were mainly overrepresented in type I BDPs rather than in the other two types of BDPs. The K-S test result demonstrated that these five marks are truly overrepresented in type I BDPs compared to UDPs and to the other two types of BDPs. In summary, the above analyses demonstrated that BDPs have characteristic chromatin features, especially in histone modifications, which may build a unique chromatin structure that affects the transcription of gene pairs.
Histone marks associated with coexpression of bidirectional gene pairs
For all of the analyzed chromatin features distributed around BDPs (Additional file 8: Table S7 and Additional file 12: Table S9), we observed the largest significant enrichment of histone marks in type I BDPs rather than in the other two types, which is consistent with the presence of more coexpressed gene pairs in type I BDPs. We suspected that some of the marks were responsible for the coexpression of bidirectional gene pairs. To test this hypothesis, we profiled all active marks between coexpressed and anti-expressed gene pairs. Interestingly, we observed a similar histone mark profile between coexpressed and anti-expressed genes with higher FPKM values. The occupancy of marks, however, was higher in coexpressed genes with lower FPKM values compared to their anti-expressed counterparts (Fig. 4). In addition, the K-S test on gene bodies indicated a significant difference in the occupancy of all marks between the higher-FPKM and lower-FPKM genes of anti-expressed gene pairs (Additional file 13: Table S10), suggesting that the presence of those marks is closely associated with the level of gene expression.
By contrast, only H3K4me2, H3K23ac, H3K36me3 and nucleosome occupancy were significantly different between the higher-FPKM and lower-FPKM genes of coexpressed gene pairs (Additional file 13: Table S10). However, no significant difference in the other marks, including six active marks (H3K4ac, H4K12ac, H4K16ac, H3K9ac, H3K4me3 and H3K27ac) and three repressive marks (H3K27me3, H3K9me1 and H3K9me3), was observed between coexpressed gene pairs with higher FPKM values and lower FPKM values. In addition, we also performed a significance test for the association of the 12 histone marks plus nucleosome occupancy with coexpression in type I BDPs (data not shown); 12 of them (all except the H3K36me3 mark) were related to the coexpression of type I BDPs, indicating that the correlation between histone marks and coexpression was higher in type I BDPs than in the full set of BDPs tested. This is consistent with the highest percentage of coexpressed gene pairs (50 %) being detected in type I BDPs. This analysis demonstrated that, in contrast to unidirectional genes and anti-expressed gene pairs, six active and three repressive histone marks in coexpressed gene pairs were not related to gene expression, indicating that these nine marks may coordinate to create unique chromatin features responsible for the coexpression of bidirectional gene pairs. Thus, the significance tests showed that some of the marks are associated with gene expression level, whereas others are possibly responsible for the coexpression of gene pairs.
Profiling of histone marks and nucleosome occupancy between coexpressed and anti-expressed bidirectional gene pairs. Either coexpressed or anti-expressed bidirectional gene pairs with higher and lower FPKM values were aligned on the right and left sides of BDPs, respectively. a: H3K4me2; b: H3K4me3; c: H3K36me3; d: H3K4ac; e: H3K9ac; f: H4K12ac; g: H3K27ac; h: nucleosome occupancy. Normalized read counts indicating the enrichment of each mark were calculated as the number of reads per bp of genomic region per million reads. X-axes show the relative distances from the BDP (bp); Y-axes show normalized read counts (read number per bp of genome per million reads) within 1 kb upstream and downstream of the TSS.
Nucleosome positioning and occupancy associated with BDPs
Nucleosome positioning and occupancy can modulate the regulation of gene expression in eukaryotes by either favoring or disfavoring the accessibility of the underlying DNA elements to trans-factors [46, 47]. To examine nucleosome positioning around BDPs, we performed an analysis similar to that used for the histone marks, profiling nucleosome positioning across bidirectional gene pairs. As expected, the nucleosome profiles exhibited prominently lower occupancy within each type of BDP than in the flanking nucleosomal regions. Each BDP was immediately flanked by an array of regularly spaced, well-positioned nucleosomes with progressively elevated phasing from the TSS into the gene body (Fig. 5). A similar trend was observed in unidirectional genes, but nucleosome occupancy around BDPs was significantly higher compared to UDPs (Fig. 5; Additional file 14: Figure S4). By calculating the highest amplitudes of the phased nucleosomes (Additional file 8: Table S7), we found that nucleosome occupancy increased by approximately 33 % and 27 % for genes in type I BDPs with higher FPKM and lower FPKM values, respectively, as compared to randomly selected unidirectional genes.
Some change was also observed in type II and III BDP genes with higher FPKM values, but no change was seen in genes with lower FPKM values (Additional file 8: Table S7). However, the K-S test indicated that the change in nucleosomal occupancy between bidirectional gene pairs and unidirectional genes was only significant for type I BDPs (Additional file 12: Table S9).
Profile of nucleosome positioning around each type of BDP. Profiles of nucleosome positioning are shown around type I (blue line), type II (red line) and type III (green line) BDPs, extending ±1 kb from each BDP. Bidirectional gene pairs with higher and lower FPKM values are aligned on the right and left sides of BDPs, respectively. Normalized MNase-seq read counts representing the nucleosome positions were calculated as the number of reads per bp of genomic region per million reads. X-axes show the relative distances from the BDPs (bp); Y-axes show normalized MNase-seq read counts (read number per bp of genome per million reads) within ±1 kb of the TSS. Paired-end MNase-seq reads were normalized and used for nucleosome positioning profiling. The bottom diagram indicates the direction of the different expression levels for each gene pair: the gene with higher expression (higher FPKM values) is located on the right side and the gene with lower expression (lower FPKM values) is located on the left side.
In summary, the nucleosome positioning status around BDPs is similar to that flanking UDPs in animals and other plants [48–50]. Distinct nucleosome positioning symmetrically flanks type I BDPs compared to the other two types of BDPs, which is possibly associated with the coexpression of BDP genes. Growing evidence has demonstrated that genes with similar expression levels tend to be physically close to each other and are typically coexpressed within various eukaryotic genomes, including yeast [51], humans [52, 53], Drosophila [54, 55], nematode [56, 57], mouse [58] and Arabidopsis [21, 59, 60]. However, the underlying mechanisms responsible for the coexpression of gene pairs have remained unclear. Thus, uncovering the regulatory mechanisms associated with BDPs will provide new insights into eukaryotic gene regulation, especially the expression of gene pairs.
Chromatin structures and bidirectional transcription of BDPs
Divergent transcription has been considered an intrinsic property of many promoters in yeast and mammals [1, 2, 61–64]. Accumulating evidence from mammals indicates that divergent transcription is a consequence of combined genetic and epigenetic actions, mainly including inherent promoter DNA sequences and chromatin-related changes [64–67]. BDPs are a good system for elucidating the chromatin-based regulatory mechanisms that control the bidirectional transcription of gene pairs in eukaryotes. Human BDPs contain unique features that UDPs lack, including overrepresented DNA motifs [9, 10], overrepresented active histone marks [9], and differences in the distribution of histone marks and in the functions of CTCF and cohesins between BDPs and UDPs [12]. However, the chromatin-based mechanisms underlying BDPs, especially in plants, have not been well studied. By integrating ChIP-seq and MNase-seq datasets, we provided the first comprehensive characterization of chromatin features in rice BDPs.
Our results demonstrate that rice BDPs have typical chromatin features associated with active promoters: low occupancy of canonical nucleosomes within BDPs, well-positioned +1/−1 canonical nucleosomes and enrichment of active histone marks. This finding indicated that some of these marks may be involved in bidirectional initiation and elongation. In human divergent promoters, which are flanked by a coding gene and by non-coding DNA sequences, no difference was observed in the distribution of transcription initiation-related active marks (acetylation at H3/H4) or in +1/−1 nucleosome positioning between the upstream non-coding sequences and the downstream coding gene. However, elongation-related marks (H3K79me2, H3K36me3 and H2Bub) were present only in the downstream gene bodies and not in the upstream non-coding sequences, suggesting that transcription elongation is a key determinant of the final fate of transcriptional direction from divergent promoters [68–70].
Chromatin structure and coexpression of bidirectional gene pairs
Nucleosome-free regions or low nucleosome occupancy have been reported in plant unidirectional promoter regions. The majority of plant promoter regions are associated with DNase I hypersensitive sites (DHSs), which are sensitive to cleavage by DNase I or other nucleases [44, 71–73]. An anti-correlation between nucleosome occupancy within promoters and gene expression [74], but a correlation between nucleosome density around TSSs/gene bodies and gene expression, has been detected in plant unidirectional promoters [72, 75]. Similarly, BDPs displayed a comparable nucleosome positioning pattern within promoter regions and a comparable relationship between the distribution of nucleosomes/active marks and bidirectional gene expression. In addition, nucleosome occupancy within or outside UDPs/BDPs is directly associated with the level of active histone marks in the corresponding region. Thus, UDPs and BDPs generally share similar chromatin structural features in regulating the expression of the corresponding genes. However, the relationship between chromatin structural features and the coexpression of plant bidirectional gene pairs remains unclear. A possible correlation between nucleosome occupancy and coexpression rates has been observed in humans and yeast [76–78]. Additionally, the possible role of chromatin modifications in coexpression has been explored in Drosophila [79], humans [53, 80–82] and yeast [83, 84]. However, direct evidence of the effects of histone marks on coexpression, especially in plants, is still missing. Compared to UDPs, we observed that all active marks tested and nucleosome occupancy were overrepresented in BDPs (Additional file 8: Table S7), but a K-S test on gene bodies indicated that only a subset of these features (H4K12ac, H4K16ac, H3K4ac, H3K4me2, H3K36me3 and nucleosome occupancy) was significantly overrepresented in type I BDPs (Additional file 12: Table S9). This differs from human BDPs, in which H4 acetylation is underrepresented [11]. Strikingly, the true overrepresentation of active marks and canonical nucleosomes strongly correlated with the highest coexpression level of bidirectional gene pairs in type I BDPs. This result suggested that these overrepresented chromatin features may create chromatin structures that favor the coregulation of gene pairs.
This prediction was further supported by the lack of significant changes in active marks (H3K4ac, H4K12ac, H4K16ac, H3K9ac, H3K4me3 and H3K27ac) observed in coexpressed genes, whereas all marks tested were significantly different in anti-expressed genes (Additional file 13: Table S10). Notably, consistent with the highest percentage of coexpressed gene pairs being found in type I BDPs, we observed a stronger correlation between histone marks and coexpression in type I BDPs compared with the full set of BDPs tested. Among the 13 marks tested, only H3K36me3 was not related to coexpression in type I BDPs (data not shown). Usually, active marks are directly correlated with gene expression, whereas repressive marks are anti-correlated with expression in eukaryotes [85, 86]; this is not the case for the coexpression of gene pairs. Similarly, repressive marks do not display an anti-correlation with gene expression in coexpressed gene pairs as compared with unidirectional genes and anti-expressed gene pairs. Our analyses demonstrated that overrepresented active marks (H3K4ac, H4K12ac, H4K16ac, H3K9ac, H3K4me3 and H3K27ac) may coordinate with repressive marks (H3K27me3 and H3K9me1/3) to build unique chromatin features favorable for the coregulation of bidirectional gene pairs. A similar histone modification-based mechanism, involving coacetylation or deacetylation, was found to affect the coexpression of neighboring genes in yeast [87, 88]. By integrating RNA-seq, ChIP-seq and MNase-seq datasets, we identified several unique chromatin features present in rice BDPs that are absent in UDPs, including overrepresented active histone marks, canonical nucleosomes and underrepresented H3K27me3. In particular, overrepresented acetylation at H3K4/K9/K27 and H4K12/K16 may play a significant role in regulating the coexpression of gene pairs. Thus, our analyses indicated that the coexpression of bidirectional gene pairs is a consequence of the combined actions of multi-layered regulation, from the DNA itself to specialized chromatin structures, including nucleosome positioning and the coordination of active and repressive histone modifications.
Plant materials
Germinated seeds of the rice cultivar "Nipponbare" were sown in soil and grown in a greenhouse for two weeks. Rice seedlings were then collected for ChIP-seq or ChIP-qPCR experiments as described below.
Identification of bidirectional promoters
We retrieved the rice (Oryza sativa, subsp. japonica) genomic sequence and annotation data sets from the Rice Genome Annotation Database at TIGR (http://www.tigr.org/tdb/e2k1/osa1) as described previously [32]. Bidirectional genes were defined as gene pairs with head-to-head orientation, less than 1000 bp between their TSSs, and transcription from opposite strands. Bidirectional promoters (BDPs) were classified by the physical length of the intergenic region between the TSSs of each gene pair into type I (0–250 bp), type II (250–500 bp) or type III (500–1000 bp). All gene pairs annotated as protein-coding genes were included for further analysis. Unidirectional promoters (UDPs) randomly selected from unidirectional genes with expression levels similar to those of the bidirectional gene pairs were used as controls. We downloaded publicly available RNA-seq datasets (GSM655033) from seedlings [44], which were grown under the same conditions as those used for the ChIP-seq experiments. The drought-regulated expression of bidirectional gene pairs was analyzed using public RNA-seq data sets (GSE65022). The expression values (FPKM) of bidirectional gene pairs were calculated as described previously [44].
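To make the pairing and size criteria concrete, a minimal R sketch of this classification is shown below. It is illustrative only: the data frame `genes` and its columns are hypothetical, genes are assumed to be pre-sorted by chromosome and TSS coordinate, and annotation details such as overlap checks are omitted:

```r
# Illustrative sketch: call head-to-head gene pairs with < 1000 bp between TSSs
# and bin them into type I (0-250 bp), II (250-500 bp) or III (500-1000 bp) BDPs.
# 'genes' is a hypothetical data frame sorted by chromosome and TSS coordinate,
# with columns: gene_id, chrom, tss, strand ("+" or "-").
classify_bdps <- function(genes) {
  out <- list()
  for (i in seq_len(nrow(genes) - 1)) {
    a <- genes[i, ]      # left gene (lower coordinate)
    b <- genes[i + 1, ]  # right gene (higher coordinate)
    # head-to-head (divergent): left gene on "-" strand, right gene on "+" strand
    if (a$chrom == b$chrom && a$strand == "-" && b$strand == "+") {
      d <- b$tss - a$tss
      if (d >= 0 && d < 1000) {
        type <- if (d <= 250) "I" else if (d <= 500) "II" else "III"
        out[[length(out) + 1]] <- data.frame(
          gene_minus = a$gene_id, gene_plus = b$gene_id,
          tss_distance = d, bdp_type = type
        )
      }
    }
  }
  do.call(rbind, out)
}
```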
ChIP-seq, ChIP-qPCR and MNase-seq
ChIP-seq datasets (GSE79033) [32] for H3K4ac (Millipore, 07–539), H3K9ac (Millipore, 07–352), H3K27ac (Abcam, ab4729), H3K27me3 (Millipore, 07–449), H3K9me1 (Millipore, 07–395) and H3K9me3 (Millipore, 07–442) were generated from seedlings using a previously described method [44]. Six previously characterized ChIP-seq datasets and an MNase-seq dataset (SRP045236) [46] obtained from seedlings, covering H3K4me3, H3K4me2, H3K36me3 and H4K12ac (GSE26734) [44] and H4K16ac and H3K23ac (GSE69426) [45], were downloaded from NCBI for further analysis. All ChIP-seq datasets were analyzed using the same pipeline as previously described [44]. To confirm the ChIP-seq results, we conducted a ChIP-qPCR assay following ChIP experiments for two histone marks (H3K27ac and H4K12ac). Five BDPs were randomly selected for primer design (Additional file 15: Table S11) for the ChIP-qPCR analysis. Each primer set was run in triplicate in the qPCR assay. To profile the chromatin features of histone marks and nucleosome positioning associated with bidirectional gene pairs, we plotted the normalized ChIP-seq and MNase-seq reads across all bidirectional gene pairs and across randomly selected unidirectional genes as controls.
Coexpression analysis
To determine the expression mode of gene pairs, we used eleven raw expression datasets deposited in NCBI by the Rice Genome Annotation Project (http://rice.plantbiology.msu.edu/expression.shtml) as described previously [32]. Tophat was used to map the sequencing reads to the version 7 pseudomolecules of the rice genome [89]. Cufflinks was used to calculate the expression abundances for the RNA-seq libraries [90]. Presence or absence of expression was assigned for digital gene expression (DGE) libraries; genes were called 'expressed' if at least one sequencing read mapped uniquely within an exon. For Pearson correlations, the FPKM values of bidirectional gene pairs were used in a matrix analysis. Genes with FPKM = 0 across all libraries were excluded from the analysis. A customized Perl script was used to calculate the Pearson correlation coefficients (PCCs) for each bidirectional gene pair. To categorize the expression modes of bidirectional gene pairs (Additional file 6: Table S6), we randomly selected 1000 pairs of adjacent unidirectional genes, regardless of the transcriptional direction of each gene, as controls for calculating the Pearson correlation coefficient. Bidirectional gene pairs with a Pearson correlation coefficient greater than the average of all positive values (0.38) were defined as coexpressed; bidirectional gene pairs with a Pearson correlation coefficient less than the average of all negative values (−0.20) were defined as anti-expressed; bidirectional gene pairs with a Pearson correlation coefficient between −0.20 and 0.38 were defined as independent; and bidirectional gene pairs for which no Pearson correlation coefficient could be calculated were defined as null.
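A minimal R sketch of this correlation-based categorisation and the accompanying K-S comparison, assuming hypothetical objects `fpkm` (FPKM matrix, genes in rows), `gene_pairs` (two character columns of gene identifiers) and `random_pcc` (correlations of the randomly selected adjacent unidirectional gene pairs):

```r
# Illustrative sketch: per-pair Pearson correlation, expression-mode classification
# and a two-sided K-S comparison against random adjacent unidirectional gene pairs.
# 'fpkm'       : hypothetical matrix (rownames = gene IDs, columns = expression libraries)
# 'gene_pairs' : hypothetical data frame with character columns gene1, gene2
# 'random_pcc' : hypothetical vector of correlations for the random control pairs
pair_pcc <- function(g1, g2, fpkm) {
  x <- as.numeric(fpkm[g1, ])
  y <- as.numeric(fpkm[g2, ])
  if (all(x == 0) || all(y == 0)) return(NA_real_)  # no expression -> no correlation ("null")
  cor(x, y, method = "pearson")
}

gene_pairs$pcc <- mapply(pair_pcc, gene_pairs$gene1, gene_pairs$gene2,
                         MoreArgs = list(fpkm = fpkm))

gene_pairs$mode <- ifelse(is.na(gene_pairs$pcc),  "null",
                   ifelse(gene_pairs$pcc >  0.38, "coexpressed",
                   ifelse(gene_pairs$pcc < -0.20, "anti-expressed",
                                                  "independent")))

# Two-sample, two-sided Kolmogorov-Smirnov test, as used throughout the analyses
ks.test(gene_pairs$pcc, random_pcc, alternative = "two.sided")
```

The thresholds (0.38 and −0.20) are those reported above; a re-analysis would need to recompute them from the positive and negative correlations actually observed.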
Motif discovery and overrepresentation analysis
A statistical algorithm based on Z-score and p-value filtering [91, 92] was used to test the significance of identified cis-elements and to discover elements enriched within a given gene set. In this study, a total of 930 plant motifs (cis-regulatory elements) with functional annotations were collected from several sources, including the Plant Cis-acting Regulatory DNA Elements (PLACE) database [33], the AthaMap web server [93], the PlantCARE database [34] and text-mining results. Input sequences of the same number and length were randomly selected from rice UDP promoters 1000 times as negative controls for calculating overrepresented motifs with Z-scores. According to inquiry sequence length, the background promoter regions were classified into three groups, as was done for the BDPs: 0–250 bp, 250–500 bp and 500–1000 bp. For example, when a list of inquiry sequences was under 250 bp in length, cis-elements were scanned in the inquiry sequences and in the 250 bp promoter regions of rice genes, and Z-scores were calculated as follows. $$ Z=\frac{\overline{X}-\mu }{\sigma /\sqrt{n}} $$ \( \overline{X} \), mean occurrence of a motif in the list of inquiry sequences; μ, mean occurrence of the same motif in the 1000 random lists of rice gene promoter regions of the same length (250 bp, 500 bp or 1000 bp); σ, standard deviation of the mean value across the 1000 randomly selected sequence sets; n, number of inquiry sequences. Lists of inquiry sequences of up to 500 or 1000 bp were analyzed in the same way as the 250 bp sequences above. Ultimately, motifs with a p-value of less than 0.05 were considered significantly enriched in the inquiry sequences compared to gene promoters in the whole rice genome.
Normalized read counts
To normalize the read counts distributed in BDP/UDP regions for assessing nucleosome positioning (MNase-seq reads) and histone marks, we first identified all uniquely mapped reads in the region 1000 bp downstream of the TSSs. This region was equally divided into 20 sliding windows. We then calculated the number of reads within each sliding window, divided by the length of the sliding window (bp) and by the number of reads mapped to the genome (in millions). The summed value per sliding window across BDPs/UDPs was then divided by the number of BDPs/UDPs. For all mapped reads, their positions in the rice genome were used to determine the read midpoints.
Significance test
A two-sample test was performed to test for significant differences in gene expression and chromatin features (histone modifications and nucleosome occupancy) between BDPs and UDPs, as described previously [32]. Briefly, we calculated the normalized read counts for each bidirectional gene pair and for the UDP controls, either across the whole gene body or within the highest peak, located 100 bp to 150 bp downstream of the TSS. Normalized read counts were derived by counting the reads per million mapped reads and dividing by the length of the gene (bp). R was used for all two-sample Kolmogorov-Smirnov (K-S) tests within groups, and "two.sided" was selected as the alternative hypothesis. Two samples were considered significantly different if the two-tailed p-value was less than 0.05.
References
Neil H, Malabat C, d'Aubenton-Carafa Y, Xu Z, Steinmetz LM, Jacquier A. Widespread bidirectional promoters are the major source of cryptic transcripts in yeast. Nature. 2009;457(7232):1038–42. Xu Z, Wei W, Gagneur J, Perocchi F, Clauder-Munster S, Camblong J, Guffanti E, Stutz F, Huber W, Steinmetz LM. Bidirectional promoters generate pervasive transcription in yeast. Nature. 2009;457(7232):1033–7. Yang L, Yu J.
A comparative analysis of divergently-paired genes (DPGs) among Drosophila and vertebrate genomes. BMC Evol Biol. 2009;9:55. Adachi N, Lieber MR. Bidirectional gene organization: a common architectural feature of the human genome. Cell. 2002;109(7):807–9. Trinklein ND, Aldred SF, Hartman SJ, Schroeder DI, Otillar RP, Myers RM. An abundance of bidirectional promoters in the human genome. Genome Res. 2004;14(1):62–6. Dhadi SR, Krom N, Ramakrishna W. Genome-wide comparative analysis of putative bidirectional promoters from rice. Arabidopsis Populus Gene. 2009;429(1–2):65–73. Liu X, Zhou X, Li Y, Tian J, Zhang Q, Li S, Wang L, Zhao J, Chen R, Fan Y. Identification and functional characterization of bidirectional gene pairs and their intergenic regions in maize. BMC Genomics. 2014;15:338. Engstrom PG, Suzuki H, Ninomiya N, Akalin A, Sessa L, Lavorgna G, Brozzi A, Luzi L, Tan SL, Yang L, et al. Complex Loci in human and mouse genomes. PLoS Genet. 2006;2(4):e47. Lin JM, Collins PJ, Trinklein ND, Fu Y, Xi H, Myers RM, Weng Z. Transcription factor binding and modified histones in human bidirectional promoters. Genome Res. 2007;17(6):818–27. Wang G, Qi K, Zhao Y, Li Y, Juan L, Teng M, Li L, Liu Y, Wang Y. Identification of regulatory regions of bidirectional genes in cervical cancer. BMC Med Genomics. 2013;6 Suppl 1:S5. Wakano C, Byun JS, Di LJ, Gardner K. The dual lives of bidirectional promoters. Biochim Biophys Acta. 2012;1819(7):688–93. Bornelov S, Komorowski J, Wadelius C. Different distribution of histone modifications in genes with unidirectional and bidirectional transcription and a role of CTCF and cohesin in directing transcription. BMC Genomics. 2015;16:300. Ahn J, Gruen JR. The genomic organization of the histone clusters on human 6p21.3. Mamm Genome. 1999;10(7):768–70. Hansen JJ, Bross P, Westergaard M, Nielsen MN, Eiberg H, Borglum AD, Mogensen J, Kristiansen K, Bolund L, Gregersen N. Genomic structure of the human mitochondrial chaperonin genes: HSP60 and HSP10 are localised head to head on chromosome 2 separated by a bidirectional promoter. Hum Genet. 2003;112(1):71–7. West AB, Lockhart PJ, O'Farell C, Farrer MJ. Identification of a novel gene linked to parkin via a bi-directional promoter. J Mol Biol. 2003;326(1):11–9. Hogan GJ, Lee CK, Lieb JD. Cell cycle-specified fluctuation of nucleosome occupancy at gene promoters. PLoS Genet. 2006;2(9):e158. Yang MQ, Koehly LM, Elnitski LL. Comprehensive annotation of bidirectional promoters identifies co-regulation among breast and ovarian cancer genes. PLoS Comput Biol. 2007;3(4):e72. Kleinjan DA, Lettice LA. Long-range gene control and genetic disease. Adv Genet. 2008;61:339–88. Chen PY, Chang WS, Lai YK, Wu CW. c-Myc regulates the coordinated transcription of brain disease-related PDCD10-SERPINI1 bidirectional gene pair. Mol Cell Neurosci. 2009;42(1):23–32. Chen WH, de Meaux J, Lercher MJ. Co-expression of neighbouring genes in Arabidopsis: separating chromatin effects from direct interactions. BMC Genomics. 2010;11:178. Williams EJ, Bowles DJ. Coexpression of neighboring genes in the genome of Arabidopsis thaliana. Genome Res. 2004;14(6):1060–7. Ng YK, Wu W, Zhang L. Positive correlation between gene coexpression and positional clustering in the zebrafish genome. BMC Genomics. 2009;10:42. Tsai HK, Su CP, Lu MY, Shih CH, Wang D. Co-expression of adjacent genes in yeast cannot be simply attributed to shared regulatory system. BMC Genomics. 2007;8:352. Krom N, Ramakrishna W. 
Comparative analysis of divergent and convergent gene pairs and their expression patterns in rice, Arabidopsis, and populus. Plant Physiol. 2008;147(4):1763–73. Yang XHWC, Xia XY, Gang SH. Genome-wide analysis of intergenic regions in Arabidopsis thaliana suggests the existence of bidirectional promoters and genetic insulators.Current Topics in. Plant Biology. 2011;12:15–33. Dhadi SR, Deshpande A, Driscoll K, Ramakrishna W. Major cis-regulatory elements for rice bidirectional promoter activity reside in the 5′-untranslated regions. Gene. 2013;526(2):400–10. Bondino HG, Valle EM. A small intergenic region drives exclusive tissue-specific expression of the adjacent genes in Arabidopsis thaliana. BMC Mol Biol. 2009;10:95. Banerjee J, Sahoo DK, Dey N, Houtz RL, Maiti IB. An intergenic region shared by At4g35985 and At4g35987 in Arabidopsis thaliana is a tissue specific and stress inducible bidirectional promoter analyzed in transgenic arabidopsis and tobacco plants. PLoS One. 2013;8(11):e79622. Kourmpetli S, Lee K, Hemsley R, Rossignol P, Papageorgiou T, Drea S. Bidirectional promoters in seed development and related hormone/stress responses. BMC Plant Biol. 2013;13:187. Khan ZA, Abdin MZ, Khan JA. Functional characterization of a strong bi-directional constitutive plant promoter isolated from cotton leaf curl Burewala virus. PLoS One. 2015;10(3):e0121656. Liu SJ, Yue QJ, Zhang W. Structural and functional analysis of an asymmetric bidirectional promoter in Arabidopsis thaliana. J Integr Plant Biol. 2015;57(2):162–70. Fang Y, Wang X, Wang L, Pan X, Xiao J, Wang X-e, Wu Y, Zhang W. Functional characterization of open chromatin in bidirectional promoters of rice. Sci Rep. 2016;6:32088. Higo K, Ugawa Y, Iwamoto M, Korenaga T. Plant cis-acting regulatory DNA elements (PLACE) database: 1999. Nucleic Acids Res. 1999;27(1):297–300. Rombauts S, Dehais P, Van Montagu M, Rouze P. PlantCARE, a plant cis-acting regulatory element database. Nucleic Acids Res. 1999;27(1):295–6. Fabro G, Di Rienzo JA, Voigt CA, Savchenko T, Dehesh K, Somerville S, Alvarez ME. Genome-wide expression profiling Arabidopsis at the stage of Golovinomyces cichoracearum haustorium formation. Plant Physiol. 2008;146(3):1421–39. Ikeda M, Mitsuda N, Ohme-Takagi M. Arabidopsis HsfB1 and HsfB2b act as repressors of the expression of heat-inducible Hsfs but positively regulate the acquired thermotolerance. Plant Physiol. 2011;157(3):1243–54. Guimaraes-Dias F, Neves-Borges AC, Viana AA, Mesquita RO, Romano E, de Fatima Grossi-de-Sa M, Nepomuceno AL, Loureiro ME, Alves-Ferreira M. Expression analysis in response to drought stress in soybean: Shedding light on the regulation of metabolic pathway genes. Genet Mol Biol. 2012;35(1 (suppl)):222––232. Vandepoele K, Quimbaya M, Casneuf T, De Veylder L, Van de Peer Y. Unraveling transcriptional control in Arabidopsis using cis-regulatory elements and coexpression networks. Plant Physiol. 2009;150(2):535–46. Hudson ME, Quail PH. Identification of promoter motifs involved in the network of phytochrome A-regulated gene expression by combined analysis of genomic sequence and microarray data. Plant Physiol. 2003;133(4):1605–16. Benn G, Wang CQ, Hicks DR, Stein J, Guthrie C, Dehesh K. A key general stress response motif is regulated non-uniformly by CAMTA transcription factors. Plant J. 2014;80(1):82–92. Kaplan B, Davydov O, Knight H, Galon Y, Knight MR, Fluhr R, Fromm H. 
Rapid transcriptome changes induced by cytosolic Ca2+ transients reveal ABRE-related sequences as Ca2 + −responsive cis elements in Arabidopsis. Plant Cell. 2006;18(10):2733–48. Srivasta A, Mehta S, Lindlof A, Bhargava S. Over-represented promoter motifs in abiotic stress-induced DREB genes of rice and sorghum and their probable role in regulation of gene expression. Plant Signal Behav. 2010;5(7):775–84. Hurst LD, Pal C, Lercher MJ. The evolutionary dynamics of eukaryotic gene order. Nat Rev Genet. 2004;5(4):299–310. Zhang W, Wu Y, Schnable JC, Zeng Z, Freeling M, Crawford GE, Jiang J. High-resolution mapping of open chromatin in the rice genome. Genome Res. 2012;22(1):151–62. Lu L, Chen X, Sanders D, Qian S, Zhong X. High-resolution mapping of H4K16 and H3K23 acetylation reveals conserved and unique distribution patterns in Arabidopsis and rice. Epigenetics. 2015;10(11):1044–53. Struhl K, Segal E. Determinants of nucleosome positioning. Nat Struct Mol Biol. 2013;20(3):267–73. Zhang T, Zhang W, Jiang J. Genome-wide nucleosome occupancy and positioning and their impact on gene expression and evolution in plants. Plant physiol. 2015;168(4):1406–16. Wu Y, Zhang W, Jiang J. Genome-wide nucleosome positioning is orchestrated by genomic regions associated with DNase I hypersensitivity in rice. PLoS Genet. 2014;10(5):e1004378. Yuan GC, Liu YJ, Dion MF, Slack MD, Wu LF, Altschuler SJ, Rando OJ. Genome-scale identification of nucleosome positions in S. cerevisiae. Science. 2005;309(5734):626–30. Schones DE, Cui K, Cuddapah S, Roh TY, Barski A, Wang Z, Wei G, Zhao K. Dynamic regulation of nucleosome positioning in the human genome. Cell. 2008;132(5):887–98. Kruglyak S, Tang H. Regulation of adjacent yeast genes. Trends Genet. 2000;16(3):109–11. Caron H, van Schaik B, van der Mee M, Baas F, Riggins G, van Sluis P, Hermus MC, van Asperen R, Boon K, Voute PA, et al. The human transcriptome map: clustering of highly expressed genes in chromosomal domains. Science. 2001;291(5507):1289–92. Lercher MJ, Urrutia AO, Hurst LD. Clustering of housekeeping genes provides a unified model of gene order in the human genome. Nat Genet. 2002;31(2):180–3. Boutanaev AM, Kalmykova AI, Shevelyov YY, Nurminsky DI. Large clusters of co-expressed genes in the Drosophila genome. Nature. 2002;420(6916):666–9. Kalmykova AI, Nurminsky DI, Ryzhov DV, Shevelyov YY. Regulated chromatin domain comprising cluster of co-expressed genes in Drosophila melanogaster. Nucleic Acids Res. 2005;33(5):1435–44. Lercher MJ, Blumenthal T, Hurst LD. Coexpression of neighboring genes in Caenorhabditis elegans is mostly due to operons and duplicate genes. Genome Res. 2003;13(2):238–43. Roy PJ, Stuart JM, Lund J, Kim SK. Chromosomal clustering of muscle-expressed genes in Caenorhabditis elegans. Nature. 2002;418(6901):975–9. Singer GA, Lloyd AT, Huminiecki LB, Wolfe KH. Clusters of co-expressed genes in mammalian genomes are conserved by natural selection. Mol Biol Evol. 2005;22(3):767–75. Schmid M, Davison TS, Henz SR, Pape UJ, Demar M, Vingron M, Scholkopf B, Weigel D, Lohmann JU. A gene expression map of Arabidopsis thaliana development. Nat Genet. 2005;37(5):501–6. Zhan S, Horrocks J, Lukens LN. Islands of co-expressed neighbouring genes in Arabidopsis thaliana suggest higher-order chromosome domains. Plant J. 2006;45(3):347–57. Core LJ, Waterfall JJ, Lis JT. Nascent RNA sequencing reveals widespread pausing and divergent initiation at human promoters. Science. 2008;322(5909):1845–8. 
Preker P, Nielsen J, Kammler S, Lykke-Andersen S, Christensen MS, Mapendano CK, Schierup MH, Jensen TH. RNA exosome depletion reveals transcription upstream of active human promoters. Science. 2008;322(5909):1851–4. Seila AC, Calabrese JM, Levine SS, Yeo GW, Rahl PB, Flynn RA, Young RA, Sharp PA. Divergent transcription from active promoters. Science. 2008;322(5909):1849–51. Duttke SH, Lacadie SA, Ibrahim MM, Glass CK, Corcoran DL, Benner C, Heinz S, Kadonaga JT, Ohler U. Human promoters are intrinsically directional. Mol Cell. 2015;57(4):674–84. Berretta J, Morillon A. Pervasive transcription constitutes a new level of eukaryotic genome regulation. EMBO Rep. 2009;10(9):973–82. Wei W, Pelechano V, Jarvelin AI, Steinmetz LM. Functional consequences of bidirectional promoters. Trends Genet. 2011;27(7):267–76. Lepoivre C, Belhocine M, Bergon A, Griffon A, Yammine M, Vanhille L, Zacarias-Cabeza J, Garibal MA, Koch F, Maqbool MA, et al. Divergent transcription is associated with promoters of transcriptional regulators. BMC Genomics. 2013;14:914. Barski A, Cuddapah S, Cui K, Roh TY, Schones DE, Wang Z, Wei G, Chepelev I, Zhao K. High-resolution profiling of histone methylations in the human genome. Cell. 2007;129(4):823–37. Guenther MG, Levine SS, Boyer LA, Jaenisch R, Young RA. A chromatin landmark and transcription initiation at most promoters in human cells. Cell. 2007;130(1):77–88. Minsky N, Shema E, Field Y, Schuster M, Segal E, Oren M. Monoubiquitinated H2B is associated with the transcribed region of highly expressed genes in human cells. Nat Cell Biol. 2008;10(4):483–8. Zhang W, Zhang T, Wu Y, Jiang J. Genome-wide identification of regulatory DNA elements and protein-binding footprints using signatures of open chromatin in Arabidopsis. Plant Cell. 2012;24(7):2719–31. Vera DL, Madzima TF, Labonne JD, Alam MP, Hoffman GG, Girimurugan SB, Zhang J, McGinnis KM, Dennis JH, Bass HW. Differential nuclease sensitivity profiling of chromatin reveals biochemical footprints coupled to gene expression and functional DNA elements in maize. Plant Cell. 2014;26(10):3883–93. Fincher JA, Vera DL, Hughes DD, McGinnis KM, Dennis JH, Bass HW. Genome-wide prediction of nucleosome occupancy in maize reveals plant chromatin structural features at genes and other elements at multiple scales. Plant Physiol. 2013;162(2):1127–41. Liu MJ, Seddon AE, Tsai ZT, Major IT, Floer M, Howe GA, Shiu SH. Determinants of nucleosome positioning and their influence on plant gene expression. Genome Res. 2015;25(8):1182–95. Li G, Liu S, Wang J, He J, Huang H, Zhang Y, Xu L. ISWI proteins participate in the genome-wide nucleosome distribution in Arabidopsis. Plant J. 2014;78(4):706–14. Gilbert N, Boyle S, Fiegler H, Woodfine K, Carter NP, Bickmore WA. Chromatin architecture of the human genome: gene-rich domains are enriched in open chromatin fibers. Cell. 2004;118(5):555–66. Lee CK, Shibata Y, Rao B, Strahl BD, Lieb JD. Evidence for nucleosome depletion at active regulatory regions genome-wide. Nat Genet. 2004;36(8):900–5. Batada NN, Urrutia AO, Hurst LD. Chromatin remodelling is a major source of coexpression of linked genes in yeast. Trends Genet. 2007;23(10):480–4. Spellman PT, Rubin GM. Evidence for large domains of similarly expressed genes in the Drosophila genome. J Biol. 2002;1(1):5. Lunyak VV, Burgess R, Prefontaine GG, Nelson C, Sze SH, Chenoweth J, Schwartz P, Pevzner PA, Glass C, Mandel G, et al. Corepressor-dependent silencing of chromosomal regions encoding neuronal genes. Science. 2002;298(5599):1747–52. 
Lercher MJ, Urrutia AO, Pavlicek A, Hurst LD. A unification of mosaic structures in the human genome. Hum Mol Genet. 2003;12(19):2411–5. Sproul D, Gilbert N, Bickmore WA. The role of chromatin structure in regulating the expression of clustered genes. Nat Rev Genet. 2005;6(10):775–81. Lercher MJ, Hurst LD. Co-expressed yeast genes cluster over a long range but are not regularly spaced. J Mol Biol. 2006;359(3):825–31. Cohen BA, Mitra RD, Hughes JD, Church GM. A computational analysis of whole-genome expression data reveals chromosomal domains of gene expression. Nat Genet. 2000;26(2):183–6. Ha M, Ng DW, Li WH, Chen ZJ. Coordinated histone modifications are associated with gene expression variation within and between species. Genome Res. 2011;21(4):590–8. Zhang Y, Reinberg D. Transcription regulation by histone methylation: interplay between different covalent modifications of the core histone tails. Genes Dev. 2001;15(18):2343–60. Deng Y, Dai X, Xiang Q, Dai Z, He C, Wang J, Feng J. Genome-wide analysis of the effect of histone modifications on the coexpression of neighboring genes in Saccharomyces cerevisiae. BMC Genomics. 2010;11:550. Chen L, Zhao H. Gene expression analysis reveals that histone deacetylation sites may serve as partitions of chromatin gene expression domains. BMC Genomics. 2005;6:44. Trapnell C, Pachter L, Salzberg SL. TopHat: discovering splice junctions with RNA-Seq. Bioinformatics. 2009;25(9):1105–11. Trapnell C, Williams BA, Pertea G, Mortazavi A, Kwan G, van Baren MJ, Salzberg SL, Wold BJ, Pachter L. Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nat Biotechnol. 2010;28(5):511–5. Yu J, Zhang Z, Wei J, Ling Y, Xu W, Su Z. SFGD: a comprehensive platform for mining functional information from soybean transcriptome data and its use in identifying acyl-lipid metabolism pathways. BMC Genomics. 2014;15:271. You Q, Zhang L, Yi X, Zhang Z, Xu W, Su Z. SIFGD: Setaria italica Functional Genomics Database. Mol Plant. 2015;8(6):967–70. Hehl R, Bulow L. AthaMap web tools for the analysis of transcriptional and posttranscriptional regulation of gene expression in Arabidopsis thaliana. Methods Mol Biol. 2014;1158:139–56. We gratefully thank Dr. Jiming Jiang for help with the generation of ChIP-seq datasets and Dr. Robin Buell for help with the QC on ChIP-seq libraries. This research was supported by grants from the National Natural Science Foundation of China (31371239, 31571579 and 31371291) and by the "Innovation and Enterprise Scholar" program of Jiangsu Province for WZ. WZ conceived and designed the study; XMW, LW and XP performed the ChIP-seq experiments; YF, QY, JX, YW, ZS and WZ performed the data analysis and interpreted the results; XW supervised the experiments; WZ wrote the paper. All authors read and approved the final version of the manuscript. State Key Laboratory for Crop Genetics and Germplasm Enhancement, Nanjing Agriculture University, Nanjing, Jiangsu, 210095, China Yuan Fang, Lei Wang, Ximeng Wang, Xiucai Pan, Jin Xiao, Xiu-e Wang, Yufeng Wu & Wenli Zhang State Key Laboratory of Plant Physiology and Biochemistry, CBS, China Agricultural University, Beijing, 100193, China Qi You & Zhen Su JiangSu Collaborative Innovation Center for Modern Crop Production (JCIC-MCP), Nanjing Agriculture University, Nanjing, Jiangsu, 210095, China Wenli Zhang Yuan Fang Lei Wang Ximeng Wang Qi You Xiucai Pan Jin Xiao Xiu-e Wang Yufeng Wu Zhen Su Correspondence to Zhen Su or Wenli Zhang. 
Contents of GC and TATA within rice BDPs. (PDF 403 kb) Summary of constitutive and tissue-specific BDPs used for motif identification. (PDF 522 kb) Summary of motifs identified in randomly selected 1000 UDPs, non-drought inducible BDPs and drought inducible UDPs. (PDF 514 kb) High frequency of overrepresented motifs within constitutive and tissue-specific BDPs. (PDF 334 kb) Overrepresented motifs in BDPs response to drought stress (PDF 348 kb) Expression modes of bidirectional gene pairs associated with rice BDPs. (XLS 307 kb) Coexpression analysis of gene pairs associated with BDPs with size separated by every 100 bp intergenic length The Pearson correlation coefficient was calculated from all gene pairs corresponding to BDPs separated every 100 bp using the absolute expression value. Statistical analysis was provided by a two-sample K-S test, where ** p < 0.001, * p < 0.05. (XLS 110 kb) Fold difference of intensity of histone marks and nucleosome occupancy between bidirectional genes and unidirectional genes. (PDF 246 kb) Comparison between ChIP-seq and ChIP-qPCR assay in an individual BDP locus. (PDF 247 kb) Profiling of histone marks across type II BDPs and UDP controls with the same gene number and same expression level as bidirectional gene pairs Unidirectional genes with higher and lower FPKM values were aligned on the right and left side, respectively (Additional file 10: Figure S2b, d and f). And bidirectional gene pairs with higher and lower FPKM values were aligned on the right and left sides of BDPs, respectively (Additional file 10: Figure S2a, c and e). Normalized reads counts indicated the enrichment of each mark were calculated by reads number per bp of genomic region per million reads. X-axes show the relative distance of BDPs (bp) in Additional file 10: Figure S2a, c and e and the position relative to TSS in Additional file 10: Figure S2b, d and f; Y-axes show normalized reads counts (read number in per bp genome in per million reads) within 1 kb upstream and downstream of TSS. A. Profiles of active marks: H4K12ac, H3K27ac, H3K4ac and H3K9ac in type II BDPs (Additional file 10: Figure S2a) and UDPs (Additional file 10: Figure S2b), respectively. B. Profiles of active marks: H3K4me2, H3K4me3 and H3K36me3 in type II BDPs (Additional file 10: Figure S2c) and UDPs (Additional file 10: Figure S2d), respectively. C. Profiles of repressive marks: H3K9me1, H3K9me3 and H3K27me3 in type II BDPs (Additional file 10: Figure S2e) and UDPs (Additional file 10: Figure S2f), respectively. (PDF 271 kb) Profiling of histone marks across type III BDPs and UDP controls with the same gene number and same expression level as bidirectional gene pairs Unidirectional genes with higher and lower FPKM values were aligned on the right and left side, respectively (Additional file 11: Figure S3b, d and f). Bidirectional gene pairs with higher and lower FPKM values were aligned on the right and left sides of BDPs, respectively (Additional file 11: Figure S3a, c and e). Normalized reads counts indicated the enrichment of each mark were calculated by reads number per bp of genomic region per million reads. X-axes show the relative distance of BDPs (bp) in Additional file 11: Figure S3a, a and e and the position relative to TSS in Additional file 11: Figure S3b, d and f; Y-axes show normalized reads counts (read number in per bp of genome per million reads) within 1 kb upstream and downstream of TSS. A. 
Profiles of active marks: H4K12ac, H3K27ac, H3K4ac and H3K9ac in type III BDPs (Additional file 11: Figure S3a) and UDPs (Additional file 11: Figure S3b), respectively. B. Profiles of active marks: H3K4me2, H3K4me3 and H3K36me3 in type III BDPs (Additional file 11: Figure S3c) and UDPs (Additional file 11: Figure S3d), respectively. C. Profiles of repressive marks: H3K9me1, H3K9me3 and H3K27me3 in type III BDPs (Additional file 11: Figure S3e) and UDPs (Additional file 11: Figure S3f), respectively. (PDF 313 kb) Kolmogorov-Smirnov test of eu-/hetero-chromatin marks and nucleosome occupancy reads between gene body of bidirectional gene pairs and control genes. (PDF 272 kb) Kolmogorov-Smirnov test of eu-/hetero-chromatin marks and nucleosome occupancy between gene body of coexpression and anti-expression bidirectional gene pairs. (PDF 325 kb) Comparison of profiling of nucleosome positioning between BDPs and UDPs. Unidirectional gene controls or bidirectional gene pairs with higher and lower FPKM values were aligned on the right and left sides of BDPs, respectively. Normalized MNase-seq reads count representing nucleosome positioning were calculated by reads number per bp of genomic region per million reads. X-axes in Additional file 14: Figure S4a, b and c show relative distance of BDPs (bp); Y axes in Additional file 14: Figure S4a, b and c show normalized MNase-seq reads counts. The number and expression level of unidirectional genes analyzed were the same as the corresponding bidirectional gene pairs. A. Profile of MNase-seq reads around type I BDPs and corresponding UDPs control with higher or lower FPKM values, respectively. B. Profile of MNase-seq reads around type II BDPs and corresponding UDPs control with higher or lower FPKM values, respectively. C. Profile of MNase-seq reads around type III BDPs and corresponding UDPs control with higher or lower FPKM values, respectively. (PDF 324 kb) Primer information used for the ChIP-qPCR assay. (PDF 246 kb) Fang, Y., Wang, L., Wang, X. et al. Histone modifications facilitate the coexpression of bidirectional promoters in rice. BMC Genomics 17, 768 (2016). https://doi.org/10.1186/s12864-016-3125-0 Bidirectional promoters,regulation of gene expression Coexpression histone marks Nucleosome positioning Plant genomics
1.1: Use the Language of Algebra
Lynn Marecek, Santa Ana College (OpenStax CNX)
By the end of this section, you will be able to:
Find Factors, Prime Factorizations, and Least Common Multiples
Use Variables and Algebraic Symbols
Simplify Expressions Using the Order of Operations
Evaluate an Expression
Identify and Combine Like Terms
Translate an English Phrase to an Algebraic Expression
This chapter is intended to be a brief review of concepts that will be needed in an Intermediate Algebra course. A more thorough introduction to the topics covered in this chapter can be found in the Elementary Algebra chapter, Foundations.
The numbers 2, 4, 6, 8, 10, 12 are called multiples of 2. A multiple of 2 can be written as the product of a counting number and 2. Similarly, a multiple of 3 would be the product of a counting number and 3. We could find the multiples of any number by continuing this process.
Counting Number: 1 2 3 4 5 6 7 8 9 10 11 12
Multiples of 2: 2 4 6 8 10 12 14 16 18 20 22 24
Multiples of 3: 3 6 9 12 15 18 21 24 27 30 33 36
Multiples of 4: 4 8 12 16 20 24 28 32 36 40 44 48
Multiples of 5: 5 10 15 20 25 30 35 40 45 50 55 60
Multiples of 9: 9 18 27 36 45 54 63 72 81 90 99 108
MULTIPLE OF A NUMBER
A number is a multiple of \(n\) if it is the product of a counting number and \(n\).
Another way to say that 15 is a multiple of 3 is to say that 15 is divisible by 3. That means that when we divide 3 into 15, we get a counting number. In fact, \(\mathrm{15÷3}\) is 5, so 15 is \(\mathrm{5⋅3}\).
DIVISIBLE BY A NUMBER
If a number m is a multiple of n, then m is divisible by n.
If we were to look for patterns in the multiples of the numbers 2 through 9, we would discover the following divisibility tests. A number is divisible by:
2 if the last digit is 0, 2, 4, 6, or 8.
3 if the sum of the digits is divisible by 3.
5 if the last digit is 5 or 0.
6 if it is divisible by both 2 and 3.
10 if it ends with 0.
Is 5,625 divisible by ⓐ 2? ⓑ 3? ⓒ 5 or 10? ⓓ 6?
ⓐ \(\text{Is 5,625 divisible by 2?}\) \( \begin{array}{ll} \text{Does it end in 0, 2, 4, 6 or 8?} & {\text{No.} \\ \text{5,625 is not divisible by 2.}} \end{array}\)
ⓑ \(\text{Is 5,625 divisible by 3?}\) \(\begin{array}{ll} {\text{What is the sum of the digits?} \\ \text{Is the sum divisible by 3?}} & {5+6+2+5=18 \\ \text{Yes.} \\ \text{5,625 is divisible by 3.}}\end{array}\)
ⓒ \(\text{Is 5,625 divisible by 5 or 10?}\) \(\begin{array}{ll} \text{What is the last digit? It is 5.} & \text{5,625 is divisible by 5 but not by 10.} \end{array}\)
ⓓ \(\begin{array}{ll}\text{Is it divisible by both 2 and 3?} & {\text{No, 5,625 is not divisible by 2, so 5,625 is} \\ \text{not divisible by 6.}} \end{array}\)
Is 4,962 divisible by ⓐ 2? ⓑ 3? ⓒ 5? ⓓ 6? ⓔ 10?
ⓐ yes ⓑ yes ⓒ no ⓓ yes ⓔ no
ⓐ no ⓑ yes ⓒ yes ⓓ no ⓔ no
In mathematics, there are often several ways to talk about the same ideas. So far, we've seen that if m is a multiple of n, we can say that m is divisible by n. For example, since 72 is a multiple of 8, we say 72 is divisible by 8. Since 72 is a multiple of 9, we say 72 is divisible by 9.
We can express this still another way. Since \(\mathrm{8·9=72}\), we say that 8 and 9 are factors of 72. When we write \(\mathrm{72=8·9}\), we say we have factored 72. Other ways to factor 72 are \(\mathrm{1·72, \; 2·36, \; 3·24, \; 4·18,}\) and \(\mathrm{6⋅12}\). The number 72 has many factors: \(\mathrm{1,2,3,4,6,8,9,12,18,24,36,}\) and \(\mathrm{72}\).
FACTORS
If \(\mathrm{a·b=m}\), then a and b are factors of m.
Some numbers, such as 72, have many factors. Other numbers have only two factors. A prime number is a counting number greater than 1 whose only factors are 1 and itself.
PRIME NUMBER AND COMPOSITE NUMBER
A prime number is a counting number greater than 1 whose only factors are 1 and the number itself. A composite number is a counting number that is not prime. A composite number has factors other than 1 and the number itself.
The counting numbers from 2 to 20 are listed in the table with their factors. Make sure to agree with the "prime" or "composite" label for each! The prime numbers less than 20 are 2, 3, 5, 7, 11, 13, 17, and 19. Notice that the only even prime number is 2.
A composite number can be written as a unique product of primes. This is called the prime factorization of the number. Finding the prime factorization of a composite number will be useful in many topics in this course.
PRIME FACTORIZATION
The prime factorization of a number is the product of prime numbers that equals the number.
To find the prime factorization of a composite number, find any two factors of the number and use them to create two branches. If a factor is prime, that branch is complete. Circle that prime. Otherwise it is easy to lose track of the prime numbers. If the factor is not prime, find two factors of the number and continue the process. Once all the branches have circled primes at the end, the factorization is complete. The composite number can now be written as a product of prime numbers.
Example \(\PageIndex{4}\): How to Find the Prime Factorization of a Composite Number
Factor 48.
We say \(\mathrm{2⋅2⋅2⋅2⋅3}\) is the prime factorization of 48. We generally write the primes in ascending order. Be sure to multiply the factors to verify your answer. If we first factored 48 in a different way, for example as \(\mathrm{6·8}\), the result would still be the same. Finish the prime factorization and verify this for yourself.
Find the prime factorization of \(\mathrm{80}\). \(\mathrm{2⋅2⋅2⋅2⋅5}\)
\(\mathrm{2⋅2⋅3⋅5}\)
FIND THE PRIME FACTORIZATION OF A COMPOSITE NUMBER
Find two factors whose product is the given number, and use these numbers to create two branches.
If a factor is prime, that branch is complete. Circle the prime, like a leaf on the tree.
If a factor is not prime, write it as the product of two factors and continue the process.
Write the composite number as the product of all the circled primes.
One of the reasons we look at primes is to use these techniques to find the least common multiple of two numbers. This will be useful when we add and subtract fractions with different denominators.
LEAST COMMON MULTIPLE The least common multiple (LCM) of two numbers is the smallest number that is a multiple of both numbers. To find the least common multiple of two numbers we will use the Prime Factors Method. Let's find the LCM of 12 and 18 using their prime factors. example \(\PageIndex{7}\): How to Find the Least Common Multiple Using the Prime Factors Method Find the least common multiple (LCM) of 12 and 18 using the prime factors method. Notice that the prime factors of 12 \(\mathrm{(2·2·3)}\) and the prime factors of 18 \(\mathrm{(2⋅3⋅3)}\) are included in the LCM \(\mathrm{(2·2·3·3)}\). So 36 is the least common multiple of 12 and 18. By matching up the common primes, each common prime factor is used only once. This way you are sure that 36 is the least common multiple. Find the LCM of 9 and 12 using the Prime Factors Method. Find the LCM of 18 and 24 using the Prime Factors Method. FIND THE LEAST COMMON MULTIPLE USING THE PRIME FACTORS METHOD Write each number as a product of primes. List the primes of each number. Match primes vertically when possible. Bring down the columns. Multiply the factors. In algebra, we use a letter of the alphabet to represent a number whose value may change. We call this a variable and letters commonly used for variables are \(x,y,a,b,c.\) A variable is a letter that represents a number whose value may change. A number whose value always remains the same is called a constant. A constant is a number whose value always stays the same. To write algebraically, we need some operation symbols as well as numbers and variables. There are several types of symbols we will be using. There are four basic arithmetic operations: addition, subtraction, multiplication, and division. We'll list the symbols used to indicate these operations below. OPERATION SYMBOLS Say: The result is… Addition \(a+b\) \(a\) plus \(b\) the sum of \(a\) and \(b\) Subtraction \(a−b\) \(a\) minus \(b\) the difference of \(a\) and \(b\) Multiplication \(a⋅b,ab,(a)(b),(a)b,a(b)\) \(a\) times \(b\) the product of \(a\) and \(b\) Division \(a÷b,\space a/b,\space\frac{a}{b},\space b \overline{\smash{)}a}\) \(a\) divided by \(b\) the quotient of \(a\) and \(b\); \(a\) is called the dividend, and \(b\) is called the divisor When two quantities have the same value, we say they are equal and connect them with an equal sign. EQUALITY SYMBOL \(a=b\) is read "a is equal to b." The symbol "\(=\)" is called the equal sign. On the number line, the numbers get larger as they go from left to right. The number line can be used to explain the symbols "\(<\)" and "\(>\)". The expressions \(a<b\) or \(a>b\) can be read from left to right or right to left, though in English we usually read from left to right. In general, \[a<b \text{ is equivalent to }b>a. \text{For example, } 7<11 \text{ is equivalent to }11>7.\] \[a>b \text{ is equivalent to }b<a. \text{For example, } 17>4 \text{ is equivalent to }4<17.\] INEQUALITY SYMBOLS \(a\neq b\) a is not equal to b. \(a<b\) a is less than b. \(a\leq b\) a is less than or equal to b. \(a>b\) a is greater than b. \(a\geq b\) a is greater than or equal to b. Grouping symbols in algebra are much like the commas, colons, and other punctuation marks in English. They help identify an expression, which can be made up of number, a variable, or a combination of numbers and variables using operation symbols. We will introduce three types of grouping symbols now. 
GROUPING SYMBOLS \[\begin{array}{lc} \text{Parentheses} & \mathrm{()} \\ \text{Brackets} & \mathrm{[]} \\ \text{Braces} & \mathrm{ \{ \} } \end{array}\]
Here are some examples of expressions that include grouping symbols. We will simplify expressions like these later in this section. \[\mathrm{8(14−8) \; \; \; \; \; \; \; \; 21−3[2+4(9−8)] \; \; \; \; \; \; \; \; 24÷ \{13−2[1(6−5)+4]\}}\]
What is the difference in English between a phrase and a sentence? A phrase expresses a single thought that is incomplete by itself, but a sentence makes a complete statement. A sentence has a subject and a verb. In algebra, we have expressions and equations.
An expression is a number, a variable, or a combination of numbers and variables using operation symbols.
\[\begin{array}{lll} \textbf{Expression} & \textbf{Words} & \textbf{English Phrase} \\ \mathrm{3+5} & \text{3 plus 5} & \text{the sum of three and five} \\ \mathrm{n−1} & n\text{ minus one} & \text{the difference of } n \text{ and one} \\ \mathrm{6·7} & \text{6 times 7} & \text{the product of six and seven} \\ \frac{x}{y} & x \text{ divided by }y & \text{the quotient of }x \text{ and }y \end{array} \]
Notice that the English phrases do not form a complete sentence because the phrase does not have a verb. An equation is two expressions linked by an equal sign. When you read the words the symbols represent in an equation, you have a complete sentence in English. The equal sign gives the verb. An equation is two expressions connected by an equal sign.
\[\begin{array}{ll} \textbf{Equation} & \textbf{English Sentence} \\ 3+5=8 & \text{The sum of three and five is equal to eight.} \\ n−1=14 & n \text{ minus one equals fourteen.} \\ 6·7=42 & \text{The product of six and seven is equal to forty-two.} \\ x=53 & x \text{ is equal to fifty-three.} \\ y+9=2y−3 & y \text{ plus nine is equal to two } y \text{ minus three.} \end{array}\]
Suppose we need to multiply 2 nine times. We could write this as \(\mathrm{2·2·2·2·2·2·2·2·2}\). This is tedious and it can be hard to keep track of all those 2s, so we use exponents. We write \(\mathrm{2·2·2}\) as \(\mathrm{2^3}\) and \(\mathrm{2·2·2·2·2·2·2·2·2}\) as \(\mathrm{2^9}\). In expressions such as \(\mathrm{2^3}\), the 2 is called the base and the 3 is called the exponent. The exponent tells us how many times we need to multiply the base.
EXPONENTIAL NOTATION We say \(\mathrm{2^3}\) is in exponential notation and \(\mathrm{2·2·2}\) is in expanded notation. \(a^n\) means multiply a by itself, n times. The expression \(a^n\) is read a to the \(n^{th}\) power.
While we read \(a^n\) as \("a\) to the \(n^{th}\) power", we usually read: \[\begin{array}{cc} a^2 & "a \text{ squared}" \\ a^3 & "a \text{ cubed}" \end{array}\] We'll see later why \(a^2\) and \(a^3\) have special names.
The table shows how we read some expressions with exponents.
\(7^2\): 7 to the second power, or 7 squared
\(5^3\): 5 to the third power, or 5 cubed
\(9^4\): 9 to the fourth power
\(12^5\): 12 to the fifth power
To simplify an expression means to do all the math possible. For example, to simplify \(\mathrm{4·2+1}\) we would first multiply \(\mathrm{4⋅2}\) to get 8 and then add the 1 to get 9. A good habit to develop is to work down the page, writing each step of the process below the previous step. The example just described would look like this: \[ \mathrm{ 4⋅2+1} \\ \mathrm{8+1} \\ \mathrm{9}\] By not using an equal sign when you simplify an expression, you may avoid confusing expressions with equations.
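As a small illustration of exponential versus expanded notation, and of working a simplification one step per line, here is a Python sketch added for illustration (it is not part of the original page; in Python the exponent is written with the ** operator).

```python
# Expanded notation: nine factors of 2 written out.
expanded = 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2
# Exponential notation: base 2, exponent 9.
exponential = 2 ** 9
print(expanded, exponential)   # 512 512
assert expanded == exponential

# The simplification example from the text, one step per line.
step1 = 4 * 2 + 1   # multiply 4*2 first ...
step2 = 8 + 1       # ... then add 1
print(step1, step2)  # 9 9
```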
SIMPLIFY AN EXPRESSION To simplify an expression, do all operations in the expression.
We've introduced most of the symbols and notation used in algebra, but now we need to clarify the order of operations. Otherwise, expressions may have different meanings, and they may result in different values. For example, consider the expression \(\mathrm{4+3⋅7}\). Some students simplify this getting 49, by adding \(\mathrm{4+3}\) and then multiplying that result by 7. Others get 25, by multiplying \(\mathrm{3·7}\) first and then adding 4. The same expression should give the same result. So mathematicians established some guidelines that are called the order of operations.
USE THE ORDER OF OPERATIONS.
Parentheses and Other Grouping Symbols: Simplify all expressions inside the parentheses or other grouping symbols, working on the innermost parentheses first.
Exponents: Simplify all expressions with exponents.
Multiplication and Division: Perform all multiplication and division in order from left to right. These operations have equal priority.
Addition and Subtraction: Perform all addition and subtraction in order from left to right. These operations have equal priority.
Students often ask, "How will I remember the order?" Here is a way to help you remember: Take the first letter of each key word and substitute the silly phrase "Please Excuse My Dear Aunt Sally". \[\begin{array}{ll} \text{Parentheses} & \text{Please} \\ \text{Exponents} & \text{Excuse} \\ \text{Multiplication Division} & \text{My Dear} \\ \text{Addition Subtraction} & \text{Aunt Sally} \end{array}\]
It's good that "My Dear" goes together, as this reminds us that multiplication and division have equal priority. We do not always do multiplication before division or always do division before multiplication. We do them in order from left to right. Similarly, "Aunt Sally" goes together and so reminds us that addition and subtraction also have equal priority and we do them in order from left to right.
example \(\PageIndex{10}\) Simplify: \(\mathrm{18÷6+4(5−2)}\). Parentheses? Yes, subtract first. Exponents? No. Multiplication or division? Yes. Divide first because we multiply and divide left to right. Any other multiplication or division? Yes. Multiply. Any other multiplication or division? No. Any addition or subtraction? Yes. Add.
Simplify: \(\mathrm{30÷5+10(3−2).}\)
Simplify: \(\mathrm{70÷10+4(6−2).}\)
When there are multiple grouping symbols, we simplify the innermost parentheses first and work outward.
Simplify: \(\mathrm{5+2^3+3[6−3(4−2)].}\) Are there any parentheses (or other grouping symbols)? Yes. Focus on the parentheses that are inside the brackets. Subtract. Continue inside the brackets and multiply. Continue inside the brackets and subtract. The expression inside the brackets requires no further simplification. Are there any exponents? Yes. Simplify exponents. Is there any multiplication or division? Yes. Is there any addition or subtraction? Yes.
Simplify: \(\mathrm{9+5^3−[4(9+3)].}\)
Simplify: \(\mathrm{7^2−2[4(5+1)].}\)
In the last few examples, we simplified expressions using the order of operations. Now we'll evaluate some expressions—again following the order of operations.
To evaluate an expression means to find the value of the expression when the variable is replaced by a given number. To evaluate an expression, substitute that number for the variable in the expression and then simplify the expression.
Evaluate when \(x=4\): ⓐ \(x^2\) ⓑ \(3^x\) ⓒ \(2x^2+3x+8\).
ⓐ Use the definition of exponent: \(4^2=16\). ⓑ Use the definition of exponent: \(3^4=81\). ⓒ Follow the order of operations: \(2(4)^2+3(4)+8=32+12+8=52\).
Evaluate when \(x=3\), ⓐ \(x^2\) ⓑ \(4^x\) ⓒ \(3x^2+4x+1\). ⓐ 9 ⓑ 64 ⓒ 40
Evaluate when \(x=6\), ⓐ \(x^3\) ⓑ \(2^x\) ⓒ \(6x^2−4x−7\). ⓐ 216 ⓑ 64 ⓒ 185
Algebraic expressions are made up of terms. A term is a constant, or the product of a constant and one or more variables.
A term is a constant or the product of a constant and one or more variables. Examples of terms are \(7,y,5x^2,9a,\) and \(b^5\).
The constant that multiplies the variable is called the coefficient. The coefficient of a term is the constant that multiplies the variable in a term. Think of the coefficient as the number in front of the variable. The coefficient of the term \(3x\) is 3. When we write \(x\), the coefficient is 1, since \(x=1⋅x\).
Some terms share common traits. When two terms are constants or have the same variable and exponent, we say they are like terms. Look at the following 6 terms. Which ones seem to have traits in common? \[5x \; \; \; 7 \; \; \; n^2 \; \; \; 4 \; \; \; 3x \; \; \; 9n^2\] We say, \(7\) and \(4\) are like terms. \(5x\) and \(3x\) are like terms. \(n^2\) and \(9n^2\) are like terms.
LIKE TERMS Terms that are either constants or have the same variables raised to the same powers are called like terms.
If there are like terms in an expression, you can simplify the expression by combining the like terms. We add the coefficients and keep the same variable. \[\begin{array}{lc} \text{Simplify.} & 4x+7x+x \\ \text{Add the coefficients.} & 12x \end{array}\]
Example \(\PageIndex{19}\): How To Combine Like Terms
Simplify: \(2x^2+3x+7+x^2+4x+5\).
Simplify: \(3x^2+7x+9+7x^2+9x+8\). \(10x^2+16x+17\)
Simplify: \(4y^2+5y+2+8y^2+4y+5.\) \(12y^2+9y+7\)
COMBINE LIKE TERMS. Identify like terms. Rearrange the expression so like terms are together. Add or subtract the coefficients and keep the same variable for each group of like terms.
We listed many operation symbols that are used in algebra. Now, we will use them to translate English phrases into algebraic expressions. The symbols and variables we've talked about will help us do that. The table below summarizes them.
Addition: a plus b; the sum of a and b; a increased by b; b more than a; the total of a and b; b added to a. Symbols: \(a+b\)
Subtraction: a minus b; the difference of a and b; a decreased by b; b less than a; b subtracted from a. Symbols: \(a−b\)
Multiplication: a times b; the product of a and b; twice a. Symbols: \(a·b,ab,a(b),(a)(b)\); \(2a\)
Division: a divided by b; the quotient of a and b; the ratio of a and b; b divided into a. Symbols: \(a÷b,a/b,\frac{a}{b},b \overline{\smash{)}a}\)
Look closely at these phrases using the four operations: Each phrase tells us to operate on two numbers. Look for the words of and and to find the numbers.
Translate each English phrase into an algebraic expression: ⓐ the difference of \(14x\) and 9 ⓑ the quotient of \(8y^2\) and 3 ⓒ twelve more than \(y\) ⓓ seven less than \(49x^2\)
ⓐ The key word is difference, which tells us the operation is subtraction. Look for the words of and and to find the numbers to subtract. ⓑ The key word is quotient, which tells us the operation is division. ⓒ The key words are more than. They tell us the operation is addition. More than means "added to." \[\text{twelve more than }y \\ \text{twelve added to }y \\ y+12\] ⓓ The key words are less than. They tell us to subtract. Less than means "subtracted from."
\[\text{seven less than }49x^2 \\ \text{seven subtracted from }49x^2 \\ 49x^2−7\] Exercise \(\PageIndex{23}\) Translate the English phrase into an algebraic expression: ⓐ the difference of \(14x^2\) and 13 ⓑ the quotient of \(12x\) and 2 ⓒ 13 more than \(z\) ⓓ 18 less than \(8x\) ⓐ \(14x^2−13\) ⓑ \(12x÷2\) ⓒ \(z+13\) ⓓ \(8x−18\) ⓐ the sum of \(17y^2\) and 19 ⓑ the product of 7 and y ⓒ Eleven more than \(x\) ⓓ Fourteen less than 11a ⓐ \(17y^2+19\) ⓑ \(7y\) ⓒ \(x+11\) ⓓ \(11a−14\) We look carefully at the words to help us distinguish between multiplying a sum and adding a product. ⓐ eight times the sum of x and y ⓑ the sum of eight times x and y There are two operation words—times tells us to multiply and sum tells us to add. ⓐ Because we are multiplying 8 times the sum, we need parentheses around the sum of x and y, \((x+y)\). This forces us to determine the sum first. (Remember the order of operations.) \[\text{eight times the sum of }x \text{ and }y \\ 8(x+y)\] ⓑ To take a sum, we look for the words of and and to see what is being added. Here we are taking the sum of eight times x and y. ⓐ four times the sum of p and q ⓑ the sum of four times p and q ⓐ \(4(p+q)\) ⓑ \(4p+q\) ⓐ the difference of two times x and 8 ⓑ two times the difference of x and 8 ⓐ \(2x−8\) ⓑ\(2(x−8)\) Later in this course, we'll apply our skills in algebra to solving applications. The first step will be to translate an English phrase to an algebraic expression. We'll see how to do this in the next two examples. The length of a rectangle is 14 less than the width. Let w represent the width of the rectangle. Write an expression for the length of the rectangle. \[\begin{array}{lc} \text{Write a phrase about the length of the rectangle.} & \text{14 less than the width} \\ \text{Substitute }w \text{ for "the width."} & w \\ \text{Rewrite less than as subtracted from.} & \text{14 subtracted from } w \\ \text{Translate the phrase into algebra.} & w−14 \end{array}\] The length of a rectangle is 7 less than the width. Let w represent the width of the rectangle. Write an expression for the length of the rectangle. \(w−7\) The width of a rectangle is 6 less than the length. Let l represent the length of the rectangle. Write an expression for the width of the rectangle. \(l−6\) The expressions in the next example will be used in the typical coin mixture problems we will see soon. June has dimes and quarters in her purse. The number of dimes is seven less than four times the number of quarters. Let q represent the number of quarters. Write an expression for the number of dimes. \[\begin{array}{lc} \text{Write a phrase about the number of dimes.} & \text{7 less than 4 times }q \\ \text{Translate 4 times }q. & \text{7 less than 4}q \\ \text{Translate the phrase into algebra.} & 4q−7 \end{array}\] Geoffrey has dimes and quarters in his pocket. The number of dimes is eight less than four times the number of quarters. Let q represent the number of quarters. Write an expression for the number of dimes. \(4q−8\) Lauren has dimes and nickels in her purse. The number of dimes is three more than seven times the number of nickels. Let n represent the number of nickels. Write an expression for the number of dimes. \(7n+3\) How to find the prime factorization of a composite number. If a factor is prime, that branch is complete. Circle the prime, like a bud on the tree. How To Find the least common multiple using the prime factors method. \(a=b\) is read "a is equal to b." The symbol "=" is called the equal sign. \(a≠b\) a is not equal to b. 
\(a≤b\) a is less than or equal to b. \(a≥b\) a is greater than or equal to b.
Grouping Symbols \(\begin{array}{lc} \text{Parentheses} & \mathrm{()} \\ \text{Brackets} & \mathrm{[]} \\ \text{Braces} & \mathrm{ \{ \} } \end{array}\)
Exponential Notation \(a^n\) means multiply a by itself, n times. The expression \(a^n\) is read a to the \(n^{th}\) power.
How to use the order of operations. How to combine like terms.
the sum of a and b; b added to a: \(a+b\)
b subtracted from a: \(a−b\)
a divided by b; b divided into a: \(a÷b,a/b,\frac{a}{b},b \overline{\smash{)}a}\)
Identify Multiples and Factors
In the following exercises, use the divisibility tests to determine whether each number is divisible by 2, by 3, by 5, by 6, and by 10.
Divisible by 2, 3, 6
Divisible by 2
Divisible by 3, 5
Find Prime Factorizations and Least Common Multiples
In the following exercises, find the prime factorization.
\(2⋅43\)
\(5⋅7⋅13\)
\(2⋅2⋅2⋅2⋅3⋅3⋅3\)
In the following exercises, find the least common multiple of each pair of numbers using the prime factors method.
In the following exercises, simplify each expression.
\(2^3−12÷(9−5)\)
\(3^2−18÷(11−5)\)
\(2+8(6+1)\)
\(20÷4+6(5−1)\)
\(3(1+9⋅6)−4^2\)
\(2[1+3(10−2)]\)
\(5[2+4(3−2)]\)
\(8+2[7−2(5−3)]−3^2\)
\(10+3[6−2(4−2)]−2^4\)
In the following exercises, evaluate the following expressions.
When \(x=2\), ⓐ \(x^6\) ⓑ \(4^x\) ⓒ \(2x^2+3x−7\) ⓐ 64 ⓑ 16 ⓒ 7
ⓑ \(5x\) ⓒ \(3x^2−4x−8\)
When \(x=4,y=1\) \(x^2+3xy−7y^2\)
\(6x^2+3xy−9y^2\)
When \(x=10,y=7\) \((x−y)^2\)
When \(a=3,b=8\) \(a^2+b^2\)
Simplify Expressions by Combining Like Terms
In the following exercises, simplify the following expressions by combining like terms.
\(7x+2+3x+4\) \(10x+6\)
\(8y+5+2y−4\)
\(10a+7+5a−2+7a−4\) \(22a+1\)
\(7c+4+6c−3+9c−1\)
\(3x^2+12x+11+14x^2+8x+5\) \(17x^2+20x+16\)
\(5b^2+9b+10+2b^2+3b−4\)
In the following exercises, translate the phrases into algebraic expressions.
ⓐ the difference of \(5x^2\) and \(6xy\) ⓑ the quotient of \(6y^2\) and \(5x\) ⓒ Twenty-one more than \(y^2\) ⓓ \(6x\) less than \(81x^2\) ⓐ \(5x^2−6xy\) ⓑ \(\frac{6y^2}{5x}\) ⓒ \(y^2+21\) ⓓ \(81x^2−6x\)
ⓐ the difference of \(17x^2\) and \(5xy\) ⓒ Eighteen more than \(a^2\) ⓓ \(11b\) less than \(100b^2\)
ⓐ the sum of \(4ab^2\) and \(3a^2b\) ⓑ the product of \(4y^2\) and \(5x\) ⓒ Fifteen more than \(m\) ⓓ \(9x\) less than \(121x^2\) ⓐ \(4ab^2+3a^2b\) ⓑ \(20xy^2\) ⓒ \(m+15\) ⓓ \(121x^2−9x\)
ⓐ the sum of \(3x^2y\) and \(7xy^2\) ⓑ the product of \(6xy^2\) and \(4z\) ⓒ Twelve more than \(3x^2\) ⓓ \(7x^2\) less than \(63x^3\)
ⓐ eight times the difference of \(y\) and nine ⓑ the difference of eight times \(y\) and 9 ⓐ \(8(y−9)\) ⓑ \(8y−9\)
ⓐ seven times the difference of \(y\) and one ⓑ the difference of seven times \(y\) and 1
ⓐ five times the sum of \(3x\) and \(y\) ⓑ the sum of five times \(3x\) and \(y\) ⓐ \(5(3x+y)\) ⓑ \(15x+y\)
ⓐ eleven times the sum of \(4x^2\) and \(5x\) ⓑ the sum of eleven times \(4x^2\) and \(5x\)
Eric has rock and country songs on his playlist. The number of rock songs is 14 more than twice the number of country songs. Let c represent the number of country songs. Write an expression for the number of rock songs. \(2c+14\)
The number of women in a Statistics class is 8 more than twice the number of men. Let \(m\) represent the number of men. Write an expression for the number of women.
Greg has nickels and pennies in his pocket. The number of pennies is seven less than three times the number of nickels.
Let n represent the number of nickels. Write an expression for the number of pennies. \(3n-7\) Jeannette has \($5\) and \($10\) bills in her wallet. The number of fives is three more than six times the number of tens. Let \(t\) represent the number of tens. Write an expression for the number of fives. Explain in your own words how to find the prime factorization of a composite number. Answers will vary. Why is it important to use the order of operations to simplify an expression? Explain how you identify the like terms in the expression \(8a^2+4a+9−a^2−1.\) Explain the difference between the phrases "4 times the sum of x and y" and "the sum of 4 times x and y". ⓐ Use this checklist to evaluate your mastery of the objectives of this section. ⓑ If most of your checks were: …confidently. Congratulations! You have achieved the objectives in this section. Reflect on the study skills you used so that you can continue to use them. What did you do to become confident of your ability to do these things? Be specific. …with some help. This must be addressed quickly because topics you do not master become potholes in your road to success. In math every topic builds upon previous work. It is important to make sure you have a strong foundation before you move on. Who can you ask for help? Your fellow classmates and instructor are good resources. Is there a place on campus where math tutors are available? Can your study skills be improved? …no - I don't get it! This is a warning sign and you must not ignore it. You should get help right away or you will quickly be overwhelmed. See your instructor as soon as you can to discuss your situation. Together you can come up with a plan to get you the help you need. composite number A composite number is a counting number that is not prime. It has factors other than 1 and the number itself. If a number m is a multiple of n, then m is divisible by n. To evaluate an expression means to find the value of the expression when the variables are replaced by a given number. If \(a·b=m\), then a and b are factors of m. A number is a multiple of n if it is the product of a counting number and n. The order of operations are established guidelines for simplifying an expression. To simplify an expression means to do all the math possible. A term is a constant, or the product of a constant and one or more variables. 1.1E: Exercises Lynn Marecek via OpenStax source[1]-math-5117
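To tie the section's procedures together, the sketch below uses the third-party SymPy library (an assumption: the original text does not mention any software) to check prime factorizations, least common multiples, combining like terms, and evaluating an expression at a given value of the variable.

```python
import sympy as sp

# Prime factorization and LCM (compare with the Prime Factors Method above).
print(sp.factorint(48))                 # {2: 4, 3: 1}, i.e. 2*2*2*2*3
print(sp.lcm(12, 18))                   # 36
print(sp.lcm(9, 12), sp.lcm(18, 24))    # 36 72

x = sp.symbols('x')

# Combining like terms: SymPy collects the coefficients of matching powers of x.
expr = 2*x**2 + 3*x + 7 + x**2 + 4*x + 5
print(sp.expand(expr))                  # 3*x**2 + 7*x + 12

# Evaluating an expression: substitute the value of the variable, then simplify.
print((2*x**2 + 3*x + 8).subs(x, 4))    # 52
```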
Automorphism group of real orthogonal Lie groups I would like to understand what is the "outer-automorphism group" $Out$ of $SO(p,q)$ and $O(p,q)$, where $p+q >0$ and $pq \neq 0$. My working definition of $Out$ is as follows: Let us denote by $Aut(G)$ the automorphism group of a Lie group $G$. I take the inner-automorphism group $Inn(G)$ of $G$ to be all elements $K\in Aut(G)$ for which there exists a $g\in G$ such that $K = Ad_{g}$, namely $K(h) = g h g^{-1}$ for all $h\in G$. $Inn(G)$ is a normal subgroup of $Aut(G)$ and then $Out(G) = Aut(G)/Inn(G)$ is a group which I define to be the outer-morphism group of $G$. I have not been able to find what $Out(G)$ is for $G = SO(p,q), O(p,q)$. I have noticed that there are many references dealing with the outer-automorphism group of complex Lie algebras, which can be read off from their Dynkin diagram. However, $\mathfrak{so}(p,q)\simeq\mathfrak{o}(p,q)$ is not a complex Lie algebra but a real form. I don't know how the outer-automorphism group of a simple real Lie algebra can be computed in general. In fact, Wikipedia says that the characterization of the outer-automorphism group of a real simple Lie algebra in terms of a short exact sequence involving the full and inner autmorphisms groups (a result classical for complex Lie algebras) was only obtained as recently as in 2010! In any case, I expect the answer to my question to be even more involved since I am not interested in the outer-automorphism group of a real Lie algebra but of the full real Lie group, in my case $SO(p,q)$ and $O(p,q)$. If I am not mistaken, for $q=0$ and $p = even$ we have $O(p,0) = SO(p,0)\rtimes\mathbb{Z}_{2}$, where $\mathbb{Z}_{2}$ is the outer-automorphism group of $SO(p,0)$, so $Out(SO(p,0)) = \mathbb{Z}_{2}$. dg.differential-geometry lie-groups lie-algebras BilateralBilateral $\begingroup$ This is really a phenomenon. You would expect that someone has worked this out at least 50 if not 100 years ago (at least some undergraduate thesis should exist). It is a bit tricky, and it is hard to keep track of all phenomena. One needs one or two arguments that are not entirely elementary, but nothing that Cartan or Killing would not have known. And then you would expect that in the more modern literature, the authors just say "in the special case of $O(p,q)$, our methods reproduce the old results of ...". $\endgroup$ – Sebastian Goette Apr 14 '16 at 11:47 Let's first address your comment in response to Igor Rivin's answer: why don't we find this topic addressed in textbooks on Lie groups? Beyond the definite (= compact) case, disconnectedness issues become more complicated and your question is thereby very much informed by the theory of linear algebraic groups $G$ over $\mathbf{R}$. That in turn involves two subtle aspects (see below) that are not easy to express solely in analytic terms and are therefore beyond the level of such books (which usually don't assume familiarity with algebraic geometry at the level needed to work with linear algebraic groups over a field such as $\mathbf{R}$ that is not algebraically closed). And books on linear algebraic groups tend to say little about Lie groups. The first subtlety is that $G(\mathbf{R})^0$ can be smaller than $G^0(\mathbf{R})$ (i.e., connectedness for the analytic topology may be finer than for the Zariski topology), as we know already for indefinite orthogonal groups, and textbooks on Lie groups tend to focus on the connected case for structural theorems. 
It is a deep theorem of Elie Cartan that if a linear algebraic group $G$ over $\mathbf{R}$ is Zariski-connected semisimple and simply connected (in the sense of algebraic groups; e.g., ${\rm{SL}}_n$ and ${\rm{Sp}}_{2n}$ but not ${\rm{SO}}_n$) then $G(\mathbf{R})$ is connected, but that lies beyond the level of most textbooks. (Cartan expressed his result in analytic terms via anti-holomorphic involutions of complex semisimple Lie groups, since there was no robust theory of linear algebraic groups at that time.) The group $G(\mathbf{R})$ has finitely many connected components, but that is not elementary (especially if one assumes no knowledge of algebraic geometry), and the theorem on maximal compact subgroups of Lie groups $H$ in case $\pi_0(H)$ is finite but possibly not trivial appears to be treated in only one textbook (Hochschild's "Structure of Lie groups", which however does not address the structure of automorphism groups); e.g., Bourbaki's treatise on Lie groups assumes connectedness for much of its discussion of the structure of compact Lie groups. The second subtlety is that when the purely analytic operation of "complexification" for Lie groups (developed in Hochschild's book too) is applied to the Lie group of $\mathbf{R}$-points of a (Zariski-connected) semisimple linear algebraic group, it doesn't generally "match" the easier algebro-geometric scalar extension operation on the given linear algebraic group (e.g., the complexification of the Lie group ${\rm{PGL}}_3(\mathbf{R})$ is ${\rm{SL}}_3(\mathbf{C})$, not ${\rm{PGL}}_3(\mathbf{C})$). Here too, things are better-behaved in the "simply connected" case, but that lies beyond the level of introductory textbooks on Lie groups. Now let us turn to your question. Let $n = p+q$, and assume $n \ge 3$ (so the Lie algebra is semisimple; the cases $n \le 2$ can be analyzed directly anyway). I will only address ${\rm{SO}}(p,q)$ rather than ${\rm{O}}(p, q)$, since it is already enough of a headache to keep track of disconnected effects in the special orthogonal case. To be consistent with your notation, we'll write $\mathbf{O}(p,q) \subset {\rm{GL}}_n$ to denote the linear algebraic group over $\mathbf{R}$ "associated" to the standard quadratic form of signature $(p, q)$ (so its group of $\mathbf{R}$-points is what you have denoted as ${\rm{O}}(p,q)$), and likewise for ${\mathbf{SO}}(p,q)$. We will show that ${\rm{SO}}(p, q)$ has only inner automorphisms for odd $n$, and only the expected outer automorphism group of order 2 (arising from reflection in any nonzero vector) for even $n$ in both the definite case and the case when $p$ and $q$ are each odd. I will leave it to someone else to figure out (or find a reference on?) the case with $p$ and $q$ both even and positive. We begin with some preliminary comments concerning the definite (= compact) case for all $n \ge 3$, for which the Lie group ${\rm{SO}}(p,q) = {\rm{SO}}(n)$ is connected. The crucial (non-trivial) fact is that the theory of connected compact Lie groups is completely "algebraic'', and in particular if $G$ and $H$ are two connected semisimple $\mathbf{R}$-groups for which $G(\mathbf{R})$ and $H(\mathbf{R})$ are compact then every Lie group homomorphism $G(\mathbf{R}) \rightarrow H(\mathbf{R})$ arising from a (unique) algebraic homomorphism $G \rightarrow H$. In particular, the automorphism groups of $G$ and $G(\mathbf{R})$ coincide, so the automorphism group of ${\rm{SO}}(n)$ coincides with that of $\mathbf{SO}(n)$. 
Note that any linear automorphism preserving a non-degenerate quadratic form up to a nonzero scaling factor preserves its orthogonal and special orthogonal group. It is a general fact (due to Dieudonne over general fields away from characteristic 2) that if $(V, Q)$ is a non-degenerate quadratic space of dimension $n \ge 3$ over any field $k$ and if ${\mathbf{GO}}(Q)$ denotes the linear algebraic $k$-group of conformal automorphisms then the action of the algebraic group ${\mathbf{PGO}}(Q) = {\mathbf{GO}}(Q)/{\rm{GL}}_1$ on ${\mathbf{SO}}(Q)$ through conjugation gives exactly the automorphisms as an algebraic group. More specifically, $${\mathbf{PGO}}(Q)(k) = {\rm{Aut}}_k({\mathbf{SO}}(Q)).$$ This is proved using a lot of the structure theory of connected semisimple groups over an extension field that splits the quadratic form, so it is hard to "see'' this fact working directly over the given ground field $k$ (such as $k = \mathbf{R}$); that is one of the great merits of the algebraic theory (allowing us to prove results over a field by making calculations with a geometric object over an extension field, and using techniques such as Galois theory to come back to where we began). Inside the automorphism group of the Lie group ${\rm{SO}}(p,q)$, we have built the subgroup ${\rm{PGO}}(p,q) := {\mathbf{PGO}}(p,q)(\mathbf{R})$ of "algebraic'' automorphisms (and it gives all automorphisms when $p$ or $q$ vanish). This subgroup is $${\mathbf{GO}}(p,q)(\mathbf{R})/\mathbf{R}^{\times} = {\rm{GO}}(p,q)/\mathbf{R}^{\times}.$$ To analyze the group ${\rm{GO}}(p,q)$ of conformal automorphisms of the quadratic space, there are two possibilities: if $p \ne q$ (such as whenever $p$ or $q$ vanish) then any such automorphism must involve a positive conformal scaling factor due to the need to preserve the signature, and if $p=q$ (the "split'' case: orthogonal sum of $p$ hyperbolic planes) then signature-preservation imposes no condition and we see (upon choosing a decomposition as an orthogonal sum of $p$ hyperbolic planes) that there is an evident involution $\tau$ of the vector space whose effect is to negative the quadratic form. Thus, if $p \ne q$ then ${\rm{GO}}(p,q) = \mathbf{R}^{\times} \cdot {\rm{O}}(p,q)$ whereas ${\rm{GO}}(p,p) = \langle \tau \rangle \ltimes (\mathbf{R}^{\times} \cdot {\rm{O}}(p,p))$. Hence, ${\rm{PGO}}(p,q) = {\rm{O}}(p,q)/\langle -1 \rangle$ if $p \ne q$ and ${\rm{PGO}}(p,p) = \langle \tau \rangle \ltimes ({\rm{O}}(p,p)/\langle -1 \rangle)$ for an explicit involution $\tau$ as above. We summarize the conclusions for outer automorphisms of the Lie group ${\rm{SO}}(p, q)$ arising from the algebraic theory. If $n$ is odd (so $p \ne q$) then ${\rm{O}}(p,q) = \langle -1 \rangle \times {\rm{SO}}(p,q)$ and so the algebraic automorphisms are inner (as is very well-known in the algebraic theory). Suppose $n$ is even, so $-1 \in {\rm{SO}}(p, q)$. If $p \ne q$ (with the same parity) then the group of algebraic automorphisms contributes a subgroup of order 2 to the outer automorphism group (arising from any reflection in a non-isotropic vector, for example). Finally, the contribution of algebraic automorphisms to the outer automorphism group of ${\rm{SO}}(p,p)$ has order 4 (generated by two elements of order 2: an involution $\tau$ as above and a reflection in a non-isotropic vector). 
This settles the definite case as promised (i.e., all automorphisms inner for odd $n$ and outer automorphism group of order 2 via a reflection for even $n$) since in such cases we know that all automorphisms are algebraic. Now we may and do assume $p, q > 0$. Does ${\rm{SO}}(p, q)$ have any non-algebraic automorphisms? We will show that if $n \ge 3$ is odd (i.e., $p$ and $q$ have opposite parity) or if $p$ and $q$ are both odd then there are no non-algebraic automorphisms (so we would be done). First, let's compute $\pi_0({\rm{SO}}(p,q))$ for any $n \ge 3$. By the spectral theorem, the maximal compact subgroups of ${\rm{O}}(p,q)$ are the conjugates of the evident subgroup ${\rm{O}}(q) \times {\rm{O}}(q)$ with 4 connected components, and one deduces in a similar way that the maximal compact subgroups of ${\rm{SO}}(p, q)$ are the conjugates of the evident subgroup $$\{(g,g') \in {\rm{O}}(p) \times {\rm{O}}(q)\,|\, \det(g) = \det(g')\}$$ with 2 connected components. For any Lie group $\mathscr{H}$ with finite component group (such as the group $G(\mathbf{R})$ for any linear algebraic group $G$ over $\mathbf{R}$), the maximal compact subgroups $K$ constitute a single conjugacy class (with every compact subgroup contained in one) and as a smooth manifold $\mathscr{H}$ is a direct product of such a subgroup against a Euclidean space (see Chapter XV, Theorem 3.1 of Hochschild's book "Structure of Lie groups'' for a proof). In particular, $\pi_0(\mathscr{H}) = \pi_0(K)$, so ${\rm{SO}}(p, q)$ has exactly 2 connected components for any $p, q > 0$. Now assume $n$ is odd, and swap $p$ and $q$ if necessary (as we may) so that $p$ is odd and $q>0$ is even. For any $g \in {\rm{O}}(q) - {\rm{SO}}(q)$, the element $(-1, g) \in {\rm{SO}}(p, q)$ lies in the unique non-identity component. Since $n \ge 3$ is odd, so ${\rm{SO}}(p, q)^0$ is the quotient of the connected (!) Lie group ${\rm{Spin}}(p, q)$ modulo its order-2 center, the algebraic theory in characteristic 0 gives $${\rm{Aut}}({\mathfrak{so}}(p,q)) = {\rm{Aut}}({\rm{Spin}}(p, q)) = {\rm{SO}}(p, q).$$ Thus, to find nontrivial elements of the outer automorphism group of the disconnected Lie group ${\rm{SO}}(p, q)$ we can focus attention on automorphisms $f$ of ${\rm{SO}}(p, q)$ that induce the identity on ${\rm{SO}}(p, q)^0$. We have arranged that $p$ is odd and $q>0$ is even (so $q \ge 2$). The elements $$(-1, g) \in {\rm{SO}}(p, q) \cap ({\rm{O}}(p) \times {\rm{O}}(q))$$ (intersection inside ${\rm{O}}(p, q)$, so $g \in {\rm{O}}(q) - {\rm{SO}}(q)$) have an intrinsic characterization in terms of the Lie group ${\rm{SO}}(p, q)$ and its evident subgroups ${\rm{SO}}(p)$ and ${\rm{SO}}(q)$: these are the elements outside ${\rm{SO}}(p, q)^0$ that centralize ${\rm{SO}}(p)$ and normalize ${\rm{SO}}(q)$. (To prove this, consider the standard representation of ${\rm{SO}}(p) \times {\rm{SO}}(q)$ on $\mathbf{R}^{p+q} = \mathbf{R}^n$, especially the isotypic subspaces for the action of ${\rm{SO}}(q)$ with $q \ge 2$.) Hence, for every $g \in {\rm{O}}(q) - {\rm{SO}}(q)$ we have $f(-1,g) = (-1, F(g))$ for a diffeomorphism $F$ of the connected manifold ${\rm{O}}(q) - {\rm{SO}}(q)$. Since $f$ acts as the identity on ${\rm{SO}}(q)$, it follows that the elements $g, F(g) \in {\rm{O}}(q) - {\rm{SO}}(q)$ have the same conjugation action on ${\rm{SO}}(q)$. But ${\rm{PGO}}(q) \subset {\rm{Aut}}({\rm{SO}}(q))$, so $F(g)g^{-1} \in \mathbf{R}^{\times}$ inside ${\rm{GL}}_q(\mathbf{R})$ with $q>0$ even. 
Taking determinants, this forces $F(g) = \pm g$ for a sign that may depend on $g$. But $F$ is continuous on the connected space ${\rm{O}}(q) - {\rm{SO}}(q)$, so the sign is actually independent of $g$. The case $F(g) = g$ corresponds to the identity automorphism of ${\rm{SO}}(q)$, so for the study of non-algebraic contributions to the outer automorphism group of ${\rm{SO}}(p, q)$ (with $p$ odd and $q > 0$ even) we are reduced to showing that the case $F(g) = -g$ cannot occur. We are seeking to rule out the existence of an automorphism $f$ of ${\rm{SO}}(p, q)$ that is the identity on ${\rm{SO}}(p, q)^0$ and satisfies $(-1, g) \mapsto (-1, -g)$ for $g \in {\rm{O}}(q) - {\rm{SO}}(q)$. For this to be a homomorphism, it is necessary (and sufficient) that the conjugation actions of $(-1, g)$ and $(-1, -g)$ on ${\rm{SO}}(p, q)^0$ coincide for all $g \in {\rm{O}}(q) - {\rm{SO}}(q)$. In other words, this requires that the element $(1, -1) \in {\rm{SO}}(p, q)$ centralizes ${\rm{SO}}(p, q)^0$. But the algebraic group ${\mathbf{SO}}(p, q)$ is connected (for the Zariski topology) with trivial center and the same Lie algebra as ${\rm{SO}}(p, q)^0$, so by consideration of the compatible algebraic and analytic adjoint representations we see that $(1, -1)$ cannot centralize ${\rm{SO}}(p, q)^0$. Thus, no non-algebraic automorphism of ${\rm{SO}}(p, q)$ exists in the indefinite case when $n \ge 3$ is odd. Finally, suppose $p$ and $q$ are both odd, so ${\rm{SO}}(p,q)^0$ does not contain the element $-1 \in {\rm{SO}}(p,q)$ that generates the center of ${\rm{SO}}(p,q)$ (and even the center of ${\rm{O}}(p,q)$). Thus, we have ${\rm{SO}}(p,q) = {\rm{SO}}(p,q)^0 \times \langle -1 \rangle$ with ${\rm{SO}}(p,q)^0$ having trivial center. Any (analytic) automorphism of ${\rm{SO}}(p,q)$ clearly acts trivially on the order-2 center $\langle -1 \rangle$ and must preserve the identity component too, so such an automorphism is determined by its effect on the identity component. It suffices to show that every analytic automorphism $f$ of ${\rm{SO}}(p,q)^0$ arises from an algebraic automorphism of ${\rm{SO}}(p,q)$, as then all automorphisms of ${\rm{SO}}(p,q)$ would be algebraic (so the determination of the outer analytic automorphism group for $p, q$ odd follows as for the definite case with even $n \ge 4$). By the theory of connected semisimple algebraic groups in characteristic 0, for any $p, q \ge 0$ with $p+q \ge 3$ every analytic automorphism of the connected (!) group ${\rm{Spin}}(p,q)$ is algebraic. Thus, it suffices to show that any automorphism $f$ of ${\rm{SO}}(p,q)^0$ lifts to an automorphism of the degree-2 cover $\pi:{\rm{Spin}}(p,q) \rightarrow {\rm{SO}}(p,q)^0$. (Beware that this degree-2 cover is not the universal cover if $p, q \ge 2$, as ${\rm{SO}}(p,q)^0$ has maximal compact subgroup ${\rm{SO}}(p) \times {\rm{SO}}(q)$ with fundamental group of order 4.) The Lie algebra automorphism ${\rm{Lie}}(f)$ of ${\mathfrak{so}}(p,q) = {\mathfrak{spin}}(p,q)$ arises from a unique algebraic automorphism of the group ${\mathbf{Spin}}(p,q)$ since this latter group is simply connected in the sense of algebraic groups. The induced automorphism of the group ${\rm{Spin}}(p,q)$ of $\mathbf{R}$-points does the job, since its compatibility with $f$ via $\pi$ can be checked on Lie algebras (as we are working with connected Lie groups). 
This final argument also shows that the remaining problem for even $p, q \ge 2$ is to determine if any automorphism of ${\rm{SO}}(p,q)$ that is the identity map on ${\rm{SO}}(p,q)^0$ is itself the identity map. (If affirmative for such $p, q$ then the outer automorphism group of ${\rm{SO}}(p,q)$ is of order 2, and if negative then the outer automorphism group is bigger.) nfdc23nfdc23 8052121 silver badges3535 bronze badges $\begingroup$ Thank you very much for your answer. I have to go carefully through it and try to prove the $O(p,q)$ case along similar lines (in case this is possible). Just to know if I get the correct answer: do you know by heart if $O(p,q)$ has non-trivial outer automorphisms? The relevant reference that you recommend to understand the details of your proof is then Hochschild's book or there are others more appropriates? $\endgroup$ – Bilateral Apr 11 '16 at 0:09 $\begingroup$ Hochschild's book won't help with the algebraic aspects of my argument (e.g., Dieudonne's theorem, admittedly only needed in characteristic 0); I was freely using anything I needed from the theory of linear algebraic groups, so perhaps ask a friend who knows about algebraic groups if you need some assistance with that. In the indefinite case for odd $n$ the group ${\rm{O}}(p,q) = {\rm{SO}}(p,q) \times \langle -1 \rangle$ does have a non-inner (non-algebraic!) automorphism $(g,z) \mapsto (g, f(g)z)$ for the unique surjective homomorphism $f:{\rm{SO}}(p,q) \rightarrow \langle -1 \rangle$! $\endgroup$ – nfdc23 Apr 11 '16 at 0:46 $\begingroup$ Of course, in the definite case for odd $n$ the triviality of the outer automorphism group for ${\rm{O}}(n)$ reduces to that of ${\rm{SO}}(n)$ since ${\rm{O}}(n) = {\rm{SO}}(n) \times \langle -1 \rangle$ with ${\rm{SO}}(n)$ connected. So that is an alternative (albeit perhaps much heavier) approach to what is done in Goette's answer for odd $n$. The case of even $n$ requires more serious effort (as in the indefinite case). $\endgroup$ – nfdc23 Apr 11 '16 at 0:57 $\begingroup$ I have added a treatment of ${\rm{SO}}(p,q)$ for odd $p, q$ with $p+q \ge 3$ at the end of the answer (again, all automorphisms turn out to be algebraic, so the outer automorphism group is of order 2). Analyzing the case of ${\rm{SO}}(p,q)$ for even $p, q \ge 2$ requires a better idea (especially when $n \ge 6$). $\endgroup$ – nfdc23 Apr 11 '16 at 3:47 $\begingroup$ A linear algebraic group over a field $k$ (especially not algebraically closed; e.g., finite or $\mathbf{R}$) is a very different thing from its group of $k$-points. Please talk with a colleague or friend who knows about algebraic groups to understand the difference, which is technically very essential. For example, the algebraic groups ${\rm{SL}}_3$ and ${\rm{PGL}}_3$ over $\mathbf{R}$ are not isomorphic but ${\rm{SL}}_3(\mathbf{R}) = {\rm{PGL}}_3(\mathbf{R})$! Likewise, it is "wrong" to consider ${\mathbf{SO}}(p,q)$ as another viewpoint on ${\rm{SO}}(p,q)$; they're very different objects. $\endgroup$ – nfdc23 Apr 12 '16 at 1:17 Edit. nfcd has given an almost complete answer. Let me add a few missing cases below. Unfortunately, I don't get away using only elementary methods. Final edit. I will regard the groups $G=O(p,q)$ or $G=SO(p,q)$ as Lie groups (in the $C^\infty$ setting), to avoid complications with e.g. $SO(2)$ (which would have uncountably many outer automorphisms as an abstract group). On the other hand, algebraic geometry tools do not suffice as we see in nfcd's answer. 
There are basically two sources for outer automorphisms in this case. The normaliser of $G$ in $GL(n, \mathbb R)$ could be larger than $G$ itself. In this case, one has to check which elements of $N_{GL(n)}(G)/G$ act by conjugation in a different way than elements of $G$ itself. There exists a nontrivial homomorphism $\varphi$ from $G/G^0$ to the center $C(G)$. In this case, one has to check that $g\mapsto \varphi(g)\cdot g$ is bijective. In contrast to the first type, these automorphisms change the spectral decomposition of a matrix in $G\subset GL(n,\mathbb R)$ in most cases. I could not prove in general that matrix groups have no other outer automorphisms. But a case by case proof will reveal that here, all are generated by the two types above. Let's start with $O(n)$. It is generated by reflections on hyperplanes, that is, by elements $g$ with eigenvalues $\pm 1$, such that the $-1$-eigenspace is one-dimensional. They satisfy three properties: $g^2=e$, $g\ne e$, and $C(g)\cong O(1)\times O(n-1)$. A maximal subset of commuting reflections corresponds to a selection of orthogonal lines in $\mathbb R^n$. With some extra work one sees that all automorphisms mapping reflections to reflections are inner automorphisms. The only other elements with similar properties are reflections on lines, which have a one-dimensional $1$-eigenspace. If those happen to be in $SO(n)$, they cannot generate $O(n)$, so $O(n)$ has no outer automorphisms. This happens when $n$ is odd. If $n$ is even, you have an automorphism $g\mapsto\det(g)\cdot g$ flipping both types of generators. For $n=2$, there is no difference between hyperplanes and lines, and so by the argument above, all automorphisms are inner. For $n\ge 4$ even, an inner automorphism does not change the multiplicities of eigenvalues, hence the automorphism above is outer. So $\mathrm{Out}(O(n))=\mathbb Z/2$ if $n$ is even and $n\ge 4$. For $SO(n)$, you already noticed that elements $g\in O(n)\setminus SO(n)$ give automorphisms. Because $-g$ and $g$ induce the same automorphism, this does not give an outer automorphism if $n$ is odd. Indeed, $SO(n)$ is generated by reflections along hyperplanes for $n$ odd, so by an argument as above there are no outer automorphisms. If $n$ is even, one can check that the automorphism is indeed an outer one by regarding a matrix composed of $\frac n2$ rotation blocks of small nonzero angle. Such a matrix specifies an orientation on $\mathbb R^n$ that is preserved by all inner automorphisms, but not by an element of $O(n)\setminus SO(n)$. To see that there are no more outer automorphism of $SO(n)$, one notices that every automorphism lifts to the universal cover $\mathrm{Spin}(n)$. This group is semisimple, compact, connected and simply connected for $n\ge 3$, so its automorphism group is the symmetry group of the Dynkin diagram, which is $\mathbb Z/2$ except if $n=8$. For $n=8$, the automorphism group is the symmetric group on $3$ elements. One can check that of these, only two descend to $SO(8)$. For $G=O(p,q)$ or $G=SO(p,q)$ with both $p\ne 0$, $q\ne 0$, we have the following key observation. For each outer automorphism of $G$ there exists an outer automorphism of $G$ that acts on a fixed maximal compact subgroup $K$. For let $\Phi\colon G\to G$ represent an outer automorphism. Then $\Phi(K)\subset G$ is a maximal subgroup of $K$, hence conjugate to $K$ by an inner automorphism of $G$. The composition represents the same outer automorphism and acts on $K$. Now assume that $\Phi$ acts as an inner automorphism on $K$. 
By composing with composition by an appropriate element of $K$, we may assume that $\Phi$ acts as identity on $K$. We want to find all automorphisms $\Phi$ that act as identity on all of $K$. Note that the choice of $K$ corresponds to a choice of splitting $\mathbb R^{p,q}\cong\mathbb R^p\oplus\mathbb R^q$. The group $G$ is generated by $K$ and by one-parameter groups of hyperbolic rotations that act on the span of two unit vectors $v\in\mathbb R^p$ and $w\in\mathbb R^q$ as $\bigl(\begin{smallmatrix}\cosh t&\sinh t\\\sinh t&\cosh t\end{smallmatrix}\bigr)$. All these subgroups are conjugate to each other by elements of $K$. Each subgroup commutes with a subgroup of $K$ that is isomorphic to $K\cap(O(p-1)\times O(q-1))$, and that determines the plane spanned by $v$ and $w$. The speed of such a rotation can be measured using the Killing form, which in intrinsic, so $\Phi$ can not change its absolute value. The upshot is that the only nontrivial automorphism that acts trivially on $K$ but not on $G$ is conjugation by $(\pm 1,\mp 1)\in O(p)\times O(q)$. It is an inner automorphism of $G$ except if we are dealing with $SO(p,q)$ and both $p$ and $q$ are odd. In that last case, it is an odd product of conjugations by reflections, and we will encounter it again below. So from now on we consider outer automorphisms of $K$ and see if we can extend them to $G$. We start with $O(p,q)$, which is a bit easier. Its maximal compact subgroup is $K=O(p)\times O(q)$. As above, we choose as generators a set $\mathbb RP^{p-1}\sqcup\mathbb RP^{q-1}$ consisting of all reflections along hyperplanes in both groups. Each element commutes with a subgroup isomorphic to $O(p-1,q)$ or $O(p,q-1)$, respectively. Reflections (of all of $\mathbb R^{p,q}$) along lines in $\mathbb R^p$ or $\mathbb R^q$ have similar properties. This way, we get three nontrivial endomorphisms given by multiplying each group element by a locally constant homomorphism $O(p,q)\to\{1,-1\}$, which we will denote by $\det_p$, $\det_q$ and $\det=\det_p\cdot\det_q$, which restrict on $K$ to $\det_{O(p)}$, $\det_{O(q)}$ and $\det_{O(p)}\cdot\det_{O(q)}$. If $p$ is even, multiplication with $\det_p$ is bijective, and hence an outer automorphism. If $q$ is even, multiplication with $\det_q$ is an automorphism, and if $p+q$ is even, multiplication with $\det$ is an automorphism. Only if $p=q=1$, multiplication with $\det$ does not change the spectral decomposition of at least one element of $K$, and similar as in the case of $O(2)$ above, it corresponds to the outer automorphism that comes from swapping both copies of $\mathbb R^1$. Because there are no other sets of generators with similar properties, we have found all outer automorphisms of $O(p,q)$. The maximal compact subgroup of $SO(p,q)$ is $K=S(O(p)\times O(q))=SO(p,q)\cap O(p+q)$. It has two connected components. The connected component of the identity is $K^0=SO(p)\times SO(q)$. As in the case of $SO(n)$, the only possible outer automorphisms are generated by conjugating with reflections $r$ in $O(p)$ or $O(q)$. If $p+q$ is odd, $-r\in S(O(p)\times O(q))$ has the same effect, so one gets an inner automorphism. If $p+q$ is even, one gets a nontrivial outer automorphism by an orientation argument as above for $SO(n)$. An odd number of these compose to conjugation by $(-1,1)\in O(p)\times O(q)$ considered above. And of course for $p=q$, you get additional ones swapping both factors as above. Note that none of these automorphisms changes the eigenvalues of the matrices. 
It remains to check if there are outer automorhisms that only effect $$R=K\setminus K^0=S(O(p)\times O(q))\setminus(SO(p)\times SO(q))=(O(p)\setminus SO(p))\times(O(q)\setminus SO(q))\;.$$ Such an automorphism becomes inner when restricted to $K^0$, so by composing with an inner automorphism, we find a representative $\Phi$ that acts as identity on $K^0$. We note that $R$ contains products $r_p\circ r_q$ of a reflection in $O(p)$ with a reflection in $O(q)$. The only other element of $R$ that act in the same way by conjugation on $SO(p)\times SO(q)$ would be $-r_p\circ r_q$, so all $\Phi$ can do is multiply elements of $R$ by $-1$. This gives a nontrivial endomorphism that is an outer automorphism if and only if $p$, $q$ are even (which changes the eigenvalues of some matrices, hence is not in our list above). To summarize, if $p\ne q$, the outer automorphism group is of the form $(\mathbb Z/2)^k$. Generators are given below. $$\begin{matrix} \text{group}&\text{case}&\text{generators}\\ O(n)&\text{$n$ odd or $n=2$}&\text{---}\\ O(n)&\text{$n$ even, $n\ge 4$}&\mu_{\det}\\ SO(n)&\text{$n$ odd}&\text{---}\\ SO(n)&\text{$n$ even}&C_r\\ O(p,q)&\text{$p$, $q$ odd, $p+q\ge 4$}&\mu_{\det}\\ O(p,q)&\text{$p$ even, $q$ odd}&\mu_{\det_p}\\ O(p,q)&\text{$p$, $q$ even}&\mu_{\det_p},\mu_{\det_q}\\ SO(p,q)&\text{$p$, $q$ odd}&C_r\\ SO(p,q)&\text{$p$ even, $q$ odd}&\text{---}\\ SO(p,q)&\text{$p$, $q$ even}&C_r,\mu_{\det_p} \end{matrix}$$ where $\mu_{\dots}$ denotes multiplication with a homomorphism to the center, $C_{\dots}$ denotes conjugation with an element in the normaliser, and $r$ denotes a reflection. If $p=q$, then there is an additional generator induced by swapping the two copies of $\mathbb R^p$. The full outer automorphism group will then be of the form $(\mathbb Z/2)^k\rtimes(\mathbb Z/2)$, where $(\mathbb Z/2)^k$ is the group described in the table. Sebastian GoetteSebastian Goette $\begingroup$ In the end, all arguments here were available more than 50 years ago. The argument that the $\mathbb R P^{n-1}\subset O(n)$ of reflections is rigid can be done using the Killing form (which is intrinsicly defined, hence invariant under outer automorphisms). I would be really astonished if this is nowhere in the literature. $\endgroup$ – Sebastian Goette Apr 12 '16 at 14:48 $\begingroup$ The argument for making a subgroup inclusion ${\rm{Out}}(G) \rightarrow {\rm{Out}}(K)$ seems to omit addressing two points: for well-definedness one has to show that $N_G(K) = Z_G \cdot K$ (otherwise the construction is not independent of representatives in ${\rm{Aut}}(G)$), and for injectivity it has to be shown that if an automorphism of $G$ is the identity on $K$ then it is inner. $\endgroup$ – nfdc23 Apr 12 '16 at 16:02 $\begingroup$ I meant to write $Z_G(K)K$, and it doesn't seem harmless. Choose $g \in N_G(K)$ and let $c_g \in {\rm{Aut}}(G)$ denote $g$-conjugation. For $f \in {\rm{Aut}}(G)$ that preserves $K$, $c_g \cdot f$ preserves $K$ and its image in ${\rm{Out}}(K)$ is obtained from the image of $f$ by multiplication against the class of $c_g|_K$. Thus, in order for ${\rm{Out}}(G) \rightarrow {\rm{Out}}(K)$ to be well-defined, we need that $c_g|_K$ is an inner automorphism, which is to say there exists $k \in K$ such that $gk \in Z_G(K)$. So if $N_G(K)$ is bigger than $Z_G(K)K$ then it is not well-defined. $\endgroup$ – nfdc23 Apr 12 '16 at 23:36 $\begingroup$ @nfcd my point is - I don't need a group homomorphism at all. I just need representatives of outer automorphisms that I can put my hand on. 
$\endgroup$ – Sebastian Goette Apr 14 '16 at 7:00
This is discussed in this paper by Brian Roberts (2010), where he points out that the outer automorphism group of the orthogonal groups is trivial. Igor Rivin
$\begingroup$ Are you referring to the proof of Thm 2? I haven't been able to check the given reference. But what about the map $g\mapsto\det(g)\cdot g$ on $O(2k)$ in my answer? And how would an outer automorphism of $SO(3)$ look like (which is claimed to have a "flip", and if it looks the way I imagine it would be inner)? $\endgroup$ – Sebastian Goette Apr 9 '16 at 20:43
$\begingroup$ I haven't been able to check the reference that Brian Roberts cites either. In any case, why is this result (the outer automorphism group of orthogonal groups) so elusive in the literature? Shouldn't it be a result present in any graduate book on Lie groups? $\endgroup$ – Bilateral Apr 10 '16 at 1:08
$\begingroup$ The given link says the opposite in the indefinite case. $\endgroup$ – nfdc23 Apr 11 '16 at 1:30
Onishchik: Lectures on Real Semisimple Lie Algebras and Their Representations. jorge vargas
$\begingroup$ This book on Lie algebras doesn't seem to say much about the structure of Lie groups (and their automorphisms, which as you know is a more delicate matter than in the case of Lie algebras). $\endgroup$ – nfdc23 Apr 11 '16 at 2:45
On page 386 (paragraph 66.7) you find the table Out(G)/Int(G); on page 387 you find $D_{l,j}$, $j>1$, your Lie algebras so(p,q) when your p or q is even; on page 391 you find so(p,q) when both p, q are odd. The general theorem is on pages 382-386.
$\begingroup$ You should edit your answer rather than write a new answer and again a new answer.... $\endgroup$ – Mikhail Borovoi Apr 12 '16 at 15:38
Check the book Freudenthal, Linear Lie Groups; in this book you find a complete treatment of Aut(G) for G a real semisimple (connected) Lie group. Best regards.
$\begingroup$ The main difficulty of the question posed is that the Lie groups ${\rm{SO}}(p,q)$ are disconnected, so results about automorphisms of connected groups don't seem to apply. Since the book of Freudenthal is very difficult to navigate (because it has no table of contents and uses a lot of non-standard notation and terminology), can you please point to where its "complete treatment" of automorphisms in the connected case is given? I see a discussion very early on which is too preliminary to be the "complete treatment" you're referring to. $\endgroup$ – nfdc23 Apr 11 '16 at 1:33
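As a purely numerical illustration of the map \(g\mapsto\det(g)\cdot g\) discussed in the answers above, the following NumPy sketch (added here; it is not part of the thread) checks on \(O(4)\) that the map is a group homomorphism and that it changes the eigenvalue multiset of a reflection, something conjugation by a fixed element can never do.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n):
    """Random orthogonal matrix from the QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))   # flip column signs; the matrix stays orthogonal

def phi(g):
    """The candidate outer automorphism g -> det(g) * g (n even)."""
    return np.linalg.det(g) * g

n = 4
g, h = random_orthogonal(n), random_orthogonal(n)

# Homomorphism check: phi(gh) = phi(g) phi(h), because det is multiplicative.
assert np.allclose(phi(g @ h), phi(g) @ phi(h))

# A reflection in a hyperplane has eigenvalues {-1, 1, 1, 1};
# its image under phi has eigenvalues {-1, -1, -1, 1}.
reflection = np.diag([-1.0, 1.0, 1.0, 1.0])
print(np.sort(np.linalg.eigvals(reflection).real))        # [-1.  1.  1.  1.]
print(np.sort(np.linalg.eigvals(phi(reflection)).real))   # [-1. -1. -1.  1.]
# Conjugation preserves eigenvalues, so phi cannot be an inner automorphism of O(4).
```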
Wild oscillations in a nonlinear neuron model with resets: (Ⅱ) Mixed-mode oscillations
December 2017, 22(10): 4003-4039. doi: 10.3934/dcdsb.2017205
Jonathan E. Rubin 1, Justyna Signerska-Rynkowska 2,3, Jonathan D. Touboul 2,4 and Alexandre Vidal 5
Department of Mathematics, University of Pittsburgh, Pittsburgh, USA
The Mathematical Neuroscience Team, CIRB-Collège de France (CNRS UMR 7241, INSERM U1050, UPMC ED 158, MEMOLIFE PSL), Paris, France, and Inria Paris, Mycenae Team, Paris, France
Math., Gdańsk University of Technology, Gdańsk, Poland
Department of Mathematics, Brandeis University, Waltham MA 02454, USA
Laboratoire de Mathématiques et Modélisation d'Évry (LaMME), CNRS UMR 8071, Université d'Évry-Val-d'Essonne, France
* Corresponding author: [email protected]
Received November 2016, Revised June 2017, Published August 2017
Fund Project: J. E. Rubin was partly supported by US National Science Foundation awards DMS 1312508 and 1612913. J. Signerska-Rynkowska was supported by Polish National Science Centre grant 2014/15/B/ST1/01710

This work continues the analysis of complex dynamics in a class of bidimensional nonlinear hybrid dynamical systems with resets modeling neuronal voltage dynamics with adaptation and spike emission. We show that these models can generically display a form of mixed-mode oscillations (MMOs), which are trajectories featuring an alternation of small oscillations with spikes or bursts (multiple consecutive spikes). The mechanism by which these are generated relies fundamentally on the hybrid structure of the flow: invariant manifolds of the continuous dynamics govern small oscillations, while discrete resets govern the emission of spikes or bursts, contrasting with classical MMO mechanisms in ordinary differential equations involving more than three dimensions and generally relying on a timescale separation. The decomposition of mechanisms reveals the geometrical origin of MMOs, allowing a relatively simple classification of points on the reset manifold associated to specific numbers of small oscillations. We show that the MMO pattern can be described through the study of orbits of a discrete adaptation map, which is singular as it features discrete discontinuities with unbounded left- and right-derivatives. We study orbits of the map via rotation theory for discontinuous circle maps and elucidate in detail complex behaviors arising in the case where MMOs display at most one small oscillation between each consecutive pair of spikes.

Keywords: Hybrid dynamical systems, rotation theory, mixed-mode oscillations, bursting, nonlinear integrate-and-fire neuron model.
Mathematics Subject Classification: Primary: 34K34, 37E45; Secondary: 92C20.
Citation: Jonathan E. Rubin, Justyna Signerska-Rynkowska, Jonathan D. Touboul, Alexandre Vidal. Wild oscillations in a nonlinear neuron model with resets: (Ⅱ) Mixed-mode oscillations. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10) : 4003-4039. doi: 10.3934/dcdsb.2017205
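As a rough, self-contained illustration of the kind of hybrid dynamics described in the abstract (this is not the authors' code; the model form of the reset rule and all numerical values below are assumptions of this sketch, loosely adapted from the figure captions further down), the following script integrates a planar quartic integrate-and-fire model with adaptation and a hard reset, and counts the small subthreshold oscillations occurring between consecutive spikes.

```python
import numpy as np

# illustrative quartic model F(v) = v^4 + 2 a v, with assumed reset w -> gamma*w + d
a, eps, b, I = 0.1, 0.1, 1.0, 0.1175
v_R, gamma, d = 0.1158, 0.05, 0.087
v_spike = 1.5            # numerical stand-in for the finite-time blow-up marking a spike
dt, T = 1e-3, 400.0

v, w = v_R, 0.0
spikes, small_osc_counts = [], []
osc, prev_dv = 0, 0.0

for k in range(int(T / dt)):
    dv = v**4 + 2 * a * v - w + I
    dw = eps * (b * v - w)
    # a local maximum of v inside the subthreshold band counts as one small oscillation
    if prev_dv > 0 and dv <= 0 and v < 0.4:
        osc += 1
    prev_dv = dv
    v += dt * dv
    w += dt * dw
    if v >= v_spike:                      # spike emission: apply the hybrid reset
        spikes.append(k * dt)
        small_osc_counts.append(osc)
        osc = 0
        v, w = v_R, gamma * w + d

print("spike times:", [round(s, 2) for s in spikes[:10]])
print("small oscillations between consecutive spikes:", small_osc_counts[:10])
```

The exact pattern produced depends on the assumed reset rule and on the crude Euler integration, but the alternation of spikes with a variable number of small loops around the repelling focus is the qualitative behaviour the paper classifies via the adaptation map.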
Figure 1. The geometry of MMOs: (Upper row) Phase plane with $v$ and $w$ nullclines (dashed black) and stable (red) and unstable (blue) manifolds of the saddle; the stable manifold winds around the repulsive singular point. The reset line $\{v=v_R\}$ (solid vertical line) intersects the stable manifold, separating out regions such that trajectories emanating from each undergo a specific number of small oscillations (colored segments, here from 0 to 3 below the $w$-nullcline and from $3.5$ to $0.5$ above). (Lower rows) The solution for one given initial condition in each segment. Note that the time interval varies in the different plots (indicated on the $x$-axis). Simulations had initial conditions $v=v_R=0.012$ and $w$ chosen within the different intervals on the reset line.

Figure 2. Geometry of the phase plane with indication of the points relevant in the characterization of the adaptation map $\Phi$. In this example, there are only $p=2$ intersections of $\{ v=v_R\}$ with $\mathcal{W}^s$ (thus $p_1=1$).

Figure 3. Typical topology of manifolds and sections in Lemma 3.2: we consider the correspondence map between sections $S_s$ (red) and $S_u$ (orange) transverse, respectively, to the stable and unstable manifolds (black lines) of the saddle (orange circle). Typical trajectories are plotted in blue. The key arguments are the characterization of correspondence maps associated with the linearized system (upper left inset) between two transverse sections $S_s'$ and $S_u'$, and the smooth conjugacy between the nonlinear flow and its linearization.

Figure 4. Partitions of $(d, \gamma)$ parameter space (for fixed values of the other parameters) according to geometric properties of the map $\Phi$ for the quartic model ($F=v^4+2av$, $a=\varepsilon =0.1$, $b=1$, $I=0.1175$ and $v_R=0.1158$) assuming only two intersections of the reset line with the stable manifold (see text for further information).

Figure 8. The orientation-preserving maps $\Psi_l$ (green) and $\Psi_r$ (blue) enveloping the lift $\Psi$ (red line), which is non-monotonic and admits negative jumps, for the adaptation map $\Phi$ (blue dashed curve) in the overlapping case.
Figure 5. Phase plane structure, $v$ signal generated along attractive periodic orbits and sequence of $w$ reset values for two sets of parameter values for which the map $\Phi$ is in the non-overlapping case (C4). In both cases, $v_R=0.1$ and $\gamma=0.05$. The top case ($d=0.08$) illustrates the regular spiking behavior corresponding to the rotation number $\varrho = 0$. The bottom case ($d=0.08657$) displays a complex MMBO periodic orbit with associated rational rotation number.

Figure 6. Phase plane (inset) and adaptation map (top) fulfilling condition (C4) and the additional condition $\Phi(\alpha)<w_1<\Phi(\beta)$, along with the associated MMBO orbit of system (1) (bottom). The rotation number is equal to $0.5$, hence the $v$ signal along the orbit is a periodic alternation of a pair of spikes and one small oscillation. The parameter values of the system corresponding to this simulation are $v_R=0.1$, $\gamma=0.05$ and $d=0.087$.

Figure 7. Rotation number as a function of $d$. The parameter values $v_R=0.1$ and $\gamma=0.05$ have been chosen such that the adaptation map $\Phi$ fulfills condition (C4) for any value of $d \in [0.08, 0.092]$. Theorem 4.3 applies here, and the rotation number varies as a devil's staircase, as shown in the bottom plot. The top panels show the adaptation map and corresponding attractive periodic orbit at the $d$ values labelled correspondingly in the rotation number plot; note that the rotation number for case (b) is a rational number between 1/3 and 1/2.

Figure 9. Rotation intervals for the lifts $\Psi_{d, l}, \Psi_{d, r}$ of the adaptation maps $\Phi_d$ for a range of $d$. The parameter value $\gamma=0.05$ has been chosen so that $\Phi_d$ remains in the overlapping case for all $d \in [0.0745, 0.0825]$.

Figure 10. Rotation numbers according to $(d, \gamma)$. Left panel: rotation number of the point $w=0$ together with the boundaries of the regions A to E corresponding to the different subcases when $w_1$ is the unique discontinuity of the adaptation map lying in the interval $[\beta, \alpha]$ (see text for more details). Right panel: rotation numbers of the left and right lifts $\Psi_l$ and $\Psi_r$ associated with $\Phi$ for $(d, \gamma)$ varying along the blue segment drawn in the inset.
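The devil's staircase behaviour referenced in the Figure 7 caption can be illustrated generically. The sketch below is an added illustration, not the paper's adaptation map $\Phi$ (which would require integrating the hybrid flow): it estimates the rotation number of a standard monotone circle-map lift $\Psi(x) = x + \omega + (K/2\pi)\sin(2\pi x)$ via the usual limit $\varrho = \lim_n (\Psi^n(x)-x)/n$; sweeping $\omega$ and plotting the output shows the characteristic locking plateaus.

```python
import math

def rotation_number(omega, K=0.9, n_iter=4000, x0=0.0):
    """Estimate the rotation number of the lift Psi(x) = x + omega + (K/2pi) sin(2 pi x)."""
    x = x0
    for _ in range(n_iter):
        x = x + omega + (K / (2 * math.pi)) * math.sin(2 * math.pi * x)
    return (x - x0) / n_iter

# sweep the translation parameter, analogous to sweeping d in Figure 7
for i in range(11):
    omega = 0.30 + 0.02 * i
    print(f"omega = {omega:.2f}  ->  rho ~ {rotation_number(omega):.4f}")
```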
Compute and Simplify the Matrix Expression Including Transpose and Inverse Matrices
Let $A, B, C$ be the following $3\times 3$ matrices. -1 & 0\ & 1 \\ \[(A^{\trans}-B)^{\trans}+C(B^{-1}C)^{-1}.\] (The Ohio State University, Linear Algebra Midterm Exam Problem)

Every Plane Through the Origin in the Three Dimensional Space is a Subspace
Prove that every plane in the $3$-dimensional space $\R^3$ that passes through the origin is a subspace of $\R^3$.

Quiz 4: Inverse Matrix/ Nonsingular Matrix Satisfying a Relation
(a) Find the inverse matrix of \end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason.
(b) Find a nonsingular $2\times 2$ matrix $A$ such that \[A^3=A^2B-3A^2,\] where \end{bmatrix}.\] Verify that the matrix $A$ you obtained is actually a nonsingular matrix.

Basis For Subspace Consisting of Matrices Commute With a Given Diagonal Matrix
Let $V$ be the vector space of all $3\times 3$ real matrices. Let $A$ be the matrix given below and we define \[W=\{M\in V \mid AM=MA\}.\] That is, $W$ consists of matrices that commute with $A$. Then $W$ is a subspace of $V$. Determine which matrices are in the subspace $W$ and find the dimension of $W$.
(a) \[A=\begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix},\] where $a, b, c$ are distinct real numbers.
(b) \[A=\begin{bmatrix} 0 & a & 0 \\ 0 & 0 & b \end{bmatrix},\] where $a, b$ are distinct real numbers.

Linearly Independent vectors $\mathbf{v}_1, \mathbf{v}_2$ and Linearly Independent Vectors $A\mathbf{v}_1, A\mathbf{v}_2$ for a Nonsingular Matrix
Let $\mathbf{v}_1$ and $\mathbf{v}_2$ be $2$-dimensional vectors and let $A$ be a $2\times 2$ matrix.
(a) Show that if $\mathbf{v}_1, \mathbf{v}_2$ are linearly dependent vectors, then the vectors $A\mathbf{v}_1, A\mathbf{v}_2$ are also linearly dependent.
(b) If $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent vectors, can we conclude that the vectors $A\mathbf{v}_1, A\mathbf{v}_2$ are also linearly independent?
(c) If $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent vectors and $A$ is nonsingular, then show that the vectors $A\mathbf{v}_1, A\mathbf{v}_2$ are also linearly independent.
by Yu · Published 01/19/2017

If matrix product $AB$ is a square, then is $BA$ a square matrix?
Let $A$ and $B$ be matrices such that the matrix product $AB$ is defined and $AB$ is a square matrix. Is it true that the matrix product $BA$ is also defined and $BA$ is a square matrix? If it is true, then prove it. If not, find a counterexample.

Matrix $XY-YX$ Never Be the Identity Matrix
Let $I$ be the $n\times n$ identity matrix, where $n$ is a positive integer. Prove that there are no $n\times n$ matrices $X$ and $Y$ such that \[XY-YX=I.\]

Row Equivalent Matrix, Bases for the Null Space, Range, and Row Space of a Matrix
Let \[A=\begin{bmatrix}
(a) Find a matrix $B$ in reduced row echelon form such that $B$ is row equivalent to the matrix $A$.
(b) Find a basis for the null space of $A$.
(c) Find a basis for the range of $A$ that consists of columns of $A$. For each column $A_j$ of $A$ that does not appear in the basis, express $A_j$ as a linear combination of the basis vectors.
(d) Exhibit a basis for the row space of $A$.

Determine a Matrix From Its Eigenvalue
Let \[A=\begin{bmatrix} a & -1\\ \end{bmatrix}\] be a $2\times 2$ matrix, where $a$ is some real number. Suppose that the matrix $A$ has an eigenvalue $3$.
(a) Determine the value of $a$.
(b) Does the matrix $A$ have eigenvalues other than $3$?
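The specific matrices for the first problem above are not reproduced here, but the simplification it asks for holds for any invertible $B$ and $C$: $(A^{\trans}-B)^{\trans}+C(B^{-1}C)^{-1}=A-B^{\trans}+B$. A quick symbolic check with example matrices chosen only for this illustration (not the exam's actual $A, B, C$):

```python
import sympy as sp

# example matrices (B and C must be invertible for the expression to make sense)
A = sp.Matrix([[1, 2, 3], [0, 1, 4], [5, 6, 0]])
B = sp.Matrix([[2, 0, 0], [0, 3, 0], [0, 0, 5]])
C = sp.Matrix([[1, 1, 0], [0, 1, 1], [1, 0, 1]])

lhs = (A.T - B).T + C * (B.inv() * C).inv()
rhs = A - B.T + B   # since (A^T - B)^T = A - B^T and C (B^{-1} C)^{-1} = C C^{-1} B = B

print(sp.simplify(lhs - rhs))   # zero matrix, confirming the simplification
```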
Use Cramer's Rule to Solve a $2\times 2$ System of Linear Equations
Use Cramer's rule to solve the system of linear equations
\[3x_1-2x_2=5,\qquad 7x_1+4x_2=-1.\]

Find a Matrix so that a Given Subset is the Null Space of the Matrix, hence it's a Subspace
Let $W$ be the subset of $\R^3$ defined by \[W=\left \{ \mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\in \R^3 \quad \middle| \quad 5x_1-2x_2+x_3=0 \right \}.\] Exhibit a $1\times 3$ matrix $A$ such that $W=\calN(A)$, the null space of $A$. Conclude that the subset $W$ is a subspace of $\R^3$.

A Group Homomorphism and an Abelian Group
Simple Commutative Relation on Matrices
Centralizer, Normalizer, and Center of the Dihedral Group $D_{8}$
Two Matrices are Nonsingular if and only if the Product is Nonsingular
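A direct check of the Cramer's rule problem above (a standard computation, added here as a worked example):

```python
from fractions import Fraction

# system: 3x1 - 2x2 = 5,  7x1 + 4x2 = -1
a11, a12, b1 = 3, -2, 5
a21, a22, b2 = 7, 4, -1

D  = a11 * a22 - a12 * a21   # determinant of the coefficient matrix = 26
D1 = b1 * a22 - a12 * b2     # replace the first column by the right-hand side  -> 18
D2 = a11 * b2 - b1 * a21     # replace the second column by the right-hand side -> -38

x1, x2 = Fraction(D1, D), Fraction(D2, D)
print(x1, x2)                                        # 9/13 -19/13
print(3 * x1 - 2 * x2 == 5, 7 * x1 + 4 * x2 == -1)   # True True
```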
Improved \({(g-2)_\mu }\) measurements and wino/higgsino dark matter
Regular Article - Theoretical Physics
Manimala Chakraborti (ORCID: orcid.org/0000-0002-0469-8712)1, Sven Heinemeyer2,3,4 & Ipsita Saha5
The European Physical Journal C volume 81, Article number: 1069 (2021)
A preprint version of the article is available at arXiv.

The electroweak (EW) sector of the Minimal Supersymmetric Standard Model (MSSM) can account for a variety of experimental data. In particular it can explain the persistent \(3-4\,\sigma \) discrepancy between the experimental result for the anomalous magnetic moment of the muon, \((g-2)_\mu \), and its Standard Model (SM) prediction. The lightest supersymmetric particle (LSP), which we take as the lightest neutralino, \({\tilde{\chi }}_{1}^0\), can furthermore account for the observed Dark Matter (DM) content of the universe via coannihilation with the next-to-LSP (NLSP), while being in agreement with negative results from Direct Detection (DD) experiments. Concerning the unsuccessful searches for EW particles at the LHC, owing to relatively small production cross-sections a comparably light EW sector of the MSSM is in full agreement with the experimental data. The DM relic density can fully be explained by a mixed bino/wino LSP. Here we take the relic density as an upper bound, which opens up the possibility of wino and higgsino DM. We first analyze which mass ranges of neutralinos, charginos and scalar leptons are in agreement with all experimental data, including relevant LHC searches. We find roughly an upper limit of \(\sim 600 \,\, \mathrm {GeV}\) for the LSP and NLSP masses. In a second step we assume that the new result of the Run 1 of the "MUON G-2" collaboration at Fermilab yields a precision comparable to the existing experimental result with the same central value. We analyze the potential impact of the combination of the Run 1 data with the existing \((g-2)_\mu \) data on the allowed MSSM parameter space. We find that in this case the upper limits on the LSP and NLSP masses are substantially reduced by roughly \(100 \,\, \mathrm {GeV}\). We interpret these upper bounds in view of future HL-LHC EW searches as well as future high-energy \(e^+e^-\) colliders, such as the ILC or CLIC.

One of the most important tasks at the LHC is to search for physics beyond the Standard Model (SM). This includes the production and measurement of the properties of Cold Dark Matter (CDM). These two (related) tasks will be among the top priority in the future program of high-energy particle physics. One tantalizing hint for physics beyond the SM (BSM) is the anomalous magnetic moment of the muon, \((g-2)_\mu \). The experimental result deviates from the SM prediction by \(3-4\sigma \) [1, 2]. Improved experimental results are expected soon [3] from the Run 1 data of the "MUON G-2" experiment [4]. Another clear sign for BSM physics is the precise measurement of the CDM relic abundance [5]. A final set of related constraints comes from CDM Direct Detection (DD) experiments. The LUX [6], PandaX-II [7] and XENON1T [8] experiments provide stringent limits on the spin-independent (SI) DM scattering cross-section, \(\sigma _p^{\mathrm{SI}}\). Among the BSM theories under consideration the Minimal Supersymmetric Standard Model (MSSM) [9,10,11,12] is one of the leading candidates. Supersymmetry (SUSY) predicts two scalar partners for all SM fermions as well as fermionic partners to all SM bosons.
Contrary to the case of the SM, in the MSSM two Higgs doublets are required. This results in five physical Higgs bosons instead of the single Higgs boson in the SM. These are the light and heavy \({\mathcal{CP}}\)-even Higgs bosons, h and H, the \({\mathcal{CP}}\)-odd Higgs boson, A, and the charged Higgs bosons, \(H^\pm \). The neutral SUSY partners of the (neutral) Higgs and electroweak gauge bosons gives rise to the four neutralinos, \({\tilde{\chi }}_{1,2,3,4}^0\). The corresponding charged SUSY partners are the charginos, \({\tilde{\chi }}_{1,2}^\pm \). The SUSY partners of the SM leptons and quarks are the scalar leptons and quarks (sleptons, squarks), respectively. The electroweak (EW) sector of the MSSM (the charginos, neutralinos and scalar leptons) can account for a variety of experimental data. The lightest SUSY particle (LSP), the lightest neutralino \({\tilde{\chi }}_{1}^0\), can explain the CDM relic abundance [13, 14], while not being in conflict with negative DD results and the negative LHC searches. The requirement to give the full amount of DM relic density can be met if the LSP is a bino or a mixed bino/wino state. Furthermore, the EW sector of the MSSM can account for the persistent \(3-4\,\sigma \) discrepancy of \((g-2)_\mu \). Recently in Ref. [15], assuming that the LSP gives rise to the full amount of DM relic density, upper limits on the various masses of the EW SUSY sector were derived, while being in agreement with all other experimental data.Footnote 1 The upper limits strongly depend on the deviation of the experimental result of \((g-2)_\mu \) from its SM prediction. Taking the current deviation of \((g-2)_\mu \) [1, 2], limits of roughly \(\sim 600 \,\, \mathrm {GeV}\) were set on the mass of the LSP and the next-to-LSP (NLSP). Assuming that the new result of the Run 1 of the "MUON G-2" collaboration at Fermilab yields a precision comparable to the existing experimental result with the same central value, yielded a reduction of these upper limits of roughly \(\sim 100 \,\, \mathrm {GeV}\). In this paper we perform an analysis similar to [15], but under the assumption that the relic DM only gives an upper bound, which opens up the possibility of wino and higgsino DM. We analyze the four cases of bino-dominated, bino/wino, (nearly pure) wino and higgsino LSP for the current deviation of \((g-2)_\mu \), as well as for the assumption of improved \((g-2)_\mu \) bounds from the combination of existing experimental data with the Run 1 data of the "MUON G-2" experiment. In all cases we require the agreement with all other relevant existing data, such as the DD bounds and the EW searches at the LHC. The derived upper limits on the EW masses are discussed in the context of the upcoming searches at the HL-LHC as well as at possible future \(e^+e^-\) colliders, such as the ILC [37, 38] or CLIC [38,39,40,41]. The electroweak sector of the MSSM In our notation for the MSSM we follow exactly Ref. [15]. Here we restrict ourselves to a very short introduction of the relevant parameters and symbols of the EW sector of the MSSM, consisting of charginos, neutralinos and scalar leptons. The scalar quark sector is assumed to be heavy and not to play a relevant role in our analysis. Throughout this paper we also assume that all parameters are real, i.e. the absence of \(\mathcal{CP}\)-violation. 
The masses and mixings of the neutralinos are determined (besides SM parameters) by the \(U(1)_Y\) and \(SU(2)_L\) gaugino masses \(M_1\) and \(M_2\), the Higgs mixing parameter \(\mu \) and \(\tan \beta \), the ratio of the two vacuum expectation values (vevs) of the two Higgs doublets of the MSSM, \(\tan \beta = v_2/v_1\). After diagonalization, the four eigenvalues of the matrix give the four neutralino masses \(m_{{\tilde{\chi }}_{1}^0}< m_{{\tilde{\chi }}_{2}^0}< m_{{\tilde{\chi }}_{3}^0} <m_{{\tilde{\chi }}_{4}^0}\). The masses and mixings of the charginos are determined (besides SM parameters) by \(M_2\), \(\mu \) and \(\tan \beta \). Diagonalizing the mass matrix, the two chargino-mass eigenvalues \(m_{{\tilde{\chi }}_{1}^\pm } < m_{{\tilde{\chi }}_{2}^\pm }\) can be obtained. For the sleptons, as in Ref. [15], we choose common soft SUSY-breaking parameters for all three generations. The charged slepton mass matrix is determined (besides SM parameters) by the diagonal soft SUSY-breaking parameters \(m_{{\tilde{l}}_L}^2\) and \(m_{{\tilde{l}}_R}^2\) and the trilinear coupling \(A_l\) (\(l = e, \mu , \tau \)), where the latter are taken to be zero. Mixing between the "left-handed" and "right-handed" sleptons is only relevant for scalar taus, where the off-diagonal entry in the mass matrix is given by \(-m_\tau \mu \tan \beta \). Thus, for the first two generations, the mass eigenvalues can be approximated as \(m_{{\tilde{l}}_{1}} \simeq m_{{\tilde{l}}_L}, m_{{\tilde{l}}_{2}} \simeq m_{{\tilde{l}}_R}\). In general we follow the convention that \({\tilde{l}}_1\) (\({\tilde{l}}_2\)) has the large "left-handed" ("right-handed") component. Besides the symbols equal for all three generations, we also explicitly use the scalar electron, muon and tau masses, \(m_{{\tilde{e}}_{1,2}}\), \(m_{\tilde{\mu }_{1,2}}\) and \(m_{\tilde{\tau }_{1,2}}\). The sneutrino and slepton masses are connected by the usual SU(2) relation. Overall, the EW sector at the tree level can be described with the help of six parameters: \(M_1,M_2,\mu , \tan \beta , m_{{\tilde{l}}_L}, m_{{\tilde{l}}_R}\). Throughout our analysis we neglect \({\mathcal{CP}}\)-violation and assume \(\mu , M_1, M_2 > 0 \). In Ref. [15] it was shown that choosing these parameters positive covers the relevant parameter space once the \((g-2)_\mu \) results are taken into account (see, however, the discussion in Sect. 7). Following the stronger experimental limits from the LHC [42, 43], we assume that the colored sector of the MSSM is sufficiently heavier than the EW sector, and does not play a role in this analysis. For the Higgs-boson sector we assume that the radiative corrections to the light \({\mathcal{CP}}\)-even Higgs boson (largely originating from the top/stop sector) yield a value in agreement with the experimental data, \(M_h\sim 125 \,\, \mathrm {GeV}\). This naturally yields stop masses in the TeV range [44, 45], in agreement with the above assumption. Concerning the Higgs-boson mass scale, as given by the \({\mathcal{CP}}\)-odd Higgs-boson mass, \(M_A\), we employ the existing experimental bounds from the LHC. In combination with other data, this results in a mostly non-relevant impact of the heavy Higgs bosons on our analysis, as will be discussed below.
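For illustration, the tree-level chargino and neutralino masses referred to above can be obtained numerically by diagonalizing the standard MSSM mass matrices. The sketch below uses one common basis and sign convention (conventions differ between references, so the explicit matrices are an assumption of this illustration rather than a statement of the paper's conventions), with sample higgsino-like inputs:

```python
import numpy as np

# sample inputs (illustrative, higgsino-like: mu < M1, M2); all masses in GeV
M1, M2, mu, tb = 600.0, 700.0, 300.0, 20.0
MZ, MW = 91.19, 80.38
sw = np.sqrt(0.231); cw = np.sqrt(1 - 0.231)
b = np.arctan(tb); sb, cb = np.sin(b), np.cos(b)

# neutralino mass matrix in the (bino, wino, higgsino_1, higgsino_2) basis
MN = np.array([
    [M1,            0.0,           -MZ * cb * sw,  MZ * sb * sw],
    [0.0,           M2,             MZ * cb * cw, -MZ * sb * cw],
    [-MZ * cb * sw, MZ * cb * cw,   0.0,          -mu],
    [MZ * sb * sw, -MZ * sb * cw,  -mu,            0.0],
])
mneu = np.sort(np.abs(np.linalg.eigvalsh(MN)))   # physical masses = |eigenvalues|

# chargino mass matrix; physical masses = singular values
MC = np.array([
    [M2,                   np.sqrt(2) * MW * sb],
    [np.sqrt(2) * MW * cb, mu],
])
mcha = np.sort(np.linalg.svd(MC, compute_uv=False))

print("neutralino masses [GeV]:", np.round(mneu, 1))
print("chargino masses  [GeV]:", np.round(mcha, 1))
```

For such a point the two lightest neutralinos and the lighter chargino come out close to \(|\mu|\), illustrating the near-degenerate higgsino system discussed later in the text.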
Relevant constraints The experimental result for \(a_\mu := (g-2)_\mu /2\) is dominated by the measurements made at the Brookhaven National Laboratory (BNL) [46], resulting in a world average of [47] $$\begin{aligned} a_\mu ^{\mathrm{exp}}&= 11 659 209.1 (5.4) (3.3) \times 10^{-10}, \end{aligned}$$ where the first uncertainty is statistical and the second systematic. The SM prediction of \(a_\mu \) is given by [48] (based on Refs. [1, 2, 49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66] ),Footnote 2 $$\begin{aligned} a_\mu ^{\mathrm{SM}}&= (11 659 181.0 \pm 4.3) \times 10^{-10}. \end{aligned}$$ Comparing this with the current experimental measurement in Eq. (1) results in a deviation of $$\begin{aligned} \Delta a_\mu ^{\mathrm{old}}&= (28.1 \pm 7.6) \times 10^{-10}, \end{aligned}$$ corresponding to a \(3.7\,\sigma \) discrepancy. This "current" result will be used below with a hard cut at \(2\,\sigma \) uncertainty. Efforts to improve the experimental result at Fermilab by the "MUON G-2" collaboration [4] and at J-PARC [67] aim to reduce the experimental uncertainty by a factor of four compared to the BNL measurement. For the second step in our analysis we consider the upcoming Run 1 result from the Fermilab experiment [3]. The Run 1 data is expected to have roughly the same experimental uncertainty as the current result in Eq. (1). We furthermore assume that the Run 1 data yields the same central value as the current result. Consequently, we anticipate that the experimental uncertainty shrinks by \(1/\sqrt{2}\), yielding a future value of $$\begin{aligned} \Delta a_\mu ^{\mathrm{fut}}&= (28.1 \pm 6.2) \times 10^{-10}, \end{aligned}$$ corresponding to a \(4.5\,\sigma \) discrepancy. Thus, the combination of Run 1 data with the existing experimental \((g-2)_\mu \) data has the potential to (nearly) establish the "discovery" of BSM physics. This "anticipated future" result will be used below with a hard cut at \(2\,\sigma \) uncertainty. Recently a new lattice calculation for the leading order hadronic vacuum polarization (LO HVP) contribution to \(a_\mu ^{\mathrm{SM}}\) [68] has been reported, which, however, was not used in the new theory world average, Eq. (2) [48]. Consequently, we also do not take this result into account, see also the discussions in Refs. [15, 68,69,70,71,72]. On the other hand, we are also aware that our conclusions would change substantially if the result presented in [68] turned out to be correct. In the MSSM the main contribution to \((g-2)_\mu \) at the one-loop level comes from diagrams involving \({\tilde{\chi }}_{1}^\pm -{\tilde{\nu }}\) and \({\tilde{\chi }}_{1}^0-{\tilde{\mu }}\) loops. 
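As a quick arithmetic cross-check of the anticipated future value quoted above (added here for transparency; the combination procedure is an assumption of this illustration): adding the BNL statistical and systematic uncertainties in quadrature, scaling the experimental error by \(1/\sqrt{2}\) for the assumed Run 1 combination, and adding the theory uncertainty in quadrature indeed reproduces an uncertainty of about \(6.2\times10^{-10}\).

```python
import math

stat, syst = 5.4, 3.3     # BNL uncertainties, in units of 1e-10
theory = 4.3              # SM prediction uncertainty, same units

exp_now = math.hypot(stat, syst)               # ~6.3
delta_now = math.hypot(exp_now, theory)        # ~7.6-7.7, consistent with Eq. (3) up to rounding

exp_future = exp_now / math.sqrt(2)            # assumed BNL + Run 1 combination
delta_future = math.hypot(exp_future, theory)  # ~6.2, matching Eq. (4)

print(round(delta_now, 1), round(delta_future, 1))
```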
In the case of a bino-dominated LSP the contributions are approximated as [73,74,75] $$\begin{aligned} a^{{\tilde{\chi }}^{\pm }-{\tilde{\nu }}_{\mu }}_{\mu }&\approx \frac{\alpha \, m^2_\mu \, \mu \,M_{2} \tan \beta }{4\pi \sin ^2\theta _W \, m_{\tilde{\nu }_{\mu }}^{2}}\nonumber \\&\quad \times \left( \frac{f_{\chi ^{\pm }}(M_{2}^2/m_{\tilde{\nu }_{\mu }}^2) -f_{\chi ^{\pm }}(\mu ^2/m_{\tilde{\nu }_{\mu }}^2)}{M_2^2-\mu ^2} \right) , \end{aligned}$$ $$\begin{aligned} a^{{\tilde{\chi }}^0 -{\tilde{\mu }}}_{\mu }&\approx \frac{\alpha \, m^2_\mu \, \,M_{1}(\mu \tan \beta -A_\mu )}{4\pi \cos ^2\theta _W \, (m_{\tilde{\mu }_R}^2 - m_{\tilde{\mu }_L}^2)}\nonumber \\&\quad \times \left( \frac{f_{\chi ^0}(M^2_1/m_{\tilde{\mu }_R}^2)}{m_{\tilde{\mu }_R}^2} - \frac{f_{\chi ^0}(M^2_1/m_{\tilde{\mu }_L}^2)}{m_{\tilde{\mu }_L}^2}\right) , \end{aligned}$$ where the loop functions f are as given in Ref. [75]. In our analysis the MSSM contribution to \((g-2)_\mu \) up to two-loop order is calculated using GM2Calc [76], implementing two-loop corrections from [77,78,79] (see also [80, 81]). This code also works numerically reliably for the cases of (very) compressed spectra, such as for wino and higgsino DM.
Other constraints
All other experimental constraints are taken into account exactly as in Ref. [15]. These comprise:
Vacuum stability constraints: All points are checked to possess a stable and correct EW vacuum, e.g. avoiding charge and color breaking minima. This check is performed with the public code Evade [82, 83].
Constraints from the LHC: The LHC searches for EW SUSY that prove to be the most relevant in our case are (i) the constraints from \({\tilde{\chi }}_{2}^0 {\tilde{\chi }}_{1}^\pm \) pair production leading to three leptons and \({E\!\!\!\!/_T}\) in the final state [84], and (ii) the constraint coming from slepton pair production searches leading to dilepton and \({E\!\!\!\!/_T}\) in the final state [85]. The exclusion contours provided by the experimental collaborations are based on a "simplified model" approach. However, for the models considered in our work, the kinematic configurations and compositions of the gauginos may be considerably different from the simplified scenarios. Therefore, it is important to properly recast these limits, rather than the "naive" application of the direct exclusion contours on our parameter space. We use our implementations of the ATLAS analyses Refs. [84, 85] in the program package CheckMATE [86,87,88]. A detailed description of our implementations can be found in Ref. [15]. As in Ref. [15], the constraints coming from "compressed spectra" searches [89], corresponding to very low splittings between \(m_{{\tilde{\chi }}_{1}^\pm },m_{{\tilde{\chi }}_{2}^0},m_{{\tilde{l}}_{1}}\) and \(m_{{\tilde{\chi }}_{1}^0}\), are applied directly on our parameter space. In addition to the searches described in Ref. [15], we take into account the latest constraints from the disappearing track searches at the LHC [90, 91]. These are particularly important for the wino DM scenario, where the mass gap between \({\tilde{\chi }}_{1}^\pm \) and \({\tilde{\chi }}_{1}^0\) can be \(\sim \) a few hundred \(\,\, \mathrm {MeV}\). The long-lived \({\tilde{\chi }}_{1}^\pm \) (lifetime \(\sim {{\mathcal {O}}}(\mathrm {ns})\)) decays into final states involving a \({\tilde{\chi }}_{1}^0\) and a soft pion which cannot be reconstructed within the detector.
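To give a feeling for the size of the dominant one-loop term, the following sketch evaluates the chargino-sneutrino expression displayed above. The closed form of the loop function is not reproduced in the text (it is deferred to Ref. [75]); the expression used below, \(f_{\chi^\pm}(x)=(x^2-4x+3+2\ln x)/(1-x)^3\) with \(f_{\chi^\pm}(1)=-2/3\), is one commonly quoted form and should be regarded as an assumption of this illustration, as should the sample mass values.

```python
import math

alpha = 1.0 / 137.036   # fine-structure constant
sw2 = 0.231             # sin^2(theta_W)
m_mu = 0.10566          # muon mass in GeV

def f_chargino(x):
    # one commonly used form of the chargino loop function; f(1) = -2/3
    if abs(x - 1.0) < 1e-8:
        return -2.0 / 3.0
    return (x**2 - 4.0 * x + 3.0 + 2.0 * math.log(x)) / (1.0 - x) ** 3

def a_mu_chargino(M2, mu, m_snu, tan_beta):
    """Approximate chargino-sneutrino contribution (masses in GeV)."""
    x1, x2 = (M2 / m_snu) ** 2, (mu / m_snu) ** 2
    pref = alpha * m_mu**2 * mu * M2 * tan_beta / (4.0 * math.pi * sw2 * m_snu**2)
    return pref * (f_chargino(x1) - f_chargino(x2)) / (M2**2 - mu**2)

# illustrative point: M2 = 250 GeV, mu = 400 GeV, m_snu = 350 GeV, tan(beta) = 25;
# prints a value of order 30 x 10^-10, i.e. the size needed to explain Eq. (3)
print(f"a_mu ~ {a_mu_chargino(250.0, 400.0, 350.0, 25.0) * 1e10:.1f} x 10^-10")
```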
Thus, the signal involves a charged track from \({\tilde{\chi }}_{1}^\pm \) that produces hits only in the innermost layers of the detector with no subsequent hits at larger radii.
Dark matter relic density constraints: We use the latest result from Planck [5]. $$\begin{aligned} \Omega _{\mathrm{CDM}} h^2 \; \le \; 0.122. \end{aligned}$$ As stressed above, we take the relic density as an upper limit (evaluated from the central value plus \(2\,\sigma \)). The relic density in the MSSM is evaluated with MicrOMEGAs [92,93,94,95]. An additional DM component could be, e.g., a SUSY axion [96], which would then bring the total DM density into agreement with the Planck measurement of \(\Omega _{\mathrm{CDM}} h^2 = 0.120 \pm 0.001\) [5]. In the case of wino DM, because of the extremely small mass splitting, the effect of "Sommerfeld enhancement" [97] can be very important. For wino DM providing the full amount of DM it shifts the allowed range of \(m_{{\tilde{\chi }}_{1}^0}\) from \(\sim 2.0 \,\, \mathrm {TeV}\) to about \(\sim 2.9 \,\, \mathrm {TeV}\). Since here we are interested in the case that the wino DM only gives a fraction of the whole DM relic density, see Eq. (7), we can safely neglect the Sommerfeld enhancement. The upper limit on \(m_{{\tilde{\chi }}_{1}^0}\) is given, as will be shown below, by the \((g-2)_\mu \) constraint, but not by the DM relic density. Allowing higher masses here (as would be the case if the Sommerfeld enhancement had been taken into account) could thus not lead to a larger allowed parameter space. On the other hand, for a point with a relic density fulfilling Eq. (7) the Sommerfeld enhancement would only lower the "true" DM density, which still fulfills Eq. (7).
Direct detection constraints of Dark matter: We employ the constraint on the spin-independent DM scattering cross-section \(\sigma _p^{\mathrm{SI}}\) from the XENON1T [8] experiment, evaluating the theoretical prediction for \(\sigma _p^{\mathrm{SI}}\) using MicrOMEGAs [92,93,94,95]. A combination with other DD experiments would yield only very slightly stronger limits, with a negligible impact on our results. For parameter points with \(\Omega _{{\tilde{\chi }}} h^2 \; \le \; 0.118\) (\(2\,\sigma \) lower limit from Planck [5]), we scale the cross-section with a factor of (\(\Omega _{{\tilde{\chi }}} h^2\)/0.118) to account for the fact that \({\tilde{\chi }}_{1}^0\) provides only a fraction of the total DM relic density of the universe. Here the effect of neglecting the Sommerfeld enhancement leads to a more conservative allowed region of parameter space. Another potential set of constraints is given by the indirect detection of DM. However, we do not impose these constraints on our parameter space because of the well-known large uncertainties associated with astrophysical factors like the DM density profile as well as theoretical corrections [98,99,100,101]. Nevertheless, for the sake of completeness, we briefly discuss the indirect detection constraints from current experimental data from ANTARES/IceCube [102] and Super-Kamiokande [103] for several sample points discussed in Sect. 5.4. Concretely, we evaluate the cross section for DM annihilation to SM particles, \(\langle \sigma v \rangle _{\mathrm{SM SM}}\), for the most promising channels. These cross sections can then be compared to the limits set by ANTARES/IceCube and Super-Kamiokande, which depend on the assumption of a certain DM galactic halo profile.
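A minimal sketch of the direct-detection rescaling just described (an added illustration; the function and variable names are hypothetical, not part of any of the codes named above):

```python
def rescaled_sigma_si(sigma_si, omega_h2, omega_planck_low=0.118):
    """Scale the predicted spin-independent cross-section by the relic-density
    fraction when the neutralino underproduces dark matter."""
    if omega_h2 <= omega_planck_low:
        return sigma_si * (omega_h2 / omega_planck_low)
    return sigma_si

# e.g. a point that yields only ~10% of the Planck relic density
print(rescaled_sigma_si(sigma_si=2.0e-47, omega_h2=0.012))   # ~2e-48 (units as given)
```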
The combined ANTARES/IceCube data sets an upper limit on the cross section in the ballpark of \(\langle \sigma v \rangle _{WW, \tau \tau } \lesssim 10^{-23}~\mathrm{cm}^3~\mathrm{s}^{-1}\) and \(\langle \sigma v \rangle _{b {\bar{b}}} \lesssim 10^{-22}~\mathrm{cm}^3~\mathrm{s}^{-1}\), assuming Navarro–Frenk–White profile (NFW) DM density profile [104]. The limits derived assuming the Burkert profile [105] are observed to be somewhat weaker. Similarly, we also analyze the prospects for indirect detection of these points in the next generation experiments, in particular, at the Cherenkov Telescope Array (CTA) [106], which is expected to have a substantially improved sensitivity compared to the current generation of experiments. The projected reach from the CTA shows a considerable variation in the upper limit depending on the assumed halo profile. The strongest limit assuming the Einasto profile [107] is predicted to be \(\langle \sigma v \rangle _{\tau \tau } \lesssim 2 \times 10^{-27}~\mathrm{cm}^3~\mathrm{s}^{-1}\) and \(\langle \sigma v \rangle _{WW, b {\bar{b}}} \lesssim 4 \times 10^{-27}~\mathrm{cm}^3~\mathrm{s}^{-1}\). The assumption of the "cored Einasto profile" [107] provides limits which are weaker by almost an order of magnitude. The projections for the NFW profile lies in between the above two. Parameter scan and analysis flow Parameter scan We scan the relevant MSSM parameter space to obtain lower and upper limits on the relevant neutralino, chargino and slepton masses. In order to achieve a "correct" DM relic density, see Eq. (7), by the lightest neutralino, \({\tilde{\chi }}_{1}^0\), some mechanism such as a specific co-annihilation or pole annihilation has to be active in the early universe. At the same time \(m_{{\tilde{\chi }}_{1}^0}\) must not be too high, such that the EW sector can provide the contribution required to bring the theory prediction of \(a_\mu \) into agreement with the experimental measurement, see Sect. 3. The combination of these two requirements yields the following possibilities. (The cases present a certain choice of favored possibilities, upon which one can expand, as will briefly discussed in Sect. 7.) (A) Higgsino DM This scenario is characterized by a small value of \(\mu \) (as favored, e.g., by naturalness arguments [108,109,110,111,112,113]).Footnote 3 Such a scenario is also naturally realized in Anomaly Mediation SUSY breaking (see e.g. Ref. [115] and references therein). We scan the following parameters: $$\begin{aligned}&\quad 100 \,\, \mathrm {GeV}\le \mu \le 1.2 \,\, \mathrm {TeV}, \quad 1.1 \mu \le M_1 \le 10 \mu , \nonumber \\&\quad 1.1 \mu \le M_2 \le 10 \mu , \; \quad 5 \le \tan \beta \le 60, \; \nonumber \\&\quad 100 \,\, \mathrm {GeV}\le m_{{\tilde{l}}_L}, m_{{\tilde{l}}_R}\le 2 \,\, \mathrm {TeV}. \end{aligned}$$ (B) Wino DM This scenario is characterized by a small value of \(M_2\). Such a scenario is also naturally realized in Anomaly Mediation SUSY breaking (see e.g. Ref. [115] and references therein). We scan the following parameters: $$\begin{aligned}&\quad 100 \,\, \mathrm {GeV}\le M_2 \le 1.5 \,\, \mathrm {TeV}, \quad 1.1 M_2 \le M_1 \le 10 M_2, \nonumber \\&\quad 1.1 M_2 \le \mu \le 10 M_2, \; \quad 5 \le \tan \beta \le 60, \; \nonumber \\&\quad 100 \,\, \mathrm {GeV}\le m_{{\tilde{l}}_L}, m_{{\tilde{l}}_R}\le 2 \,\, \mathrm {TeV}. 
\end{aligned}$$ The choice of \(M_2 \ll M_1, \mu \) leads (at tree-level) to a very degenerate spectrum with \(m_{{\tilde{\chi }}_{1}^\pm } - m_{{\tilde{\chi }}_{1}^0} = {{\mathcal {O}}}(1 \mathrm{\,eV})\). However, this spectrum does not correspond to the on-shell (OS) masses of all six charginos and neutralinos. Since only three (soft SUSY-breaking) mass parameters are available (\(M_1\), \(M_2\) and \(\mu \)), only three out of the six masses can be renormalized OS. The (one-loop) shifts for the three remaining masses are obtained via the \(\hbox {CCN}_i\) renormalization scheme [116] with \(i \in \{2, 3, 4\}\). In a \(\hbox {CCN}_i\) scheme the \({\tilde{\chi }}_{1}^\pm \), \({\tilde{\chi }}_{2}^\pm \) and \({\tilde{\chi }}_{i}^0\) are chosen OS. This automatically yields a good renormalization for \(M_2\) and \(\mu \). The \({\tilde{\chi }}_{i}^0\) has to be chosen, parameter point by parameter point, such that also \(M_1\) is renormalized well (which excludes the \(\hbox {CCN}_1\) for \(M_2 \ll M_1, \mu \)). For each point we choose the \({\tilde{\chi }}_{i}^0\) to be renormalized OS such that the maximum shift of all three shifted neutralino masses is minimized. We have explicitly checked for each scanned point that the \(\hbox {CCN}_i\) chosen in this way indeed yields reasonably small shifts for the three shifted neutralino masses, in particular for \(m_{{\tilde{\chi }}_{1}^0}\).Footnote 4 Only this transition to OS masses yields a mass splitting between \(m_{{\tilde{\chi }}_{1}^\pm }\) and \(m_{{\tilde{\chi }}_{1}^0}\) that then allows for the decay \({\tilde{\chi }}_{1}^\pm \rightarrow {\tilde{\chi }}_{1}^0 \pi ^\pm \).
(C) Mixed bino/wino DM This scenario has been analyzed in Ref. [15]. It can in principle be realized in three different versions corresponding to the coannihilation mechanism (see Ref. [15] for a detailed discussion). However, a larger wino component is found only for \({\tilde{\chi }}_{1}^\pm \)-coannihilation. The scan parameters are chosen as $$\begin{aligned}&\quad 100 \,\, \mathrm {GeV}\le M_1 \le 1 \,\, \mathrm {TeV}, \quad M_1 \le M_2 \le 1.1 M_1, \nonumber \\&\quad 1.1 M_1 \le \mu \le 10 M_1, \; \quad 5 \le \tan \beta \le 60, \; \nonumber \\&\quad 100 \,\, \mathrm {GeV}\le m_{{\tilde{l}}_L}\le 1.5 \,\, \mathrm {TeV}, \; \quad m_{{\tilde{l}}_R}= m_{{\tilde{l}}_L}. \end{aligned}$$ Here we choose one soft SUSY-breaking parameter for all sleptons together. While this choice should not have a relevant effect in the \({\tilde{\chi }}_{1}^\pm \)-coannihilation case, it can have an impact in the next case. In our scans we will see that the chosen lower and upper limits are not reached by the points that meet all the experimental constraints. This ensures that the chosen intervals indeed cover all the relevant parameter space.
(D) Bino DM This scenario covers the coannihilation with sleptons, and has also been analyzed in Ref. [15]. In this scenario "accidentally" the wino component of the \({\tilde{\chi }}_{1}^0\) can be non-negligible. However, this is not a distinctive feature of this scenario. We cover the two distinct cases in which either the SU(2) doublet sleptons or the singlet sleptons are close in mass to the LSP.
(D1) Case-L: SU(2) doublet $$\begin{aligned}&\quad 100 \,\, \mathrm {GeV}\le M_1 \le 1 \,\, \mathrm {TeV}, \quad M_1 \le M_2 \le 10 M_1, \nonumber \\&\quad 1.1 M_1 \le \mu \le 10 M_1, \; \quad 5 \le \tan \beta \le 60, \; \nonumber \\&\quad M_1 \le m_{{\tilde{l}}_L}\le 1.2 M_1, \quad M_1 \le m_{{\tilde{l}}_R}\le 10 M_1.
\end{aligned}$$
(D2) Case-R: SU(2) singlet $$\begin{aligned}&\quad 100 \,\, \mathrm {GeV}\le M_1 \le 1 \,\, \mathrm {TeV}, \quad M_1 \le M_2 \le 10 M_1, \nonumber \\&\quad 1.1 M_1 \le \mu \le 10 M_1, \; \quad 5 \le \tan \beta \le 60, \; \nonumber \\&\quad M_1 \le m_{{\tilde{l}}_R}\le 1.2 M_1,\; \quad M_1 \le m_{{\tilde{l}}_L}\le 10 M_1. \end{aligned}$$ In all scans we choose flat priors over the parameter space and generate \({{\mathcal {O}}}(10^7)\) points. The mass parameters of the colored sector have been set to high values, such that the resulting SUSY particle masses are outside the reach of the LHC and the light \({\mathcal{CP}}\)-even Higgs boson is in agreement with the LHC measurements (see, e.g., Refs. [44, 45]), where the concrete values are not relevant for our analysis. \(M_A\) has also been set to be above the TeV scale. Consequently, we do not include explicitly the possibility of A-pole annihilation, with \(M_A\sim 2 m_{{\tilde{\chi }}_{1}^0}\). As we will discuss below, the combination of direct heavy Higgs-boson searches with the other experimental requirements constrains this possibility substantially (see, however, also Sect. 6). Similarly, we do not consider h- or Z-pole annihilation, as such a light neutralino sector likely overshoots the \((g-2)_\mu \) contribution (see, however, the discussion in Sect. 6).
Analysis flow
The data sample is generated by scanning randomly over the input parameter range mentioned above, using a flat prior for all parameters. We use SuSpect [118] as spectrum and SLHA file generator, which yields \(\smash {\overline{\mathrm {DR}}}\) values for the chargino and neutralino masses. While in most of the cases the difference between \(\smash {\overline{\mathrm {DR}}}\) and on-shell (OS) masses is not phenomenologically relevant for our analysis, this is different for wino DM. Because of the very small mass difference of the \(\smash {\overline{\mathrm {DR}}}\) values for \(m_{{\tilde{\chi }}_{1}^0}\) and \(m_{{\tilde{\chi }}_{1}^\pm }\) we take into account the transition to OS masses, as discussed above. In the next step the points are required to satisfy the \({\tilde{\chi }}_{1}^\pm \) mass limit from LEP [119]. The SLHA output files from SuSpect are then passed as input to GM2Calc and MicrOMEGAs for the calculation of \((g-2)_\mu \) and the DM observables, respectively. The parameter points that satisfy the current \((g-2)_\mu \) constraint, Eq. (3), the DM relic density, Eq. (7), the direct detection constraints and the vacuum stability constraints as checked with Evade are then taken to the final step to be checked against the latest LHC constraints implemented in CheckMATE. The branching ratios of the relevant SUSY particles are computed using SDECAY [120] and given as input to CheckMATE.
The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{\chi }}_{1}^\pm }\) plane for the higgsino DM scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). For the color coding: see text
Higgsino DM
We start our discussion with the case of higgsino DM, as discussed in Sect. 4.1. We follow the analysis flow as described in Sect. 4.2, where the vacuum stability constraints did not affect any of the LHC allowed points. We also remark that we have checked that the (possible) one-loop corrections to the chargino/neutralino masses do not change the results in a relevant way and were thus omitted (see, however, Sect. 5.2).
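The analysis flow described in Sect. 4.2 above can be summarised in pseudo-code. The wrapper functions below are hypothetical placeholders standing in for the external tools named in the text (SuSpect, GM2Calc, MicrOMEGAs, Evade, SDECAY, CheckMATE); they are not the real interfaces of those codes, which are driven via SLHA files, and the numerical cuts simply restate Eqs. (3) and (7).

```python
# Hypothetical wrappers for the external codes; only the filtering logic is shown.
def run_spectrum(point): ...          # SuSpect: SLHA spectrum, DR-bar chargino/neutralino masses
def delta_a_mu(slha): ...             # GM2Calc: MSSM contribution to (g-2)_mu
def relic_and_sigma_si(slha): ...     # MicrOMEGAs: Omega h^2 and sigma_SI^p
def vacuum_stable(slha): ...          # Evade: EW vacuum stability
def passes_lhc(slha): ...             # SDECAY branching ratios + CheckMATE recast of LHC searches

DELTA_AMU, DELTA_AMU_ERR = 28.1e-10, 7.6e-10   # Eq. (3); use 6.2e-10 for the future case, Eq. (4)

def point_allowed(point, lep_chargino_bound, sigma_si_limit):
    slha = run_spectrum(point)
    if slha.m_chargino1 < lep_chargino_bound:                    # LEP chargino-mass limit [119]
        return False
    if abs(delta_a_mu(slha) - DELTA_AMU) > 2 * DELTA_AMU_ERR:    # hard 2-sigma (g-2)_mu cut
        return False
    omega, sigma_si = relic_and_sigma_si(slha)
    if omega > 0.122:                                            # relic density as upper bound, Eq. (7)
        return False
    if omega <= 0.118:                                           # rescale for under-abundant points
        sigma_si *= omega / 0.118
    if sigma_si > sigma_si_limit(slha.m_neutralino1):            # XENON1T-type DD bound
        return False
    return vacuum_stable(slha) and passes_lhc(slha)
```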
In the following we denote the points surviving certain constraints with different colors:
grey (round): all scan points (i.e. points excluded by \((g-2)_\mu \)).
green (round): all points that are in agreement with \((g-2)_\mu \), taking into account the current or anticipated future limits, see Eqs. (3) and (4), respectively (i.e. points excluded by the DM relic density).
blue (triangle): points that additionally obey the upper limit of the DM relic density, see Eq. (7) (i.e. points excluded by the DD constraints). In some plots below all points that pass the \((g-2)_\mu \) constraint are also in agreement with the DM relic density constraint, so that only blue (but no green) points are visible.
cyan (diamond): points that additionally pass the DD constraints, see Sect. 3 (i.e. points excluded by the LHC constraints).
red (star): points that additionally pass the LHC constraints, see Sect. 3 (i.e. points that pass all constraints).
In Fig. 1 we show our results in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{\chi }}_{1}^\pm }\) plane for the current (left) and future (right) \((g-2)_\mu \) constraint, see Eqs. (3) and (4), respectively. By definition the points are clustered in the diagonal of the plane, and \(m_{{\tilde{\chi }}_{2}^0} \approx m_{{\tilde{\chi }}_{1}^\pm }\). Starting with the \((g-2)_\mu \) constraint (green points), all of these also pass the relic density constraint (dark blue) and are thus visible as dark blue points in the plot. Overall, one can observe a clear upper limit from \((g-2)_\mu \) of about \(660 \,\, \mathrm {GeV}\) for the current limits and about \(600 \,\, \mathrm {GeV}\) from the anticipated future accuracy. As mentioned above, the DM relic density constraint does not yield changes in the allowed parameter space. Applying the DD limits, on the other hand, forces the points to have even smaller mass differences between \({\tilde{\chi }}_{1}^0\) and \({\tilde{\chi }}_{1}^\pm \). It also has an important impact on the upper limit, which is reduced to about \(500~(480) \,\, \mathrm {GeV}\) for the current (future) \((g-2)_\mu \) bounds. Applying the LHC constraints, corresponding to the "surviving" red points (stars), does not yield a further reduction from above for the current \((g-2)_\mu \) constraint, whereas for the anticipated future accuracy it yields a reduction of \(\sim 30 \,\, \mathrm {GeV}\). The LHC constraints (see also the discussion below) also always cut away (as anticipated) points in the lower mass range, resulting in a lower limit of \(\sim 120 \,\, \mathrm {GeV}\) for \(m_{{\tilde{\chi }}_{1}^0} \approx m_{{\tilde{\chi }}_{1}^\pm }\). The LHC constraint which is most effective in this parameter plane is the one designed for compressed spectra, as demonstrated in Fig. 2, showing our scan points in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\Delta m(= m_{{\tilde{\chi }}_{2}^0} - m_{{\tilde{\chi }}_{1}^0})\) parameter plane. The color coding is as in Fig. 1. The black line indicates the bound from compressed spectra searches [89]. One can clearly observe that the compressed spectra bound sits exactly in the preferred higgsino DM region, i.e. at \(\Delta m(= m_{{\tilde{\chi }}_{2}^0} - m_{{\tilde{\chi }}_{1}^0}) = {{\mathcal {O}}}(10 \,\, \mathrm {GeV})\). Another LHC constraint that is effective in this case is the bound from slepton pair production leading to dilepton and \({E\!\!\!\!/_T}\) in the final state [85], as will be discussed in more detail below.
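In other words, each scan point effectively carries the color of the first constraint in this cascade that it fails (red if it fails none). A minimal sketch of this labelling, with the pass/fail flags of the individual constraints taken as given inputs:

```python
def color_code(passes_g2, passes_relic, passes_dd, passes_lhc):
    """Label a point by the first failed constraint of the cascade described above."""
    if not passes_g2:
        return "grey (round)"      # excluded by (g-2)_mu
    if not passes_relic:
        return "green (round)"     # excluded by the relic-density upper bound
    if not passes_dd:
        return "blue (triangle)"   # excluded by the DD constraints
    if not passes_lhc:
        return "cyan (diamond)"    # excluded by the LHC constraints
    return "red (star)"            # passes all constraints
```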
The bounds from the disappearing track searches [91] turn out to be ineffective in this case because of the very short lifetime of the \({\tilde{\chi }}_{1}^\pm \) [121]. From Figs. 1, 2 one can conclude that the experimental data set an upper as well as a lower bound, yielding a clear search target for the upcoming LHC runs, and in particular for future \(e^+e^-\) colliders, as will be discussed in Sect. 6. In particular, this collider target gets (potentially) sharpened by the improvement in the \((g-2)_\mu \) measurements. The results of our parameter scan in the \(m_{{\tilde{\chi }}_{2}^0}\)–\(\Delta m(= m_{{\tilde{\chi }}_{2}^0}-m_{{\tilde{\chi }}_{1}^0})\) plane for the higgsino DM scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). For the color coding: see text The impact of the DD experiments is demonstrated in Fig. 3. We show the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\sigma _p^{\mathrm{SI}}\) plane for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding of the points (from yellow to dark blue) denotes \(M_2/\mu \), whereas in red we show the points fulfilling \((g-2)_\mu \) , relic density, DD and the LHC constraints. The black line indicates the current DD limits, here taken for sake of simplicity from XENON1T [8], as discussed in Sect. 3. It can be seen that a slight downward shift of this limit, e.g. due to additional DD experimental limits from LUX [6] or PANDAX [7], would not change our results in a strong way, but only slightly reduce the upper limit on \(m_{{\tilde{\chi }}_{1}^0}\). The scanned parameter space extends from large \(\sigma _p^{\mathrm{SI}}\) values, given for the smallest scanned \(M_2/\mu \) values to the smallest ones, reached for the largest scanned \(M_2/\mu \), i.e. the \(\sigma _p^{\mathrm{SI}}\) constraints are particularly strong for small \(M_2/\mu \), which can be understood as follows. The most important contribution to DM scattering comes from the exchange of a light \({\mathcal{CP}}\)-even Higgs boson in the t-channel. The corresponding \(h{\tilde{\chi }}_{1}^0{\tilde{\chi }}_{1}^0\) coupling at tree level is given by [122] $$\begin{aligned} c_{h{\tilde{\chi }}_{1}^0{\tilde{\chi }}_{1}^0}\simeq - \frac{1}{2} (1 + \sin 2\beta ) \left( \tan ^2\theta _\mathrm {w} \frac{M_W}{M_1-\mu } + \frac{M_W}{M_2-\mu } \right) , \nonumber \\ \end{aligned}$$ where we have assumed \(\mu > 0\). Thus, the coupling becomes large for \(\mu \sim M_2\) or \(\mu \sim M_1\). Therefore, the XENON1T DD bound pushes the allowed parameter space into the almost pure higgsino-LSP region, with negligible bino and wino component. The impact of the compressed spectra searches is visible in the lower \(m_{{\tilde{\chi }}_{1}^0}\) region. Given both CDM constraints and the LHC constraints, shown in red, the smallest \(M_2/\mu \) value we find is 2.2 for both current and anticipated future \((g-2)_\mu \) bound. This result depends mildly on the assumed \((g-2)_\mu \) constraint, as this cuts away the largest \(m_{{\tilde{\chi }}_{1}^0}\) values. All of the points will be conclusively probed by the future DD experiment XENONnT [123]. Scan results in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\sigma _p^{\mathrm{SI}}\) plane for higgsino DM scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding of the points denotes \(M_2/\mu \) and the black line indicates the DD limits (see text). 
In red we show the points fulfilling the \((g-2)_\mu \), relic density, DD and additionally the LHC constraints The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{l}}_{1}}\) plane for the higgsino DM scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 1 The distribution of \(m_{{\tilde{l}}_{1}}\) (where it should be kept in mind that we have chosen the same masses for all three generations, see Sect. 2) is presented in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{l}}_{1}}\) plane in Fig. 4, with the same color coding as in Fig. 1. The \((g-2)_\mu \) constraint places important constraints in this mass plane, since both types of masses enter into the contributing SUSY diagrams, see Sect. 3. The constraint is satisfied in a roughly triangular region with its tip around \((m_{{\tilde{\chi }}_{1}^0}, m_{{\tilde{l}}_{1}}) \sim (650 \,\, \mathrm {GeV}, 700 \,\, \mathrm {GeV})\) in the case of current \((g-2)_\mu \) constraints, and around \(\sim (600 \,\, \mathrm {GeV}, 600 \,\, \mathrm {GeV})\) in the case of the anticipated future limits, i.e. the impact of the anticipated improved limits is clearly visible as an upper limit for both masses. Since no specific other requirement is placed on the slepton sector in the higgsino DM case the slepton masses are distributed over the \((g-2)_\mu \) allowed region. The DM relic density constraint, as discussed above, does not yield any further bounds on the allowed parameter space. The inclusion of the DM DD bounds, as visible by the cyan and red points, only cuts away the very largest slepton masses (for a given LSP mass). The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\tan \beta \) plane in the higgsino DM scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 1. The black line indicates the current exclusion bounds for heavy MSSM Higgs bosons at the LHC (see text) The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^\pm }-\Delta m (= m_{{\tilde{\chi }}_{1}^\pm }-m_{{\tilde{\chi }}_{1}^0})\) plane for the wino DM scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 1 The LHC constraints cut out all points with \(m_{{\tilde{\chi }}_{1}^0} \,\lesssim \,125 \,\, \mathrm {GeV}\), as well as a triangular region with the tip around \((m_{{\tilde{\chi }}_{1}^0}, m_{{\tilde{l}}_{1}}) \sim (340 \,\, \mathrm {GeV}, 450 \,\, \mathrm {GeV})\). The first "cut" is due to the searches for compressed spectra. The second cut is mostly a result of the constraint coming from slepton pair production searches leading to dilepton and \({E\!\!\!\!/_T}\) in the final state [85]. The bound obtained by recasting the experimental search in CheckMATE is substantially weaker than the original limit from ATLAS. That limit is obtained for a "simplified model" with \(\text {BR}({\tilde{l}}_{1}, {\tilde{l}}_{2} \rightarrow l {\tilde{\chi }}_{1}^0) = 100 \%\), an assumption which is not strictly valid in our parameter space. The small mass gap among \({\tilde{\chi }}_{1}^0, {\tilde{\chi }}_{2}^0\) and \({\tilde{\chi }}_{1}^\pm \) allows significant \(\text {BR}\) of the sleptons to final states involving \({\tilde{\chi }}_{1}^\pm \) and \({\tilde{\chi }}_{2}^0\). This reduces the number of signal leptons and hence weakens the exclusion limit. 
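The weakening of the recast limit relative to the published simplified-model bound can be illustrated in a simple way: the dilepton signal from slepton pair production requires both sleptons to decay directly to a lepton and the LSP, so the signal yield scales roughly with the square of that branching ratio, compared to the 100% assumed in the simplified model. A minimal sketch with purely illustrative branching-ratio values:

```python
def dilepton_yield_fraction(br_slepton_to_lepton_lsp):
    """Approximate fraction of the simplified-model dilepton yield that survives when each
    slepton decays to lepton + LSP only with the given branching ratio (both legs needed)."""
    return br_slepton_to_lepton_lsp ** 2

for br in (1.0, 0.7, 0.5):  # illustrative values only
    print(f"BR(slepton -> l + LSP) = {br:.1f} -> relative dilepton yield = {dilepton_yield_fraction(br):.2f}")
```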
Overall we can place an upper limit on the light slepton mass of about \(\sim 1200 \,\, \mathrm {GeV}\) and \(1050 \,\, \mathrm {GeV}\) for the current and the anticipated future accuracy of \((g-2)_\mu \), respectively. Since larger values of slepton masses are reached for lower values of \(m_{{\tilde{\chi }}_{1}^0}\), the impact of \((g-2)_\mu \) is relatively weaker than in the case of chargino/neutralino masses. We finish our analysis of the higgsino DM case with the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\tan \beta \) plane presented in Fig. 5 with the same color coding as in Fig. 1. The \((g-2)_\mu \) constraint is fulfilled in a triangular region with the largest neutralino masses allowed for the largest \(\tan \beta \) values (where we stopped our scan at \(\tan \beta = 60\)), following the analytic dependence of the \((g-2)_\mu \) contributions in Sect. 3, \(a_\mu \propto \tan \beta /m_{\mathrm{EW}}^2\) (where we denote with \(m_{\mathrm{EW}}\) an overall EW mass scale. In agreement with the previous plots, the largest values for the lightest neutralino masses are \(\sim 650 \,\, \mathrm {GeV}\) \((\sim 600 \,\, \mathrm {GeV})\) for the current (anticipated future) \((g-2)_\mu \) constraint. The DM relic density does not give any additional constraint. The points allowed by the DM DD limits (cyan and red) yield the observed reduction to about \(\sim 500 \,\, \mathrm {GeV}\). The LHC constraints cut out all points at low \(m_{{\tilde{\chi }}_{1}^0}\), but nearly independent of \(\tan \beta \). As observed before, they yield a small further reduction in the case of the anticipated future \((g-2)_\mu \) accuracy. In Fig. 5 we also show as black lines the current bound from LHC searches for heavy neutral Higgs bosons [124] in the channel \(pp \rightarrow H/A \rightarrow \tau \tau \) in the \(M_h^{125}({\tilde{\chi }})\) benchmark scenario (based on the search data published in Ref. [125] using \(139\, \text{ fb}^{-1}\)).Footnote 5 In this scenario light charginos and neutralinos are present, suppressing the \(\tau \tau \) decay mode and thus yielding relatively weak limits in the \(M_A\)–\(\tan \beta \) plane (see, e.g., Fig. 5 in [124]). The black lines correspond to \(m_{{\tilde{\chi }}_{1}^0} = M_A/2\), i.e. roughly to the requirement for A-pole annihilation, where points above the black lines are experimentally excluded. It can be observed that all points allowed by \((g-2)_\mu \) are above the exclusion curve. This renders the effects of A-pole annihilation in this scenario effectively irrelevant (and justifies our choice to fix \(M_A\) above the TeV scale). Wino DM The next case under investigation is the wino DM case, as discussed in Sect. 4.1. We follow the analysis flow as described in Sect. 4.2 and denote the points surviving certain constraints with different colors as defined in Sect. 5.1. The vacuum stability test had no effect on the points passing all other constraints. The results of our wino parameter scan in the \(m_{{\tilde{\chi }}_{1}^\pm }\)–\(\tau \) plane for current (left) and anticipated future limits (right) from \((g-2)_\mu \), where \(\tau \) is the lifetime of the chargino decaying to \(\pi ^\pm {\tilde{\chi }}_{1}^0\). The current limit from CMS [91] is shown as the black solid line. The color coding is as in Fig. 6 In Fig. 6 we show our results in the \(m_{{\tilde{\chi }}_{1}^\pm }\)–\(\Delta m (= m_{{\tilde{\chi }}_{1}^\pm } - m_{{\tilde{\chi }}_{1}^0})\) plane for the current (left) and future (right) \((g-2)_\mu \) constraint, see Eqs. 
(3) and (4), respectively. We display the results for \(\Delta m\) rather than for \(m_{{\tilde{\chi }}_{1}^\pm }\), since the mass difference is very small, and the various features are more easily visible in this plane. It should be remembered that we have applied the one-loop shift to, in particular, \(m_{{\tilde{\chi }}_{1}^0}\) that allows for the decay \({\tilde{\chi }}_{1}^\pm \rightarrow {\tilde{\chi }}_{1}^0 \pi ^\pm \), see the discussion in Sect. 4.1. This results in the lower bound on \(\Delta m\) of \(\sim 0.15 \,\, \mathrm {GeV}\). As in the higgsino scenario, see Sect. 5.1, all points that pass the \((g-2)_\mu \) constraint (current and anticipated future) also pass the relic density constraint, shown as blue triangles in Fig. 6. The highest allowed chargino masses are bounded from above by the \((g-2)_\mu \) constraint. The overall allowed parameter space, shown as red stars, is furthermore bounded from "above" by the DD limits and from "below" by the LHC constraints. The DD limits cut away larger mass differences, which can be understood as follows. The \(h{\tilde{\chi }}_{1}^0{\tilde{\chi }}_{1}^0\) coupling for a wino-like \({\tilde{\chi }}_{1}^0\) is given by [122] $$\begin{aligned} c_{h{\tilde{\chi }}_{1}^0{\tilde{\chi }}_{1}^0}\simeq \frac{M_W}{M_2^2-\mu ^2}(M_2+\mu \sin 2\beta ), \end{aligned}$$ in the limit of \(||\mu |-M_2|\gg M_Z\) and a decoupled \({\mathcal{CP}}\)-odd Higgs boson (assuming also that the h-exchange dominates over the H contribution in the (spin independent) DD bounds). This coupling becomes large for \(\mu \sim M_2\). On the other hand, the tree level mass splitting between the wino-like states \({\tilde{\chi }}_{1}^\pm \) and \({\tilde{\chi }}_{1}^0\) generated (mainly by the mixing of the lighter chargino with the charged higgsino) is given as [131] $$\begin{aligned} \Delta m (= m_{{\tilde{\chi }}_{1}^\pm } - m_{{\tilde{\chi }}_{1}^0}) \simeq \frac{ M_W^4 (\sin 2\beta )^2\tan ^2 \theta _{\mathrm {w}} }{ (M_1 - M_2) \mu ^2 }, \end{aligned}$$ for \(|M_1 - M_2| \gg M_Z\). The mass splitting increases for smaller \(\mu \) values and thus coincides with larger DD cross sections, as discussed with Eq. (14). All limits together yield maximum \(\Delta m \sim 2 (0.2) \,\, \mathrm {GeV}\) for \(m_{{\tilde{\chi }}_{1}^\pm } \sim 100 (600) \,\, \mathrm {GeV}\) for the current \((g-2)_\mu \) constraint. The upper limit is reduced to \(\sim 500 \,\, \mathrm {GeV}\) for the future anticipated \((g-2)_\mu \) constraint. The relevant LHC constraint is further analyzed in Fig. 7, where we show the plane \(m_{{\tilde{\chi }}_{1}^\pm }\)–\(\tau _{{\tilde{\chi }}_{1}^\pm }\). \(\tau _{{\tilde{\chi }}_{1}^\pm }\) denotes the lifetime of the chargino decaying to \(\pi ^\pm {\tilde{\chi }}_{1}^0\). Overlaid as black line is the bound from (CMS) charged disappearing track analysis [91]. One can observe that this constraint, cutting out parameter points between \(\tau _{{\tilde{\chi }}_{1}^\pm } \sim 0.01\) ns and \(\sim 1\) ns is responsible for the main LHC exclusion. It should be noted that in the "red star area" also some points appear cyan, i.e. excluded by (other) LHC searches, where the most relevant channels are pair production of sleptons leading to two leptons and \({E\!\!\!\!/_T}\) in the final state [85]. However, these channels are not strong enough to exclude more of the \(m_{{\tilde{\chi }}_{1}^\pm }\)–\(\tau _{{\tilde{\chi }}_{1}^\pm }\) plane than the disappearing track search. 
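As a rough numerical illustration of the two tree-level expressions quoted above (the input values below are purely illustrative, and the full mass splitting in the scan additionally contains the one-loop shift discussed in Sect. 4.1):

```python
MW = 80.4                        # W-boson mass in GeV (standard SM input)
TW2 = 0.231 / (1.0 - 0.231)      # tan^2(theta_w) from sin^2(theta_w) ~ 0.231

def sin2beta(tanb):
    return 2.0 * tanb / (1.0 + tanb**2)

def c_h_lsp_lsp(M2, mu, tanb):
    """Tree-level h-LSP-LSP coupling for a wino-like LSP (valid for ||mu| - M2| >> MZ)."""
    return MW / (M2**2 - mu**2) * (M2 + mu * sin2beta(tanb))

def delta_m_tree(M1, M2, mu, tanb):
    """Tree-level chargino-LSP mass splitting in GeV (valid for |M1 - M2| >> MZ)."""
    return MW**4 * sin2beta(tanb)**2 * TW2 / ((M1 - M2) * mu**2)

# Smaller mu gives both a larger tree-level splitting and a larger coupling (stronger DD),
# which is the correlation described in the text (M1 = 2 TeV, M2 = 300 GeV, tan(beta) = 10).
for mu in (2000.0, 1000.0, 600.0):
    print(f"mu = {mu:6.0f} GeV:  Delta m = {delta_m_tree(2000.0, 300.0, mu, 10.0) * 1e3:5.2f} MeV,"
          f"  c_h = {c_h_lsp_lsp(300.0, mu, 10.0):+.4f}")
```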
It can be expected in the (near) future that improved DD bounds, cutting the allowed parameter space from small \({\tilde{\chi }}_{1}^\pm \) lifetimes, and improved disappearing track searches, cutting from large lifetimes, may substantially shrink the allowed parameter space, sharpening the upper limit on \(m_{{\tilde{\chi }}_{1}^\pm }\) and the prospects for future collider searches. The two bounds together have the potential to firmly rule out the case of wino DM in the MSSM, as will be discussed below. Scan results in the \(m_{{\tilde{\chi }}_{1}^0}\)-\(\sigma _p^{\mathrm{SI}}\) plane for the wino DM scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding of the points denotes \(\mu /M_2\) and the black solid line indicates the current DD limit from XENON1T while the black dashed and dot-dashed lines are respectively the projected reach of XENONnT and the coherent neutrino scattering floor. In red we show the points fulfilling \((g-2)_\mu \), relic density, DD and additionally the LHC constraints. The impact of the DD experiments is demonstrated in Fig. 8. We show the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\sigma _p^{\mathrm{SI}}\) plane for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding of the points (from yellow to dark green) denotes \(\mu /M_2\), whereas in red we show the points fulfilling \((g-2)_\mu \), relic density, DD and the LHC constraints. The solid black line indicates the current DD limits, here taken for the sake of simplicity from XENON1T [8], as discussed in Sect. 3. It can be seen that a slight downward shift of this limit, e.g. due to additional DD experimental limits from LUX [6] or PANDAX [7], would not change our results in a relevant way. However, moderately improved limits may have a strong impact, as discussed above. The scanned parameter space extends from large \(\sigma _p^{\mathrm{SI}}\) values, found for the smallest scanned \(\mu /M_2\) values, down to the smallest ones, reached for the largest scanned \(\mu /M_2\), i.e. the \(\sigma _p^{\mathrm{SI}}\) constraints are particularly strong for small \(\mu /M_2\). Given both CDM constraints and the LHC constraints, shown in red, the smallest \(\mu /M_2\) value we find is 1.5 for both the current and anticipated future \((g-2)_\mu \) bound. As mentioned above, the DD bound can become significantly stronger with future experiments. We show as a dashed line the projected limit of XENONnT [123], and the dot-dashed line indicates the neutrino floor [132]. One can see that the XENONnT result will either firmly exclude or detect a wino DM candidate, possibly in conjunction with improved disappearing track searches at the LHC, as discussed above. The distribution of the lighter slepton mass (where it should be kept in mind that we have chosen the same masses for all three generations, see Sect. 2) is presented in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{l}}_{1}}\) plane in Fig. 9, with the same color coding as in Fig. 6. The \((g-2)_\mu \) constraint places important constraints in this mass plane, since both types of masses enter into the contributing SUSY diagrams, see Sect. 3 (and obviously all points pass the DM relic density upper limit).
The \((g-2)_\mu \) constraint is satisfied in a triangular region with its tip around \((m_{{\tilde{\chi }}_{1}^0}, m_{{\tilde{l}}_{1}}) \sim (700 \,\, \mathrm {GeV}, 700 \,\, \mathrm {GeV})\) in the case of current \((g-2)_\mu \) constraints, and around \(\sim (600 \,\, \mathrm {GeV}, 700 \,\, \mathrm {GeV})\) in the case of the anticipated future limits. The highest slepton masses reached are about \(1500 (1200) \,\, \mathrm {GeV}\), respectively; i.e. the impact of the anticipated improved limits is clearly visible as an upper limit. After including the DD and LHC constraints the upper limits for the LSP are slightly reduced by \(\sim 100 \,\, \mathrm {GeV}\), whereas the upper limits on sleptons are not affected. Concerning the LHC searches, the constraints from slepton pair production searches rule out some parameter points from the low \(m_{{\tilde{l}}_{1}}\) region. Larger slepton masses are excluded by the same search if the "second" slepton turns out to be relatively light. Points excluded by compressed spectra searches [89], depending on the lightest chargino mass, can be found all over the plane. The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{\tilde{\mu }_{1}}\) plane for the wino DM scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 6 The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)-\(\tan \beta \) plane in the wino DM scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 6. The black line indicates the current exclusion bounds for heavy MSSM Higgs bosons at the LHC (see text) We finish our analysis of the wino DM case with the \(m_{{\tilde{\chi }}_{1}^0}\)-\(\tan \beta \) plane presented in Fig. 10 with the same color coding as in Fig. 6. The \((g-2)_\mu \) constraint is fulfilled in a triangular region with largest neutralino masses allowed for the largest \(\tan \beta \) values (where we stopped our scan at \(\tan \beta = 60\)), following the analytic dependence of the \((g-2)_\mu \) contributions in Sect. 3, \(a_\mu \propto \tan \beta /m_{\mathrm{EW}}^2\) (where we denote with \(m_{\mathrm{EW}}\) an overall EW mass scale. In agreement with the previous plots, the largest values for the lightest neutralino masses are \(\sim 600 \,\, \mathrm {GeV}\) \((\sim 500 \,\, \mathrm {GeV})\) for the current (anticipated future) \((g-2)_\mu \) constraint. The points allowed by the DM constraints (blue/cyan) are distributed all over the allowed region. The LHC constraints also cut out points distributed all over the allowed triangle, in agreement with the previous discussion. In Fig. 10 we also show as black lines the current bound from LHC searches for heavy neutral Higgs bosons [124, 125], see the discussion in Sect. 5.1. Points above the black lines are experimentally excluded. There are a few points passing the current \((g-2)_\mu \) constraint below the black A-pole line, reaching up to \(m_{{\tilde{\chi }}_{1}^0} \sim 220 \,\, \mathrm {GeV}\), for which the A-pole annihilation could provide the correct DM relic density. For the anticipated future accuracy in \((g-2)_\mu \) this mechanism would effectively be absent, making the A-pole annihilation in this scenario marginal. Bino/wino and bino DM In this section we analyze the case of bino/wino and bino DM, as defined in Sect. 4.1. The three cases defined there correspond exactly to the set of analyses in Ref. [15]. 
However, we now apply the DM relic density as an upper bound ("DM upper bound"), whereas in Ref. [15] the LSP was required to give the full amount of CDM ("DM full"). Since overall the results are similar to the ones found in Ref. [15], we will keep the discussion brief, but try to highlight the differences w.r.t. Ref. [15]. For all scenarios we find that the vacuum stability bounds have no impact on the final mass limits found. Bino/wino DM with \({\tilde{{\varvec{\chi }}}}_{1}^{\pm }\)-coannihilation In Fig. 11 we show our results in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{\chi }}_{1}^\pm }\) plane for the current (left) and future (right) \((g-2)_\mu \) constraint, see Eqs. (3) and (4), respectively. The color coding is defined in Sect. 5.1. By definition of \({\tilde{\chi }}_{1}^\pm \)-coannihilation the points are clustered in the diagonal of the plane. Overall we observe here exactly the same pattern of points as in the case of "DM full" [15]. After taking into account all constraints we find upper limits of \(\sim 600 (500) \,\, \mathrm {GeV}\) in the case of the current (future) \((g-2)_\mu \) limits. Thus, the experimental data set about the same upper as well as lower bounds as in Ref. [15] (modulo differences due to point density artefacts). Consequently, the search targets for the upcoming LHC runs, and in particular for future \(e^+e^-\) colliders, remain about the same. The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}-m_{{\tilde{\chi }}_{1}^\pm }\) plane for the bino-wino \({\tilde{\chi }}_{1}^\pm \)-coannihilation scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). For the color coding: see text. The impact of the DD experiments is demonstrated in Fig. 12. We show the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\sigma _p^{\mathrm{SI}}\) plane for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding of the points (from yellow to dark green) denotes \(\mu /M_1\), whereas in red we show the points fulfilling all constraints including the LHC ones. As in Fig. 3 the solid black line indicates the current DD limits from XENON1T [8]. Also here the results are in very good agreement with Ref. [15] (again modulo point density artefacts). The scanned parameter space extends from large \(\sigma _p^{\mathrm{SI}}\) values, found for the smallest scanned \(\mu /M_1\) values, down to the smallest ones, reached for the largest scanned \(\mu /M_1\), i.e. the \(\sigma _p^{\mathrm{SI}}\) constraints are particularly strong for small \(\mu /M_1\). For the points fulfilling \((g-2)_\mu \), both CDM constraints and the LHC constraints (shown in red), the smallest \(\mu /M_1\) value we find is 1.67 for the current and 1.78 for the anticipated future \((g-2)_\mu \) bound. The dashed line indicates the projected XENONnT limit [123], and the dot-dashed line indicates the neutrino floor [132]. One can see that XENONnT will not be able to fully test the chargino coannihilation scenario, with some points that pass all constraints (red) being even below the neutrino floor. Scan results in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\sigma _p^{\mathrm{SI}}\) plane for the bino-wino \({\tilde{\chi }}_{1}^\pm \)-coannihilation scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding of the points denotes \(\mu /M_1\) and the black lines indicate the DD limits (see text).
In red we show the points fulfilling \((g-2)_\mu \), the relic density, DD and the LHC constraints. The distribution of the lighter slepton mass (where it should be kept in mind that we have chosen the same masses for all three generations, see Sect. 2) is presented in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{l}}_{1}}\) plane in Fig. 13, with the same color coding as in Fig. 11. The \((g-2)_\mu \) constraint places important constraints in this mass plane, since both types of masses enter into the contributing SUSY diagrams, see Sect. 3. The constraint is satisfied in a triangular region with its tip around \((m_{{\tilde{\chi }}_{1}^0}, m_{{\tilde{l}}_{1}}) \sim (700 \,\, \mathrm {GeV}, 800 \,\, \mathrm {GeV})\) in the case of current \((g-2)_\mu \) constraints, and around \(\sim (600 \,\, \mathrm {GeV}, 700 \,\, \mathrm {GeV})\) in the case of the anticipated future limits, i.e. the impact of the anticipated improved limits is clearly visible as an upper limit. These results remain unchanged w.r.t. the "DM full" case [15]. The points fulfilling the DM relic density constraint (blue/cyan/red) are distributed all over the \((g-2)_\mu \) allowed range. They extend to somewhat higher slepton masses as compared to Ref. [15], due to the less restrictive DM upper bound.Footnote 6 The DD limits cut away the largest values, in particular for \(m_{{\tilde{l}}_{1}}\), which can be understood as follows. Large \(m_{{\tilde{l}}_{1}}\) values, with correspondingly large sneutrino masses, require smaller \(\mu \) to satisfy the \((g-2)_\mu \) constraint. This in turn puts them in tension with the DD bounds, see Fig. 12. The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{l}}_{1}}\) plane for the bino-wino \({\tilde{\chi }}_{1}^\pm \)-coannihilation scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 11. The LHC constraints cut out lower slepton masses, following the same pattern as in Ref. [15]. They cut away masses up to \(m_{{\tilde{l}}_{1}} \,\lesssim \,450 \,\, \mathrm {GeV}\), as well as part of the very low \(m_{{\tilde{\chi }}_{1}^0}\) points, nearly independently of \(m_{{\tilde{l}}_{1}}\). Here the latter "cut" is due to the searches for compressed spectra with \({\tilde{\chi }}_{1}^\pm , {\tilde{\chi }}_{2}^0\) decaying via off-shell gauge bosons [89]. The first "cut" is mostly a result of the searches for slepton pair production with a decay to two leptons plus missing energy [85]. As was demonstrated and discussed in detail in Ref. [15], for this limit it is crucial to employ a proper re-cast of the LHC searches, rather than a naive application of the published bounds. Overall we can place an upper limit on the light slepton mass of about \(\sim 1050 \,\, \mathrm {GeV}\) and \(\sim 950\,\, \mathrm {GeV}\) for the current and the anticipated future accuracy of \((g-2)_\mu \), respectively. The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\tan \beta \) plane in the bino-wino \({\tilde{\chi }}_{1}^\pm \)-coannihilation scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 11. The black line indicates the current exclusion bounds for heavy MSSM Higgs bosons at the LHC (see text). We finish our analysis of the \({\tilde{\chi }}_{1}^\pm \)-coannihilation case with the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\tan \beta \) plane presented in Fig. 14 with the same color coding as in Fig. 11.
For this plane we find that the results are in full agreement with the "DM full" case as analyzed in Ref. [15]. The \((g-2)_\mu \) constraint is fulfilled in a triangular region with largest neutralino masses allowed for the largest \(\tan \beta \) values (\(\tan \beta = 60\)). In agreement with the previous plots, the largest values for the lightest neutralino masses are \(\sim 600 \,\, \mathrm {GeV}\) \((\sim 500 \,\, \mathrm {GeV})\) for the current (anticipated future) \((g-2)_\mu \) constraint. The points allowed by the DM constraints (blue/cyan) are distributed all over the allowed region. The LHC constraints cut out points at low \(m_{{\tilde{\chi }}_{1}^0}\), but nearly independent on \(\tan \beta \). As in the previous scenarios, in Fig. 14 we also show as black lines the current bound from LHC searches for heavy neutral Higgs bosons [124] in the channel \(pp \rightarrow H/A \rightarrow \tau \tau \) in the \(M_h^{125}({\tilde{\chi }})\) benchmark scenario. As before the black lines correspond to \(m_{{\tilde{\chi }}_{1}^0} = M_A/2\), i.e. roughly to the requirement for A-pole annihilation, where points above the black lines are experimentally excluded. The improved limits of the experimental analysis based on \(139~\text{ fb}^{-1}\) [125], but now with the relaxed DM relic density bound, still allow parameter points that cannot be regarded as excluded. They are found for \(m_{{\tilde{\chi }}_{1}^0} \,\lesssim \,240 (200) \,\, \mathrm {GeV}\) and \(\tan \beta \,\lesssim \,12\) in the case of the current (anticipated future) \((g-2)_\mu \) constraints. To analyze these points a dedicated analysis of the A-pole annihilation would be required. This dedicated analysis we leave for future work. Bino DM with \({\tilde{{\varvec{l}}}}^\pm \)-coannihilation case-L We now turn to the case of bino DM with \({\tilde{l}}^\pm \)-coannihilation. As discussed in Sect. 4.1 we distinguish two cases, depending on which of the two slepton soft SUSY-breaking parameters is set to be close to \(m_{{\tilde{\chi }}_{1}^0}\). We start with the Case-L, where we chose \(m_{{\tilde{l}}_{L}} \sim M_1\), i.e. the left-handed charged sleptons as well as the sneutrinos are close in mass to the LSP. We find that all six sleptons are close in mass and differ by less than \(\sim 50 \,\, \mathrm {GeV}\). The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{\tilde{\mu }_{1}}\) plane for the \({\tilde{l}}\)-coannihilation case-L scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 6 In Fig. 15 we show the results of our scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{\tilde{\mu }_{1}}\) plane, as before for the current \((g-2)_\mu \) constraint (left) and the anticipated future constraint (right). The color coding of the points is the same as in Fig. 1, see the description in the beginning of Sect. 5.1. By definition of the scenario, the points are located along the diagonal of the plane. Taking all bounds into account we find upper limits on the LSP mass of \(\sim 500 \,\, \mathrm {GeV}\) and \(\sim 450 \,\, \mathrm {GeV}\) for the current and anticipated future \((g-2)_\mu \) accuracy, respectively. The limits on \(m_{\tilde{\mu }_{1}}\) are about \(\sim 50 - 100 \,\, \mathrm {GeV}\) larger. In Fig. 16 we show the results in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{\chi }}_{1}^\pm }\) plane with the same color coding as in Fig. 15. 
We find that the light chargino mass assumes an upper limit of \(\sim 1600 \,\, \mathrm {GeV}\) (\(\sim 1400 \,\, \mathrm {GeV}\)) for the current (anticipated future) \((g-2)_\mu \) constraint. Clearly visible are the LHC constraints for \({\tilde{\chi }}_{2}^0\)–\({\tilde{\chi }}_{1}^\pm \) pair production leading to three leptons and \({E\!\!\!\!/_T}\) in the final state [84]. At very low values of both \(m_{{\tilde{\chi }}_{1}^0}\) and \(m_{{\tilde{\chi }}_{1}^\pm }\) the compressed spectra searches [89] cut away points up to \(m_{{\tilde{\chi }}_{1}^0} \,\lesssim \,150 \,\, \mathrm {GeV}\). The results for the \({\tilde{l}}^\pm \)-coannihilation Case-L in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\tan \beta \) plane are presented in Fig. 17.Footnote 7 The overall picture is similar to the \({\tilde{\chi }}_{1}^\pm \)-coannihilation case shown above in Fig. 14. Larger LSP masses are allowed for larger \(\tan \beta \) values. On the other hand, the combination of small \(m_{{\tilde{\chi }}_{1}^0}\) and large \(\tan \beta \) leads to a too large contribution to \(a_\mu ^{\mathrm{SUSY}}\) and is thus excluded. As in the other scenarios we also show the limits from H/A searches at the LHC as a solid line. Unlike in the \({\tilde{\chi }}_{1}^\pm \)-coannihilation case, substantially fewer points remain below the black line, making A-pole annihilation a remote possibility in this scenario. It is worth mentioning here that a bug in GM2Calc-v1.5.0 had been detected, which was then fixed in the updated version v1.7.5. The bug fix leads to a significant change in the \((g-2)_\mu \) value in this scenario (but not in the others), which results in a large change in the upper limits on the EW masses. Consequently, we refrain from a detailed comparison between the results obtained here for case-L and the ones using the DM relic density as a direct measurement [15]. More specifically, the upper limit on the lightest chargino mass has been substantially reduced compared to what has been found in [15]. On the other hand, the upper limits on the LSP and NLSP masses remain in the same ballpark.Footnote 8 Bino DM with \({\tilde{{\varvec{l}}}}^\pm \)-coannihilation case-R We now turn to our fifth scenario, bino DM with \({\tilde{l}}^\pm \)-coannihilation Case-R, where in the scan we require the "right-handed" sleptons to be close in mass to the LSP. It should be kept in mind that in our notation we do not mass-order the sleptons: for negligible mixing, as is the case for selectrons and smuons, the "left-handed" ("right-handed") slepton corresponds to \({\tilde{l}}_1\) (\({\tilde{l}}_2\)). As will be seen below, in this scenario all relevant mass scales are required to be relatively light by the \((g-2)_\mu \) constraint.Footnote 9 We start in Fig. 18 with the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{\tilde{\mu }_{2}}\) plane with the same color coding as in Fig. 11. By definition of the scenario the points are concentrated on the diagonal. The current (future) \((g-2)_\mu \) bound yields upper limits on the LSP of \(\sim 700 (600) \,\, \mathrm {GeV}\), as well as an upper limit on \(m_{\tilde{\mu }_{2}}\) (which is close in mass to the \({\tilde{e}}_{2}\) and \({{\tilde{\tau }}_{2}}\)) of \(\sim 800 (700) \,\, \mathrm {GeV}\). These limits remain unchanged by the inclusion of the DM relic density bound. The limits agree well with the ones found in the case with the relic density measurement taken as a direct measurement [15].
Including the DD and LHC constraints, these limits reduce to \(\sim 500~(380) \,\, \mathrm {GeV}\) for the LSP for the current (future) \((g-2)_\mu \) bounds, and correspondingly to \(\sim 550~(440) \,\, \mathrm {GeV}\) for \(m_{\tilde{\mu }_{2}}\). The LHC constraints cut out some, but not all, lower-mass points, where the searches for compressed slepton/neutralino spectra [89] are most relevant. Due to the larger splitting, the "right-handed" stau turns out to be the NLSP in this scenario, where the upper bounds are found at \(\sim 500~(380) \,\, \mathrm {GeV}\) for the current (anticipated future) \((g-2)_\mu \) bounds. The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}-m_{{\tilde{\chi }}_{1}^\pm }\) plane for the \({\tilde{l}}\)-coannihilation case-L scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 15. The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\tan \beta \) plane in the \({\tilde{l}}\)-coannihilation case-L scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 15. The black line indicates the current exclusion bounds for heavy MSSM Higgs bosons at the LHC (see text). The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{\tilde{\mu }_{2}}\) plane for the \({\tilde{l}}\)-coannihilation case-R scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 15. The distribution of the heavier slepton is displayed in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{\tilde{\mu }_{1}}\) plane in Fig. 19. As before, the results agree well with the ones found in Ref. [15]. Although the "left-handed" sleptons are allowed to be much heavier, the \((g-2)_\mu \) constraint imposes an upper limit of \(\sim 950~(800) \,\, \mathrm {GeV}\) in the case of the current (future) \((g-2)_\mu \) precision. Taking into account the CDM and LHC constraints we find upper limits for \(m_{\tilde{\mu }_{1}}\) of \(\sim 850 \,\, \mathrm {GeV}\) and \(\sim 800 \,\, \mathrm {GeV}\) for the current and anticipated future \((g-2)_\mu \) constraint, respectively. In Fig. 20 we show the results in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(m_{{\tilde{\chi }}_{1}^\pm }\) plane with the same color coding as in Fig. 18. The results are again in qualitative agreement with the case of the DM relic density taken as a direct measurement [15]. As in the Case-L the \((g-2)_\mu \) limits on \(m_{{\tilde{\chi }}_{1}^0}\) become slightly stronger for larger chargino masses. The upper limits on the chargino mass, however, are substantially stronger than in the Case-L. Taking all constraints into account, they are reached at \(\sim 1540 \,\, \mathrm {GeV}\) for the current and \(\sim 1350 \,\, \mathrm {GeV}\) for the anticipated future precision in \(a_\mu \). The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}-m_{{\tilde{\chi }}_{1}^\pm }\) plane for the \({\tilde{l}}\)-coannihilation case-R scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 18. The results of our parameter scan in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\tan \beta \) plane in the \({\tilde{l}}\)-coannihilation case-R scenario for current (left) and anticipated future limits (right) from \((g-2)_\mu \). The color coding is as in Fig. 18.
The black line indicates the current exclusion bounds for heavy MSSM Higgs bosons at the LHC (see text). We finish our analysis of the \({\tilde{l}}^\pm \)-coannihilation Case-R with the results in the \(m_{{\tilde{\chi }}_{1}^0}\)–\(\tan \beta \) plane, presented in Fig. 21. The overall picture is similar to the previous cases shown above in Figs. 14 and 17, and also qualitatively in agreement with the results found in Ref. [15]. Larger LSP masses are allowed for larger \(\tan \beta \) values. On the other hand, the combination of small \(m_{{\tilde{\chi }}_{1}^0}\) and very large \(\tan \beta \) values, \(\tan \beta \,\gtrsim \,40\), leads to stau masses below the LSP mass, which we exclude due to the CDM constraints. The LHC searches mainly affect parameter points with \(\tan \beta \,\lesssim \,20\). Larger \(\tan \beta \) values induce a larger mixing in the third slepton generation, enhancing the probability for charginos to decay via staus and thus evading the LHC constraints. As before we also show the limits from H/A searches at the LHC, where we set (as above) \(m_{{\tilde{\chi }}_{1}^0} = M_A/2\), i.e. roughly to the requirement for A-pole annihilation, where points above the black lines are experimentally excluded. Comparing Case-R and Case-L, for the current \((g-2)_\mu \) limit substantially fewer points pass the constraint below the black line, i.e. are potential candidates for A-pole annihilation. The masses reach only up to \(\sim 200 \,\, \mathrm {GeV}\). With the anticipated future \((g-2)_\mu \) limit hardly any point survives, leaving A-pole annihilation as a quite remote possibility with a strict upper bound on \(m_{{\tilde{\chi }}_{1}^0}\). Lowest and highest mass points In this section we present some sample spectra for the five cases discussed in the previous subsections. For each case, higgsino DM, wino DM, bino/wino DM with \({\tilde{\chi }}_{1}^\pm \)-coannihilation, \({\tilde{l}}^\pm \)-coannihilation Case-L and Case-R, we present three parameter points that are in agreement with all constraints (red points): the lowest LSP mass, the highest LSP mass with current \((g-2)_\mu \) constraints, as well as the highest LSP mass with the anticipated future \((g-2)_\mu \) constraint. They will be labeled as "H1, H2, H3", "W1, W2, W3", "C1, C2, C3", "L1, L2, L3", "R1, R2, R3" for higgsino DM, wino DM, bino/wino DM with \({\tilde{\chi }}_{1}^\pm \)-coannihilation, \({\tilde{l}}^\pm \)-coannihilation Case-L and Case-R, respectively. While the points are obtained from "random sampling", they nevertheless give an idea of the mass spectra realized in the various scenarios. In particular, the highest mass points give a clear indication of the upper limits of the NLSP mass. It should be noted that the \(\sigma _p^{\mathrm{SI}}\) values given below are scaled with a factor of (\(\Omega _{{\tilde{\chi }}} h^2\)/0.118) to account for the lower relic density. In Table 1 we show the three parameter points ("H1, H2, H3") from the higgsino DM scenario, which are defined by the six scan parameters: \(M_1\), \(M_2\), \(\mu \), \(\tan \beta \) and the two slepton mass parameters, \(m_{{\tilde{l}}_{L}}\) and \(m_{{\tilde{l}}_{R}}\) (corresponding roughly to \(m_{{\tilde{e}}_1, {\tilde{\mu }}_1}\) and \(m_{{\tilde{e}}_2, {\tilde{\mu }}_2}\), respectively). Together with the masses and relevant \(\text {BR}\)s we also show the values of the DM observables and \(a_\mu ^{\mathrm{SUSY}}\).
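The \(\sigma _p^{\mathrm{SI}}\) rescaling mentioned above is a simple multiplicative correction; a minimal sketch (the input numbers below are made up for illustration only):

```python
def rescaled_sigma_si(sigma_si_pb, omega_h2, omega_h2_ref=0.118):
    """Rescale sigma_SI by the fraction of the measured relic abundance provided by the LSP,
    as done for the sigma_SI values quoted in the benchmark tables."""
    return sigma_si_pb * (omega_h2 / omega_h2_ref)

print(rescaled_sigma_si(2.0e-9, 0.047))  # an LSP providing ~40% of the measured abundance
```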
By definition of the scenario we have \(\mu < M_1, M_2\) and \(m_{{\tilde{\chi }}_{1}^0} \sim m_{{\tilde{\chi }}_{2}^0} \sim m_{{\tilde{\chi }}_{1}^\pm }\) with the \({\tilde{\chi }}_{1}^\pm \) as NLSP. We find in all three points \(M_2 \ll M_1\), which can be understood from the \((g-2)_\mu \) constraint. For (relatively) small \(\mu \) the chargino-sneutrino contribution, Eq. (5), is \(\propto 1/M_2\) and thus becomes large for smaller \(M_2\). Conversely, the neutralino-slepton contribution, Eq. (6), is \(\propto M_1\) and thus becomes larger for larger \(M_1\). As anticipated, the relic density is found to be quite low. For all of the three points, the decays of \({\tilde{\chi }}_{1}^\pm \) and \({\tilde{\chi }}_{2}^0\) to sleptons are not kinematically accessible. Therefore they mainly decay via off-shell sleptons and/or gauge-bosons to a pair of SM particles and the LSP. The sleptons represented by \({\tilde{l}}_{1}\) and \({\tilde{l}}_{2}\) decay to chargino/neutralino and the corresponding SM particle. The \({\tilde{l}}_{2}\) is mostly "right-handed" and thus decays preferably to a bino and an electron. However, this mode is kinematically open only for H3, leading to the observed \({\tilde{l}}_{2}\) decay pattern for H1 and H2. In Table 1 we also show the velocity-averaged DM annihilation cross-section at the present time in units of \(\text {cm}^{3}~\text {s}^{-1}\). Here we present only the values for annihilation into the WW final state (\(\left\langle \sigma v \right\rangle _{WW}\)), which has the largest cross-section in the case of a higgsino LSP, with \(\left\langle \sigma v \right\rangle _{ZZ}\) lying closely below. The points are allowed by the current limits from combined ANTARES/IceCube [102] and Super-Kamiokande [103] data (independent of the choice of the DM galactic halo profile). From the projected CTA sensitivity reach [101], it is expected that the point H1 will be probed by CTA in the future, irrespective of the assumption on the DM halo profile. The points H2 and H3 will be covered by CTA provided one considers the Einasto or NFW halo profile. Table 1 The masses (in \(\,\, \mathrm {GeV}\)) and relevant \(\text {BR}\)s (%) of three points from the higgsino DM scenario, corresponding to the lowest LSP mass, the highest LSP mass with current \((g-2)_\mu \) constraints, as well as the highest LSP mass with the anticipated future \((g-2)_\mu \) constraint. Here \({\tilde{l}}\,(l)\) refers to \({\tilde{e}}\, (e)\) and \({\tilde{\mu }}\, (\mu )\) together. \(\nu \) denotes \(\nu _e\), \(\nu _\mu \) and \(\nu _\tau \) together. Only \(\text {BR}\)s above 0.1% are shown. The values of \((g-2)_\mu \) and DM observables are also shown. \(\sigma _p^{\mathrm{SI}}\) is scaled with a factor of \((\Omega _{{\tilde{\chi }}} h^2/0.118)\) and given in the units of pb. Velocity averaged DM annihilation cross-section at present time \(\left\langle \sigma v \right\rangle \) is given in the units of \(\text {cm}^{3}~\text {s}^{-1}\). Only the final state with the largest cross-section, WW, is shown here. In Table 2 we show the three parameter points ("W1, W2, W3") from the wino DM scenario, defined in terms of the same set of input parameters as the higgsino DM scenario. Together with the masses and relevant \(\text {BR}\)s we also show the values of the DM observables and \(a_\mu ^{\mathrm{SUSY}}\). By definition of the scenario we have \(M_2 < M_1, \mu \) and \(m_{{\tilde{\chi }}_{1}^0} \sim m_{{\tilde{\chi }}_{1}^\pm }\), i.e.
the \({\tilde{\chi }}_{1}^\pm \) being the NLSP, decaying dominantly as \({\tilde{\chi }}_{1}^\pm \rightarrow {\tilde{\chi }}_{1}^0 \pi ^\pm \), but also the decay to the LSP and \(e \nu _e\) or \(\mu \nu _\mu \) occurs (either with a soft pion or a soft charged lepton). As anticipated, the relic density is found to be quite low. The second lightest neutralino is found substantially heavier than the LSP, with a variety of decay channels open, diluting possible signals. The sleptons represented by \({\tilde{l}}_{1}\) and \({\tilde{l}}_{2}\) decay to chargino/neutralino and the corresponding SM particle with relevant BRs to charged leptons, which will be crucial for their detection. The \({\tilde{l}}_{2}\) is mostly "right-handed" and thus decays preferably to a bino and an electron (i.e. the neutralino in the decay is determined by the mass ordering of \(\mu \) and \(M_1\)). As for higgsino DM, also for wino DM, the indirect detection cross-section is dominated by annihilation into the WW final state, where the numbers are given in the last row (left) of Table 2. The current indirect detection data is not strong enough to rule out these points. However, due to the substantial annihilation cross-section, the three wino sample points have the potential to be probed by the CTA, even in the most conservative case of the "cored Einasto" halo profile. Table 2 The masses (in \(\,\, \mathrm {GeV}\)) and relevant \(\text {BR}\)s (%) of three points from the wino DM scenario, corresponding to the lowest LSP mass, the highest LSP mass with current \((g-2)_\mu \) constraints, as well as the highest LSP mass with the anticipated future \((g-2)_\mu \) constraint. The notation and units used are as in Table 1. \({\tilde{\nu }} \nu \) refers to all three generations (which in our sampling are mass degenerate). Only \(\text {BR}\)s above 0.1% are shown. In Table 3 we show the three parameter points ("C1, C2, C3") from the \({\tilde{\chi }}_{1}^\pm \)-coannihilation scenario, with the same definitions and notations as for the previous tables. We find that the \({\tilde{\tau }}\) is the NLSP in these three cases, and the combined contribution from \({\tilde{\tau }}\)-coannihilation together with \({\tilde{\chi }}_{1}^\pm \)-coannihilation brings the relic density into the right ballpark. For all of the three points, the decays of \({\tilde{\chi }}_{1}^\pm \) and \({\tilde{\chi }}_{2}^0\) to the first two generations of sleptons are not kinematically accessible or strongly phase space suppressed. Therefore they decay with a very large BR to third generation charged sleptons and sneutrinos. This makes them effectively invisible to the LHC searches looking for electrons and muons in the signal. LHC analyses designed specifically to look for \(\tau \)-rich final states could help to constrain these points, but such searches are much less powerful, as discussed above. We refrain from showing results for \({{\tilde{\tau }}_{}}\) decays, since the corresponding dedicated searches turn out to be weaker (and thus not effective) than other applicable searches. In this case, the DM annihilates dominantly to final states involving SM fermions. The points C1, C2 and C3 are currently allowed by the exclusion bounds from ANTARES/IceCube [102] and Super-Kamiokande [103]. The point C1, with \(\tau \tau \) as the leading final state, has the potential to be probed by the CTA for the most optimistic case of the Einasto DM profile.
C2 and C3, having \(b{\bar{b}}\) as the dominant final state, are likely to evade detection at the CTA even in the case of the Einasto profile. Table 3 The masses (in \(\,\, \mathrm {GeV}\)) and relevant \(\text {BR}\)s (%) of three points from the \({\tilde{\chi }}_{1}^\pm \)-coannihilation scenario corresponding to the lowest LSP mass, the highest LSP mass with current \((g-2)_\mu \) constraints, as well as the highest LSP mass with the anticipated future \((g-2)_\mu \) constraint. The notation, definitions and units are as in Table 2. Only \(\text {BR}\)s above 0.1% are shown. The values of \(\left\langle \sigma v \right\rangle _{f {\bar{f}}}\) are given for \(f = \tau \) for C1 and \(f = b\) for C2 and C3, corresponding to the dominant DM annihilation channel for the respective parameter points. In Table 4 we show three parameter points ("L1, L2, L3") taken from the \({\tilde{l}}^\pm \)-coannihilation scenario Case-L, defined in the same way as in the \({\tilde{\chi }}_{1}^\pm \)-coannihilation case. The character of the \({\tilde{\chi }}_{2}^0\) and \({\tilde{\chi }}_{1}^\pm \) depends on the mass ordering of \(\mu \) and \(M_2\) and is thus somewhat random. This leads to a large variation in the various BRs of these two particles, possibly diluting any signal involving a certain SM particle, such as a charged lepton. The selectrons or smuons, on the other hand, decay to the LSP and the corresponding SM lepton, which tend to be soft in this case, opening the way for compressed spectra searches at the LHC. The indirect detection cross-sections, which are dominated by the annihilation into the \(\tau \tau \) final state for L1 and \(b {\bar{b}}\) for L2 and L3, are far too low in this case to be probed by even the CTA. Table 4 The masses (in \(\,\, \mathrm {GeV}\)) and relevant \(\text {BR}\)s (%) of three points from the \({\tilde{l}}^\pm \)-coannihilation scenario Case-L corresponding to the lowest LSP mass, the highest LSP mass with current \((g-2)_\mu \) constraints, as well as the highest LSP mass with the anticipated future \((g-2)_\mu \) constraint. The notation, definitions and units are as in Table 1. Only \(\text {BR}\)s above 0.1% are shown. The values of \(\left\langle \sigma v \right\rangle _{f {\bar{f}}}\) are given for \(f = \tau \) for L1 and \(f = b\) for L2 and L3, corresponding to the dominant DM annihilation channel for the respective parameter points. The masses, \(\text {BR}\)s and values of the \((g-2)_\mu \) and DM observables of the parameter points for the Case-R ("R1, R2, R3") are shown in Table 5, defined in the same way as in the \({\tilde{\chi }}_{1}^\pm \)-coannihilation case. As in the case-L, the character of the \({\tilde{\chi }}_{2}^0\) and \({\tilde{\chi }}_{1}^\pm \) depends on the mass ordering of \(\mu \) and \(M_2\) and is thus somewhat random. In particular for the two high-mass points R2 and R3 the two mass parameters are relatively close, leading to larger mixings for these two states. This in turn leads to a larger variation in the various BRs of these two particles, possibly diluting any signal involving a certain SM particle, such as a charged lepton. The selectrons or smuons, on the other hand, decay preferably to the LSP and the corresponding SM lepton. As in case-L these tend to be soft, opening the way for compressed spectra searches at the LHC. Irrespective of the assumed DM halo profile, the indirect detection cross-sections, dominated by the annihilation into the \(\tau \tau \) final state for all three points, lie below the sensitivity reach of the CTA.
Table 5 The masses (in \(\,\, \mathrm {GeV}\)) and relevant \(\text {BR}\)s (%) of three points from the \({\tilde{l}}^\pm \)-coannihilation scenario Case-R corresponding to the lowest LSP mass, the highest LSP mass with current \((g-2)_\mu \) constraints, as well as the highest LSP mass with the anticipated future \((g-2)_\mu \) constraint. The notation, definitions and units are as in Table 1. Only \(\text {BR}\)s above 0.1% are shown. \(\left\langle \sigma v \right\rangle _{f {\bar{f}}}\) is given for \(f = \tau \) for all three points, corresponding to the dominant final state for DM annihilation. Prospects for future colliders In this section we briefly discuss the prospects of the direct detection of the (relatively light) EW particles at the approved HL-LHC, the hypothetical upgrade to the HE-LHC, the potential future FCC-hh, and at a possible future \(e^+e^-\) collider such as ILC [37, 38] or CLIC [38,39,40,41]. We concentrate on the compressed spectra searches, relevant for higgsino DM, wino DM and bino/wino DM with \({\tilde{\chi }}_{1}^\pm \)-coannihilation. Results for the future prospects for slepton coannihilation (although with the relic DM density taken as a direct measurement) can be found in Ref. [15]. \(m_{{\tilde{\chi }}_{1}^\pm }\)–\(\Delta m\) plane with anticipated limits from compressed spectra searches at the HL-LHC, the HE-LHC, FCC-hh and ILC500, ILC1000, CLIC380, CLIC1500, CLIC3000 (see text), original taken from Ref. [135]. Not included are disappearing track searches. Shown in blue, dark blue, turquoise are the points surviving all current constraints in the case of higgsino DM, wino DM and bino/wino DM with \({\tilde{\chi }}_{1}^\pm \)-coannihilation, respectively. In Fig. 22 we present our results in the \(m_{{\tilde{\chi }}_{1}^\pm }\)–\(\Delta m\) plane (with \(\Delta m := m_{{\tilde{\chi }}_{1}^\pm } - m_{{\tilde{\chi }}_{1}^0}\)), which was presented (in its original form, i.e. down to \(\Delta m = 0.7 \,\, \mathrm {GeV}\)) in Ref. [135] for the higgsino DM case, but also directly valid for the wino DM case [136] (see the discussion below). Shown are the following anticipated limits for compressed spectra searchesFootnote 10:
HL-LHC with \(3\,\text{ ab}^{-1}\) at \(\sqrt{s} = 14 \,\, \mathrm {TeV}\): solid blue and solid red: di-lepton searches (with soft leptons) [139].
HE-LHC with \(15\,\text{ ab}^{-1}\) at \(\sqrt{s} = 27 \,\, \mathrm {TeV}\): red dashed: same analysis as in Ref. [139], but rescaled by 1.5 to take into account the higher energy [140].
FCC-hh with \(30\,\text{ ab}^{-1}\) at \(\sqrt{s} = 100 \,\, \mathrm {TeV}\): magenta dashed: same analysis as in Ref. [139], but rescaled to take into account the higher energy [135]; solid magenta: mono-jet searches for very soft \({\tilde{\chi }}_{1}^\pm \) decays [140, 141] (see also the discussion in Ref. [136]), covering \(\Delta m = 0.7 \,\, \mathrm {GeV}\ldots 1 \,\, \mathrm {GeV}\). The upper limit can extend to higher values.
ILC with \(0.5\,\text{ ab}^{-1}\) at \(\sqrt{s} = 500 \,\, \mathrm {GeV}\) (ILC500): solid light green: \({\tilde{\chi }}_{1}^\pm {\tilde{\chi }}_{1}^\pm \) or \({\tilde{\chi }}_{2}^0{\tilde{\chi }}_{1}^0\) production, which is sensitive up to the kinematic limit [136] (and references therein).
ILC with \(1\,\text{ ab}^{-1}\) at \(\sqrt{s} = 1000 \,\, \mathrm {GeV}\) (ILC1000): gray dashed: \({\tilde{\chi }}_{1}^\pm {\tilde{\chi }}_{1}^\pm \) or \({\tilde{\chi }}_{2}^0{\tilde{\chi }}_{1}^0\) production, which is sensitive up to the kinematic limit [136] (and references therein).
- CLIC with \(1\,\text{ ab}^{-1}\) at \(\sqrt{s} = 380 \,\, \mathrm {GeV}\) (CLIC380): very dark green dot-dashed: \({\tilde{\chi }}_{1}^\pm {\tilde{\chi }}_{1}^\pm \) or \({\tilde{\chi }}_{2}^0{\tilde{\chi }}_{1}^0\) production, which is sensitive up to the kinematic limit [135] (see also Ref. [136]).
- CLIC with \(2.5\,\text{ ab}^{-1}\) at \(\sqrt{s} = 1500 \,\, \mathrm {GeV}\) (CLIC1500): green dot-dashed: \({\tilde{\chi }}_{1}^\pm {\tilde{\chi }}_{1}^\pm \) or \({\tilde{\chi }}_{2}^0{\tilde{\chi }}_{1}^0\) production, which is sensitive nearly up to the kinematic limit [135] (see also Ref. [136]).
- CLIC with \(5\,\text{ ab}^{-1}\) at \(\sqrt{s} = 3000 \,\, \mathrm {GeV}\) (CLIC3000): dark green dot-dashed: \({\tilde{\chi }}_{1}^\pm {\tilde{\chi }}_{1}^\pm \) or \({\tilde{\chi }}_{2}^0{\tilde{\chi }}_{1}^0\) production, which is sensitive nearly up to the kinematic limit [135] (see also Ref. [136]).

Not shown in Fig. 22 are searches for disappearing tracks, which are relevant for very small mass differences, see Figs. 6 and 7. They cut out the lower edge of the dark blue points, i.e. of wino DM. The life-time limits are expected to improve at the HL-LHC by a factor of \(\sim 2\) [140], only slightly cutting into the still allowed region.

In the higgsino DM case (blue) it can be seen that the HL-LHC can cover part of the allowed parameter space, but full coverage can only be reached at a high-energy \(e^+e^-\) collider with \(\sqrt{s}\) of up to \(1000 \,\, \mathrm {GeV}\) (i.e. ILC1000 or CLIC1500). The wino DM case (dark blue) has larger production cross sections at pp colliders. However, \(\Delta m\) is so small in this scenario that it largely escapes the HL-LHC searches (and the different production cross section turns out to be irrelevant). It can be expected that \(\Delta m \,\gtrsim \,0.7 \,\, \mathrm {GeV}\) can be covered with the mono-jet searches at the FCC-hh. Very low, but still allowed, mass differences can be covered by the disappearing track searches. However, as in the higgsino DM case, a high-energy \(e^+e^-\) collider will also be necessary here to cover the whole allowed parameter space. While the currently allowed points would require CLIC1500, a parameter space reduced by the HL-LHC disappearing track searches, resulting e.g. in \(m_{{\tilde{\chi }}_{1}^0} \,\lesssim \,500 \,\, \mathrm {GeV}\), could be covered by the ILC1000.

The more complicated case for the future collider analysis is given by the bino/wino parameter points (turquoise), since the limits shown not only assume a small mass difference between \({\tilde{\chi }}_{1}^0\) and \({\tilde{\chi }}_{1}^\pm \), but also pp production cross sections as for the higgsino case. For bino/wino DM these production cross sections turn out to be larger (as for the pure wino case), i.e. the display of the bino/wino points in Fig. 22 should be regarded as conservative with respect to the pp-based limits. Consequently, it is expected that the HE-LHC or, at the latest, the FCC-hh would cover this scenario entirely. On the other hand, the \(e^+e^-\) limits should be directly applicable: large parts of the parameter space will be covered by the ILC1000, and the entire parameter space by CLIC1500.

As discussed in Sect. 4.1, we have not considered explicitly the possibility of Z or h pole annihilation to bring the relic DM density into agreement with the other experimental measurements (see, e.g., the discussion in [31]).
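For orientation (these numbers are our own addition and follow from the measured boson masses rather than from the text above): resonant annihilation on the Z or h pole requires an LSP mass of roughly half the corresponding boson mass, i.e., using \(M_Z \simeq 91.2 \,\, \mathrm {GeV}\) and \(M_h \simeq 125 \,\, \mathrm {GeV}\),
\[
m_{{\tilde{\chi }}_{1}^0} \approx \frac{M_Z}{2} \simeq 45.6 \,\, \mathrm {GeV}
\qquad \text{or} \qquad
m_{{\tilde{\chi }}_{1}^0} \approx \frac{M_h}{2} \simeq 62.5 \,\, \mathrm {GeV},
\]
which lies well below the kinematic reach of all the \(e^+e^-\) machines listed above.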
However, it should be noted that in this context an LSP with \(M \sim m_{{\tilde{\chi }}_{1}^0} \sim M_Z/2\) or \(\sim M_h/2\) (with \(M = M_1\) or \(M_2\) or \(\mu \)) would yield a detectable cross-section \(e^+e^- \rightarrow {\tilde{\chi }}_{1}^0{\tilde{\chi }}_{1}^0\gamma \) at any future high-energy \(e^+e^-\) collider. Furthermore, in the case of higgsino or wino DM, this scenario automatically yields other clearly detectable EW-SUSY production cross-sections at future \(e^+e^-\) colliders. For bino/wino DM this would depend on the values of \(M_2\) and/or \(\mu \). We leave this possibility for future studies.

On the other hand, the possibility of A-pole annihilation was discussed for all five scenarios. While it appears a rather remote possibility (particularly for higgsino and wino DM), it cannot be fully excluded by our analysis. However, even in the "worst" case of \({\tilde{l}}^\pm \)-coannihilation Case-L an upper limit on \(m_{{\tilde{\chi }}_{1}^0}\) of \(\sim 260 \,\, \mathrm {GeV}\) can be set. While not as low as in the case of Z or h-pole annihilation, this would still offer good prospects for future \(e^+e^-\) colliders. We also leave this possibility for future studies.

Conclusions

The electroweak (EW) sector of the MSSM, consisting of charginos, neutralinos and scalar leptons, can account for a variety of experimental data. Concerning the CDM relic abundance, the MSSM offers a natural candidate, the lightest neutralino, \({\tilde{\chi }}_{1}^0\), while satisfying the (negative) bounds from DD experiments. Concerning the LHC searches, because of the comparatively small EW production cross-sections, a relatively light EW sector of the MSSM is also in agreement with the latest experimental exclusion limits. Most importantly, the EW sector of the MSSM can account for the long-standing \(3-4\,\sigma \) discrepancy in \((g-2)_\mu \). Improved experimental results are expected soon [3] with the publication of the Run 1 data of the "MUON G-2" experiment.

In this paper we assume that the \({\tilde{\chi }}_{1}^0\) provides the MSSM DM candidate, where we take the DM relic abundance measurements [5] as an upper limit. We analyzed several SUSY scenarios, scanning the EW sector (chargino, neutralino and slepton masses as well as \(\tan \beta \)), taking into account all relevant experimental data: the current limit for \((g-2)_\mu \), the relic density bound (as an upper limit), the DD experimental bounds, as well as the LHC searches for EW SUSY particles. Concerning the latter we included all relevant existing data, mostly relying on re-casting via CheckMATE, where several channels were newly implemented into CheckMATE [15].

Concretely, we analyzed five scenarios, depending on the hierarchy between \(M_1\), \(M_2\) and \(\mu \) as well as on the mechanism that brings the relic density into agreement with the experimental data. For \(\mu < M_1, M_2\) we have higgsino DM, for \(M_2 < M_1, \mu \) wino DM, whereas for \(M_1 < M_2, \mu \) one can have mixed bino/wino DM if \({\tilde{\chi }}_{1}^\pm \)-coannihilation is responsible for the correct relic density (i.e. \(M_1 \,\lesssim \,M_2\)), or one can have bino DM if \({\tilde{l}}^\pm \)-coannihilation yields the correct relic density, with the mass of the "left-handed" ("right-handed") slepton close to \(m_{{\tilde{\chi }}_{1}^0}\), Case-L (Case-R). Our scans naturally also contain "mixed" cases.
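The hierarchy-based classification just described can be summarized schematically. The following Python sketch is purely our own illustration of that logic: the nearness threshold coann_frac, the function name and the example values are assumptions and are not taken from the analysis; the actual scenario assignment in the paper follows from the full scans and from which mechanism brings the relic density into agreement with the data.

```python
# Illustrative sketch (not from the paper): classify the DM scenario of a
# parameter point from the hierarchy of M1, M2, mu and the coannihilation
# partner, following the verbal description above. Masses in GeV.

def classify_dm_scenario(M1, M2, mu, slepton_case=None, coann_frac=0.1):
    """Return a rough scenario label.

    slepton_case: None, "L" or "R" -- whether a left/right-handed slepton
        lies close to the LSP mass, enabling slepton coannihilation.
    coann_frac: assumed relative mass splitting below which chargino
        coannihilation is taken to be effective (our own illustrative choice).
    """
    if mu < M1 and mu < M2:
        return "higgsino DM"
    if M2 < M1 and M2 < mu:
        return "wino DM"
    # remaining case: M1 < M2, mu, i.e. a bino-like LSP
    if (M2 - M1) / M1 < coann_frac:
        return "bino/wino DM (chargino coannihilation)"
    if slepton_case in ("L", "R"):
        return f"bino DM (slepton coannihilation, Case-{slepton_case})"
    return "bino DM (another mechanism needed, e.g. pole annihilation)"


if __name__ == "__main__":
    # Example: bino-like LSP with a nearby wino -> chargino coannihilation
    print(classify_dm_scenario(M1=480.0, M2=510.0, mu=900.0))
```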
The higgsino and wino DM scenarios, together with the \((g-2)_\mu \) constraint, can only be realized if the relic density is substantially smaller than the measured density. The other three scenarios can easily accommodate the relic DM density as a direct measurement, which was analyzed in Ref. [15]. In the present analysis these three scenarios were extended to the case where the relic density is used only as an upper bound. We find in all five cases a clear upper limit on \(m_{{\tilde{\chi }}_{1}^0}\). These are \(\sim 500 \,\, \mathrm {GeV}\) for higgsino DM and \(\sim 600 \,\, \mathrm {GeV}\) for wino DM, while for \({\tilde{\chi }}_{1}^\pm \)-coannihilation we find \(\sim 600 \,\, \mathrm {GeV}\), for \({\tilde{l}}^\pm \)-coannihilation Case-L \(\sim 500 \,\, \mathrm {GeV}\), and for Case-R values up to \(\sim 500 \,\, \mathrm {GeV}\) are allowed. Similarly, upper limits on the masses of the coannihilating SUSY particles are found: \(m_{{\tilde{\chi }}_{2}^0} \sim m_{{\tilde{\chi }}_{1}^\pm } \,\lesssim \,510 \,\, \mathrm {GeV}\) for higgsino DM, \(m_{{\tilde{\chi }}_{1}^\pm } \,\lesssim \,600 \,\, \mathrm {GeV}\) for wino DM, \(m_{{\tilde{\chi }}_{1}^\pm } \,\lesssim \,650 \,\, \mathrm {GeV}\) for bino/wino DM with \({\tilde{\chi }}_{1}^\pm \)-coannihilation, \(m_{{\tilde{l}}_{L}} \,\lesssim \,600 \,\, \mathrm {GeV}\) for bino DM Case-L and \(m_{{\tilde{l}}_{R}} \,\lesssim \,600 \,\, \mathrm {GeV}\) for bino DM Case-R. For the latter, the \({\tilde{l}}^\pm \)-coannihilation Case-R, the upper limit on the lighter \({\tilde{\tau }}\) is even lower, \(m_{\tilde{\tau }_{2}} \,\lesssim \,500 \,\, \mathrm {GeV}\). The current \((g-2)_\mu \) constraint also yields limits on the rest of the EW spectrum, although much looser bounds are found. These upper bounds set clear collider targets for the HL-LHC and future \(e^+e^-\) colliders.

In a second step we assumed that the new result of Run 1 of the "MUON G-2" collaboration at Fermilab yields a precision comparable to the existing experimental result with the same central value. We analyzed the potential impact of the combination of the Run 1 data with the existing result on the allowed MSSM parameter space. We find that the upper limits on the LSP masses decrease to about \(\sim 480 \,\, \mathrm {GeV}\) for higgsino DM, \(\sim 500 \,\, \mathrm {GeV}\) for wino DM, as well as \(\sim 500 \,\, \mathrm {GeV}\) for \({\tilde{\chi }}_{1}^\pm \)-coannihilation, \(\sim 450 \,\, \mathrm {GeV}\) for \({\tilde{l}}^\pm \)-coannihilation Case-L and \(\sim 380 \,\, \mathrm {GeV}\) in Case-R, sharpening the collider targets substantially. Similarly, the upper limits on the NLSP masses go down to about \(m_{{\tilde{\chi }}_{2}^0} \sim m_{{\tilde{\chi }}_{1}^\pm } \,\lesssim \,490 \,\, \mathrm {GeV}\) for higgsino DM, \(m_{{\tilde{\chi }}_{1}^\pm } \,\lesssim \,600 \,\, \mathrm {GeV}\) for wino DM, \(m_{{\tilde{\chi }}_{1}^\pm } \sim 550 \,\, \mathrm {GeV}\) for bino/wino DM with \({\tilde{\chi }}_{1}^\pm \)-coannihilation, as well as \(m_{{\tilde{l}}_{L}} \,\lesssim \,550 \,\, \mathrm {GeV}\) for bino DM Case-L and \(m_{{\tilde{l}}_{R}} \,\lesssim \,440 \,\, \mathrm {GeV}\), \(m_{\tilde{\tau }_{2}} \,\lesssim \,380 \,\, \mathrm {GeV}\) for bino DM Case-R.

For the three cases with a small mass difference between the lightest chargino and the LSP (higgsino DM, wino DM and bino/wino DM with \({\tilde{\chi }}_{1}^\pm \)-coannihilation) we have also briefly analyzed the prospects for future collider searches specifically targeting these small mass differences.
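As a side remark of ours (not stated explicitly in the text): the reason \(e^+e^-\) machines are so effective for these compressed spectra is essentially kinematic. Chargino pair production is open whenever
\[
m_{{\tilde{\chi }}_{1}^\pm } \le \frac{\sqrt{s}}{2},
\]
independently of how small \(\Delta m\) is, so that, e.g., a machine with \(\sqrt{s} = 1000 \,\, \mathrm {GeV}\) can probe chargino masses up to roughly \(500 \,\, \mathrm {GeV}\) even for mass splittings far below the reach of the hadron-collider searches discussed above.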
The results for the points surviving all (current) constraints have been displayed in the plane of the mass of the Next-to-LSP (in these cases \(m_{{\tilde{\chi }}_{1}^\pm }\)) vs. \(\Delta m = m_{{\tilde{\chi }}_{1}^\pm } - m_{{\tilde{\chi }}_{1}^0}\), overlaid with the anticipated limits from future collider searches at the HL-LHC, HE-LHC, FCC-hh, ILC and CLIC [135, 136]. (These limits are directly applicable to higgsino and wino DM, and are likely overly conservative for bino/wino DM with \({\tilde{\chi }}_{1}^\pm \)-coannihilation.) While parts of the parameter spaces can be covered by future pp machines, a full exploration of the parameter space requires a future \(e^+e^-\) collider. Taking into account the anticipated limits from future \(e^+e^-\) machines, a center-of-mass energy of \(\sqrt{s} = 1 \,\, \mathrm {TeV}\) will be sufficient to conclusively explore higgsino DM, wino DM and bino/wino DM with \({\tilde{\chi }}_{1}^\pm \)-coannihilation.

Finally, we have briefly checked for the parameter points with the lightest and heaviest spectra in each of the five scenarios that they are in agreement with the limits from indirect DM detection experiments, such as ANTARES/IceCube and Super-Kamiokande. Assuming the most favorable galactic DM profile, some of the lighter mass spectra, particularly for the higgsino and wino DM scenarios, can be tested by the CTA, whereas the other scenarios have too low DM annihilation cross sections.

While we have attempted to cover nearly the full set of possibilities that the EW spectrum of the MSSM presents, in agreement with the various experimental constraints, our studies can be extended or completed in the following ways. One can analyze the cases of: (i) complex parameters in the chargino/neutralino sector (then also taking EDM constraints into account); (ii) different soft SUSY-breaking parameters in the three generations of sleptons, and/or between the left- and right-handed entries in the case of \({\tilde{\chi }}_{1}^\pm \)-coannihilation; (iii) A-pole annihilation, focusing on very low \(m_{{\tilde{\chi }}_{1}^0}\) and \(\tan \beta \) values; (iv) h- and Z-pole annihilation, which could be realized for sufficiently heavy sleptons. On the other hand, one can also restrict oneself further by assuming some GUT relations between, in particular, \(M_1\) and \(M_2\). We leave these analyses for future work.

In this paper we have analyzed in particular the impact of the \((g-2)_\mu \) measurements on the EW SUSY spectrum. The current measurement sets clear upper limits on many EW SUSY particle masses. We have also clearly demonstrated that the upcoming measurements of the "MUON G-2" collaboration have a strong potential to sharpen the prospects for future collider experiments. We are eagerly awaiting the new "MUON G-2" result to further illuminate the possibility of relatively light EW BSM particles.

Note added. After the completion of this work a new \((g-2)_\mu \) measurement was published [134]. The new combined world average of \(\Delta a_\mu ^{\mathrm{new}} = (25.1 \pm 5.9) \times 10^{-10}\) yields about the same \(2\,\sigma \) lower limit on the deviation as the numbers used here: \(\Delta a_\mu ^{-2\sigma } = 12.9 \times 10^{-10} \approx \Delta a_\mu ^{-2\sigma , \mathrm{new}} = (25.1 - 2 \times 5.9) \times 10^{-10} = 13.3 \times 10^{-10}\). Consequently, it is expected that the new world average yields similar upper limits as the ones reported here, see, e.g., Ref. [133].

Data availability statement: This manuscript has no associated data or the data will not be deposited.
[Authors' comment: There is no additional data or the data is already included in the manuscript.]

Footnotes:
1. Other articles that investigated (part of) this interplay are Refs. [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36]; see Ref. [15] for a detailed description.
2. In Ref. [15] a slightly different value was used, with a negligible effect on the results.
3. A recent analysis in the higgsino DM scenario, requiring the LSP to yield the full DM relic density, can be found in Ref. [114].
4. We thank C. Schappacher for evaluating the mass shift for our wino DM points (following Ref. [117]).
5. We thank T. Stefaniak for the evaluation of this limit, using the latest version of HiggsBounds [126,127,128,129,130].
6. A very few points at higher \(m_{{\tilde{l}}_{1}}\) have the "correct" relic density, which can be attributed to the substantially large sample of points.
7. Clearly visible in the two plots are different point densities due to independent samplings that were subsequently joined. However, the ranges covered by the different searches (or colors) are clearly visible and not affected by the different point densities.
8. It should be noted that the improved GM2Calc version will be taken into account in the update of Ref. [133], which improves on Ref. [15] by using the new world average including the recently published Fermilab measurement of \((g-2)_\mu \) [134].
9. The scan for Case-R turned out to be computationally extremely expensive, resulting in a relatively low density of LHC-tested points. Consequently, all upper bounds on particle masses should be taken with a relatively large uncertainty.
10. Very recent analyses using vector boson fusion processes [137] or a future muon collider [138] for compressed (higgsino) spectra have not been taken into account.

References

A. Keshavarzi, D. Nomura, T. Teubner, Phys. Rev. D 101(1), 014029 (2020). arXiv:1911.00367 [hep-ph] M. Davier, A. Hoecker, B. Malaescu, Z. Zhang, Eur. Phys. J. C 80(3), 241 (2020). https://doi.org/10.1140/epjc/s10052-020-7792-2 [erratum: Eur. Phys. J. C 80(5), 410 (2020)]. arXiv:1908.00921 [hep-ph] See: https://theory.fnal.gov/events/event/first-results-from-the-muon-g-2-experiment-at-fermilab J. Grange et al. (Muon g-2 Collaboration), arXiv:1501.06858 [physics.ins-det] N. Aghanim et al., Astron. Astrophys. 641, A6 (2020). https://doi.org/10.1051/0004-6361/201833910 [erratum: Astron. Astrophys. 652, C4 (2021)]. arXiv:1807.06209 [astro-ph.CO] D.S. Akerib et al. (LUX Collaboration), Phys. Rev. Lett. 118(2), 021303 (2017). arXiv:1608.07648 [astro-ph.CO] X. Cui et al. (PandaX-II Collaboration), Phys. Rev. Lett. 119(18), 181302 (2017). arXiv:1708.06917 [astro-ph.CO] E. Aprile et al. (XENON Collaboration), Phys. Rev. Lett. 121(11), 111302 (2018). arXiv:1805.12562 [astro-ph.CO] H. Nilles, Phys. Rep. 110, 1 (1984) R. Barbieri, Riv. Nuovo Cim. 11, 1 (1988) H. Haber, G. Kane, Phys. Rep. 117, 75 (1985) J. Gunion, H. Haber, Nucl. Phys. B 272, 1 (1986) H. Goldberg, Phys. Rev. Lett. 50, 1419 (1983) J. Ellis, J. Hagelin, D. Nanopoulos, K. Olive, M. Srednicki, Nucl. Phys. B 238, 453 (1984) M. Chakraborti, S. Heinemeyer, I. Saha, Eur. Phys. J. C 80(10), 984 (2020). arXiv:2006.15157 [hep-ph] A. Bharucha, S. Heinemeyer, F. von der Pahlen, Eur. Phys. J. C 73(11), 2629 (2013). arXiv:1307.4237 [hep-ph] A. Fowlie, K. Kowalska, L. Roszkowski, E.M. Sessolo, Y.L.S. Tsai, Phys. Rev. D 88, 055012 (2013). https://doi.org/10.1103/PhysRevD.88.055012. arXiv:1306.1567 [hep-ph] T. Han, S. Padhi, S. Su, Phys. Rev. D 88(11), 115010 (2013).
arXiv:1309.5966 [hep-ph] K. Kowalska, L. Roszkowski, E.M. Sessolo, A.J. Williams, JHEP 06, 020 (2015). https://doi.org/10.1007/JHEP06(2015)020arXiv:1503.08219 [hep-ph] A. Choudhury, S. Mondal, Phys. Rev. D 94(5), 055024 (2016). arXiv:1603.05502 [hep-ph] A. Datta, N. Ganguly, S. Poddar, Phys. Lett. B 763, 213–217 (2016). arXiv:1606.04391 [hep-ph] M. Chakraborti, A. Datta, N. Ganguly, S. Poddar, JHEP 1711, 117 (2017). arXiv:1707.04410 [hep-ph] K. Hagiwara, K. Ma, S. Mukhopadhyay, Phys. Rev. D 97(5), 055035 (2018). arXiv:1706.09313 [hep-ph] T.T. Yanagida, W. Yin, N. Yokozaki, JHEP 06, 154 (2020). arXiv:2001.02672 [hep-ph] W. Yin, N. Yokozaki, Phys. Lett. B 762, 72–79 (2016). arXiv:1607.05705 [hep-ph] M. Chakraborti, U. Chattopadhyay, S. Poddar, JHEP 1709, 064 (2017). arXiv:1702.03954 [hep-ph] E.A. Bagnaschi et al., Eur. Phys. J. C 75, 500 (2015). arXiv:1508.01173 [hep-ph] A. Datta, N. Ganguly, JHEP 1801, 103 (2019). arXiv:1809.05129 [hep-ph] P. Cox, C. Han, T.T. Yanagida, Phys. Rev. D 98(5), 055015 (2018). arXiv:1805.02802 [hep-ph] M. Carena, J. Osborne, N.R. Shah, C.E.M. Wagner, Phys. Rev. D 98(11), 115010 (2018). https://doi.org/10.1103/PhysRevD.98.115010arXiv:1809.11082 [hep-ph] P. Cox, C. Han, T.T. Yanagida, N. Yokozaki, JHEP 08, 097 (2019). arXiv:1811.12699 [hep-ph] M. Abdughani, K. Hikasa, L. Wu, J.M. Yang, J. Zhao, JHEP 1911, 095 (2019). arXiv:1909.07792 [hep-ph] M. Endo, K. Hamaguchi, S. Iwamoto, T. Kitahara, JHEP 2004, 165 (2020). arXiv:2001.11025 [hep-ph] G. Pozzo, Y. Zhang, Phys. Lett. B 789, 582–591 (2019). arXiv:1807.01476 [hep-ph] P. Athron et al. (GAMBIT), Eur. Phys. J. C 79(5), 395 (2019). arXiv:1809.02097 [hep-ph] H. Baer et al., The international linear collider technical design report—volume 2: physics. arXiv:1306.6352 [hep-ph] G. Moortgat-Pick et al., Eur. Phys. J. C 75(8), 371 (2015). arXiv:1504.01726 [hep-ph] L. Linssen, A. Miyamoto, M. Stanitzki, H. Weerts, arXiv:1202.5940 [physics.ins-det] H. Abramowicz et al. (CLIC Detector and Physics Study Collaboration), arXiv:1307.5288 [hep-ex] P. Burrows et al. (CLICdp and CLIC Collaborations), CERN Yellow Rep. Monogr. 1802, 1 (2018). arXiv:1812.06018 [physics.acc-ph] See: https://twiki.cern.ch/twiki/bin/view/AtlasPublic/SupersymmetryPublicResults See: https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsSUS E. Bagnaschi et al., Eur. Phys. J. C 78(3), 256 (2018). arXiv:1710.11091 [hep-ph] P. Slavich et al., Eur. Phys. J. C 81(5), 450 (2021). https://doi.org/10.1140/epjc/s10052-021-09198-2. arXiv:2012.15629 [hep-ph] G.W. Bennett et al. (Muon g-2 Collaboration), Phys. Rev. D 73, 072003 (2006). arXiv:hep-ex/0602035 M. Tanabashi et al. (Particle Data Group), Phys. Rev. D 98(3), 030001 (2018) T. Aoyama et al., Phys. Rept. 887, 1–166 (2020). https://doi.org/10.1016/j.physrep.2020.07.006. arXiv:2006.04822 [hep-ph] T. Aoyama, M. Hayakawa, T. Kinoshita, M. Nio, Phys. Rev. Lett. 109, 111808 (2012). arXiv:1205.5370 [hep-ph] T. Aoyama, T. Kinoshita, M. Nio, Atoms 7(1), 28 (2019) A. Czarnecki, W.J. Marciano, A. Vainshtein, Phys. Rev. D 67, 073006 (2003). arXiv:hep-ph/0212229 C. Gnendiger, D. Stöckinger, H. Stöckinger-Kim, Phys. Rev. D 88, 053005 (2013). arXiv:1306.5546 [hep-ph] M. Davier, A. Hoecker, B. Malaescu, Z. Zhang, Eur. Phys. J. C 77(12), 827 (2017). arXiv:1706.09436 [hep-ph] A. Keshavarzi, D. Nomura, T. Teubner, Phys. Rev. D 97(11), 114025 (2018). arXiv:1802.02995 [hep-ph] G. Colangelo, M. Hoferichter, P. Stoffer, JHEP 02, 006 (2019). arXiv:1810.00007 [hep-ph] M. Hoferichter, B.L. Hoid, B. Kubis, JHEP 08, 137 (2019). 
arXiv:1907.01556 [hep-ph] A. Kurz, T. Liu, P. Marquard, M. Steinhauser, Phys. Lett. B 734, 144–147 (2014). arXiv:1403.6400 [hep-ph] K. Melnikov, A. Vainshtein, Phys. Rev. D 70, 113006 (2004). arXiv:hep-ph/0312226 P. Masjuan, P. Sanchez-Puertas, Phys. Rev. D 95(5), 054026 (2017). arXiv:1701.05829 [hep-ph] G. Colangelo, M. Hoferichter, M. Procura, P. Stoffer, JHEP 04, 161 (2017). arXiv:1702.07347 [hep-ph] M. Hoferichter, B.L. Hoid, B. Kubis, S. Leupold, S.P. Schneider, JHEP 10, 141 (2018). arXiv:1808.04823 [hep-ph] A. Gérardin, H.B. Meyer, A. Nyffeler, Phys. Rev. D 100(3), 034520 [arXiv:1903.09471 [hep-lat]] (2019) J. Bijnens, N. Hermansson-Truedsson, A. Rodríguez-Sánchez, Phys. Lett. B 798, 134994 (2019). arXiv:1908.03331 [hep-ph] G. Colangelo, F. Hagelstein, M. Hoferichter, L. Laub, P. Stoffer, JHEP 03, 101 (2020). arXiv:1910.13432 [hep-ph] T. Blum, N. Christ, M. Hayakawa, T. Izubuchi, L. Jin, C. Jung, C. Lehner, Phys. Rev. Lett. 124(13), 132002 (2020). arXiv:1911.08123 [hep-lat] G. Colangelo, M. Hoferichter, A. Nyffeler, M. Passera, P. Stoffer, Phys. Lett. B 735, 90–91 (2014). arXiv:1403.7512 [hep-ph] T. Mibe (J-PARC g-2 Collaboration), Chin. Phys. C 34, 745 (2010) S. Borsanyi et al., Nature 593(7857), 51–55 (2021). https://doi.org/10.1038/s41586-021-03418-1. arXiv:2002.12347 [hep-lat] C. Lehner, A.S. Meyer, Phys. Rev. D 101, 074515 (2020). arXiv:2003.04177 [hep-lat] A. Crivellin, M. Hoferichter, C.A. Manzari, M. Montull, Phys. Rev. Lett. 125(9), 091801 (2020). arXiv:2003.04886 [hep-ph] A. Keshavarzi, W.J. Marciano, M. Passera, A. Sirlin, Phys. Rev. D 102(3), 033002 [arXiv:2006.12666 [hep-ph]] (2020). arXiv:2006.12666 [hep-ph] E. de Rafael, Phys. Rev. D 102(5), 056025 (2020). arXiv:2006.13880 [hep-ph] T. Moroi, Phys. Rev. D 53, 6565 (1996). arXiv:hep-ph/9512396 [Erratum: Phys. Rev. D 56, 4424 (1997)] S.P. Martin, J.D. Wells, Phys. Rev. D 64, 035003 (2001). arXiv:hep-ph/0103067 M. Badziak, K. Sakurai, JHEP 1910, 024 (2019). arXiv:1908.03607 [hep-ph] P. Athron et al., Eur. Phys. J. C 76(2), 62 (2016). arXiv:1510.08071 [hep-ph] P. von Weitershausen, M. Schafer, H. Stockinger-Kim, D. Stockinger, Phys. Rev. D 81, 093004 (2010). arXiv:1003.5820 [hep-ph] H. Fargnoli, C. Gnendiger, S. Paßehr, D. Stöckinger, H. Stöckinger-Kim, JHEP 1402, 070 (2014). arXiv:1311.1775 [hep-ph] M. Bach, J.H. Park, D. Stöckinger, H. Stöckinger-Kim, JHEP 1510, 026 (2015). arXiv:1504.05500 [hep-ph] S. Heinemeyer, D. Stockinger, G. Weiglein, Nucl. Phys. B 690, 62–80 (2004). arXiv:hep-ph/0312264 S. Heinemeyer, D. Stockinger, G. Weiglein, Nucl. Phys. B 699, 103–123 (2004). arXiv:hep-ph/0405255 W.G. Hollik, G. Weiglein, J. Wittbrodt, JHEP 03, 109 (2019). arXiv:1812.04644 [hep-ph] P.M. Ferreira, M. Mühlleitner, R. Santos, G. Weiglein, J.Wittbrodt, JHEP 09, 006 (2019). arXiv:1905.10234 [hep-ph] M. Aaboud et al. (ATLAS Collaboration), Eur. Phys. J. C 78(12), 995 (2018). arXiv:1803.02762 [hep-ex] G. Aad et al. (ATLAS Collaboration), Eur. Phys. J. C 80(2), 123 (2020). arXiv:1908.08215 [hep-ex] M. Drees, H. Dreiner, D. Schmeier, J. Tattersall, J.S. Kim, Comput. Phys. Commun. 187, 227–265 (2015). arXiv:1312.2591 [hep-ph] J.S. Kim, D. Schmeier, J. Tattersall, K. Rolbiecki, Comput. Phys. Commun. 196, 535–562 (2015). arXiv:1503.01123 [hep-ph] D. Dercks, N. Desai, J.S. Kim, K. Rolbiecki, J. Tattersall, T. Weber, Comput. Phys. Commun. 221, 383–418 (2017). arXiv:1611.09856 [hep-ph] G. Aad et al. (ATLAS Collaboration), Phys. Rev. D 101(5), 052005 (2020). arXiv:1911.12606 [hep-ex] M. Aaboud et al. 
(ATLAS Collaboration), JHEP 06, 022 (2018). arXiv:1712.02118 [hep-ex] A.M. Sirunyan et al. (CMS Collaboration), Phys. Lett. B 806, 135502 (2020). arXiv:2004.05153 [hep-ex] G. Belanger, F. Boudjema, A. Pukhov, A. Semenov, Comput. Phys. Commun. 149, 103–120 (2002). arXiv:hep-ph/0112278 G. Belanger, F. Boudjema, A. Pukhov, A. Semenov, Comput. Phys. Commun. 177, 894–895 (2007) G. Belanger, F. Boudjema, A. Pukhov, A. Semenov, Comput. Phys. Commun. 185, 960–985 (2014). https://doi.org/10.1016/j.cpc.2013.10.016. arXiv:1305.0237 [hep-ph] K.J. Bae, H. Baer, E.J. Chun, Phys. Rev. D 89(3), 031701 (2014). arXiv:1309.0519 [hep-ph] A. Sommerfeld, Ann. Phys. 403, 257 (1931) T.R. Slatyer, arXiv:1710.05137 [hep-ph] A. Hryczuk, K. Jodlowski, E. Moulin, L. Rinchiuso, L. Roszkowski, E.M. Sessolo, S. Trojanowski, JHEP 10, 043 (2019). arXiv:1905.00315 [hep-ph] L. Rinchiuso, O. Macias, E. Moulin, N.L. Rodd, T.R. Slatyer, Phys. Rev. D 103(2), 023011 (2021). arXiv:2008.00692 [astro-ph.HE] R.T. Co, B. Sheff, J.D. Wells, arXiv:2105.12142 [hep-ph] A. Albert et al. (ANTARES and IceCube), Phys. Rev. D 102(8), 082002 (2020). arXiv:2003.06614 [astro-ph.HE] K. Abe et al. (Super-Kamiokande), Phys. Rev. D 102(7), 072002 (2020). arXiv:2005.05109 [hep-ex] J.F. Navarro, C.S. Frenk, S.D.M. White, Astrophys. J. 462, 563–575 (1996). arXiv:astro-ph/9508025 A. Burkert, J. Lett. Astrophys. 447, L25 (1995). arXiv:astro-ph/9504041 B.S. Acharya et al. (CTA Consortium), arXiv:1709.07997 [astro-ph.IM] J. Einasto, Trudy Astrofizicheskogo Instituta Alma-Ata 5, 87–100 (1965) H. Baer, V. Barger, P. Huang, A. Mustafayev, X. Tata, Phys. Rev. Lett. 109, 161802 (2012). arXiv:1207.3343 [hep-ph] H. Baer, V. Barger, D. Mickelson, Phys. Rev. D 88(9), 095013 (2013). arXiv:1309.2984 [hep-ph] H. Baer, V. Barger, M. Savoy, H. Serce, Phys. Lett. B 758, 113–117 (2016). arXiv:1602.07697 [hep-ph] H. Baer, V. Barger, D. Sengupta, X. Tata, Eur. Phys. J. C 78(10), 838 (2018). arXiv:1803.11210 [hep-ph] K.J. Bae, H. Baer, V. Barger, D. Sengupta, Phys. Rev. D 99(11), 115027 (2019). arXiv:1902.10748 [hep-ph] ADS MathSciNet Google Scholar H. Baer, V. Barger, S. Salam, D. Sengupta, Phys. Rev. D 102(7), 075012 (2020). arXiv:2005.13577 [hep-ph] A. Delgado, M. Quirós, Phys. Rev. D 103(1), 015024 (2021). arXiv:2008.00954 [hep-ph] T. Fritzsche, T. Hahn, S. Heinemeyer, F. von der Pahlen, H. Rzehak, C. Schappacher, Comput. Phys. Commun. 185, 1529–1545 (2014). arXiv:1309.1692 [hep-ph] S. Heinemeyer, C. Schappacher, Eur. Phys. J. C 77(9), 649 (2017). arXiv:1704.07627 [hep-ph] A. Djouadi, J.L. Kneur, G. Moultaka, Comput. Phys. Commun. 176, 426 (2007). arXiv:hep-ph/0211331 Joint LEP2 SUSY Working Group, the ALEPH, DELPHI, L3 and OPAL Collaborations, see: http://lepsusy.web.cern.ch/lepsusy/ M. Muhlleitner, A. Djouadi, Y. Mambrini, Comput. Phys. Commun. 168, 46 (2005). arXiv:hep-ph/0311167 H. Fukuda, N. Nagata, H. Otono, S. Shirai, Phys. Lett. B 781, 306–311 (2018). arXiv:1703.09675 [hep-ph] J. Hisano, S. Matsumoto, M.M. Nojiri, O. Saito, Phys. Rev. D 71, 015007 (2005). arXiv:hep-ph/0407168 E. Aprile et al. (XENON), JCAP 11, 031 (2020). arXiv:2007.08796 [physics.ins-det] G. Aad et al. (ATLAS Collaboration), Phys. Rev. Lett. 125(5), 051801 (2020). arXiv:2002.12223 [hep-ex] P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, K.E. Williams, Comput. Phys. Commun. 181, 138–167 (2010). arXiv:0811.4169 [hep-ph] P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, K.E. Williams, Comput. Phys. Commun. 182, 2605–2631 (2011). arXiv:1102.1898 [hep-ph] P. Bechtle, O. Brein, S. Heinemeyer, O. 
Stål, T. Stefaniak, G. Weiglein, K.E. Williams, Eur. Phys. J. C 74(3), 2693 (2014). arXiv:1311.0055 [hep-ph] P. Bechtle, S. Heinemeyer, O. Stål, T. Stefaniak, G. Weiglein, Eur. Phys. J. C 75(9), 421 (2015). arXiv:1507.06706 [hep-ph] P. Bechtle, D. Dercks, S. Heinemeyer, T. Klingl, T. Stefaniak, G. Weiglein, J. Wittbrodt, Eur. Phys. J. C 80(12), 1211 (2020). arXiv:2006.06007 [hep-ph] M. Ibe, S. Matsumoto, R. Sato, Phys. Lett. B 721, 252–260 (2013). arXiv:1212.5989 [hep-ph] F. Ruppin, J. Billard, E. Figueroa-Feliciano, L. Strigari, Phys. Rev. D 90(8), 083510 (2014). arXiv:1408.3581 [hep-ph] M. Chakraborti, S. Heinemeyer, I. Saha, arXiv:2104.03287 [hep-ph] B. Abi et al. (Muon g-2), Phys. Rev. Lett. 126(14), 141801 (2021). arXiv:2104.03281 [hep-ex] R.K. Ellis et al., arXiv:1910.11775 [hep-ex] M. Berggren, arXiv:2003.12391 [hep-ph] C. Natalia, F. Andrés, G. Alfredo, J. Will, S. Paul, T. Cheng, arXiv:2102.10194 [hep-ph] R. Capdevilla, F. Meloni, R. Simoniello, J. Zurita, JHEP 06, 133 (2021). https://doi.org/10.1007/JHEP06(2021)133. arXiv:2102.11292 [hep-ph] (ATLAS Collaboration), ATL-PHYS-PUB-2018-031 X. Cid Vidal et al., CERN Yellow Rep. Monogr. 7, 585–865 (2019). arXiv:1812.07831 [hep-ph] T. Golling et al., CERN Yellow Rep. 3, 441–634 (2017). arXiv:1606.00947 [hep-ph]

Acknowledgements: We thank M. Berggren, J. List and D. Stöckinger for helpful discussions. We thank C. Schappacher for the calculation of the chargino/neutralino mass shifts. We thank T. Stefaniak for the evaluation of the latest direct search limits for heavy MSSM Higgs bosons in the \(M_h^{125}({\tilde{\chi }})\) scenario [124], using HiggsBounds [126,127,128,129,130]. I.S. gratefully thanks S. Matsumoto for the cluster facility. The work of I.S. is supported by the World Premier International Research Center Initiative (WPI), MEXT, Japan. The work of S.H. is supported in part by MEINCOP Spain under contract PID2019-110058GB-C21 and in part by the AEI through the grant IFT Centro de Excelencia Severo Ochoa SEV-2016-0597. The work of M.C. is supported by the project AstroCeNT: Particle Astrophysics Science and Technology Centre, carried out within the International Research Agendas programme of the Foundation for Polish Science financed by the European Union under the European Regional Development Fund.

Author affiliations: Astrocent, Nicolaus Copernicus Astronomical Center of the Polish Academy of Sciences, ul. Rektorska 4, 00-614, Warsaw, Poland: Manimala Chakraborti. Instituto de Física Teórica (UAM/CSIC), Universidad Autónoma de Madrid, Cantoblanco, 28049, Madrid, Spain: Sven Heinemeyer. Campus of International Excellence UAM+CSIC, Cantoblanco, 28049, Madrid, Spain. Instituto de Física de Cantabria (CSIC-UC), 39005, Santander, Spain. Kavli IPMU (WPI), UTIAS, University of Tokyo, Kashiwa, Chiba, 277-8583, Japan: Ipsita Saha. Correspondence to Manimala Chakraborti. Funded by SCOAP3.

Chakraborti, M., Heinemeyer, S. & Saha, I. Improved \({(g-2)_\mu }\) measurements and wino/higgsino dark matter. Eur. Phys. J. C 81, 1069 (2021). https://doi.org/10.1140/epjc/s10052-021-09814-1
Well-posedness and ill-posedness results for the regularized Benjamin-Ono equation in weighted Sobolev spaces

G. Fonseca, G. Rodríguez-Blanco and W. Sandoval, Universidad Nacional de Colombia, Bogotá, Colombia

Communications on Pure & Applied Analysis, July 2015, 14(4): 1327-1341. doi: 10.3934/cpaa.2015.14.1327

Received May 2013; Revised September 2013; Published April 2015.

Abstract: We consider the initial value problem associated to the regularized Benjamin-Ono equation, rBO. Our aim is to establish local and global well-posedness results in weighted Sobolev spaces via the contraction principle. We also prove a unique continuation property which implies that arbitrary polynomial-type decay is not preserved, yielding sharp results regarding well-posedness of the initial value problem in most weighted Sobolev spaces.

Keywords: Benjamin-Ono equation, well-posedness, weighted Sobolev spaces.

Mathematics Subject Classification: Primary: 35B05; Secondary: 35B6.

Citation: G. Fonseca, G. Rodríguez-Blanco, W. Sandoval. Well-posedness and ill-posedness results for the regularized Benjamin-Ono equation in weighted Sobolev spaces. Communications on Pure & Applied Analysis, 2015, 14(4): 1327-1341. doi: 10.3934/cpaa.2015.14.1327
Representation Theory and Algebraic Combinatorics Unit (Liron Speyer)

OIST Representation Theory Seminar

Recordings of talks will be available on this page and here if the speaker agrees to have their talk recorded.

Tuesday 1st February 2022, 4:30–5:30pm JST (UTC+9), online on Zoom
Daniel Tubbenhauer, University of Sydney
Title: On weighted KLRW algebras
Abstract: Weighted KLRW algebras are diagram algebras that depend on continuous parameters. Varying these parameters gives a way to interpolate between various algebras that appear in (categorical) representation theory, such as semisimple algebras, KLR algebras, quiver Schur algebras and diagrammatic Cherednik algebras. This talk is a friendly (and diagrammatic!) introduction explaining these algebras, with no prior knowledge about any of these assumed. Based on joint work with A. Mathas.
Meeting ID: 96107261495; the password will be announced 24 hours before the talk.

Tuesday 15th February 2022, 4:30–5:30pm JST (UTC+9), online on Zoom
Title: TBA

Tuesday 1st March 2022, 4:30–5:30pm JST (UTC+9), online on Zoom
Robert Spencer, University of Cambridge
Title: TBA

Tuesday 14th December 2021, 4:30–5:30pm JST (UTC+9), online on Zoom
Joanna Meinel
Title: Decompositions of tensor products: Highest weight vectors from branching
Abstract: We consider tensor powers of the natural sl_n-representation, and we look for descriptions of highest weight vectors therein: we discuss explicit formulas for n=2, a recursion for n=3, and for bigger n we demonstrate how Jucys-Murphy elements allow us to compute highest weight vectors (both in theory and in practice using sage). This is joint work with Pablo Zadunaisky.

Tuesday 30th November 2021, 9:30–10:30am JST (UTC+9), online on Zoom
Tianyuan Xu, University of Colorado at Boulder
Title: On Kazhdan–Lusztig cells of a-value 2
Abstract: The Kazhdan–Lusztig (KL) cells of a Coxeter group are subsets of the group defined using the KL basis of the associated Iwahori–Hecke algebra. The cells of symmetric groups can be computed via the Robinson–Schensted correspondence, but for general Coxeter groups combinatorial descriptions of KL cells are largely unknown except for cells of a-value 0 or 1, where a refers to an N-valued function defined by Lusztig that is constant on each cell. In this talk, we will report some recent progress on KL cells of a-value 2. In particular, we classify Coxeter groups with finitely many elements of a-value 2, and for such groups we characterize and count all cells of a-value 2 via certain posets called heaps. We will also mention some applications of these results for cell modules. This is joint work with Richard Green.

Tuesday 16th November 2021, 4:30–5:30pm JST (UTC+9), online on Zoom
Samuel Creedon, City, University of London
Title: Defining an Affine Partition Algebra
Abstract: In this talk we motivate the construction of a new algebra called the affine partition algebra. We summarise some of its basic properties and describe an action which extends the Schur-Weyl duality between the symmetric group and the partition algebra. We establish connections to the affine partition category defined recently by Brundan and Vargas and show that such a category is a full subcategory of the Heisenberg category.
Tuesday 9th November 2021, 9:30–10:30am JST (UTC+9), online on Zoom
Arik Wilbert, University of South Alabama
Title: Real Springer fibers and odd arc algebras
Abstract: Arc algebras were introduced by Khovanov in a successful attempt to lift the quantum sl2 Reshetikhin-Turaev invariant for tangles to a homological invariant. When restricted to knots and links, Khovanov's homology theory categorifies the Jones polynomial. Ozsváth-Rasmussen-Szabó discovered a different categorification of the Jones polynomial called odd Khovanov homology. Recently, Naisse-Putyra were able to extend odd Khovanov homology to tangles using so-called odd arc algebras which were originally constructed by Naisse-Vaz. The goal of this talk is to discuss a geometric approach to understanding odd arc algebras and odd Khovanov homology using Springer fibers over the real numbers. This is joint work with J. N. Eberhardt and G. Naisse.

Tuesday 26th October 2021, 9:30–10:30am JST (UTC+9), online on Zoom
George Seelinger, University of Michigan
Title: Diagonal harmonics and shuffle theorems
Abstract: The Shuffle Theorem, conjectured by Haglund, Haiman, Loehr, Remmel and Ulyanov, and proved by Carlsson and Mellit, describes the characteristic of the $S_n$-module of diagonal harmonics as a weight generating function over labeled Dyck paths under a line with slope −1. The Shuffle Theorem has been generalized in many different directions, producing a number of theorems and conjectures. We provide a generalized shuffle theorem for paths under any line with negative slope using different methods from previous proofs of the Shuffle Theorem. In particular, our proof relies on showing a "stable" shuffle theorem in the ring of virtual GL_l-characters. Furthermore, we use our techniques to prove the Extended Delta Conjecture, yet another generalization of the original Shuffle Conjecture.

Tuesday 12th October 2021, 3:00–4:00pm JST (UTC+9), online on Zoom
Paul Wedrich, University of Hamburg
Title: Knots and quivers, HOMFLYPT and DT
Abstract: I will describe a surprising connection between the colored HOMFLY-PT polynomials of knots and the motivic Donaldson-Thomas invariants of certain symmetric quivers, which was conjectured by Kucharski-Reineke-Stosic-Sulkowski. I will outline a proof of this correspondence for arborescent links via quivers associated with 4-ended tangles. Finally, I will speculate about how much of the HOMFLY-PT skein theory might carry over to the realm of DT quiver invariants and what kind of geometric information about knots might be encoded in these quivers. This is joint work with Marko Stosic.

Tuesday 28th September 2021, 4:30–5:30pm JST (UTC+9), online on Zoom
Hankyung Ko, Uppsala University
Title: Bruhat orders and Verma modules
Abstract: The Bruhat order on a Weyl group has a representation theoretic interpretation in terms of Verma modules. The talk concerns resulting interactions between combinatorics and homological algebra. I will present several questions around the above realization of the Bruhat order and answer them based on a series of recent works, partly joint with Volodymyr Mazorchuk and Rafael Mrden.

Tuesday 6th July 2021, 4:30–5:30pm JST (UTC+9), online on Zoom
Diego Millan Berdasco, Queen Mary University of London
Title: On the computation of decomposition numbers of the symmetric group
Abstract: The most important open problem in the modular representation theory of the symmetric group is finding the multiplicity of the simple modules as composition factors of the Specht modules.
In characteristic 0 the Specht modules are just the simple modules of the symmetric group algebra, but in positive characteristic they may no longer be simple. We will survey the rich interplay between representation theory and the combinatorics of integer partitions, review a number of results in the literature which allow us to compute composition series for certain infinite families of Specht modules from a finite subset of them, and discuss the extension of these techniques to other Specht modules.

Tuesday 15th June 2021, 4:30–5:30pm JST (UTC+9), online on Zoom
Sira Gratz, University of Glasgow
Title: Grassmannians, Cluster Algebras and Hypersurface Singularities
Abstract: Grassmannians are objects of great combinatorial and geometric beauty, which arise in myriad contexts. Their coordinate rings serve as a classical example of cluster algebras, as introduced by Fomin and Zelevinsky at the start of the millennium, and their combinatorics is intimately related to algebraic and geometric concepts such as representations of algebras and hypersurface singularities. At the core lies the idea of generating an object from a so-called "cluster" via the concept of "mutation". In this talk, we offer an overview of Grassmannian combinatorics in a cluster theoretic framework, and ultimately take them to the limit to explore the a priori simple question: What happens if we allow infinite clusters? We introduce the notion of a cluster algebra of infinite rank (based on joint work with Grabowski), and of a Grassmannian category of infinite rank (based on joint work with August, Cheung, Faber and Schroll).

Friday 28th May 2021, 4:30–5:30pm JST (UTC+9), online on Zoom (note the unusual day)
Max Gurevich, Technion, Israel
Title: New constructions for irreducible representations in monoidal categories of type A
Abstract: One ever-recurring goal of Lie theory is the quest for effective and elegant descriptions of collections of simple objects in categories of interest. A cornerstone feat achieved by Zelevinsky in that regard was the combinatorial explication of the Langlands classification for smooth irreducible representations of p-adic GL_n. It was a forerunner for an exploration of similar classifications for various categories of a similar nature, such as modules over affine Hecke algebras or quantum affine algebras, to name a few. A next step, reaching an effective understanding of all reducible finite-length representations, remains largely a difficult task throughout these settings. Recently, jointly with Erez Lapid, we have revisited the original Zelevinsky setting by suggesting a refined construction of all irreducible representations, with the hope of shedding light on standing decomposition problems. This construction applies the Robinson-Schensted-Knuth transform, while categorifying the determinantal Doubilet-Rota-Stein basis for matrix polynomial rings appearing in invariant theory. In this talk, I would like to introduce the new construction into the setting of modules over quiver Hecke (KLR) algebras. In type A, this category may be viewed as a quantization/gradation of the category of representations of p-adic groups. I will explain how adopting that point of view and exploiting recent developments in the subject (such as the normal sequence notion of Kashiwara-Kim) brings some conjectural properties of the RSK construction (back in the p-adic setting) into resolution. Time permitting, I will discuss the relevance of the RSK construction to the representation theory of cyclotomic Hecke algebras.
Tuesday 27th April 2021, 4:30–5:30pm JST (UTC+9), online on Zoom
Mark Wildon, Royal Holloway, University of London
Title: Plethysms, polynomial representations of linear groups and Hermite reciprocity over an arbitrary field
Abstract: Let \(E\) be a \(2\)-dimensional vector space. Over the complex numbers the irreducible polynomial representations of the special linear group \(SL(E)\) are the symmetric powers \(Sym^r E\). Composing polynomial representations, for example to form \(Sym^4 Sym^2 E\), corresponds to the plethysm product on symmetric functions. Expressing such a plethysm as a linear combination of Schur functions has been identified by Richard Stanley as one of the fundamental open problems in algebraic combinatorics. In my talk I will use symmetric functions to prove some classical isomorphisms, such as Hermite reciprocity \(Sym^m Sym^r E \cong Sym^r Sym^m E\), and some others discovered only recently in joint work with Rowena Paget. I will then give an overview of new results showing that, provided suitable dualities are introduced, Hermite reciprocity holds over arbitrary fields; certain other isomorphisms (we can prove) have no modular generalization. The final part is joint work with my Ph.D. student Eoghan McDowell.

Stacey Law, University of Cambridge
Title: Sylow branching coefficients and a conjecture of Malle and Navarro
Abstract: The relationship between the representation theory of a finite group and that of its Sylow subgroups is a key area of interest. For example, recent results of Malle–Navarro and Navarro–Tiep–Vallejo have shown that important structural properties of a finite group \(G\) are controlled by the permutation character \(\mathbb{1}_P\big\uparrow^G\), where \(P\) is a Sylow subgroup of \(G\) and \(\mathbb{1}_P\) denotes the trivial character of \(P\). We introduce so-called Sylow branching coefficients for symmetric groups to describe multiplicities associated with these induced characters, and as an application confirm a prediction of Malle and Navarro from 2012, in joint work with E. Giannelli, J. Long and C. Vallejo.

Tuesday 30th March 2021, 9:30–10:30am JST (UTC+9), online on Zoom
Alexander Kleshchev, University of Oregon
Title: Irreducible restrictions from symmetric groups to subgroups
Abstract: We motivate, discuss the history of, and present a solution to the following problem: describe pairs (G,V) where V is an irreducible representation of the symmetric group S_n of dimension >1 and G is a subgroup of S_n such that the restriction of V to G is irreducible. We do the same with the alternating group A_n in place of S_n. The latest results on the problem are joint with Pham Huu Tiep and Lucia Morotti.

Tuesday 16th March 2021, 4:30–5:30pm JST (UTC+9), online on Zoom
Catharina Stroppel, University of Bonn
Title: Verlinde rings and DAHA actions
Abstract: In this talk we will briefly recall how quantum groups at roots of unity give rise to Verlinde algebras, which can be realised as Grothendieck rings of certain monoidal categories. The ring structure is quite interesting and was very much studied in type A. I will try to explain how one gets a natural action of certain double affine Hecke algebras and show how known properties of these rings can be deduced from this action, and in which sense modularity of the tensor category is encoded.
Tuesday 2nd March 2021, 4:30–5:30pm JST (UTC+9), online on Zoom
Aaron Yi Rui Low, National University of Singapore
Title: Adjustment matrices
Abstract: James's Conjecture predicts that the adjustment matrix for weight \(w\) blocks of the Iwahori-Hecke algebras \(\mathcal{H}_{n}\) and the \(q\)-Schur algebras \(\mathcal{S}_{n}\) is the identity matrix when \(w<\mathrm{char}(\mathbb{F})\). Fayers has proved James's Conjecture for blocks of \(\mathcal{H}_{n}\) of weights 3 and 4. We shall discuss some results on adjustment matrices that have been used to prove James's Conjecture for blocks of \(\mathcal{S}_{n}\) of weights 3 and 4 in an upcoming paper. If time permits, we will look at a proof of the weight 3 case.

Tuesday 16th February 2021, 9:30–10:30am JST (UTC+9), online on Zoom
Nick Davidson, Reed College
Title: Type P Webs and Howe Duality
Abstract: Webs are combinatorially defined diagrams which encode homomorphisms between tensor products of certain representations of Lie (super)algebras. I will describe some recent work with Jon Kujawa and Rob Muth which defines webs for the type P Lie superalgebra, and then uses these webs to deduce an analog of Howe duality for this Lie superalgebra.

Tuesday 2nd February 2021, 9:30–10:30am JST (UTC+9), online on Zoom
Alistair Savage, University of Ottawa
Title: Affinization of monoidal categories
Abstract: We define the affinization of an arbitrary monoidal category, corresponding to the category of string diagrams on the cylinder. We also give an alternative characterization in terms of adjoining dot generators to the category. The affinization formalizes and unifies many constructions appearing in the literature. We describe a large number of examples coming from Hecke-type algebras, braids, tangles, and knot invariants.

Tuesday 26th January 2021, 4:30–5:30pm JST (UTC+9), L4E48, and online on Zoom
Chris Chung, OIST
Title: \(\imath\)Quantum Covering Groups: Serre presentation and canonical basis
Abstract: In 2016, Bao and Wang developed a general theory of canonical basis for quantum symmetric pairs \((\mathbf{U}, \mathbf{U}^\imath)\), generalizing the canonical basis of Lusztig and Kashiwara for quantum groups and earning them the 2020 Chevalley Prize in Lie Theory. The \(\imath\)-divided powers are polynomials in a single generator that generalize Lusztig's divided powers, which are monomials. They can be similarly perceived as canonical basis in rank one, and have closed form expansion formulas, established by Berman and Wang, that were used by Chen, Lu and Wang to give a Serre presentation for coideal subalgebras \(\mathbf{U}^\imath\), featuring novel \(\imath\)Serre relations when \(\tau(i) = i\). Quantum covering groups, developed by Clark, Hill and Wang, are a generalization that `covers' both the Lusztig quantum group and quantum supergroups of anisotropic type. In this talk, I will talk about how the results for \(\imath\)-divided powers and the Serre presentation can be extended to the quantum covering algebra setting, and subsequently applications to canonical basis for \(\mathbf{U}^\imath_\pi\), the quantum covering analogue of \(\mathbf{U}^\imath\), and quantum covering groups at roots of 1.

Tuesday 12th January 2021, 4:30–5:30pm JST (UTC+9), online on Zoom
Matthew Fayers, Queen Mary University of London
Title: The Mullineux map
Abstract: In characteristic p, the simple modules for the symmetric group \(S_n\) are the James modules \(D^\lambda\), labelled by p-regular partitions of n.
If we let \(sgn\) denote the 1-dimensional sign module, then for any p-regular \(\lambda\), the module \(D^\lambda\otimes sgn\) is also a simple module. So there is an involutory bijection \(m_p\) on the set of p-regular partitions such that \(D^\lambda\otimes sgn=D^{m_p(\lambda)}\). The map \(m_p\) is called the Mullineux map, and an important problem is to describe \(m_p\) combinatorially. There are now several known solutions to this problem. I will describe the history of this problem and explain the known combinatorial solutions, and then give a new solution based on crystals and regularisation. Tuesday 8th December 2020, 4:30–5:30pm JST (UTC+9), online on Zoom Nicolas Jacon, University of Reims Champagne-Ardenne Title: Cores of Ariki-Koike algebras Abstract: We study a natural generalization of the notion of cores for l-partitions: the (e, s)-cores. We relate this notion with the notion of weight as defined by Fayers and use it to describe the blocks of Ariki-Koike algebras. Qi Wang, Osaka University Title: On \(\tau\)-tilting finiteness of Schur algebras Abstract: Support \(\tau\)-tilting modules were introduced by Adachi, Iyama and Reiten in 2012 as a generalization of classical tilting modules. One importance of these modules is that they correspond bijectively to many other objects, such as two-term silting complexes and left finite semibricks. Let \(V\) be an \(n\)-dimensional vector space over an algebraically closed field \(\mathbb{F}\) of characteristic \(p\). Then, the Schur algebra \(S(n,r)\) is defined as the endomorphism ring \(\mathsf{End}_{\mathbb{F}G_r}\left ( V^{\otimes r} \right )\) over the group algebra \(\mathbb{F}G_r\) of the symmetric group \(G_r\). In this talk, we discuss when the Schur algebra \(S(n,r)\) has only finitely many pairwise non-isomorphic basic support \(\tau\)-tilting modules. Jieru Zhu, Hausdorff Institute of Mathematics Title: Double centralizer properties for the Drinfeld double of the Taft algebras Abstract: The Drinfeld double of the Taft algebra, \(D_n\), whose ground field contains \(n\)-th roots of unity, has a known list of 2-dimensional irreducible modules. For each such module \(V\), we show that there is a well-defined action of the Temperley-Lieb algebra \(TL_k\) on the \(k\)-fold tensor product of \(V\), and this action commutes with that of \(D_n\). When \(V\) is self-dual and when \(k \leq 2(n-1)\), we further establish an isomorphism between the centralizer algebra of \(D_n\) on \(V^{\otimes k}\) and \(TL_k\). Our inductive argument uses a rank function on the TL diagrams, which is compatible with the nesting function introduced by Russell-Tymoczko. This is joint work with Georgia Benkart, Rekha Biswal, Ellen Kirkman and Van Nguyen. Rob Muth, Washington and Jefferson College Title: Specht modules and cuspidal ribbon tableaux Abstract: Representation theory of Khovanov-Lauda-Rouquier (KLR) algebras in affine type A can be studied through the lens of Specht modules, associated with the cellular structure of cyclotomic KLR algebras, or through the lens of cuspidal modules, associated with categorified PBW bases for the quantum group of affine type A. Cuspidal ribbons provide a sort of combinatorial bridge between these approaches. I will describe some recent results on cuspidal ribbon tableaux, and some implications in the world of KLR representation theory, such as bounds on labels of simple factors of Specht modules, and the presentation of cuspidal modules.
Portions of this talk are joint work with Dina Abbasian, Lena Difulvio, Gabrielle Pasternak, Isabella Sholtes, and Frances Sinclair. Eoghan McDowell, Royal Holloway, University of London Title: The image of the Specht module under the inverse Schur functor Abstract: The Schur functor and its inverses give an important connection between the representation theories of the symmetric group and the general linear group. Kleshchev and Nakano proved in 2001 that when the characteristic of the field is at least 5, the image of the Specht module under the inverse Schur functor is isomorphic to the dual Weyl module. In this talk I will address what happens in characteristics 2 and 3: in characteristic 3, the isomorphism holds, and I will give an elementary proof of this fact which covers also all characteristics other than 2; in characteristic 2, the isomorphism does not hold for all Specht modules, and I will classify those for which it does. Our approach is with Young tableaux, tabloids and Garnir relations. Tuesday 29th September 2020, 9:00–10:00am JST (UTC+9), online on Zoom Mahir Can, Tulane University Title: Spherical Varieties and Combinatorics Abstract: Let G be a reductive complex algebraic group with a Borel subgroup B. A spherical G-variety is an irreducible normal G-variety X where B has an open orbit. If X is affine, or if it is projective but endowed with a G-linearized ample line bundle, then the group action criteria for the sphericality is in fact equivalent to the representation theoretic statement that a certain space of functions (related to X) is multiplicity-free as a G-module. In this talk, we will discuss the following question about a class of spherical varieties: if X is a Schubert variety for G, then when do we know that X is a spherical L-variety, where L is the stabilizer of X in G. Chris Bowman, University of Kent Title: Tautological p-Kazhdan–Lusztig Theory for cyclotomic Hecke algebras Abstract: We discuss a new explicit isomorphism between (truncations of) quiver Hecke algebras and Elias–Williamson's diagrammatic endomorphism algebras of Bott–Samelson bimodules. This allows us to deduce that the decomposition numbers of these algebras (including as examples the symmetric groups and generalised blob algebras) are tautologically equal to the associated p-Kazhdan–Lusztig polynomials, provided that the characteristic is greater than the Coxeter number. This allows us to give an elementary and explicit proof of the main theorem of Riche–Williamson's recent monograph and extend their categorical equivalence to cyclotomic Hecke algebras, thus solving Libedinsky–Plaza's categorical blob conjecture.
Distance Learning in Einstein's Fourth Dimension Robin Throne Nonpartisan Education Review, 2007, Abstract: This article blends the concepts of space-time from theoretical physics and Einstein's Relativity Theory to discuss the spatio-temporal nature of distance education. By comparing and contrasting speed-of-light space travel with the speed of computer processing, the leap is made to consider the fourth dimension and its phenomena for the Web traveler. Learning events are compared with events in time to depict the theory presented. A fourth extremal even unimodular lattice of dimension 48 Gabriele Nebe Mathematics, 2013, Abstract: We show that there is a unique extremal even unimodular lattice of dimension 48 which has an automorphism of order 5 of type 5-(8,16)-8. Since the three known extremal lattices do not admit such an automorphism, this provides a new example of an extremal even unimodular lattice in dimension 48. Quantization effects for a fourth order equation of exponential growth in dimension four Frederic Robert Abstract: We investigate the asymptotic behavior as $k \to +\infty$ of sequences $(u_k)_{k\in\mathbb{N}}\in C^4(\Omega)$ of solutions of the equations $\Delta^2 u_k=V_k e^{4u_k}$ on $\Omega$, where $\Omega$ is a bounded domain of $\mathbb{R}^4$ and $\lim_{k\to +\infty}V_k=1$ in $C^0_{loc}(\Omega)$. The corresponding 2-dimensional problem was studied by Br\'ezis-Merle and Li-Shafrir who pointed out that there is a quantization of the energy when blow-up occurs. As shown by Adimurthi, Struwe and the author, such a quantization does not hold in dimension four for the problem in its full generality. We prove here that under natural hypothesis on $\Delta u_k$, we recover such a quantization as in dimension 2. A Model of the Universe that Can Explain Dark Matter, Dark Energy, and the Fourth Space Dimension Donald J. Koterwas Journal of Modern Physics (JMP), 2016, DOI: 10.4236/jmp.2016.710112 Abstract: This paper explains how a model of the universe can be constructed by incorporating time and space into geometry in a unique way to produce a 4-space dimension/1-time dimension model. The model can then show how dark matter can be the gravity that is produced by real matter that exists throughout our entire universe. The model can also show how dark energy is not an increase in energy that is causing the accelerated expansion of the universe, but is an accelerating decrease in matter throughout the universe as the stars and galaxies in the universe continue to convert matter into energy during their life cycles. And then the model can show how a fourth space dimension must exist in our universe to locate a point in space. The fourth Dimension Eugen Schweitzer Physics, 2009, Abstract: In different passages of his dialogues, Plato showed deep mathematically-based physical insights. Regrettably most readers overlooked the respective statements, or they utterly did not understand those hints since they were full of philological fallacious terms. Respectable translators misinterpreted such statements and therefore Plato's respective remarks were not recognized as substantial knowledge. Furthermore, Plato often supplemented such basic remarks by diffusely veiled and varied allusions that were often ironically hidden somewhere in his dialogues by inconspicuous double meanings. However, this mode of intentionally coded discrete communication was generally not understood because such irony is not to everyone's taste. However, the attempts to reconstruct Plato's system on the basis of admittedly individually interpreted double meanings lead to a conclusive mathematical-physical cyclical system of dimensions. Additionally it was possible to assign Plato's system of philosophical ideas analogously to this cyclical system. Plato took the verifiability of the mathematical-physical results as proof of the system of his ideas and finally as proof of his ethical creed, the unconditional trust in the 'all surmounting Good.' Sign-Preserving Property for Some Fourth-Order Elliptic Operators in One Dimension and Radial Symmetry Philippe Laurencot, Christoph Walker Abstract: For a class of one-dimensional linear elliptic fourth-order equations with homogeneous Dirichlet boundary conditions it is shown that a non-positive and non-vanishing right-hand side gives rise to a negative solution. A similar result is obtained for the same class of equations for radially symmetric solutions in a ball or in an annulus. Several applications are given, including applications to nonlinear equations and eigenvalue problems. The Fourth Dimension of Biochemical Pathways Richard Robinson PLOS Biology, 2012, DOI: 10.1371/journal.pbio.0060151 Data Transmission in the Fourth Dimension Serge Burckel Abstract: Alice wants to send an arbitrary binary word to Bob. We show here that there is no problem for her to do that with only two bits. Of course, we consider here information like a signal in 4D. Dimension quotients M. Hartl, R. Mikhailov, I. B. S. Passi Abstract: We present two approaches, one homological and the other simplicial, for the investigation of dimension quotients of groups. The theory is illustrated, in particular, with a conceptual discussion of the fourth and fifth dimension quotients.
HITRANonline Absorption Cross Sections: Definitions and Units Motivation and Definition Although the goal of the HITRAN database[1] is to provide the user with line lists of fundamental spectral parameters for absorbing molecules in the gas state, we have encountered numerous cases where at the present time this is neither possible nor feasible. These cases include several situations: (1) important atmospheric molecules with significant infrared features in specific spectral regions for which there is not a sufficient amount of line parameters presently available, neither measured at high resolution nor theoretically calculated; (2) molecules with low fundamental vibrational modes of vibration which results in very dense spectra of overlapping bands; (3) molecules with severe resonances or other phenomena that have not been successfully modeled at present; and (4) UV transitions that have not been adequately analyzed at the level of line-by-line representation. Case (1) is the primary category, especially concerning polyatomic molecules with many atoms, including the chlorofluorocarbons (CFCs) and other so-called "heavy" species. Even though for case (2) quite reliable line-by-line parameters for some bands may exist, many significant combination and difference bands may be missing. Examples of this case in HITRAN are chlorine nitrate (ClONO2), sulfur hexafluoride (SF6), and carbon tetrafluoride (CF4). Early in the history of the HITRAN editions, it was decided to add a portion to the compilation called absorption cross-sections. These data are obtained from relatively high-resolution, high-quality, laboratory observations. The data in general have been measured at several temperatures and pressures. HITRAN casts them into a standardized format in order that they may be applied in a quasi-quantitative manner to radiative-transfer codes. The cross sections ($\mathrm{cm^2\,molecule^{-1}}$) can be incorporated directly into a line-by-line calculation as additive spectral values to the infinite resolution line absorption coefficients (with proper wavenumber interpolation), before the instrument function is applied. They are also very valuable in discriminating weak features in a user's spectra. The absorption cross-section, $k_\nu$ ($\mathrm{cm^2\,molecule^{-1}}$), is defined as: \begin{equation} k_\nu = -\frac{\ln \tau_\nu}{\rho L}, \end{equation} where $\tau_\nu$ is the spectral transmittance at wavenumber $\nu$, temperature $T$ and pressure $P$, $\rho$ is the density ($\mathrm{molecule/cm^3}$) along an optical path of length $L$ (cm). In the HITRAN cross section data, $k_\nu$ is presented at several $(T, P)$ combinations representative of atmospheric layers given in commonly tabulated atmospheric models as well as conditions encountered in the polar regions. It should be emphasized that the accuracy of the cross-sectional method is somewhat limited (especially for strong absorptions), and the data sets also do not allow extrapolation and interpolation to different thermodynamic conditions in the same way as line-by-line parameters. However, omitting the cross-sections in spectral regions where no line parameters are available can lead to much larger errors in the interpretation of line-by-line simulations of atmospheric spectra. Structure and Formatting of Data In the HITRAN FTP site, the data are presented as separate files for each individual molecule.
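Before turning to the file layout described next, a minimal numeric sketch of the defining relation above may be useful; the transmittance, number density and path length below are invented illustrative values, not HITRAN data.

```python
import math

# Illustrative values only (not from HITRAN): a 5 cm cell filled with an
# absorber at number density rho, transmitting 80% at some wavenumber nu.
tau_nu = 0.80                 # spectral transmittance (dimensionless)
rho = 2.5e18                  # number density, molecule / cm^3
L = 5.0                       # optical path length, cm

# k_nu = -ln(tau_nu) / (rho * L), in cm^2 / molecule
k_nu = -math.log(tau_nu) / (rho * L)
print(f"k_nu = {k_nu:.3e} cm^2/molecule")   # ~1.8e-20 cm^2/molecule
```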
Each portion of the file corresponding to a particular temperature-pressure pair begins with a header (see Table 1) that contains information on the wavenumber ($\mathrm{cm^{-1}}$) range, number of cross-section data in this set, temperature (K), and pressure (Torr). The maximum value of the absorption cross sections ($\mathrm{cm^2/molecule}$) and additional information containing the reference to that observation are also presented in each header. The cross sections have been cast into an equal wavenumber interval grid. It should be noted that the initial and final wavenumbers, $\nu_\mathrm{min}$ and $\nu_\mathrm{max}$, respectively, of each temperature–pressure set for a given wavenumber region are not always identical. They have been taken from the analysis of the observations. The sampling intervals are also not necessarily identical for each temperature–pressure set. The wavenumber interval of the grid is obtained by taking the difference of the initial and final wavenumber and dividing this quantity by the number of points, $N$, minus one, i.e., $\Delta\nu = (\nu_\mathrm{max} - \nu_\mathrm{min})/(N - 1)$. This value of $N$ is provided so that a user's personal program can read the table of cross-sections that follows the header. Note that the use of the features of HITRANonline makes much of this discussion transparent. The table below illustrates the format of each header record. Following the header, the cross-section values are arranged in records containing ten values of fields of ten for each cross-section. In other words, each record contains 100 bytes (the trailing bytes on the last line may not be meaningful if $N$ is not a multiple of 10).

Quantity | Field length | Type | Comment
Molecule | 20 | Character | Chemical formula (right-justified)
Minimum wavenumber, $\nu_\mathrm{min}$ | 10 | Real | Start of range ($\mathrm{cm^{-1}}$)
Maximum wavenumber, $\nu_\mathrm{max}$ | 10 | Real | End of range ($\mathrm{cm^{-1}}$)
Number of points, $N$ | 7 | Integer | Number of cross-sections in set
Temperature, $T$ | 7 | Real | Temperature (K) of set
Pressure, $P$ | 6 | Real | Pressure of set in Torr
Maximum cross-section value in set, $\sigma_\mathrm{max}$ | 10 | Real | Useful for scaling plots ($\mathrm{cm^2/molecule}$)
Instrument resolution | 5 | Real | See note
Common name | 15 | Character | Familiar name of molecule
Not currently used | 4 | | Reserved for future use
Broadener | 3 | Character | Air, $\mathrm{N_2}$, or self-broadened (if left blank)
Reference | 3 | Integer | Index pointing to source of data

Note: Most cross sections have been taken from Fourier transform spectrometer (FTS) measurements. In that case the resolution is given in $\mathrm{cm^{-1}}$. There are some cross-sections taken from grating spectrometer measurements in the UV. In those cases, the resolution is given in milli-Ångströms in the form xxx mÅ, where xxx are up to three digits. In the FTP site, for the IR cross-sections, the data on each molecule (chemical compound) are stored in separate files, which are labeled with the chemical symbol followed by an underscore and IRxx.xsc, where xx stands for the HITRAN edition that the data were originally introduced or later updated and the file extension xsc signifies that it is a list of cross-sections. For example, the file with the name C2H6_IR10.xsc contains ethane (C2H6) infrared cross-sections that were obtained in 2010. It is to be noted that the files may have many temperature–pressure sets for different spectral regions, as indicated by headers throughout the file.
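As a rough illustration of how a user's personal program might read such a record, the sketch below slices one header line according to the field lengths in the table above and reconstructs the equal wavenumber grid; the example header string and the dictionary key names are invented for illustration and should be checked against an actual .xsc file.

```python
def parse_xsc_header(header: str):
    """Split a 100-character .xsc header record into its fixed-width fields.

    Field widths follow the table above: 20, 10, 10, 7, 7, 6, 10, 5, 15, 4, 3, 3.
    """
    widths = [20, 10, 10, 7, 7, 6, 10, 5, 15, 4, 3, 3]
    names = ["molecule", "nu_min", "nu_max", "n_points", "temperature",
             "pressure", "sigma_max", "resolution", "common_name",
             "reserved", "broadener", "reference"]
    fields, pos = {}, 0
    for name, w in zip(names, widths):
        fields[name] = header[pos:pos + w].strip()
        pos += w
    # Numeric conversions for the quantities used below
    nu_min = float(fields["nu_min"])
    nu_max = float(fields["nu_max"])
    n = int(fields["n_points"])
    # Equal wavenumber grid: delta_nu = (nu_max - nu_min) / (N - 1)
    grid = [nu_min + i * (nu_max - nu_min) / (n - 1) for i in range(n)]
    return fields, grid

# Invented example header (not a real HITRAN record), padded to the field widths
example = ("                C2H6" + "    2545.0" + "    3315.0" + " 613296" +
           "  194.0" + " 103.9" + " 8.773E-18" + "0.015" + "         Ethane" +
           "    " + "air" + " 13")
fields, grid = parse_xsc_header(example)
print(fields["molecule"], fields["temperature"], len(grid), grid[0], grid[-1])
```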
While the temperature–pressure $(T, p)$ sets are reasonably complete for many species for an adequate simulation of atmospheric transmission in the spectral regions where those species are active, for other species an insufficiency of the $(T, p)$ sets may become apparent. It is hoped that future measurements at extended sets of $(T, p)$ combinations may help broaden the coverage in the database. Several remarks should be made about the structure of the data: Occasionally data have been provided by contributors with some very small negative values of cross-sections. These values may be due to natural processes of the observation. In these cases, we have attempted to not only provide the data with all negative values zeroed out (to avoid non-physical calculations in some radiative-transfer codes), but have also repeated the files as originally submitted (these are placed in a subordinate folder with negative values kept and in a two-column format). When a provider has measured a large spectral region where there are significant non- absorbing gaps between bands, the HITRAN format may have removed null portions between bands in order to save file storage. In these cases, there will be more $(T, p)$ sets within a file; this is indicated by headers with different wavenumber regions within the file. Some UV cross-section data have been obtained from grating spectrometer observations rather than Fourier transform spectrometers. The original data have come on a wavelength grid rather than wavenumber. We have preserved these original sets in the subordinate folder, while casting them into the equal wavenumber grid in the main folder. Use of Cross-sections using HITRANonline HITRANonline provides a very convenient tool for downloading and plotting the cross- section data. A more detailed description of the features of HITRANonline is provided by Hill et al.[2]. We will give a specific example here. The user wishing to employ the cross-section data, both infrared and ultraviolet, first goes to the pull-down menu under Data Access. One then clicks on the second entry, Absorption Cross Sections. A long table then appears which lists all the molecules that are currently available. For convenience, the user can search for the molecule by typing in either the common name or chemical formula, rather than scrolling down the list. Let us choose the example of ethane ($\mathrm{C_2H_6}$). Searching, or alternatively scrolling down and clicking on ethane, will highlight this molecule and also bring up a table on the right side of the screen showing all the currently available $(T, p)$ sets. In our example there are 14 such sets, all in the same wavenumber range (2545 to 3315 $\mathrm{cm^{-1}}$) and all with the same number of points in the set (613296). (In fact, this choice of molecule shows that all sets were taken at the same resolution and with the same broadener.) Running one's cursor over the table reveals the original source of the data. For the sake of this example, we will choose four of the $(T, p)$ sets: (194.0, 103.9), (215, 119.2), (250.0, 200.1), and (270.0, 376.5). Having clicked on those boxes, we are now ready for step 2, Get data. One clicks on that box at the upper portion of the screen (notice that once the user has chosen at least one $(T, p)$ set, the box is highlighted in green and is ready for applying). There may be a slight delay for the next page to appear, depending on the number of points involved. 
The next page shows a list of files of the four data sets chosen, and 2 files providing the reference data in different formats, followed by a rough plot of the 4 cross sections chosen. Clicking on any of the four data files (the first one is labeled C2H6_194.0_103.9_2545.0-3315.0_13.xsc) brings up the full data file, exactly as it would appear in the HITRAN FTP site. One can then save it to one's local computer if desired. The plot that appears is shown in Fig. 1. Figure 1. Plot of ethane cross-sections for four chosen temperature-pressure sets. Just above the plot on the right side are some useful tools for viewing the plot in more detail. For example, clicking on the second symbol, Box Zoom, allows the user to go into the plot and expand a portion. If we use this to highlight the area around 3000 wavenumbers and only cross sections between 0 and $2 \times 10^{-18}$, we would obtain the following plot: Figure 2. Zoomed-in portion of Fig. 1. This zoomed-in plot can be further expanded. Figure 3 shows the portion between roughly 3005 and 3008 $\mathrm{cm^{-1}}$: Figure 3. Further expansion of plot. Note that at present we have limited the plot function to a maximum of 5 temperature-pressure sets; choosing more will only provide the data files and references. Further information can be found in the most recent paper devoted to this section of the database [3]. [1] I. E. Gordon, et al., "The HITRAN2016 Molecular Spectroscopic Database", J. Quant. Spectrosc. Radiat. Transfer 203, 3-69 (2017). [2] C. Hill et al., "HITRANonline: An online interface and the flexible representation of spectroscopic data in the HITRAN database", J. Quant. Spectrosc. Radiat. Transfer 177, 4-14 (2016). [3] R. V. Kochanov et al., "Infrared absorption cross-sections in HITRAN2016 and beyond: Expansion for climate, environment, and atmospheric applications", J. Quant. Spectrosc. Radiat. Transfer 230, 172-221 (2019).
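For readers who prefer to reproduce such plots offline from downloaded .xsc files, a rough sketch is given below. It reuses the hypothetical parse_xsc_header helper sketched earlier and assumes matplotlib is available; the first file name is the one quoted above, the second is invented, and both the names and parsing details should be verified against the actual downloads.

```python
import matplotlib.pyplot as plt

def read_xsc(path):
    """Read one downloaded .xsc set: a 100-character header record followed by
    the cross-section values, written ten per record in fields of ten characters."""
    with open(path) as fh:
        header = fh.readline().rstrip("\n")
        fields, grid = parse_xsc_header(header)        # helper sketched earlier
        values = []
        for line in fh:
            for i in range(0, len(line.rstrip("\n")), 10):
                chunk = line[i:i + 10].strip()
                if chunk:
                    values.append(float(chunk))
    n = int(fields["n_points"])
    return fields, grid, values[:n]                     # drop any trailing padding

# File names follow the pattern quoted above; the second one is hypothetical.
for path in ["C2H6_194.0_103.9_2545.0-3315.0_13.xsc",
             "C2H6_215.0_119.2_2545.0-3315.0_13.xsc"]:
    fields, grid, xs = read_xsc(path)
    plt.plot(grid, xs, label=f'{fields["temperature"]} K, {fields["pressure"]} Torr')

plt.xlabel("Wavenumber (cm$^{-1}$)")
plt.ylabel("Cross-section (cm$^2$ / molecule)")
plt.legend()
plt.show()
```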
A modified TNM staging system for non-metastatic colorectal cancer based on nomogram analysis of SEER database Kong Xiangxing1, Li Jun1, Cai Yibo1, Tian Yu2, Chi Shengqiang2, Tong Danyang2, Hu Yeting1, Yang Qi1, Li Jingsong2, Graeme Poston3, Yuan Ying4 & Ding Kefeng1 (ORCID: orcid.org/0000-0002-2380-3717) BMC Cancer volume 18, Article number: 50 (2018) To revise the American Joint Committee on Cancer TNM staging system for colorectal cancer (CRC) based on a nomogram analysis of the Surveillance, Epidemiology, and End Results (SEER) database, and to prove the rationality of enhancing T stage's weighting in our previously proposed T-plus staging system. A total of 115,377 non-metastatic CRC patients from SEER were randomly grouped into training and testing sets at a 1:1 ratio. The Nomo-staging system was established via three nomograms based on 1-year, 2-year and 3-year disease specific survival (DSS) Logistic regression analysis of the training set. The predictive value of the Nomo-staging system for the testing set was evaluated by concordance index (c-index), likelihood ratio (L.R.) and Akaike information criteria (AIC) for 1-year, 2-year, 3-year overall survival (OS) and DSS. Kaplan–Meier survival curves were used to evaluate discrimination and gradient monotonicity. An external validation was performed on a database from the Second Affiliated Hospital of Zhejiang University (SAHZU). Patients with T1-2 N1 and T1 N2a were classified into stage II while T4 N0 patients were classified into stage III in the Nomo-staging system. Kaplan–Meier survival curves of OS and DSS in the testing set showed the Nomo-staging system performed better in discrimination and gradient monotonicity, and the external validation in the SAHZU database also showed distinctly better discrimination. The Nomo-staging system showed higher values in L.R. and c-index, and lower values in AIC when predicting OS and DSS in the testing set. The Nomo-staging system showed better performance in prognosis prediction, and the weight of lymph node status in prognosis prediction should be cautiously reconsidered. The existing 7th edition American Joint Committee on Cancer (AJCC) tumor-node-metastasis (TNM) staging system is widely used to predict survival for colorectal cancer patients and to guide adjuvant chemotherapy after potentially curative surgery. The next edition would probably be applied next year, but the preview showed no change in the strategy to classify non-metastatic patients. The TNM staging system classifies patients with positive lymph nodes (N+) into stage III, regardless of T stage. However, patients with early T stage who are N+ can have better outcomes than high T stage N− (negative lymph nodes) patients [1,2,3]. This phenomenon is called the survival paradox and may mislead oncologists to overestimate the prognosis risk of stage IIIa but underestimate stage II. Our former research established the T-plus staging system by re-analyzing the summary data from the Surveillance, Epidemiology, and End Results (SEER) tumor registry [4]. The relative weights of T stage and N stage were calculated based on their impact on survival. This study showed that T stage affected postoperative survival more significantly than N stage in non-metastatic colorectal cancer, and the survival paradox was also eliminated by adopting the T-plus staging system [4]. This staging system was verified by a Chinese cohort with 25-year follow-up [5].
However, the implemented process of this research applied linear regression, while the effect of TN combinations on survival may be non-linear. Hence, a more scientific method was required to address this problem. A nomogram is a statistical method that incorporates multiple variables and reduces a statistical predictive model into a single numerical estimate of the probability of an event [6]. Nomograms are widely used in predicting tumor prognosis [7]. Memorial Sloan Kettering Cancer Center offered a nomogram system for colorectal cancer patients to estimate overall survival and disease free survival after surgery on its official website [8]. It was based on 128,853 patients with primary colon cancer reported to SEER in 2011 [9]. Su [10] verified that the MSKCC nomogram system provided more accurate survival predictions than the 7th edition TNM staging system in an external Chinese cohort. The aim of this study was to further verify the concept of enhancing the weighting of T stage by constructing a modified TNM staging system for non-metastatic colorectal cancer based on nomogram analysis of individual data from the SEER database. A total of 115,377 patients were enrolled from the SEER 18 Registries Research Data, November 2015 submission (1973–2013). All patients were diagnosed with colorectal cancer between 2004 and 2013 by histopathological examination. The inclusion criteria were: (a) Primary site of tumor was colon (c18.0-c18.9, c19.9) or rectum (c20.9); (b) Histologic types were adenocarcinoma (8140), mucinous adenocarcinoma (8480), mucin-producing adenocarcinoma (8481), mucinous cyst-adenocarcinoma (8470), signet ring cell carcinoma (8490) or undifferentiated carcinoma (8010, 8020, 8021); (c) No distant metastasis (CS mets at dx: 00); (d) No other malignant tumor history (sequence number: 00). The exclusion criteria were: (a) Survival was unknown or less than 3 months; (b) Site specific surgery was unknown (blank); (c) Not receiving surgery (Rx Summ--Surg Prim Site: 0); (d) Regional nodes examined was none or unknown (95–99); (e) Regional nodes positive was none or unknown (95–99); (f) Tumor destruction; no pathologic specimen or pathologic specimen unknown (Rx Summ--Surg Prim Site: 10–19) or no lymph nodes examined (Regional nodes examined: 0); (g) Unknown if surgery performed (Rx Summ--Surg Prim Site: 99) or with no lymph nodes examined (Regional nodes examined: 0). Abbreviations complied with the Coding and Staging Manual of SEER [11]. A total of 1194 patients were enrolled from a database of the Second Affiliated Hospital of Zhejiang University (SAHZU) for external validation. All patients were diagnosed with colorectal cancer between 2005 and 2011 by histopathological examination. Detailed and sufficient pathological and survival information was extracted. Exclusion criteria were death by surgical complications within a 3-month postoperative period, stage 0 or stage IV disease, multiple colorectal cancer, or prior history of malignancy. Database cleansing The basic patient data from SEER and SAHZU database included age, gender, race, follow-up months, survival status, invasive depth of tumor (T stage) and the number of involved lymph nodes (N stage), etc. Three different survival phases (1-year, 2-year and 3-year) were extracted according to 25 combinations of T stage (1 = T1, 2 = T2, 3 = T3, 4 = T4a, and 5 = T4b) and N stage (0 = N0, 1 = N1a, 2 = N1b, 3 = N2a, and 4 = N2b).
Stage N1c (tumor deposits) was classified as N1b considering the debate over definitions of tumor deposits in recent versions of the TNM staging systems. All data were independently proofread three times by Dr. Jun Li, Dr. Xiangxing Kong and Dr. Yibo Cai to ensure accuracy. Ph-test of the database The proportional hazards test (Ph-test) was performed on the SEER database (training set) to make sure the hazard ratio of T and N stage remained constant with increasing survival months. Generally, a nomogram is constructed based on a Cox proportional hazards model or Logistic regression analysis [6]. Therefore, if the database could not pass the Ph-test, a Logistic regression model would be used instead of a Cox proportional hazards model. Construction of nomogram and Nomo-staging system Fifty percent of patients from the SEER database were randomly classified into a training set while the remainders were classified into a testing set. Combining each T stage and N stage, we developed 25 groups of TN combinations. A Chi-squared test was performed between the two data sets to verify the balance of the distribution of the 25 TN combinations and the distribution of cancer sites. In the three Logistic regression analyses for construction of the nomograms and the Nomo-staging system, the endpoint events were 1-year, 2-year and 3-year disease specific survival status respectively. The survival status was defined as 0 for alive, 1 for death due to colorectal cancer and blank for other status. Only T and N stage were included as variables. Three nomograms were then formulated based on the results of the Logistic regression analysis [6]. For each TN combination, a total nomo-score was calculated in every nomogram. Each nomo-score corresponded to the survival rate of the respective year. A larger nomo-score represented a poorer prognosis. For each nomogram, normalization of the nomo-score was performed using the formula: $$ \frac{x-\max}{\max-\min} $$ x represented a specific nomo-score, max represented the highest nomo-score in this nomogram, while min represented the lowest. The average standardized nomo-score of each TN combination from the three nomograms was then calculated. After ranking the average standardized nomo-scores, we planned to divide the 25 TN combinations into five groups (I, II, IIIa, IIIb and IIIc) to be consistent with previous studies [4, 5], as we found five stage groups would keep the staging model simplest to analyze while still retaining the survival paradox. A group of clinical colorectal oncologists discussed each stage's cut-off and voted for the Nomo-staging system based on clinical experience and average nomo-score. Evaluation of the performance of staging systems The assessment of the performance of the prognostic system was based on a comprehensive estimation, as we previously described [5], that included: 1) homogeneity, the smaller differences in survival among patients in the same stage indicated a better staging system; 2) discriminatory ability, the greater differences in survival among patients in different stages indicated a better staging system; 3) monotonicity of gradients, the phenomenon that prognosis of patients with earlier stages was better than the patients with higher stages indicated a better staging system. The Logistic regression analysis on 1-year, 2-year, 3-year overall survival and disease specific survival was performed using the testing set. The likelihood ratio (L.R.) χ2 test was used to measure homogeneity.
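A compact sketch of this scoring step is given below, using made-up nomo-score values purely for illustration (the real scores come from the three fitted Logistic models); the normalization expression mirrors the formula quoted above.

```python
# Hypothetical nomo-scores for a few TN combinations from the three nomograms
# (1-year, 2-year, 3-year DSS); the real values come from the fitted models.
nomo_scores = {
    "T1N0":   (0.0,   0.0,   0.0),
    "T3N1b":  (95.0, 101.0, 108.0),
    "T4bN2b": (179.0, 183.0, 198.0),
}

def standardize(scores):
    """Apply the paper's normalization (x - max) / (max - min) within one nomogram."""
    hi, lo = max(scores), min(scores)
    return [(x - hi) / (hi - lo) for x in scores]

# Normalize each nomogram's scores separately, then average per TN combination
per_year = list(zip(*nomo_scores.values()))             # scores grouped by nomogram
standardized = [standardize(list(year)) for year in per_year]
average = {
    combo: sum(col) / len(col)
    for combo, col in zip(nomo_scores, zip(*standardized))
}

# Rank TN combinations by average standardized score; with this particular
# normalization the values lie in [-1, 0], and larger (closer to 0) means a
# poorer prognosis, matching the un-normalized convention above.
for combo, score in sorted(average.items(), key=lambda kv: kv[1]):
    print(combo, round(score, 3))
```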
Staging systems with higher chi-square values are better than those with lower chi-square values. The Akaike information criteria (AIC) value and the concordance index (c-index) were calculated to measure discriminatory ability [12]. A smaller AIC value indicated a better staging system. A higher c-index value indicated a better staging system. Survival curves were drawn using the Kaplan-Meier method. In drafting the Kaplan-Meier curves, the survival status was defined as 1 for death caused by colorectal cancer and 0 for other status for disease specific survival; for overall survival, the survival status was defined as 1 for death caused by any reason and 0 for censored data. The log-rank test and trend χ2 test were used to determine significance and to compare discriminatory ability and gradient monotonicity. A higher χ2 score indicated a better staging system. Statistical software All data were stored in Microsoft EXCEL. R 3.2.2 (GUI 1.66) was used to perform the Ph-test, construct nomograms and perform evaluations. Graphpad Prism 6.0c (GraphPad Software Inc., San Diego, CA, USA) was used to perform Chi-squared tests and draw histograms. The Kaplan-Meier survival curves were drawn using Stata 12.0 software (StataCorp LP, College Station, TX, USA). The distributions of the measurement data were tested by skewness and kurtosis normality tests. Data that were not normally distributed were described by the median and inter-quartile range (M, IQR). A two-sided P-value of 0.05 or less was considered to indicate statistical significance. Basic information of the patients from SEER database and SAHZU A total of 115,377 patients were extracted from the SEER database. The baseline characteristics were listed in Additional file 1: Table S1. The median follow-up was 39 months (IQR = 51 months). These patients were randomly grouped into training and testing sets at a 1:1 ratio. The distribution of the 25 TN combinations between the training set and testing set had no significant difference (χ2 = 20.28, P = 0.6806, Additional file 2: Figure S1). Additionally, there was no significant difference in the distribution of cancer site between the two data sets (χ2 = 0.0467, P = 0.8289, Additional file 2: Figure S1). A total of 1194 patients were enrolled from the SAHZU database. The baseline characteristics were listed in Additional file 1: Table S2. The median follow-up was 56 months (IQR = 25 months). The construction of the three nomograms Three nomograms describing 1-year, 2-year and 3-year disease specific survival were established using R (Fig. 1a-c). Each TN combination had a nomo-score, which indicated the risk of death. The lowest nomo-score was 0 (T1 N0) in 1-year, 2-year and 3-year. This combination indicated the best prognosis, which meant having the lowest risk of death in the 1st year, 2nd year and 3rd year post potentially curative surgery. The highest nomo-score was 179 (T4bN2b) in 1-year, 183 (T4bN2b) in 2-year and 198 (T4bN2b) in 3-year. As the median follow-up was only 39 months, the Logistic regression analysis for more than 3-year disease specific survival was impossible. The nomograms of disease specific survival for the training set. a, 1-year disease specific survival; b, 2-year disease specific survival; c, 3-year disease specific survival Establishment of the Nomo-staging system The result of ranking the average nomo-scores was listed in Additional file 1: Table S3. The Nomo-staging system was established according to the expert consensus (Table 1).
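The evaluation that follows relies on the likelihood ratio, AIC and c-index. As a rough illustration of how such quantities can be computed outside the packages named above, the sketch below fits a Logistic model of a binary survival outcome on a categorical stage variable using statsmodels and scikit-learn; the dataset is entirely fabricated, so the printed numbers are meaningless except as a demonstration of the calculation, and for a binary outcome the c-index is taken as the area under the ROC curve.

```python
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Tiny fabricated dataset: 40 patients per stage group, with an assumed number
# of 3-year disease-specific deaths in each group (illustrative numbers only).
groups = {"I": 2, "II": 6, "IIIa": 8, "IIIb": 14, "IIIc": 20}
rows = []
for stage, deaths in groups.items():
    rows += [(stage, 1)] * deaths + [(stage, 0)] * (40 - deaths)
df = pd.DataFrame(rows, columns=["stage", "death"])

# Logistic regression of the binary outcome on the staging variable
fit = smf.logit("death ~ C(stage)", data=df).fit(disp=False)
null = smf.logit("death ~ 1", data=df).fit(disp=False)

lr_chi2 = 2 * (fit.llf - null.llf)     # likelihood-ratio statistic (homogeneity)
aic = fit.aic                          # Akaike information criterion (lower = better)
c_index = roc_auc_score(df["death"], fit.predict(df))  # c-index for a binary outcome

print(f"L.R. chi2 = {lr_chi2:.1f}, AIC = {aic:.1f}, c-index = {c_index:.3f}")
```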
The distribution of patients in the 7th AJCC TNM staging system and the Nomo-staging system was listed in Additional file 1: Table S1. Table 1 TN categories of two staging systems for colorectal cancer Evaluation of the Nomo-staging system The overall survival and disease specific survival Kaplan-Meier curves in the testing set were constructed. For the Nomo-staging system, stage II and stage IIIa were clearly differentiated for both overall survival (Fig. 2) and disease specific survival (Fig. 3), and the patients with higher stages showed poorer prognoses. For the Nomo-staging system, none of the survival curves for any of the stages crossed. However, for the 7th edition TNM staging system, the overall survival and disease specific survival curves of stage IIIa crossed with those of stage I, which is the reason for the survival paradox. The trend χ2 of the 7th edition TNM staging system was lower than that of the Nomo-staging system, which indicated the improvement in discriminatory ability and monotonicity of gradients. Fig. 2 The Kaplan-Meier overall survival curves of testing set colorectal cancer patients according to two staging systems Fig. 3 The Kaplan-Meier disease specific survival curves of testing set colorectal cancer patients according to two staging systems The external validation of the SAHZU database by Kaplan-Meier curves clearly showed the survival paradox between stage II and stage IIIa in the 7th edition TNM staging system (Fig. 4), while the Nomo-staging system revised the monotonicity of gradients. Fig. 4 The Kaplan-Meier overall survival curves of SAHZU database colorectal cancer patients according to two staging systems The Logistic regression analysis of the testing set on overall survival (Table 2) and disease specific survival (Table 3) showed that the Nomo-staging system had better homogeneity and discriminatory ability. For 1-year, 2-year and 3-year overall survival, the Nomo-staging system showed higher values in both L.R. (590.33, 1483.5, 2270.6) and c-index (0.60841, 0.62550, 0.63344). It also showed lower values in AIC (23,249.95, 34,904.74, 39,082.20). The results were analogous for 1-year, 2-year and 3-year disease specific survival in L.R., c-index and AIC. However, these three indices did not perform well in the SAHZU database (Additional file 1: Table S4). Table 2 Comparison of the predictive performance of 2 staging systems for overall survival of testing set Table 3 Comparison of the predictive performance of 2 staging systems for disease specific survival of testing set The existing 7th edition TNM staging system is used to predict the prognosis of patients with colorectal cancer and to guide the discussion of adjuvant treatment [13].
The survival paradox might lead oncologists to overestimate the prognosis risk of stage IIIa, while underestimating the risk of stage II. The cause of this paradox was unclear. Our previous research showed the possible reason for survival paradox in colorectal cancer was the over-weighting of N stage. Therefore, we proposed the T-plus system which enhanced the weighting of T stage, revised the correspondence of stages with TN combinations and eliminated the survival paradox [4]. The T-plus staging system reflects the significance of the T stage in colorectal cancer and abandons the rigid classification according to lymph node status. Additionally, it performed well in a Chinese colorectal cancer retrospective cohort [5]. Here we used nomograms to verify the core concept of T-plus staging system that put T stage weighting higher. Nomograms are regarded as a type of machine learning which gives the computer the ability to learn without being explicitly programmed. It acts more like an auto-grouping machine rather than a simple linear regression analysis. Nomograms use Cox proportional hazards model or Logistic regression model to make cancer survival analysis [15]. In the present study, the training set failed to pass the Ph-test. Therefore, Logistic regression model rather than Cox proportional hazards model was used [6]. Similar to T-plus staging system, not all patients with positive lymph nodes were classified as stage III in the Nomo-staging system. Nomo-staging systems had better performance in prognosis prediction than the 7th edition TNM staging system. Although the core concept of T-plus staging system and Nomo-staging system was consistent, there were still some differences. T-plus staging system strengthened more weighting of T stage than that in Nomo-staging system. For example, in T-plus staging system T1N1a was classified into stage I. In the Nomo-staging system, T1-2 N0 was kept into stage I while T1 N1-2a, T2 N1 were grouped into stage II. Hence, the Nomo-staging system was a more moderate change to the 7th edition TNM staging system comparing to the drastic changes made by T-plus staging system. The present study did not perform stratified analysis according to the adjuvant treatment because of the limitation of SEER database. Therefore, the obvious debate is that these changes that shift the correspondence of stages and TN combinations should be attributed to the contribution of adjuvant chemotherapy. Over the past twenty years, the progress of adjuvant chemotherapy and radiotherapy had significantly improved the survival of patients with stage III colorectal cancer [16, 17]. The benefits due to adjuvant therapy might narrow the survival gap between stage II and stage III. However, it seemed impossible to thoroughly neutralize the survival difference between stage II and stage III, let alone improve the survival of stage III equal to stage I by adjuvant therapy. For colon cancer, it has been reported the survival paradox was not caused by adjuvant chemotherapy through analyzing U.S. National Cancer Data Base [18, 19]. Considering stages II and III rectal cancer were classified as locally advanced and usually received the same adjuvant chemo-radiotherapy regimen, stage IIIa patients still showed better a prognosis than stage II patients [1, 4]. This indicated that the paradox was derived from the 7th edition TNM staging system instead of adjuvant therapy. 
Additionally, the SUNRISE study reported that stage II/III patients with a low 12-gene recurrence score could safely avoid chemotherapy [20]. This study found heterogeneity of recurrence risks in stage III as well as in stage II colon cancer. As reported, patients with stage II disease in the Recurrence Score high-risk group had a 5-year risk of recurrence similar to patients with stage IIIa/b disease in the low-risk group. This result demonstrated that some stage III patients had a good prognosis and did not require adjuvant chemotherapy. We believe more evidence is necessary to support the argument that these stage III patients do not require adjuvant chemotherapy and should be considered for reclassification into earlier stages. There were several limitations in our research. Patients with N1c were combined with N1b in this study. N1c represented tumor deposits found in the pathological specimen. However, the diagnostic criteria for tumor deposits were not consistent, considering that the criteria kept changing in the recent versions of the TNM staging systems [21, 22]. Mayo et al. showed that tumor deposits were associated with worse 3-year overall survival in patients of any known and unknown N categories [23]. The value of tumor deposits should be assessed in future staging systems. The second limitation of this study was that the longest follow-up in the SEER database was only 119 months, and the median follow-up was less than 4 years. Therefore, 5-year overall survival and disease specific survival could not be calculated by Logistic regression. Three-year disease specific survival and overall survival were not strong enough endpoints for such an oncological study. Additionally, the lack of treatment information, limited by the current database, might also reduce the reliability of the Nomo-staging system. Therefore, we performed external validation on the SAHZU database. Although the L.R., c-index and AIC of the Nomo-staging system did not perform consistently better than the 7th AJCC staging system, the Kaplan-Meier curves showed the survival paradox was revised by the Nomo-staging system. However, the results should still be verified by other databases with long-term follow-up data and intact treatment information. The third shortcoming was that the Nomo-staging system was separated into only five groups for convenience when comparing the performance of the two staging systems with the same number of subgroups. To predict colorectal cancer prognosis more precisely and individually, more subgroups would be needed. Moreover, studies have shown that molecular types could predict the prognosis of colorectal cancer independent of the TNM staging system [24,25,26]. Integrating prognostic molecular markers, such as microsatellite instability, Ras and Braf gene mutational status and immune-score, into the prognosis prediction system may predict the prognosis of colorectal cancer more precisely. Additionally, advanced technologies related to computer science and big data science should be applied to mine such complex data and to produce a new-generation colorectal cancer prognosis prediction system. In addition, the fundamental goal of constructing the Nomo-staging system was to verify the concept of reconsidering the weight of lymph node status in prognosis prediction, rather than directly applying the Nomo-staging system in clinical practice.
The present study established a modified TNM staging system via nomogram analysis which performed better than the 7th edition TNM staging system in predicting survival of non-metastatic colorectal cancer, and which was validated in both the SEER testing set and the SAHZU database. The Nomo-staging system indicated that the weight of lymph node status in prognosis prediction should be cautiously reconsidered. However, the robustness and rationality of increasing the T-stage weight in the staging system should be validated in more databases considering the limitation of short follow-up time. AIC: Akaike information criteria AJCC: American Joint Committee on Cancer c-index: Concordance index DSS: Disease specific survival IQR: Inter-quartile range L.R.: Likelihood ratio OS: Overall survival Ph-test: Proportional hazards test SAHZU: Second Affiliated Hospital of Zhejiang University SEER: Surveillance, Epidemiology, and End Results TNM: Tumor-node-metastasis Gunderson LL, Jessup JM, Sargent DJ, Greene FL, Stewart A. Revised tumor and node categorization for rectal cancer based on surveillance, epidemiology, and end results and rectal pooled analysis outcomes. J Clin Oncol. 2010;28(2):256–63. Gunderson LL, Jessup JM, Sargent DJ, Greene FL, Stewart AK. Revised TN categorization for colon cancer based on national survival outcomes data. J Clin Oncol. 2010;28(2):264–71. Kim MJ, Jeong SY, Choi SJ, Ryoo SB, Park JW, Park KJ, JH O, Kang SB, Park HC, Heo SC, et al. Survival paradox between stage IIB/C (T4N0) and stage IIIA (T1-2N1) colon cancer. Ann Surg Oncol. 2015;22(2):505–12. Li J, Guo BC, Sun LR, Wang JW, XH F, Zhang SZ, Poston G, Ding KF. TNM staging of colorectal cancer should be reconsidered by T stage weighting. World J Gastroenterol. 2014;20(17):5104–12. Li J, Yi CH, YT H, Li JS, Yuan Y, Zhang SZ, Zheng S, Ding KF. TNM Staging of colorectal cancer should be reconsidered according to weighting of the T stage: verification based on a 25-year follow-up. Medicine (Baltimore). 2016;95(6):e2711.
Vlug MS, Wind J, Hollmann MW, Ubbink DT, Cense HA, Engel AF, Gerhards MF, van Wagensveld BA, van der Zaag ES, van Geloven AA, et al. Laparoscopy in combination with fast track multimodal management is the best perioperative strategy in patients undergoing colonic surgery: a randomized clinical trial (LAFA-study). Ann Surg. 2011;254(6):868–75. Liang W, Zhang L, Jiang G, Wang Q, Liu L, Liu D, Wang Z, Zhu Z, Deng Q, Xiong X, et al. Development and validation of a nomogram for predicting survival in patients with resected non-small-cell lung cancer. J Clin Oncol. 2015;33(8):861–9. Andre T, Boni C, Mounedji-Boudiaf L, Navarro M, Tabernero J, Hickish T, Topham C, Zaninelli M, Clingan P, Bridgewater J, et al. Oxaliplatin, fluorouracil, and leucovorin as adjuvant treatment for colon cancer. N Engl J Med. 2004;350(23):2343–51. Folkesson J, Birgisson H, Pahlman L, Cedermark B, Glimelius B, Gunnarsson U. Swedish rectal cancer trial: long lasting benefits from radiotherapy on survival and local recurrence rate. J Clin Oncol. 2005;23(24):5644–50. Chu QD, Zhou M, Medeiros K, Peddi P. Positive surgical margins contribute to the survival paradox between patients with stage IIB/C (T4N0) and stage IIIA (T1-2N1, T1N2a) colon cancer. Surgery. 2016;160(5):1333–43. Ahn HJ, Kim SW, Lee SW, Lee SW, Lim CH, Kim JS, Cho YK, Park JM, Lee IS, Choi MG. Long-term outcomes of palliation for unresectable colorectal cancer obstruction in patients with good performance status: endoscopic stent versus surgery. Surg Endosc. 2016;30(11):4765–75. Yamanaka T, Oki E, Yamazaki K, Yamaguchi K, Muro K, Uetake H, Sato T, Nishina T, Ikeda M, Kato T, et al. 12-gene recurrence score assay stratifies the recurrence risk in stage II/III colon cancer with surgery alone: the SUNRISE study. J Clin Oncol. 2016;34(24):2906–13. Jin M, Roth R, Rock JB, Washington MK, Lehman A, Frankel WL. The impact of tumor deposits on colonic adenocarcinoma AJCC TNM staging and outcome. Am J Surg Pathol. 2015;39(1):109–15. Nagtegaal ID, Tot T, Jayne DG, McShane P, Nihlberg A, Marshall HC, Pahlman L, Brown JM, Guillou PJ, Quirke P. Lymph nodes, tumor deposits, and TNM: are we getting better? J Clin Oncol. 2011;29(18):2487–92. Mayo E, Llanos AA, Yi X, Duan SZ, Zhang L. Prognostic value of tumor deposit and Perineural invasion status in colorectal cancer patients: a SEER-based population study. Histopathology. 2016;69(2):230–8. Galon J, Pages F, Marincola FM, Angell HK, Thurin M, Lugli A, Zlobec I, Berger A, Bifulco C, Botti G, et al. Cancer classification using the Immunoscore: a worldwide task force. J Transl Med. 2012;10:205. Andre T, de Gramont A, Vernerey D, Chibaudel B, Bonnetain F, Tijeras-Raballand A, Scriva A, Hickish T, Tabernero J, Van Laethem JL, et al. Adjuvant fluorouracil, Leucovorin, and Oxaliplatin in stage II to III colon cancer: updated 10-year survival and outcomes according to BRAF mutation and mismatch repair status of the MOSAIC study. J Clin Oncol. 2015;33(35):4176–87. Phipps AI, Limburg PJ, Baron JA, Burnett-Hartman AN, Weisenberger DJ, Laird PW, Sinicrope FA, Rosty C, Buchanan DD, Potter JD, et al. Association between molecular subtypes of colorectal cancer and patient survival. Gastroenterology. 2015;148(1):77–87. e72 The study was partly supported by the National Natural Science Foundation of China (No. 81672916 and 81301890), Fund of Public Welfare in Health Industry of China (No. 
201402015), Key projects in the National Science and Technology Pillar Program during the Twelfth Five-year Plan Period (2014BAI09B07) and Traditional Chinese Medicine of Zhejiang Province (No. 2012ZQ017, No. 2017RC019). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The datasets generated and/or analyzed during the current study are available in the SEER dataset repository. https://seer.cancer.gov/. Preliminary results of this study were submitted to the ASCO-GI 2016 Annual Meeting as an abstract (#516) and accepted as a poster (BOARD B7). We confirm that this manuscript has not been published elsewhere and is not under consideration by other journals. Department of surgical oncology, and The Key Laboratory of Cancer Prevention and Intervention, Second Affiliated Hospital, China National Ministry of Education, Zhejiang University School of Medicine, No. 88 Jiefang Road, Hangzhou, Zhejiang Province, 310009, China Kong Xiangxing, Li Jun, Cai Yibo, Hu Yeting, Yang Qi & Ding Kefeng Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310009, China Tian Yu, Chi Shengqiang, Tong Danyang & Li Jingsong Department of Surgery, School of Translational Studies, University of Liverpool, Aintree University Hospital, Liverpool, L9 7AL, UK Graeme Poston Department of medical oncology, and The Key Laboratory of Cancer Prevention and Intervention, Second Affiliated Hospital, China National Ministry of Education, Zhejiang University School of Medicine, No. 88 Jiefang Road, Hangzhou, Zhejiang Province, 310009, China Yuan Ying XXK and JL contributed to this work equally. XXK, JL and KFD were responsible for conception, design and quality control of this study. XXK and JL performed the study selection, data extraction, statistical analyses, and were major contributors in writing the manuscript. JSL, YT, SQC and DYT participated in study selection and statistical analyses. YTH, QY and YY contributed to the classification criteria discussion. GP contributed to the writing of the manuscript. GP, YY and KFD reviewed and edited the manuscript. All authors read and approved the final manuscript. Correspondence to Ding Kefeng. As the data used were from the public SEER dataset, ethics approval and consent to participate can be checked with SEER. The patient data of the SAHZU dataset have been de-identified for this research. Table S1. The Demographic Information of the Patients from SEER Database Enrolled in This Study [online only]. Table S2. The Demographic Information of the Patients from SAHZU Database Enrolled in This Study [online only]. Table S3. The Ranking of Average Nomo-score [online only]. Table S4. Comparison of the Predictive Performance of 2 Staging Systems for Overall Survival of SAHZU dataset [online only]. (DOCX 78 kb) The distribution of 25 TN combinations (left) and the distribution of colon cancer and rectum cancer (right) between the training set and testing set [online only]. (TIFF 2963 kb) Kong, X., Li, J., Cai, Y. et al. A modified TNM staging system for non-metastatic colorectal cancer based on nomogram analysis of SEER database. BMC Cancer 18, 50 (2018).
https://doi.org/10.1186/s12885-017-3796-1 TNM stage Nomogram Prognosis prediction
Circumbinary Planets Orbiting the Rapidly Pulsating Subdwarf B-type Binary NY Vir
DOI: 10.1088/2041-8205/745/2/L23
S.-B. Qian, L.-Y. Zhu, Z.-B. Dai, E. Fernández Lajús, F.-Y. Xiang, J.-J. He
We report here the tentative discovery of a Jovian planet in orbit around the rapidly pulsating subdwarf B-type (sdB-type) eclipsing binary NY Vir. By using newly determined eclipse times together with those collected from the literature, we detect that the observed minus calculated (O-C) curve of NY Vir shows a small-amplitude cyclic variation with a period of 7.9\,years and a semiamplitude of 6.1\,s, while it undergoes a downward parabolic change (revealing a period decrease at a rate of $\dot{P}=-9.2\times{10^{-12}}$). The periodic variation was analyzed for the light-travel time effect via the presence of a third body. The mass of the tertiary companion was determined to be $M_3\sin{i^{\prime}}=2.3(\pm0.3)$\,$M_{Jupiter}$ when a total mass of 0.60\,$M_{\odot}$ for NY Vir is adopted. This suggests that it is most probably a giant circumbinary planet orbiting NY Vir at a distance of about 3.3 astronomical units (AU). Since the rate of period decrease cannot be explained by true angular momentum loss caused by gravitational radiation and/or magnetic braking, the observed downward parabolic change in the O-C diagram may be only a part of a long-period (longer than 15 years) cyclic variation, which may reveal the presence of another Jovian planet ($\sim2.5$\,$M_{Jupiter}$) in the system.
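As a rough plausibility check (added here; not part of the original abstract), the quoted companion mass and orbital separation can be reproduced from the O-C semi-amplitude and period via the standard light-travel-time mass-function relation. The sketch below assumes the 6.1 s semi-amplitude equals $a_{12}\sin i^{\prime}/c$ and that $\sin i^{\prime}\approx 1$ where it enters the total mass; the numbers are only approximate.

```python
# Hypothetical back-of-the-envelope check of the third-body mass quoted above.
# Units: AU, years, solar masses, so Kepler's third law reads a^3 = M_total * P^2.
from scipy.optimize import brentq

LIGHT_TIME_1AU = 499.005       # seconds for light to cross 1 AU
M_JUP = 9.543e-4               # Jupiter mass in solar masses

K_sec = 6.1                    # O-C (light-travel-time) semi-amplitude, s
P_yr = 7.9                     # period of the cyclic O-C variation, yr
M_bin = 0.60                   # adopted total mass of the eclipsing pair, Msun

a12_sini = K_sec / LIGHT_TIME_1AU          # projected barycentric orbit of the binary, AU
f_m = a12_sini**3 / P_yr**2                # mass function, Msun

# Solve f(M3) = (M3 sin i')^3 / (M_bin + M3)^2 for M3 sin i'.
m3 = brentq(lambda m: m**3 / (M_bin + m)**2 - f_m, 1e-6, 1.0)

a3 = ((M_bin + m3) * P_yr**2) ** (1.0 / 3.0)   # Kepler's third law, AU
print(f"M3 sin i' ~ {m3 / M_JUP:.1f} M_Jup, orbital radius ~ {a3:.1f} AU")
# -> roughly 2.3 M_Jup at ~3.3 AU, consistent with the values quoted in the abstract
```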
Why does the internal energy of an ideal gas depend only on its temperature? In my university physics textbook it says: The internal energy of an ideal gas depends only on its temperature, not on its pressure or volume. I know that the only contribution to the internal energy comes from the translational kinetic energy (for a monatomic ideal gas) according to $$U=K_{trans}=\frac{3}{2} nKT$$ So, obviously, the internal energy $(U)$ depends only on the temperature $(T)$ and the number of moles $(n)$ of the gas. But if someone did work on the gas leading to an increase in pressure and decrease in volume, will this affect the temperature accordingly? If no, is it because the increase in pressure cancels out with the decrease in volume, and the temperature remains constant according to $$T=\frac{1}{nR}PV$$ If yes, why did the book say that the internal energy depends only on the temperature, not on the pressure or the volume? thermodynamics energy temperature ideal-gas Karim mohie $\begingroup$ Molecules in an ideal gas are assumed to have zero interaction with each other. Their only energy, therefore, is kinetic, which is directly related to temperature. $\endgroup$ – Thermodynamix $\begingroup$ But the temperature itself depends on the pressure and the volume of the gas according to pv=nRT! $\endgroup$ – Karim mohie $\begingroup$ You can change the temperature however you like, via changing the volume or pressurizing or whatever. The end effect is that this change in temperature changes the kinetic energy of the molecules, hence the internal energy of the system. You cannot, however, change the kinetic energy of ideal gas molecules without changing their temperature. You might say, "Well look! I can change $P$ and it changes $T$, which changes energy!". Yeah, that's just one way of changing $T$, but the ultimate effect was increasing the kinetic energy of the molecules, hence the internal energy of the system. $\endgroup$ $\begingroup$ Consider what internal energy is in the first place; it's the energy contained within the system. Ideal gas molecules only have kinetic energy, and temperature is a measure of this average kinetic energy. Therefore, internal energy of an ideal gas is only determined by the average kinetic energy of molecules (i.e., by $T$). Sure there are many ways to change $T$ (changing $P$, $V$, inputting heat, etc.), but that's not the point. $\endgroup$ $\begingroup$ Your argument is analogous to saying that the kinetic energy of a ball thrown by someone depends on some implicit factor, like the strength of the person throwing it. Yeah, that's "technically" true, but more fundamentally it depends on the force at which it was launched. That force may depend on the strength, flexibility, and a bunch of other factors of the person who threw it. $\endgroup$ For an ideal gas, you have that $U = \frac{3}{2}nRT$ and also $PV = nRT$, which means that you can write $$U = \frac{3PV}{2}$$ if you'd like. It doesn't make sense to say that $U$ is a function of $T$ in no way affected by $P$ and $V$, because (via the ideal gas law) $P,V$, and $T$ are all related to one another. Instead, think of it as the fact that $U$ is determined completely by $T$. If you know $T$, then you know $U$, full stop. In particular, knowing how $T$ changes tells you immediately how $U$ changes. What happens to $U$ during an isothermal process? Well, if $T$ doesn't change, then $U$ doesn't change. That's it. J. Murray
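A quick numerical illustration of the point in the answer above (added here as a sketch; not part of the original thread): for a monatomic ideal gas, computing $U$ from $T$ and computing it from $PV$ give the same number, because $PV=nRT$ ties the three variables together. The specific values below are arbitrary.

```python
# Minimal sketch: for a monatomic ideal gas, U = (3/2) n R T = (3/2) P V.
R = 8.314  # J/(mol K)

n = 1.0        # mol
T = 300.0      # K
V = 0.0248     # m^3, arbitrary choice
P = n * R * T / V   # Pa, fixed by the ideal gas law

U_from_T  = 1.5 * n * R * T
U_from_PV = 1.5 * P * V
print(U_from_T, U_from_PV)   # identical (~3741 J): U is fixed once T (or PV) is fixed

# Isothermal compression: halve V, double P -> PV, and hence U, is unchanged.
V2, P2 = V / 2, 2 * P
print(1.5 * P2 * V2)         # same ~3741 J
```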
For an arbitrary solid, liquid, or gas, it follows from the 1st and 2nd laws of thermodynamics that the internal energy per mole is related to the temperature T and specific molar volume V by $$dU=C_vdT-\left[P-T\left(\frac{\partial P}{\partial T}\right)_V\right]dV$$ For an ideal gas, where PV=RT, the term in brackets is zero. So, in short, if one accepts the 1st and 2nd laws of thermodynamics, the only reason the internal energy of an ideal gas is a function only of temperature is precisely because its equation of state is PV=RT. Chet Miller $\begingroup$ Does your first equation allow for dissipative effects, or is it only for reversible processes? $\endgroup$ – looksquirrel101 $\begingroup$ @looksquirrel101 Internal energy is a function of thermodynamic equilibrium state, and it is independent of any specific process or process path between equilibrium states. $\endgroup$ – Chet Miller $\begingroup$ But if we solved this equation (PV=nRT) for the temperature, and substituted the value of T in K_{trans}, we would get the internal energy as a function of P and V! U=3KPV/2R. Is this equation valid? $\endgroup$ $\begingroup$ It would be valid exclusively for the case of an ideal gas. Is that a problem for you? In reality, it still would be a function only of T. $\endgroup$ $\begingroup$ @looksquirrel101 Yes and yes. $\endgroup$ The internal energy of an ideal gas is defined as $dU = dQ - p\,dV$: From Joule's expansion experiment, we see that an ideal gas can expand adiabatically with no change in temperature. Therefore, $dQ = 0$ (adiabatic) and $dW = 0$ since no external work was done. We can conclude, since the pressure and volume of the gas have changed in this process but the internal energy remained unchanged, that $U = f(T)$. But if someone did work on the gas leading to an increase in pressure and decrease in volume, will this affect the temperature accordingly? The answer is yes for a reversible adiabatic ($Q=0$) compression. From the first law $$\Delta U=Q-W$$ If $Q=0$, $\Delta U=-W$. And since for an ideal gas $\Delta U=mC_{v}\Delta T$, $$mC_{v}\Delta T=-W$$ Finally, since the work $W$ done by the gas is negative in a compression, this means there will be an increase in temperature, as well as pressure, due to the compression. The answer is no for a reversible isothermal compression, PV = constant, where the heat rejected equals the work done and the change in internal energy is zero, $$\Delta U=Q-W=0$$ For an ideal gas $$\Delta U=mC_{v}\Delta T=0.$$ Pressure goes up when volume decreases and temperature is unaffected (remains constant). As already indicated, the answer is no for an isothermal compression, $Pv$ = constant. From the ideal gas law $$Pv=mRT$$ Therefore $T$ = constant. The internal energy of an ideal gas consists of only kinetic energy. If work is done on the gas, energy is added to the gas. That means the internal kinetic energy has to increase. And since the internal energy of an ideal gas depends only on temperature, that means the temperature has to increase. Bob D In the case of an ideal gas there are no intermolecular forces. Since intermolecular forces are absent, potential energy is also absent in an ideal gas system. We know that the internal energy of a system is the sum of the kinetic energy of the particles and the potential energy. In the case of an ideal gas system, internal energy is equal to kinetic energy. Temperature is the measure of the average kinetic energy of the particles. In the case of an ideal gas, internal energy is purely kinetic.
Then obviously internal energy is a function of temperature alone. Aish
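To make the adiabatic-versus-isothermal point discussed in the answers above concrete, here is a small numerical sketch (added for illustration, with arbitrarily chosen numbers): doing the same 100 J of compression work on one mole of a monatomic ideal gas raises its temperature when no heat can escape, and leaves the temperature unchanged when the compression is isothermal.

```python
# Sketch with assumed numbers: 100 J of work done ON 1 mol of a monatomic ideal gas.
R = 8.314          # J/(mol K)
n = 1.0            # mol
Cv = 1.5 * n * R   # heat capacity at constant volume, J/K

W_on_gas = 100.0   # J

# Reversible adiabatic compression: Q = 0, so dU = W_on_gas = Cv * dT.
dT_adiabatic = W_on_gas / Cv
print(f"adiabatic:  dT = {dT_adiabatic:.1f} K")      # ~8.0 K rise

# Reversible isothermal compression: dU = 0, so heat equal to the work input is rejected.
Q_rejected = W_on_gas
print(f"isothermal: dT = 0 K, heat rejected = {Q_rejected:.0f} J")
```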
SpringerPlus Krasnoselskii-type algorithm for zeros of strongly monotone Lipschitz maps in classical banach spaces C E Chidume1, A U Bello1,2 & B Usman1 SpringerPlus volume 4, Article number: 297 (2015) Cite this article Let \(E=L_p\), \(1<p<\infty \), and \(A:E\rightarrow E^*\) be a strongly monotone and Lipschitz mapping. A Krasnoselskii-type sequence is constructed and proved to converge strongly to the unique solution of \(Au=0\). Furthermore, our technique of proo f is of independent interest. Let \(H\) be a real Hilbert space. An operator \(A:H\rightarrow H\) is called monotone if $$\begin{aligned} \big <Ax-Ay,x-y\big >\ge 0\quad\forall ~x,y\in H, \end{aligned}$$ and is called strongly monotone if there exists \(\lambda \in (0,1)\) such that $$\begin{aligned} \big <Ax-Ay,x-y\big >\ge \lambda \Vert x-y\Vert ^2 \quad\forall x,y\in H. \end{aligned}$$ Interest in monotone operators stems mainly from their usefulness in numerous applications. Consider, for example, the following: Let \(f:H\rightarrow \mathbb {R}\) be a proper and convex function. The subdifferential of \(f\) at \(x\in H\) is defined by $$\begin{aligned} \partial f(x)=\big \{x^*\in H:f(y)-f(x)\ge \big <y-x,x^*\big >\quad\forall ~y\in H\big \}. \end{aligned}$$ It is easy to check that \(\partial f:H\rightarrow 2^H\) is a monotone operator on \(H\), and that \(0\in \partial f(x)\) if and only if \(x\) is a minimizer of \(f\). Setting \(\partial f\equiv A\), it follows that solving the inclusion \(0\in Au\), in this case, is solving for a minimizer of \(f\). Let \(E\) be a real normed space, \(E^*\) its topological dual space. The map \(J:E\rightarrow 2^{E^*}\) defined by $$\begin{aligned} Jx=\big \{x^*\in E^*:\big <x,x^*\big >=\Vert x\Vert .\Vert x^*\Vert ,~\Vert x\Vert =\Vert x^*\Vert \big \} \end{aligned}$$ is called the normalized duality map on \(E\). A map \(A:E\rightarrow E\) is called accretive if for each \(x,y\in E\), there exists \(j(x-y)\in J(x-y)\) such that $$\begin{aligned} \big <Ax-Ay,j(x-y)\big >\ge 0. \end{aligned}$$ \(A\) is called strongly accretive if there exists \(k\in (0,1)\) such that for each \(x,y\in E\), there exists \(j(x-y)\in J(x-y)\) such that $$\begin{aligned} \big <Ax-Ay,j(x-y)\big > \ge k\Vert x-y\Vert ^2. \end{aligned}$$ Several existence theorems have been established for the equation \(Au=0\) when \(A\) is of the monotone-type (see e.g., Deimling (1985; Pascali and Sburian 1978). For approximating a solution of \(Au=0\), assuming existence, where \(A:E\rightarrow E\) is of accretive-type, Browder (1967) defined an operator \(T:E \rightarrow E\) by \(T:=I-A\), where \(I\) is the identity map on \(E\). He called such an operator pseudo-contractive. A map \(T:E\rightarrow E\) is then called pseudo-contractive if $$\begin{aligned} \big <Tx-Ty,x-y\big > \le \Vert x-y\Vert ^2\quad \forall x,y \in E, \end{aligned}$$ and is called strongly pseudo-contractive if there exists \(k\in (0,1)\) such that $$\begin{aligned} \big <Tx-Ty,x-y\big > \le k\Vert x-y\Vert ^2\quad \forall x,y \in E. \end{aligned}$$ It is trivial to observe that zeros of \(A\) corresspond to fixed points of \(T\). For Lipschitz strongly pseudo-contractive maps, Chidume (1987) proved the following theorem. Theorem C1 (Chidume 1987) Let \(E =L_{p}, ~2\le p < \infty \) , and \(K\subset E\) be nonempty closed convex and bounded. Let \(T:K\rightarrow K\) be a strongly pseudocontractive and Lipschitz map. 
For arbitrary \(x_{0}\in K\) , let a sequence \(\{x_{n}\}\) be defined iteratively by \(x_{n+1} = (1-\alpha _{n})x_{n} + \alpha _{n}Tx_{n},~n\ge 0,\) where \(\{\alpha _{n}\} \subset (0,1)\) satisfies the following conditions: \((i)~\sum _{n=1}^{\infty }\alpha _{n}= \infty , ~~(ii)\sum _{n=1}^{\infty }\alpha _{n}^{2} < \infty \) . Then, \(\{x_{n}\}\) converges strongly to the unique fixed point of \(T\). The main tool used in the proof of Theorem C1 is an inequality of Bynum (1976). This theorem signalled the return to extensive research efforts on inequalities in Banach spaces and their applications to iterative methods for solutions of nonlinear equations. Consequently, this theorem of Chidume has been generalized and extended in various directions, leading to flourishing areas of research, for the past thirty years or so, by numerous authors (see e.g., Chidume 1986, 1990, 2002; Chidume and Ali 2007; Chidume and Chidume 2005, 2006; Chidume and Osilike 1999; Deng 1993a, b; Zhou 1997; Zhou and Jia 1996, 1997; Liu 1995, 1997; Qihou 1990, 2002; Weng 1991, 1992; Xiao 1998; Xu 1989, 1991a, b, 1992, 1998; Xu and Roach 1991, 1992; Xu et al. 1995; Zhu 1994 and a host of other authors). Recent monographs emanating from these researches include those by Chidume (2009), Berinde (2007), Goebel and Reich (1984) and William and Shahzad (2014). Unfortunately, the success achieved in using geometric properties developed from the mid 1980ies to early 1990ies in approximating zeros of accretive-type mappings has not carried over to approximating zeros of monotone-type operators in general Banach spaces. The first problem is that since \(A\) maps \(E\) to \(E^{*}\), for \(x_{n}\in E\), \(Ax_{n}\) is in \(E^{*}\). Consequently, a recursion formula containing \(x_{n}\) and \(Ax_{n}\) may not be well defined. Another difficulty is that the normalized duality map which appears in most Banach space inequalities developed, and also appears in the definition of accretive-type mappings, does not appear in the definition of monotone-type mappings in general Banach spaces. This creats very serious technical difficulties. Attemps have been made to overcome the first difficulty by introducing the inverse of the normalized duality mapping in the recursion formulas for approximating zeros of monotone-type mappings. But one major problem with such recursion formulas is that the exact form of the normalized duality map (or its inverse) is not known precisely in any space more general than \(L_p\) spaces, \(1<p<\infty \). Futhermore, the recursion formulas, apart from containing the normalized duality map and its inverse, generally involve computation of subsets and generalized projections, both of which are defined in a way that makes their computation almost impossible. We give some examples of some results obtained using these approximation schemes. Before we do this, however, we need the following definitions. Let \(E\) be a real normed space and let a funtion \(\phi (.,.):X\times X\longrightarrow \mathbb {R}\) be defined by $$\begin{aligned} \phi (x,y)= ||x\Vert ^2-2\big <x,J(y)\big > +||y||^2 \quad\forall ~x, y \in E. \end{aligned}$$ It is easy to see that in Hilbert space, \(\phi (x,y)\) reduces to \(\Vert x-y\Vert ^2\). A function \(\pi _K:E\longrightarrow K\) defined by: \(\pi _K(x)=\bar{x}\) such that \(\bar{x}\) is the solution of $$\begin{aligned} \min \big \{\phi (x,y),y\in K \big \}, \end{aligned}$$ is called a generalized projection map. Now we present the following results. 
In Hilbert space, suppose that a map \(A:K\rightarrow H\) is \(\gamma \)-inverse strongly monotone, i.e., there exists \(\gamma >0\) such that \(\langle Ax-Ay, x-y\rangle \ge \gamma ||Ax-Ay||^{2} ~\forall ~x, y\in H\). Iiduka et al. (2004) studied the following iterative scheme. $$\begin{aligned} \left\{ \begin{array}{lll} x_0&\in &K, choosen ~~arbitrary, \\ y_n&=&P_K(x_n-\alpha _nAx_n);\\ C_n&=&\big \{z\in K: \Vert y_n-z\Vert \le \Vert x_n-z\Vert \big \},\\ Q_n&=&\big \{z\in K:\big <x_n-z,x_0-x_n\big >\ge 0\big \}\\ x_{n+1}&=&P_{C_n\cap Q_n} (x_0), n\ge 1, \end{array} \right. \end{aligned}$$ where \(\{\alpha _n \}\) is a sequence in \([0, 2\gamma ]\). They proved that the sequence \(\{x_n \}\) generated by (1.7) converges strongly to \(P_{VI(K ,A)} (x_0 )\), where \(P_{VI(K ,A)}\) is the metric projection from \(K\) onto \(VI(K , A)\) (see e.g., Iiduka et al. 2004 for definition and explanation of the symbols). Zegeye and Shahzad proved the following result. Theorem 1.1 (Zegeye and Shahzad 2009) Let \(E\) be uniformly smooth and \(2\) -uniformly convex real Banach space with dual \(E^*\) . Let \(A:E\longrightarrow E^*\) be a \(\gamma \) -inverse strongly monotone mapping and \(T:E\longrightarrow E\) be relatively weak nonexpansive mapping with \(A^{-1}(0)\cap F(T)\ne \emptyset .\) Assume that \(0<\alpha _n\le b_0:=\frac{\gamma c^2}{2},\) where \(c\) is the constants from the Lipschitz property of \(J^{-1}\) , then the sequence generated by $$\begin{aligned} \left\{ \begin{array}{lll} x_0&\in&K, choosen ~~arbitrary, \\ y_n&=&J^{-1}(Jx_n-\alpha _nAx_n);\\ z_n&=&Ty_n,\\ H_0&=&\big \{v\in K:\phi (v,z_0)\le \phi (v,y_0)\le \phi (v,x_0)\big \},\\ H_n&=&\big \{v\in H_{n-1}\cap W_{n-1}:\phi (v,z_n)\le \phi (v,y_n)\le \phi (v,x_n)\big \},\\ W_0&=&E,\\ W_n&=&\big \{v\in W_{n-1}\cap H_{n-1}:\big <x_n-v,jx_0-jx_n\big >\ge 0\big \}\\ x_{n+1}&=&\Pi _{H_n\cap W_n} (x_0), n\ge 1, \end{array} \right. \end{aligned}$$ converges strongly to \(\Pi _{F(T)\cap A^{-1}(0)}x_0\) where \(\Pi _{F(T)\cap A^{-1}(0)}\) is the generalised projection from \(E\) onto \(F(T)\cap A^{-1}(0).\) We remark here that although the approximation methods used in the result of Iiduka et al. referred to above, and in Theorem 1.1 yield strong convergence to a solution of the problem under consideration, it is clear that they are not easy to implement. Furthermore, Theorem 1.1 excludes \(L_p\) spaces, \(2<p<\infty \), because these spaces are not \(2\)-uniformly convex. The theorem, however, is applicable in \(L_{p}\) spaces \(1<p<2\). In this paper, we introduce an iterative scheme of Krasnoselskii-type to approximate the unique zero of a strongly monotone Lipschitz mapping in \(L_{p}\) spaces, \(1<p<\infty \). In these spaces, the formula for \(J\) is known precisely (see e.g., Cioranescu 1990; Chidume 2009). The Krasnoselskii sequence, whenever it converges, is known to converge as fast as a geometric progression. Furthermore, our iteration method which will not involve construction of subsets or the use of generalized projection is also of independent interest. In the sequel, we shall need the following results and definitions. Lemma 2.1 (see e.g., Chidume 2009, p. 55) Let \(E=L_{p},\,1<p<2. \) Then, there exists a constant \(c_{p}>0\) such that for all x, y in L p the following inequalities hold: $$\begin{aligned} \Vert x+y\Vert ^2\ge & {} \Vert x\Vert ^2+2\langle y,J(x)\rangle +c_p\Vert y\Vert ^2, \end{aligned}$$ $$\begin{aligned} \langle x-y,J(x)-J(y)\rangle \ge (p-1)\Vert x-y\Vert ^2. 
\end{aligned}$$ Let \(E\) be a smooth real Banach space with dual \(E^*\). The function \(\phi :E\times E\rightarrow \mathbb {R}\), defined by, $$\begin{aligned} \phi (x,y)=\Vert x\Vert ^2-2\langle x,Jy\rangle +\Vert y\Vert ^2 \quad \text {for}~x,y\in E, \end{aligned}$$ where \(J\) is the normalized duality mapping from \(E\) into \(2^{E^*}\), introduced by Alber has been studied by Alber (1996), Alber and Guerre-Delabriere (2001), Kamimura and Takahashi (2002), Reich (1996) and a host of other authors. If \(E=H\), a real Hilbert space, then Eq (2.3) reduces to \(\phi (x,y)=\Vert x-y\Vert ^2\) for \(x,y\in H.\) It is obvious from the definition of the function \(\phi \) that $$\begin{aligned} (\Vert x\Vert -\Vert y\Vert )^2\le \phi (x,y)\le (\Vert x\Vert +\Vert y\Vert )^2\quad\text {for}~x,y\in E. \end{aligned}$$ Define \(V:X\times X^*\rightarrow \mathbb {R}\) by $$\begin{aligned} V(x,x^*)=\Vert x\Vert ^2-2\langle x,x^*\rangle +\Vert x^*\Vert ^2. \end{aligned}$$ Then, it is easy to see that $$\begin{aligned} V(x,x^*)=\phi (x,J^{-1}(x^*)) \quad\forall ~x\in ~X,~x^*\in ~X^*. \end{aligned}$$ Corollary 2.2 Let \(E=L_p\), \(1<p\le 2\) . Then \(J^{-1}\) is Lipschitz, i.e., there exists \(L_1>0\) such that for all \(u,v\in E^*\) , the following inequality holds: $$\begin{aligned} \Vert J^{-1}(u)-J^{-1}(v)\Vert \le L_1\Vert u-v\Vert . \end{aligned}$$ This follows from inequality (2.2). \(\square \) For \(L_p,~2\le p<\infty \), we have the following lemma. (Alber and Ryazantseva 2006, p. 48) Let \(X=L_p,~p\ge 2\) . Then, the inverse of the normalized duality map \(J^{-1}:X^*\rightarrow X\) is Hölder continuous on balls. i.e., \(\forall ~ u,v\in X^*\) such that \(\Vert u\Vert \le R\) and \(\Vert v\Vert \le R\) , then $$\begin{aligned} \Vert J^{-1}(u)-J^{-1}(v)\Vert \le m_p\Vert u-v\Vert ^\frac{1}{p-1}, \end{aligned}$$ where \(m_p:=(2^{p+1}Lpc^p_2)^{\frac{1}{p-1}}>0\) , for some constant \(c_2>0.\) This follows from the following inequality: $$\begin{aligned} \langle Jx-Jy,x-y\rangle \ge \frac{\Vert x-y\Vert ^p}{2^{p+1}Lpc^p_2},~~~c_2=2\max \{1,R\}. \end{aligned}$$ (see e.g., Alber and Ryanzantseva 2006, p. 48). \(\square \) (Alber 1996) Let \(X\) be a reflexive striclty convex and smooth Banach space with \(X^*\) as its dual. Then, $$\begin{aligned} V(x,x^*)+2\langle J^{-1}x^*-x,y^*\rangle \le V(x,x^*+y^*) \end{aligned}$$ for all \(x\in X\) and \(x^*,y^*\in X^{*}.\) Definition 2.5 An operator \(T:X\rightarrow X^*\) is called \(\psi \)-strongly monotone if there exists a continuous, strictly increasing function \(\psi :\mathbb {R}\rightarrow \mathbb {R}\) with \(\psi (0)=0\) such that $$\begin{aligned} \big <Tx-Ty,x-y\big >\ge \psi (\Vert x-y\Vert )\Vert x-y\Vert \quad \forall ~x,y\in D(T). \end{aligned}$$ Let \(X\) and \(Y\) be Banach spaces with \(X^*\) and \(Y^*\) as their respective duals. An operator \(A:D(A)\subset X\rightarrow Y^*\) is called hemicontinuous at \(x_0\in D(A)\) if \(x_0+t_ny\in D(A)\), $$\begin{aligned} for ~y\in X~and~t_n\rightarrow 0_{+}~ \implies ~A(x_0+t_ny)\mathop {\rightarrow }\limits ^{w^*} Ax_0. \end{aligned}$$ Clearly, every continuous map is hemicontinuous. Let \(T:X\rightarrow X^*\) be a hemicontinuous \(\psi \) -strongly monotone operator with \(D(T)=X\) . Then, \(R(T)=X^*.\) See chapter III, page \(48\) of Pascali and Sburian (1978). \(\square \) Convergence in \(L_p\) spaces, \(1<p\le 2\). In the sequel, \(k\) is the strong monotonicity constant of \(A\) and \(L>0\) is its Lipschitz constant, and \(\delta :=\frac{k}{2(L_1+1)(L+1)^2}\). Let \(E=L_p,~1<p\le 2\) . 
Let \(A:E\rightarrow E^{*}\) be a strongly monotone and Lipschitz map. For \(x_0\in E\) arbitrary, let the sequence \(\{x_n\}\) be defined by: $$\begin{aligned} x_{n+1} = J^{-1}(Jx_n - \lambda Ax_n), \quad n\ge 0, \end{aligned}$$ where \(\lambda \in \Big (0,\delta \Big )\) . Then, the sequence \(\{x_n\}\) converges strongly to \(x^*\in A^{-1}(0)\) and \(x^*\) is unique. Let \(\psi (t)=kt\) in inequality (2.11). By Lemma 2.7, \(A^{-1}(0)\ne \emptyset \). Let \(x^*\in A^{-1}(0)\). Using the definition of \(x_{n+1}\) we compute as follows: $$\begin{aligned} \phi (x^*,x_{n+1})&=\phi (x^*,J^{-1}(Jx_n-\lambda Ax_n))\\&= V(x^*,Jx_n-\lambda Ax_n) \end{aligned}$$ Applying Lemma 2.4, we have $$\begin{aligned} \phi (x^*,x_{n+1})&=V(x^*,Jx_n-\lambda Ax_n)\\&\le V(x^*,Jx_n)-2\lambda \langle J^{-1}(Jx_n-\lambda Ax_n)-x^*,Ax_n-Ax^*\rangle \\&= \phi (x^*,x_n)-2\lambda \langle x_n-x^*,Ax_n-Ax^*\rangle \\&\quad+ 2\lambda \langle x_n-x^*,Ax_n-Ax^*\rangle \\& \quad- 2\lambda \langle J^{-1}(Jx_n-\lambda Ax_n)-x^*,Ax_n-Ax^*\rangle \\&= \phi (x^*,x_n)-2\lambda \langle x_n-x^*,Ax_n-Ax^*\rangle \\&\quad - 2\lambda \langle J^{-1}(Jx_n-\lambda Ax_n)-J^{-1}(Jx_n),Ax_n-Ax^*\rangle . \end{aligned}$$ Using the strong monotonocity of \(A\), Lipschitz property of \(j^{-1}\) and the Lipschitz property of \(A\), we have that: $$\begin{aligned} \phi (x^*,x_{n+1})&\le \phi (x^*,x_n)-2\lambda k\Vert x_n-x^*\Vert ^2\\&\quad+ 2\lambda \Vert J^{-1}(Jx_n-\lambda Ax_n)-J^{-1}(Jx_n)\Vert \Vert Ax_n-Ax^*\Vert \\&\le \phi (x^*,x_n)-2\lambda k\big \Vert x_n-x^*\big \Vert ^2 +2\lambda ^2L_1L^2\big \Vert x_n-x^*\big \Vert ^2\\&\le \phi (x^*,x_n)-\lambda k\Vert x_n-x^*\Vert ^2. \end{aligned}$$ Thus, \(\phi (x^*,x_n)\) converges, since it is monotone decreasing and bounded below by zero. Consequently, $$\begin{aligned} \lambda k\Vert x_n-x^*\Vert ^2\le \phi (x^*,x_n)-\phi (x^*,x_{n+1})\rightarrow 0, \quad as~n\rightarrow \infty . \end{aligned}$$ This yields \(x_n\rightarrow x^*~~as~~ n\rightarrow \infty .\) Suppose there exists \(y^*\in A^{-1}(0)\), \(y^*\ne x^*\). Then, substituting \(x^*\) by \(y^*\) in the above argument, we obtain that \(x_n\rightarrow y^*\) as \(n\rightarrow \infty \). By uniqueness of limit \(x^*=y^*\). So, \(x^*\) is unique. completing the proof. \(\square \) Convergence in \(L_p\) spaces, \(2\le p<\infty \). We remark that for \(E=L_p,~2\le p<\infty \), if \(A:E\rightarrow E^*\) satisfies the following conditions: there exists \(k\in (0,1)\) such that $$\begin{aligned} \big <Ax-Ay,x-y\big >\ge k\Vert x-y\Vert ^{\frac{p}{p-1}} \quad \forall ~ x,y\in E, \end{aligned}$$ and \(A^{-1}(0)\ne \emptyset \), then the Krasnoselskii-type sequence (4.1) converges strongly to the unique solution of \(Au=0\). In fact, we prove the following theorem. In the following theorem, \(\delta _p:=\Big (\frac{k}{2m_pL^{\frac{p}{p-1}}}\Big )^{p-1}\). Let \(X=L_p,~2\le p<\infty \) . Let \(A:X\rightarrow X^*\) be a Lipschitz map. Assume that there exists a constant \(k\in (0,1)\) such that \(A\) satisfies the following condition: $$\begin{aligned} \big <Ax-Ay,x-y\big >\ge k\Vert x-y\Vert ^{\frac{p}{p-1}}, \end{aligned}$$ and that \(A^{-1}(0)\ne \emptyset .\) For arbitrary \(x_0\in X\) , define the sequence \(\{x_n\}\) iteratively by: $$\begin{aligned} x_{n+1}=J^{-1}(Jx_n-\lambda Ax_n), \quad n\ge 0, \end{aligned}$$ where \(\lambda \in (0,\delta _p)\) . Then, the sequence \(\{x_n\}\) converges strongly to the unique solution of the equation \(Ax=0.\) We first prove that \(\{x_n\}\) is bounded. This proof is by induction. 
Let \(x^*\in A^{-1}(0)\). Then, there exists \(r>0\) such that \(\phi (x^*,x_1)\le r\). By construction, \(\phi (x^*,x_1)\le r\). Suppose that \(\phi (x^*,x_n)\le r\), for some \(n\ge 1\). We prove that \(\phi (x^*,x_{n+1})\le r\). Using Eq (2.6) and inequality (2.10), we have: $$\begin{aligned} \phi (x^*,x_{n+1})&= \phi (x^*,J^{-1}(Jx_n-\lambda Ax_n)) = V(x^*,Jx_n-\lambda Ax_n)\\&\le V(x^*,Jx_n)-2\langle J^{-1}(Jx_n-\lambda Ax_n)-x^*,\lambda Ax_n\rangle \\&= V(x^*,Jx_n)-2\lambda \langle x_n-x^*,Ax_n-Ax^*\rangle \\&\quad +2\lambda \langle J^{-1}(Jx_n-\lambda Ax_n)-J^{-1}(Jx_n),Ax_n-Ax^*\rangle .\\&\le \phi (x^*,x_n)-2\lambda \langle x_n-x^*,Ax_n-Ax^*\rangle \\& \quad +2\lambda \Vert J^{-1}(Jx_n-\lambda Ax_n)-J^{-1}(Jx_n)\Vert \Vert Ax_n-Ax^*\Vert . \end{aligned}$$ Using condition (5.2) on \(A\) and inequality (2.8), we obtain: $$\begin{aligned} \phi (x^*,x_{n+1})&\le\phi (x^*,x_n)-2k\lambda \Vert x_n-x^*\Vert ^{\frac{p}{p-1}}+2\lambda \lambda ^{\frac{1}{p-1}}m_p\Vert Ax_n\Vert ^{\frac{1}{p-1}}\Vert Ax_n-Ax^*\Vert \\&\le \phi (x^*,x_n)-2k\lambda \Vert x_n-x^*\Vert ^{\frac{p}{p-1}}+2\lambda \lambda ^{\frac{1}{p-1}}m_p\Vert Ax_n-Ax^*\Vert ^{\frac{p}{p-1}}.\\&\le\phi (x^*,x_n)-2k\lambda \Vert x_n-x^*\Vert ^{\frac{p}{p-1}}+2\lambda \lambda ^{\frac{1}{p-1}}m_pL^{\frac{p}{p-1}}\Vert x_n-x^*\Vert ^{\frac{p}{p-1}}\\&\le \phi (x^*,x_n)-k\lambda \Vert x_n-x^*\Vert ^{\frac{p}{p-1}}\\&\le r. \end{aligned}$$ Hence, by induction, \(\{x_n\}\) is bounded. We now prove that \(\{x_n\}\) converges strongly to \(x^*\in A^{-1}(0)\). Let \(x^*\in A^{-1}(0)\). From the same computation as above, we have that: $$\begin{aligned} \phi (x^*,x_{n+1})\le & {} \phi (x^*,x_n)-\lambda k\Vert x_n-x^*\Vert ^{\frac{p}{p-1}}, \end{aligned}$$ which implies \(\phi (x^*,x_n)\) is decreasing and bounded below by zero, so the limit of \(\phi (x^*,x_n)\) exists. Therefore, $$\begin{aligned} 0\le \lim \Big (\lambda k\Vert x_n-x^*\Vert ^{\frac{p}{p-1}}\Big )\le \lim \Big (\phi (x^*,x_n)-\phi (x^*,x_{n+1})\Big )=0. \end{aligned}$$ Hence, \(x_n\rightarrow x^*\) as \(n\rightarrow \infty \). Uniqueness follows as in the proof of Theorem 4.1. \(\square \) Open Question If \(E=L_p,~2\le p<\infty \), attempts to obtain strong convergence of the Krasnoselskii-type sequence defined for \(x_0\in E\), by: $$\begin{aligned} x_{n+1}=J^{-1}(J(x_n)-\lambda Ax_n), \quad n\ge 0, \lambda \in (0,1) \end{aligned}$$ to a solution of the equation \(Au=0\), where \(A\) is strongly monotone and Lipschitz, have not yielded any positive result. It is, therefore, of interest to find out if a Krasnoselskii-type sequence will converge strongly to a solution of \(Au=0\) in this space. Alber YI (1996) Metric and generalized projection operators in Banach spaces: properties and applications. In: Kartsatos AG (ed) Theory and appli cations of nonlinear operators of accretive and monotone type. Marcel Dekker, New York, pp. 15–50 Alber YI, Guerre-Delabriere S (2001) On the projection methods for fixed point problems. Analysis (Munich). 21(1):17–39 Alber YI, Ryazantseva I (2006) Nonlinear ill posed problems of monotone type, Springer, London Berinde V (2007) Iterative approximation of fixed points, lecture notes in mathematics, Springer, London Browder FE (1967) Nonlinear mappings of nonexpansive and accretive type in Banach spaces. Bull Am Math Soc 73:875–882 Bynum WL (1976) Weak parallelogram laws for Banach spaces. Can Math Bull 19(3):269–275 Chidume CE (1986) An approximation method for monotone Lipschitzian operators in Hilbert-spaces. 
J Aust Math Soc Ser Pure Math Stat 41:59–63 Chidume CE (1987) Iterative approximation of fixed points of Lipschitzian strictly pseudocontractive mappings. Proc Am Math Soc 99(2):283–288 Chidume CE (1990) Iterative solution of nonlinear equations of the monotone type in Banach spaces. Bull Aust Math Soc 42:21–31 Chidume CE, Osilike MO (1999) Iterative solutions of nonlinear accretive operator equations in arbitrary Banach spaces. Nonlinear Anal Theo Methods Appl 36:863–872 Chidume CE (2002) Convergence theorems for asymptotically pseudocontractive mappings. Nonlinear Anal Theo Methods Appl 49:1–11 Chidume CE, Chidume CO (2005) Convergence theorems for fixed points of uniformly continuous generalized Phi-hemi-contractive mappings. J Math Anal Appl 303:545–554 Chidume CE, Chidume CO (2006) Convergence theorem for zeros of generalized Phi-quasi-accretive operators. Proc Am Math Soc 134:243–251 Chidume CE, Ali B (2007) Approximation of common fixed points for finite families of nonself asymptotically nonexpansive mappings in Banach spaces. J Math Anal Appl 326:960–973 Chidume CE (2009) Geometric properties of Banach spaces and nonlinear iterations, lectures notes in mathematics. vol 1965, Springer, London Cioranescu I (1990) Geometry of Banach spaces, duality mappings and nonlinear problems. vol 62, Kluwer, Dordrecht Deng L (1993) On Chidume's open question. J Math Appl 174(2):441–449 Deng L (1993) An iterative process for nonlinear Lipschitz strongly accretive mappings in uniformly convex and uniformly smooth Banach spaces. Acta Appl Math 32(2):183–196 Diemling K (1985) Nonlinear functional analysis, Springer, New York Goebel K, Reich S (1984) Uniform convexity, hyperbolic geometry, and nonexpansive mappings. Monographs and textbooks in pure and applied mathematics. vol. 83, Marcel Dekker inc., New York Iiduka H, Takahashi W, Toyoda M (2004) Approximation of solutions of variational inequalities for monotone mappings. Panamer Math J 14:49–61 Kamimura S, Takahashi W (2002) Strong convergence of a proximal-type algorithm in a Banach space. SIAMJ Optim 13(3):938–945 Liu L (1997) Approximation of fixed points of a strictly pseudocontractive mapping. Proc Am Math Soc 125(5):1363–1366 Liu L (1995) Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces. J Math Anal Appl 194(1):114–125 Pascali D, Sburian S (1978) Nonlinear mappings of monotone type. Editura Academia Bucaresti, Romania Qihou L (2002) Iterative sequences for asymptotically quasi-nonexpansive mapping with an error member of uniformly convex Banach space. J Math Anal Appl 266:468–471 Qihou L (1990) The convergence theorems of the sequence of Ishikawa iterates for hemi-contractive mapping. J Math Anal Appl 148:55–62 Reich S (1996) A weak convergence theorem for the alternating methods with Bergman distance. In: Kartsatos AG (ed) Theory and applications of nonlinear operators of accretive and monotone type, in lecture notes in pure and Appl Math, vol 178 Dekker, New York, pp 313–318 Weng XL (1991) Fixed point iteration for local striclty pseudocontractive mappings. Proc Am Math Soc 113(3):727–731 Weng XL (1992) Iterative construction of fixed points of a dissipative type operator. Tamkang J Math 23:205–215 William K, Shahzad N (2015) Fixed point theory in distance spaces, Springer, New York Xiao R (1998) Chidume's open problems and fixed point theorems. Xichuan Daxue Xuebao 35(4):505–508 Xu ZB (1989) Characteristic inequalities of \(L_p\) spaces and their applications. 
Acta Math Sinica 32(2):209–218 Xu ZB (1992) A note on the Ishikawa iteration schemes. J Math Anal Appl 167:582–587 Xu ZB, Roach GF (1991) Characteristic inequalities for uniformly convex and uniformly smooth Banach space. J Math Anal Appl 157:189–210 Xu ZB, Roach GF (1992) A necessary and sufficient condition for convergence for of a steepest descent approximation to accretive operator equations. J Math Anal Appl 167:340–354 Xu ZB, Jiang YL, Roach GF (1995) A further necessary and sufficient condition for strong convergence of nonlinear contraction semigroups and of iteration methods for accretive operators in Banach spaces. Proc Edinburgh Math Soc 38(2):1–12 Xu HK (1991) Inequalities in Banach spaces with applications. Nonlinear Anal 16(12):1127–1138 Xu Y (1991) Existence and convergence for fixed points of mappings of the asymptotically nonexpansive type. Nonlinear Anal 16:1139–1146 Xu Y (1998) Ishikawa and Mann iterative processes with errors for nonlinear strongly accretive operator equations. J Math Anal Appl 224:91–101 Zegeye H, Shahzad N (2009) Strong convegence theorems for monotone mappings and relatively weak nonexpansive mappings. Nonlinear Anal 70:2707–2716 Zhou H (1997) Iterative solutions of nonlinear equations involving strongly accretive operators without the Lipschitz assumption. J Math Anal Appl 213(1):296–307 Zhou H, Jia Y (1996) Approximating the zeros of accretive operators by the Ishikawa iteration process. Abstr Appl Anal 1(2):153–167 Zhou H, Jia Y (1997) Approximation of fixed points of strongly pseudocontractive maps without Lipschitz assumption. Proc Am Math Soc 125(6):1705–1709 Zhu L (1994) Iteration solution of nonlinear equations involving m-accretive operators in Banach spaces. J Math Anal Appl 188:410–415 The authors, C.E. Chidume, A.U. Bello and B. Usman all contributed in solving the problem. Moreover, A.U. Bello and B. Usman typed the manuscript, where as the corresponding author, C.E. Chidume, also typed and made some corrections. All the authors read and approve the final manuscript. Compliance with ethical guidelines Competing interests The authors declare that they have no competing interests. African University of Science and Technology, Abuja, Nigeria C E Chidume, A U Bello & B Usman Federal University, Dutsin-Ma, Dutsin-Ma, Katsina State, Nigeria A U Bello C E Chidume B Usman Correspondence to C E Chidume. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Chidume, C.E., Bello, A.U. & Usman, B. Krasnoselskii-type algorithm for zeros of strongly monotone Lipschitz maps in classical banach spaces. SpringerPlus 4, 297 (2015). https://doi.org/10.1186/s40064-015-1044-1 Strongly monotone Lipschitz Hölder continiuty
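As a purely illustrative appendix to the above (not part of the published article and not the authors' implementation), the following sketch runs the Krasnoselskii-type iteration $x_{n+1}=J^{-1}(Jx_n-\lambda Ax_n)$ of Theorems 4.1 and 5.1 in $\mathbb{R}^3$ equipped with the $\ell_p$ norm, as a finite-dimensional stand-in for $L_p$. It uses the explicit formula for the normalized duality map in $\ell_p$ and an assumed affine strongly monotone Lipschitz map $A(x)=Mx-b$; the matrix, vector and step size are arbitrary choices.

```python
import numpy as np

# Toy sketch of the iteration x_{n+1} = J^{-1}(J x_n - lam * A x_n) from Theorems 4.1/5.1,
# run in R^3 with the l_p norm (p = 1.5) standing in for L_p.
p = 1.5
q = p / (p - 1.0)     # conjugate exponent; (l_p)* is identified with l_q

def duality_map(x, r):
    # Normalized duality map of l_r:  <x, Jx> = ||x||_r^2  and  ||Jx||_{r'} = ||x||_r.
    nrm = np.linalg.norm(x, ord=r)
    if nrm == 0.0:
        return np.zeros_like(x)
    return nrm ** (2.0 - r) * np.abs(x) ** (r - 1.0) * np.sign(x)

M = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])        # symmetric positive definite
b = np.array([1.0, -2.0, 0.5])
A = lambda x: M @ x - b                # assumed strongly monotone, Lipschitz map
x_star = np.linalg.solve(M, b)         # its unique zero, for comparison

lam = 0.01                             # small step, playing the role of lambda in (0, delta)
x = np.zeros(3)
for _ in range(20000):
    # J^{-1} on (l_p)* = l_q is the duality map of l_q
    x = duality_map(duality_map(x, p) - lam * A(x), q)

print(np.linalg.norm(x - x_star, ord=p))   # should be essentially 0
```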
Time–frequency texture descriptors of EEG signals for efficient detection of epileptic seizure Volume 3 Supplement 2 Special Issue: EEG Analysis Techniques and Applications Abdulkadir Şengür1, Yanhui Guo2 & Yaman Akbulut1 Brain Informatics volume 3, pages 101–108 (2016)Cite this article Detection of epileptic seizure in electroencephalogram (EEG) signals is a challenging task and requires highly skilled neurophysiologists. Therefore, computer-aided detection helps neurophysiologist in interpreting the EEG. In this paper, texture representation of the time–frequency (t–f) image-based epileptic seizure detection is proposed. More specifically, we propose texture descriptor-based features to discriminate normal and epileptic seizure in t–f domain. To this end, three popular texture descriptors are employed, namely gray-level co-occurrence matrix (GLCM), texture feature coding method (TFCM), and local binary pattern (LBP). The features that are obtained on the GLCM are contrast, correlation, energy, and homogeneity. Moreover, in the TFCM method, several statistical features are calculated. In addition, for the LBP, the histogram is used as a feature. In the classification stage, a support vector machine classifier is employed. We evaluate our proposal with extensive experiments. According to the evaluated terms, our method produces successful results. 100 % accuracy is obtained with LIBLINEAR. We also compare our method with other published methods and the results show the superiority of our proposed method. Epileptic seizure is a physiopathological disease that is known as a neurological disorder caused by the transient and unexpected electrical disturbance of the brain. Electroencephalogram (EEG), which is a common method for detection of the epileptic seizure, constructs a representative signal containing information about the brain's electrical activity. Interpretation of EEG signals for manual detection of the epileptic seizure is not an easy task and requires high skills of neurophysiologists. Moreover, manual interpretation of the long recordings is tedious and time consuming. Therefore, an automated system to help neurophysiologists in detecting epileptic seizures is in great demand. Such an automated system is composed of two main parts [1–4]: EEG feature extraction and classification. While EEG feature extraction enables to characterize EEG signals, classification finds different categories in the input EEG signals. Detection of epileptic seizures on EEG signals is a popular research topic and many methods have been proposed [2–7]. In these methods, the representative EEG features were extracted either in the time domain [2–4] or frequency domain [8]. The features from time domain are generally extracted from the amplitude or rhythmicity of EEG signals. The frequency domain features are generally computed on the spectrum of EEG signals. There are also several methods based on the time–frequency (t–f) representation [9–11]. The t–f image-based features are used to describe the non-stationary nature of the EEG signals. Instantaneous frequency and sub-band energies are other important t–f domain features for the EEG characterization. In addition, multiscale representations of EEG signals represent rich features. For instance, the statistics of the wavelet coefficients and their relative energies are useful features for EEG classification [6]. Recently, several novel t–f features were proposed based on t–f image descriptors for the automatic detection of epileptic seizure in EEG data. 
In [10], the authors described visually the normal and epileptic seizure patterns in the t–f domain. The proposed features are based on Haralick's texture features calculated from the t–f representation of EEG signals. In [11], the authors proposed an approach for automatic detection of epileptic seizures using combined Hilbert–Huang transform and support vector machine (SVM) on the t–f image. Several statistical features such as mean, variance, skewness, and kurtosis of pixel intensity in the histogram of segmented gray-scale t–f image are considered. Other t–f image-based features were used to represent the EEG signals in [9]. The authors used a smoothed pseudo Wigner–Ville distribution to obtain the t–f images. The obtained t–f images were then segmented on the frequency bands of the EEG signals' rhythms. These features from the histogram of segmented t–f images were then used for a multiclass least squares SVM. In [12], the authors combined signal analysis and image processing for classifying EEG abnormalities. The combination of signal-based features and t–f image-related features was employed to merging key instantaneous frequency descriptors. The proposed method was used to recognize the EEG abnormalities in both adults and newborns. Our main motivation arises due to the following conclusions: First of all, we think that the t–f representation of healthy and epileptic seizure EEG signals contain different motifs. Especially, when the frequency bands of the EEG signals' rhythms are considered, the justification of our motivation becomes more convincing. Because, each rhythm region of the t–f image for healthy and epileptic seizure has considerably discriminatory texture. These motifs can successfully be modeled by various texture descriptors for further analysis. To this end, texture encoders such as GLCM, TFCM, and LBP are considered to re-shape the t–f images and a number of statistical quantities are calculated. The considered texture encoders are well known in the image processing and pattern recognition communities with numerous advantageous. These methods are quite efficient in characterizing various texture motifs. Their implementations are easy and complexities are quite low. In this paper, texture representation of the t–f image-based epileptic seizure detection is proposed. More specifically, we propose texture descriptor-based features to discriminate normal and epileptic seizure in the t–f domain. The features that are obtained on the GLCM are contrast, correlation, energy, and homogeneity. Moreover, in TFCM method, the calculated features are mean convergence, code variance, code entropy, uniformity, first-order difference moment, first-order inverse difference moment, second-order difference moment, second-order inverse difference, and four energy distribution values from the co-occurrence matrix. In addition, for the LBP, the histogram is used as the feature. In the classification stage, a support vector machine (SVM) classifier is considered. We evaluate our proposal with extensive experiments. According to the evaluated terms, our method produces successful results. 100 % accuracy is obtained with LIBLINEAR. We also compare our method with other existing methods, and the results show the superiority of our proposal. In [10], the authors used Haralick's texture features to classify the healthy and epileptic EEG signals. 
Our work is different from the previous one such that we search each frequency rhythms and concatenate the features of each rhythm for constructing robust descriptors. Moreover, to the best of our knowledge, TFCM and LBP methods are firstly considered for EEG signal classification in this work and achieved better results in our paper. The rest of the paper is organized as follows: in Sect. 2, the methodology and the related theories are given. In Sect. 3, the experimental works and the obtained results are presented. We conclude the paper in Sect. 4. In this work, t–f representation, texture descriptors, and SVM-based methodology are proposed for the classification of EEG signals as healthy and epileptic seizures. An illustration is given in Fig. 1. As it is observed from Fig. 1, the EEG signals are firstly transformed into t–f domain. The Spectrogram of Short-Time Fourier Transform (STFT) is used in order to obtain the t–f images of EEG signals. The obtained t–f images are then converted into 8-bit gray-scale images and are divided into five sub-images corresponding to the frequency bands of the rhythms. The GLCM, TFCM, and LBP texture descriptors are employed to extract distinctive features for classification purposes. The standard combination of SVM, LIBLINEAR, and Homogenous mapping is investigated for obtaining high-accuracy results in classifying the EEG signals. The proposed method STFT spectrogram The STFT spectrogram is defined as the normalized, squared magnitude of the STFT coefficients [13]. According to a non-mathematical definition, STFT coefficients can be obtained using a sliding window in time domain in order to divide the signal into small parts and then analyze each part with Fourier transform to determine the frequencies. Thus, a time-varying spectrum can be obtained. In a mathematical view, the STFT can be defined as $$X(n,\omega ) = \sum\limits_{m = - \infty }^{\infty } {x[m]w[n - m]e^{ - j\omega n} },$$ where \(x[m]w[n - m]\) is a short-time part of the input signal x[m] at time n. In addition, a discrete STFT is defined as $$X(n,k) = X(n,\omega )\left|\right._{{\omega = \frac{2\pi }{N}k}},$$ where N shows the number of discrete frequencies. Thus, the spectrogram in logarithmic scale is defined as $$S(n,k) = \log |X(n,k)|^{2}.$$ GLCM features GLCM features are commonly used in various image processing applications such as texture segmentation and classification, biomedical image analysis, scene segmentation, etc. [14]. GLCM can be seen as a directional pattern counter with a specific distance δ and angle θ between neighboring image pixel pairs for gray-scale images. This situation is represented in Fig. 2. Angular nearest neighbors In a numerical view, for θ = 0° and δ = 1, the GLCM can be defined as $$M_{\delta ,\theta = 0} (p,q) = \sum\limits_{n = 1}^{N} {\sum\limits_{m = 1}^{K} {\left\{ {\begin{array}{ll} 1 &\quad {{\text{if}}\;I(n,m) = p\,{\text{and}}\,I(n,m + \delta ) = q} \\ 0 &\quad {{\text{otherwise}}} \\ \end{array} } \right.} },$$ where p, q = 0, 1,… L – 1; L is the number of gray scales; N and K are the sizes of the image. After normalizing the GLCM, the contrast, correlation, energy, and homogeneity features are calculated. TFCM features The TFCM translates a gray-scale input image into a texture feature number image via differencing in the image domain followed by successive stages of vector classification [15]. The algorithm firstly calculates the differences along horizontal, vertical, and diagonal connectivity sets. 
Figure 3 shows the related illustrations. Horizontal, vertical, and diagonal connectivity sets The resulting two-element difference vectors are thresholded at a tolerance into quantized two-element vectors whose values are from the set of {−1, 0, 1}, interpreted as negative, no change, and positive difference, respectively. The TFCM maps the individual quantized difference vectors to gray-level class numbers based on the degree of the variation in each vector [15]. Then a mapping procedure is employed for further coding gray-level class numbers. The following mapping is further employed for obtaining final 2-D texture feature number images. After constructing the co-occurrence matrices of texture feature number images, 12-dimensional feature vector is calculated [15]. LBP features Ojala et al. developed an operator called LBP for describing the local textural patterns [16]. This simple but effective operator has been then used as a texture descriptor in many image processing-based applications. The LBP works in a 3 × 3 pixel block and the pixels in this block are thresholded by its center pixel value, multiplied by powers of two and then summed to obtain a label for the center pixel. Figure 4 shows the basic idea of the LBP operator. The center pixel's gray-scale value becomes 19 after applying the LBP procedure. The mathematical illustration of the procedure is as follows: $$LBP(x) = \sum\limits_{i = 1}^{8} {f(G(x_{i} ) - G(x))2^{i - 1} }$$ $$f(t) = \left\{ {\begin{array}{ll} {1,} &\quad {t \ge 0} \\ {0,} &\quad {t < 0} \\ \end{array} } \right.,$$ where x shows the location of the center pixel, x i shows the ith neighboring pixel as shown in Fig. 4, and G(.) is the gray-scale value of a pixel. LBP procedure after the LBP image is constructed; the histogram of the LBP image is used as the feature Experimental work The experiments are conducted on an open source EEG dataset that was recorded in Bonn University [17]. The recorded dataset has five sets denoted as A to E. Each contains 100 single-channel EEG signals, and each one having 4097 samples. In other words, each recorded EEG signal has 23.6 s duration. The datasets A and E are considered. While set A was taken from surface EEG recordings of five healthy volunteers with eyes open and closed, respectively, set E only contains epileptic seizure. Figure 5 shows a typical EEG illustration of both healthy and epileptic seizure. As shown in Fig. 5, the amplitudes of the epileptic EEG signals are higher than those of the normal EEG signals. Illustration of EEG signals, Set E and Set A Moreover, Fig. 6 shows the spectrogram of EEG signals for healthy and epileptic seizure, respectively. By visual inspection, a qualitative discrimination of healthy and epileptic seizure can be seen in Fig. 6. For further processing the t–f images, we convert them into 8-bit gray-scale images. The 8-bit gray-scale t–f images are then divided into five sub-images corresponding to the frequency bands of the rhythms to localize significant structures. The main EEG rhythm on frequency ranges is as follows [9]: Delta: 0–4 Hz. Theta: 4–8 Hz. Alpha: 8–12 Hz. Beta: 12–30 Hz. Gamma: 30–50 Hz. Spectrogram of EEG signal: a healthy and b epileptic seizure In Fig. 7, we show the divided sub-images corresponding to frequency bands of the rhythms. Gray-scale sub-images: a healthy and b epileptic seizure EEG signal. A gamma, B beta, C alpha, D theta, E delta After gray-scale sub-images (Fig. 
7) for healthy and epileptic seizure EEG signal are constructed, the texture descriptors are computed. For computing the GLCM, the distance parameter is set to 1 and the angle parameter value ranges from 0o to 135o with a 45o increment. Thus, 4 GLCMs are obtained and by calculating the contrast, correlation, energy, and homogeneity features, a 16-dimensional feature vector is constructed for each sub-image. Moreover, for obtaining the TFCM features, once each sub-image is converted to a texture feature number, a 12-element feature vector is generated based on these co-occurrence matrices and texture feature number histograms. The tolerance parameter of the TFCM is set to 80. In addition, for LBP, the histogram is computed for each sub-images. Thus, a 256-dimensional feature vector is obtained. Finally, feature vectors that are extracted from each sub-image are concatenated. In this case, three of 80-, 60-, and 1280-dimensional feature vectors are constructed for the GLCM, TFCM, and LBP, respectively. The linear SVM is employed in the classification stage of our proposal [18]. Moreover, the homogeneous mapping is considered to increase the efficiency of the SVM [20, 21]. This mapping procedure enables a compact linear representation of the input dataset. Thus, a very fast linear SVM classifier can be obtained. The VLFeat tool is used for both homogeneous mapping and FV encoding [19]. The VLFeat open source library implements various computer vision algorithms such as Fisher Vector, VLAD, SIFT, MSER, SLIC superpixels, large-scale SVM training, and many others specializing in image understanding and local feature extraction and matching. We also use the LIBLINEAR for further increasing the efficiency of the SVM [20, 21]. LIBLINEAR was developed as an open source library for large-scale linear classification. To evaluate the performance of the proposed scheme, we employ classification accuracy, sensitivity, and specificity. $${\text{Sensitivity}}\;{ = }\;\frac{\text{TP}}{{{\text{TP}}\; + \;{\text{FN}}}}$$ $${\text{Specificity}}\;{ = }\;\frac{\text{TN}}{{{\text{TN}}\; + \;{\text{FP}}}}$$ $${\text{Accuracy}}\; = \;\frac{{{\text{TP}}\;{ + }\;{\text{TN}}}}{{{\text{TP}}\;{ + }\;{\text{TN}}\;{ + }\;{\text{FP}}\;{ + }\;{\text{FN}}}},$$ where TP represents the total number of correctly detected true-positive samples and TN represents the number of correctly detected true-negative samples; FP and FN represent the total number of false-positive and false-negative samples, respectively. The setup parameters of the classifiers are adjusted for obtaining the best performance. For the SVM, we experiment with all kernels and the best result is obtained with a linear kernel. The C parameter is set to 100. L2-regularized L2-loss solver is chosen for LIBLINEAR. In addition, the C parameter for LIBLINEAR is set to 0.07. Chi2 kernel is used for homogeneous mapping. It is worth mentioning that the experimental results are recorded using fivefold cross-validation. The overall performance of the proposed method is tabulated in Table 1. Table 1 Obtained results of GLCM features The results suggest that the best accuracy is obtained with Homogenous Mapping + LIBLINEAR. The classification accuracy is 100 %. In addition, the sensitivity and specificity values are 100 and 100 %, respectively. The SVM yields the worst classification results. 92.5 % accuracy is recorded. Sensitivity and specificity values are 95 and 90 %, respectively. Thus, it is obvious that LIBLINEAR structure greatly improves the performance. 
LIBLINEAR structure is 7 % better than that of SVM. In addition, homogeneous mapping also improves performance. 0.5 % more accurate result is obtained with homogenous mapping than LIBLINEAR structure. Similar experiments are carried out for TFCM features. The related classifier parameters are set as the follows: the SVM kernel is chosen as a polynomial and C is set to 1. L1-regularized L2-loss solver is chosen for LIBLINEAR. In addition, the C parameter for LIBLINEAR is set to 15. The performance results of TFCM features are shown in Table 2. The best accuracy is obtained using Homogenous Mapping + LIBLINEAR. The obtained accuracy is 87 %. The LIBLINEAR and SVM obtain the same classification accuracy. 82 % is tabulated. The other sensitivity and specificity values can be seen in Table 2. The best sensitivity value is obtained with Homogenous Mapping + LIBLINEAR. The worst specificity value is recorded for SVM (79 %). Table 2 Obtained results of TFCM features We conclude our experiments with the LBP features. We adjust the related parameters of the classifiers for obtaining high-performance results. Similar to previous experiments, the intersection kernel is chosen for the SVM. We also experiment with other kernels such as linear, radial basis function, polynomial, and sigmoid. The intersection kernel achieves the highest accuracy. The C parameter is selected as 0.32. L1-regularized L2-loss solver is chosen for LIBLINEAR. In addition, the C parameter for LIBLINEAR is set to 100. The performance results of LBP features are shown in Table 3. The best accuracy is obtained using SVM and LIBLINEAR. The obtained accuracy is 100 % for both classification methods. Actually, this is a surprising result because in the previous two experiments, the Homogenous Mapping + LIBLINEAR structure yields better results than SVM and LIBLINEAR. Homogenous Mapping + LIBLINEAR yields the worst results for LBP features. Table 3 Obtained results of LBP features We also compare our results with other published methods handling the classification problem in the same dataset A and E. The results are shown in Table 4. From Table 4, we can see that accuracy of our proposed method is higher compared with other methods. In this work, t–f representation of EEG signals, texture descriptors, and SVM approach has been used to detect the epileptic seizure. The STFT spectrogram has been considered for discrimination of the epileptic seizure and healthy EEG signals. The obtained t–f images are then divided on the frequency bands of the rhythms. The features are obtained by calculating the histogram of LBP and various statistical features of the GLCM and TFCM for each t–f sub-image. The features are then fed into the classifier. The extensive experiments indicate that the LBP features obtained the best results. The second best results are recorded with GLCM features, and finally TFCM-based features exhibit the worst performance. This situation may be caused because of the dimensionality of the feature vectors. In other words, LBP-based feature vector has the higher dimensionality and TFCM-based feature vector has the lowest. In addition, LBP may better characterize the EEG t–f images than the GLCM and TFCM methods. Siuly Yan Li (2014) A novel statistical algorithm for multiclass EEG signal classification. Eng Appl Artif Intell 34:154–167 Siuly S, Yan L, Peng W (2011) Clustering technique-based least square support vector machine for EEG signal classification. 
Comput Methods Prog Biomed 104(3):358–372 Zhu G, Li Y, Wen P (2014) Epileptic seizure detection in EEG signals using a fast weighted horizontal visibility algorithm. Comput Methods Prog Biomed 115(2):64–75 Siuly, Li Y, Wen P (2010) Analysis and classification of EEG signals using a hybrid clustering technique. In: IEEE/ICME international conference on complex medical engineering (ICME 2010), 13–15 Jul 2010, Gold Coast Güler NF et al (2005) Recurrent neural networks employing Lyapunov exponents for EEG signals classification. Expert Syst Appl 29(3):506–514 Subasi A (2007) EEG signal classification using wavelet feature extraction and a mixture of expert model. Expert Syst Appl 32(4):1084–1093 Halici U, Agi E, Ozgen C, Ulusoy I (2008) Analysis and classification of EEG signals for brain computer interfaces. In: International conference on cognitive neuroscience X Widman G, Schreiber T, Rehberg B, Hoeft A, Elger CE (2000) Quantification of depth of anesthesia by nonlinear time series analysis of brain electrical activity. Phys Rev E 62(4):4898–4903 Bajaj V, Pachori RB (2013) Automatic classification of sleep stages based on the time-frequency image of EEG signals. Comput Methods Prog Biomed 112(3):320–328 Boubchir L, Al-Maadeed S, Bouridane A (2014) Haralick feature extraction from time-frequency images for epileptic seizure detection and classification of EEG data. In: ICM conference, 14–17 Dec 2014, pp 32–35 Fu K, Qu J, Chai Y, Dong Y (2014) Classification of seizure based on the time-frequency image of EEG signals using HHT and SVM. Biomed Signal Process Control 13:15–22 Boashash B, Boubchir L, Azemi G (2011) Time-frequency signal and image processing of non-stationary signals with application to the classification of newborn EEG abnormalities and seizures. In: IEEE ISSPIT 2011, Spain, pp 120–129 Boashash B (2003) Time-frequency signal analysis and processing: a comprehensive reference. Elsevier, Oxford Sengur A, Amin M, Ahmad F, Sevigny P, DiFilippo D (2013) Textural feature based target detection in through-the-wall radar imagery. In: SPIE defense, sensing and security symposium, radar sensor technology XVII conference, Baltimore, 29 April–3 May Liang J, Zhao X, Xu R, Kwan C, Chang C-I (2004) Target detection with texture feature coding method and support vector machines. In: Proceedings of the ICASSP, Montreal, QC, Canada, pp II-713–II-716 Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7):971–987 Andrzejak RG, Lehnertz K, Rieke C, Mormann F, David P, Elger CE (2001) Indications of nonlinear deterministic and finite dimensional structures in time series of brain electrical activity: dependence on recording region and brain state. Phys Rev E 64:061907 Vapnik V (1995) The nature of statistical learning theory. Springer-Verlag, New York http://www.vlfeat.org/. Accessed: 20 May 2014 Fan R-E, Chang K-W, Hsieh C-J, Wang X-R, Lin C-J (2008) LIBLINEAR: a library for large linear classification. J Mach Learn Res 9:1871–1874 Vedaldi A, Zisserman A (2010) Efficient additive kernels via explicit feature maps. In: Proceedings of the IEEE conference on computer vision and pattern recognition Polat K, Günes S (2007) Classification of epileptiform EEG using a hybrid system based on decision tree classifier and fast Fourier transform.
Appl Math Comput 187(2):1017–1026 Wang D, Miao D, Xie C (2011) Best basis-based wavelet packet entropy feature extraction and hierarchical EEG classification for epileptic detection. Expert Syst Appl 38(11):14314–14320 Technology Faculty, Electrical and Electronics Engineering Department, Firat University, Elazig, Turkey: Abdulkadir Şengür & Yaman Akbulut. Department of Computer Science, University of Illinois at Springfield, Springfield, IL, USA: Yanhui Guo. Correspondence to Abdulkadir Şengür. Şengür, A., Guo, Y. & Akbulut, Y. Time–frequency texture descriptors of EEG signals for efficient detection of epileptic seizure. Brain Inf. 3, 101–108 (2016). https://doi.org/10.1007/s40708-015-0029-8 Keywords: EEG signal, Time–frequency image, Texture descriptor, Epileptic seizure detection
Rates of hospitalization and death for all-cause and rotavirus acute gastroenteritis before rotavirus vaccine introduction in Kenya, 2010–2013 Richard Omore ORCID: orcid.org/0000-0003-3702-30301,2, Sammy Khagayi1, Billy Ogwel1, Reuben Onkoba1, John B. Ochieng1, Jane Juma1, Stephen Munga1, Collins Tabu3, Sergon Kibet4, J. Pekka Nuorti2, Frank Odhiambo1, Jason M. Mwenda5, Robert F. Breiman7, Umesh D. Parashar6 & Jacqueline E. Tate6 BMC Infectious Diseases volume 19, Article number: 47 (2019) Cite this article Rotavirus vaccine was introduced into the Kenya immunization program in July 2014. Pre-vaccine disease burden estimates are important for assessing vaccine impact. Children with acute gastroenteritis (AGE) (≥3 loose stools and/or ≥ 1 episode of unexplained vomiting followed by loose stool within a 24-h period), hospitalized in Siaya County Referral Hospital (SCRH) from January 2010 through December 2013 were enrolled. Stool specimens were tested for rotavirus (RV) using an enzyme immunoassay (EIA). Hospitalization rates were calculated using person-years of observation (PYO) from the Health Demographic Surveillance System (HDSS) as a denominator, while adjusting for healthcare utilization at household level and the proportion of stool specimens collected from patients who met the case definition at the surveillance hospital. Mortality rates were calculated using PYO as the denominator and the number of deaths estimated using total deaths in the HDSS, the proportion of deaths attributed to diarrhoea by verbal autopsy (VA) and the percentage of AGE hospitalizations positive for rotavirus (RVAGE). Of 7760 all-cause hospitalizations among children < 5 years of age, 3793 (49%) were included in the analysis. Of these, 21% (805) had AGE; RV was detected in 143 (26%) of 541 stools tested. Among children < 5 years, the estimated hospitalization rates per 100,000 PYO for AGE and RVAGE were 2413 and 429, respectively. Mortality rates associated with AGE and RVAGE were 176 and 45 per 100,000 PYO, respectively. AGE and RVAGE caused substantial health care burden (hospitalizations and deaths) before rotavirus vaccine introduction in Kenya. Rotavirus is the most common cause of vaccine-preventable severe acute gastroenteritis (AGE) among infants and young children worldwide [1, 2]. In 2013, RVAGE was estimated to cause 215,000 global deaths among children < 5 years, of which 2% (~4000) occurred in Kenya alone [3]. In Kenya, RVAGE accounts for 19% (~9000) of annual hospitalizations among children < 5 years [4]. Two RV vaccines, Rotarix® (GlaxoSmithKline) and RotaTeq® (Merck & Co.), are approved and recommended by the World Health Organization (WHO) for global use [5]. Efficacy and effectiveness studies of these vaccines have shown significant reductions in AGE- and RVAGE-associated hospitalizations and deaths among children < 5 years, in both clinical trials and settings where they have been incorporated into national immunization programs [1, 6,7,8,9,10]. Consistent with data from Mexico [6] and Brazil [7, 11], African countries that were early introducers of RV vaccines, including Malawi [10], Ghana [12], and Rwanda [13], have shown remarkable declines in childhood morbidity and mortality associated with AGE and RVAGE. Furthermore, the cost benefit of these vaccines has also been demonstrated [4, 7, 14]. The RV vaccine (Rotarix®) was introduced into the Kenya national immunization program in July 2014. Recent population-based data on pre-vaccine disease rates are not available in Kenya.
However, such data are needed to evaluate the impact of the vaccination program and may provide county and national level governments, as well as regional and global decision makers, with the evidence needed to support investment in these vaccines. We examined baseline rates of AGE- and RVAGE-specific hospitalizations and deaths among children < 5 years in rural western Kenya from January 1, 2010 to December 31, 2013, before RV vaccine introduction in Kenya. Study site Rotavirus surveillance and the Health Demographic Surveillance System (HDSS) platform in our study setting have been detailed elsewhere [15, 16]. In brief, the HDSS site is located in Siaya County in rural western Kenya. The HDSS is a longitudinal study that monitors births, deaths, out- and in-migrations and other demographics of a defined population [16]. Our study was conducted in the Karemo HDSS area at Siaya County Referral Hospital (SCRH), the main regional referral hospital in this setting. Rotavirus surveillance and laboratory methods As part of the Africa-based, World Health Organization (WHO)-coordinated rotavirus (RV) disease surveillance network [17], we conducted hospital-based prospective surveillance for RVAGE within the Kenya Medical Research Institute (KEMRI) operated HDSS area in Karemo [15]. Children aged 0–59 months who were residents of the Karemo HDSS and hospitalized at the in-patient department of SCRH with AGE, defined as ≥3 looser-than-normal stools and/or ≥1 episode of unexplained vomiting followed by loose stool within a 24-h period beginning within the 7 days before seeking healthcare, from January 1, 2010 to December 31, 2013 were eligible for enrolment. Trained health facility recorders approached all eligible children, explained the study and administered a questionnaire on demographics to their caretakers after obtaining informed consent. A study clinician then examined these patients and administered the standardized questionnaire to their parent/caretaker to gather information about symptoms, medical history, laboratory investigations, diagnosis, treatment and outcome of hospitalization. A whole stool specimen was collected from each participant in a plastic diaper, from which at least 2 ml of stool was scooped into a specimen container using a sterile spatula within 48 h of admission, transported on the same day to the enteric laboratory based at the KEMRI-CGHR, and finally tested for rotavirus using a commercial enzyme immunoassay (EIA) (Rotaclone Kit, Meridian Bioscience). A case of RVAGE was defined as an AGE patient with an RV-positive stool specimen. Details of the enrolment, testing and data management have been described previously [15]. In brief, we linked clinic data to laboratory results and to the longitudinal data, including cause of death from verbal autopsy (VA), from the HDSS. During data collection, the electronic questionnaire's built-in checks and controls ensured quality control. The linked data were then uploaded and managed using a Microsoft SQL Server 2008 database. Data were analysed using SAS version 9.4 (SAS Institute, Inc. Cary, North Carolina, USA). Descriptive analysis for AGE and RVAGE The proportion of admissions due to AGE was calculated by dividing the number of AGE cases by the number of all-cause admissions at SCRH among residents of the Karemo HDSS during the study period. The proportion of admissions that were associated with RVAGE was calculated by dividing the number of RV-positive stool samples by the total number of samples collected and tested.
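As a minimal illustration of these descriptive proportions, the sketch below computes the share of all-cause admissions that were AGE and the RV positivity among tested stools, each with an approximate 95% confidence interval. The normal-approximation interval is our assumption for illustration only and may differ from the CI methods used in the study; the counts shown are those reported later in the Results section.

```python
# Minimal sketch of the descriptive proportions described above, with an
# approximate (normal-approximation) 95% confidence interval. This interval
# is an illustrative assumption, not necessarily the method used in the study.
import math

def proportion_with_ci(numerator, denominator, z=1.96):
    p = numerator / denominator
    se = math.sqrt(p * (1 - p) / denominator)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

age_prop = proportion_with_ci(805, 3793)   # AGE among all-cause admissions (Results)
rv_prop = proportion_with_ci(143, 541)     # RV-positive among tested stools (Results)
print("AGE admissions: %.1f%% (95%% CI %.1f-%.1f%%)" % tuple(100 * x for x in age_prop))
print("RV positivity:  %.1f%% (95%% CI %.1f-%.1f%%)" % tuple(100 * x for x in rv_prop))
```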
Positivity rates by month and patient characteristics (age, gender, clinical features and illness outcome) were calculated. These proportions were plotted by month to show seasonality. Analysis of RVAGE, disease severity and risk factors The severity of RVAGE was assessed by using the 20-point Vesikari score [18]. A score of less than 11 was categorized as mild while a score of 11 or more was classified as severe. Bivariate comparisons of laboratory-confirmed RV positivity with patient characteristics and treatment outcomes were evaluated using chi-square tests. Incidence rates of hospitalization and mortality due to AGE and RVAGE We used person-years of observation (PYO) contributed by all children aged less than 5 years who were residents of the Karemo region during the study period as the denominator. As described previously [15], we calculated PYO by totaling person-time for all children aged 0–59 months who met the HDSS residency requirement during the 4-year study period, from 1st January 2010 or date of enrolment (if after) until they exited or lost their HDSS residency status through out-migration or death. The crude hospitalization rates were calculated by dividing the total number of AGE and RVAGE hospitalizations by the PYO contributed by children aged 0–59 months for the period that they met residency criteria for the HDSS. We used two adjustments for the hospitalization rates. First, to account for possible missed AGE cases, we divided the crude rate of AGE and RVAGE by the proportion of all in-patients who met the stool collection criteria, whether a sample was collected or not. The second adjustment accounted for children with AGE or possibly RVAGE who did not reach or attend a sentinel health care facility, as reported from population-based healthcare utilization and attitude surveys (HUAS) for diarrhoea, a separate household survey conducted within the HDSS during the current RVAGE surveillance period [19]. The HUAS revealed that the frequencies of seeking care for moderate-to-severe diarrhoea (MSD) from a hospital were 69, 70, 67, 57 and 64% for children aged 0–5, 6–11, 12–23, 24–59 and 0–59 months, respectively (GEMS-Kenya unpublished data). The 95% confidence intervals (CI) were calculated around crude rates by using the PEPI method [20]. Crude rates were then adjusted using the Delta method [21, 22]. The adjusted hospitalization rates were finally stratified and reported by age groups that included: 0–5, 6–11, 12–23, 24–59, 0–11 and 0–59 months. AGE and RVAGE mortality rates Deaths were recorded at household level through regular interviews of HDSS residents. Diarrhoea as a cause of death was derived from verbal autopsy (VA). The VA methodologies, coding and interpretation are described elsewhere [23, 24]. Upon the death of an HDSS resident, a trained village-based reporter sent a notification to the HDSS data team. After a mourning period of at least 3 weeks, the interviewer from the HDSS approached the most appropriate interviewee who was closest to the deceased to administer a detailed questionnaire focusing on the signs, symptoms and medical history of the deceased. The VA data were collected electronically, validated and processed using an InterVA program, which is a probabilistic computer-based expert opinion algorithm that determines the most probable cause of death as described elsewhere [24].
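Before the mortality estimation is formalized below, a minimal sketch of the two-step adjustment of hospitalization rates described above is given here. The case and PYO counts in the example are hypothetical placeholders, while the 70% hospital care-seeking frequency is the HUAS value quoted above for children aged 6–11 months.

```python
# Minimal sketch of the two-step rate adjustment described above. The crude
# rate per 100,000 PYO is divided by (i) the first adjustment factor (the
# stool-collection / case-capture proportion at the sentinel hospital) and
# (ii) the proportion of caretakers seeking hospital care for
# moderate-to-severe diarrhoea reported by the HUAS. The case and PYO counts
# below are hypothetical placeholders; 0.70 is the HUAS care-seeking
# frequency quoted for children aged 6-11 months.
def adjusted_rate_per_100k(cases, pyo, prop_cases_captured, prop_seeking_care):
    crude = 100000.0 * cases / pyo
    return crude / prop_cases_captured / prop_seeking_care

example = adjusted_rate_per_100k(cases=150, pyo=5000,
                                 prop_cases_captured=0.8,
                                 prop_seeking_care=0.70)
print(round(example))  # adjusted hospitalizations per 100,000 PYO
```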
We calculated the number of deaths attributed to RV by multiplying the total under-five deaths among HDSS residents in the study area by the proportion of deaths attributable to diarrhoea by VA, and the proportion of hospitalized AGE episodes attributable to RV, in each of the various age groups as described below. $$\text{No. of deaths attributable to RV} = (\text{total under-five deaths among HDSS residents in the study area}) \times (\text{proportion of deaths attributable to diarrhoea}) \times (\text{proportion of hospitalized AGE episodes attributable to RV})$$ Mortality rates associated with rotavirus gastroenteritis were obtained by dividing the number of deaths attributed to rotavirus by the total PYO in each of the specific age groups as described above. Enrolment profile and patient characteristics During the study period, a total of 7760 all-cause hospitalizations among children < 5 years of age were recorded at the SCRH paediatric ward, of which 3793 were from the Karemo HDSS resident population. Among the 3793 Karemo HDSS resident children, 805 (21%, 95% CI: 20–23%) were hospitalized due to AGE (Fig. 1). RV positivity among hospitalized children from Karemo with AGE was most pronounced in infants (< 12 months of age), followed by toddlers (12–23 months of age), and was lowest in children aged 24–59 months (Table 1). Characteristics of patients who had stool specimens collected and those who did not have specimens collected are shown in Table 2. Flow diagram of Karemo DSS residents < 5 yrs. who were hospitalized and enrolled in the study from Siaya County Referral Hospital, western Kenya 2010–2013 Table 1 Characteristics of children < 5 years hospitalized at SCRH with all-cause morbidity, AGE and RVAGE, 2010–2013 Table 2 Characteristics of Karemo resident children < 5 years hospitalized with AGE who had stool collected and those without stool collected, Siaya County Referral Hospital, Western Kenya, 2010–2013 Of the 541 stool samples collected, 204 (38%) were from infants aged 6–11 months. There was no difference in stool collection by gender. Furthermore, we did not observe any statistical difference in rotavirus positivity in male versus female patients among infants aged < 12 months (69/211 [32.7%] vs. 42/165 [25.4%], OR = 1.42, p = 0.13), toddlers aged 12–23 months (9/61 [14.7%] vs. 12/46 [26.1%], OR = 0.49, p = 0.15), or children aged 24–59 months (9/37 [24.3%] vs. 2/21 [9.5%], OR = 3.05, p = 0.18), respectively. The overall annual proportion of rotavirus detection ranged from 43/147 (29.3%) in 2010 to 21/95 (22.1%) in 2013, and the annual proportion of samples positive for rotavirus did not differ significantly over the 4-year study period. Rotavirus hospitalizations were seen throughout the year over the surveillance period, but peaked from January through March and around August–September each year during the study period (Fig. 2). Rotavirus infection trends among hospitalized children < 5 years from the Karemo DSS resident population seeking care from Siaya County Referral Hospital, western Kenya 2010–2013 Compared with non-RVAGE cases, RVAGE cases were younger (median age 8 months [IQR 5–12] vs. 9 months [IQR 6–15]; p < 0.032), more likely to present with vomiting (126/143 [88.1%] vs. 297/397 [74.8%]), and more likely to be classified as severe by Vesikari score (88/143 [61.5%] vs.
179/398 [44.9%], p = 0.0007) (Table 3). The length of hospitalization was similar for RVAGE compared to non-RVAGE cases (hospitalization days: 4 [IQR 3–6] vs. 4 [IQR 3–6], p = 0.564). Table 3 The Vesikari scores for severity of illness among RVAGE and non-RVAGE hospitalized children < 5 yrs. in Siaya County Referral Hospital, western Kenya, 2010–2013 Hospitalization attributed to AGE in Karemo HDSS The highest annual hospitalization rate (per 100,000 PYO) associated with AGE was observed in 2011, followed by 2010, 2012 and 2013 in descending order. The annual incidence (per 100,000) of hospitalizations due to all-cause AGE was highest among infants, with children aged 6–11 months remaining the most affected. Hospitalization attributed to RVAGE among children < 5 years from Karemo HDSS Incidence rates of RVAGE-associated hospitalization were highest among infants, particularly among those aged 6–11 months. We observed the highest RVAGE hospitalization rate in 2011, followed by 2010, 2012 and 2013 in decreasing order. Hospitalization rates for AGE and RVAGE are shown in Table 4. Table 4 Adjusted† rates and 95% confidence intervals of hospitalization attributed to AGE and rotavirus per 100,000 person-years among in-patients aged 0–59 months residents of Karemo HDSS in rural western Kenya, 2010–2013 Mortality attributed to AGE and RVAGE Discharge information was available for 531 (98%) of the children hospitalized due to AGE, of whom 33 (6.2%) died during hospitalization. The case-fatality proportion among RVAGE cases (4.2% [6/142]) was similar to that observed among non-RVAGE cases (6.9% [27/389]), p = 0.26. The highest mortality rates of AGE and RVAGE were observed among infants (< 12 months of age), and remained most elevated among infants aged 6–11 months. Annual mortality rates associated with RVAGE were stable between 2010 and 2011, but increased before RV vaccine introduction, especially among children aged 6–11 months. Mortality rates attributed to AGE and RVAGE are shown in Table 5. Table 5 Rates and 95% confidence intervals of deaths attributed to AGE and rotavirus per 100,000 person-years among in-patients aged 0–59 months residents of Karemo HDSS in rural western Kenya, 2010–2013 This study documents comprehensive, age-stratified, population-based hospitalization and mortality rates associated with AGE and RVAGE before introduction of RV vaccines among Kenyan children < 5 years in a rural community whose demographic and healthcare-seeking characteristics are well described [16, 19]. Unlike other WHO rotavirus surveillance study sites in Africa, our hospital-based surveillance site for rotavirus is unique for a few reasons. First, it is supported by an ongoing HDSS which monitors population denominators and conducts verbal autopsy [16]. Second, our surveillance hospital is the only regional public referral hospital for the local HDSS and the only in-patient facility within the HDSS, making our surveillance data representative of the population, as shown from our current data and as previously observed [15]. Furthermore, the advantages of a population-based incidence rate are two-fold. First, it provides an opportunity to estimate the number of people affected by a disease. Second, it can help to project the number of illness episodes that can be prevented with known, effective interventions such as vaccines [25].
Our 4-year study's most important findings are that, before RV vaccine introduction in Kenya, approximately 90% and 60% of RVAGE-hospitalized children were aged < 2 years and < 1 year, respectively, and that hospitalizations and mortality associated with AGE and RVAGE were highest among infants. Furthermore, our data suggest that the children bearing the greatest burden of morbidity and mortality associated with AGE and RVAGE were infants aged 6–11 months. This finding is similar to observations from neighboring Sudan, where pre-RV vaccine data indicate that 91% and 61% of rotavirus hospitalizations occurred before 2 years and 1 year of age, respectively [26]. Furthermore, our finding is consistent with observations from the first 2 years of the current study [15], a study conducted in the coastal region of Kenya [27], the Global Enteric Multicenter Study (GEMS) [2] and other studies conducted in Europe [28] before introduction of RV vaccines. Our observation that 21% of hospitalizations among children < 5 years in the HDSS were due to AGE is similar to the 23% reported previously from a mid-term analysis of our current study [15], 22% reported from the Kilifi HDSS in the coastal region of Kenya [27], 21% from the neighboring Mwanza region in Tanzania [29], and 21% from Ethiopia [30]. In addition, our finding that 26% of hospitalized AGE case patients were infected with RV remains similar to the rate of 27% reported from the mid-term analysis of our current surveillance data [15] and to 29% from the Kilifi HDSS at the Kenyan coast [27]. These observations suggest that the AGE and RVAGE burden in our setting is comparable to that in other settings in Kenya and neighboring countries before RV vaccine introduction. Our observation that rates of hospitalization due to AGE and RVAGE declined over the study period before vaccine introduction may be associated with unknown non-RV-vaccine intervention factors. However, the proportion of all deaths that were associated with AGE and RVAGE did not follow the same pattern. Thus, these observed trends are difficult to explain, though they may in part reflect the effects of other interventions. Although widespread distribution and use of zinc and ORS as part of devolved government development efforts in Kenya has been described [31] and may be a contributing factor to the observed decline in diarrhoea burden in this setting, such an argument remains speculative and prompts further investigation. This trend, however, is consistent with other observations from a recent community-based survey conducted in this setting [19], and is not dissimilar to the global trend of diarrhoea and rotavirus disease [3, 32]. Our data show that rotavirus was more commonly detected among infants. Moreover, RVAGE presented with more severe episodes than non-RVAGE, as characterized by severe dehydration, vomiting and low-grade fever, an observation similar to other previous studies [30, 33,34,35]. Rotavirus is the most common cause of severe dehydrating diarrhoea and is the leading pathogen associated with moderate-to-severe diarrhoea (MSD) [35], as further reaffirmed by GEMS, the largest diarrhoea etiology case-control study ever conducted in countries representing the highest disease burden regions located in Africa and Asia [2]. Severe dehydration caused by diarrhoea in children is a major cause of preventable morbidity and mortality in Kenya [31].
As commonly observed in our setting and consistent with caretakers' healthcare-seeking trends in Kenya [19, 31], delay in seeking care for childhood diarrhoea and reducing the amount of fluid and food intake during childhood AGE illness can lead to severe disease. Our current study found that case-fatality among RVAGE cases was not significantly different from that among non-RVAGE cases, suggesting that rotavirus may not be associated with mortality in hospital-based studies, as shown in other studies [33, 36]. This finding supports the assumption that seeking care for RVAGE from a health care facility enables access to appropriate rehydration, which would then reduce the risk of death from the disease. Understanding the seasonality of rotavirus can help formulate hypotheses for assessing potential factors influencing transmission and guide policy makers in deciding on appropriate interventions and approaches that can work in local settings for improving case management during peak seasons [37]. For example, in settings such as the USA, rotavirus seasons have been observed to be delayed, shortened, and diminished [4, 38] after vaccine introduction. In our current analysis, rotavirus detection peaked in months which are locally known to be usually warm and dry. Our current findings are consistent with recent observations from Kenya [15, 33], and remain similar to findings from other studies conducted in Burkina Faso [37], Peru and Bangladesh [39] before rotavirus vaccine introduction in those settings. Although there is no unifying rotavirus seasonality pattern globally [40], its spread by the faecal-oral route is generally accepted [35], and even airborne or droplet transmission has been postulated [41]. The latter attribute potentially makes the virus transmission route resemble that of non-enteric respiratory infectious diseases such as measles [42]. These observations suggest that a drop in humidity and rainfall combined with dry soil could potentially provide additional opportunity for transmission through aerial contaminated faecal materials, since survival of rotavirus may still be favored in such conditions, as described elsewhere [41, 43]. Treating RVAGE is expensive. In Kenya, it has been estimated that rotavirus disease costs the national healthcare system $10.8 million each year, and that a 2-dose rotavirus vaccine (RVV) series can avert ~2500 deaths, ~6000 hospitalizations and ~860,000 clinic visits, with a cost saving of $2.1 million annually [4]. RV vaccines have been shown to be effective in reducing hospitalizations and deaths due to diarrhoea in children, and the protective effect potentially lasts through the 2nd year of life [1, 44]. While the benefit of these vaccines has been documented in other African countries where they were introduced ahead of Kenya, such as South Africa [45], Rwanda [3], Ghana [12], and Togo [46], population-level benefits of RVV are yet to be demonstrated in Kenya. A possible limitation of our current study is that many rotavirus-associated fatalities are likely associated with delay in healthcare seeking [5]. Furthermore, VA relies on signs, symptoms and circumstances prior to death to assign a cause of death, which is subject to misclassification error, and therefore the method as applied in our current study may lead to over- or under-estimation of mortality [36].
Our methodology for estimating diarrhoea deaths attributable to rotavirus was based on the following 3 assumptions: (i) that in the absence of treatment, the hospitalized severe cases would not have survived; (ii) that the treatment effect on survival of severe diarrhoea is equal for rotavirus and non-rotavirus diarrhoea; and (iii) that the rotavirus-attributable fraction of severe diarrhoea observed in the sentinel hospital is generalizable to the source population within each age stratum, as already described elsewhere [36]. Maintaining caution when interpreting these estimates is important since we recognize that such assumptions may affect the validity and generalizability of the estimates to the general population. However, since there are currently no reliable data for the direct measurement of the proportion of diarrhoea deaths that are attributable to rotavirus [22, 36], especially in the high disease burden regions located mostly in low- and middle-income countries such as our setting [3], we believe our methodology remains reasonable, robust and applicable, as recommended by WHO [22, 36]. Moreover, our current hospital surveillance data suggest reasonable representation of the source population, consistent with previous observations [15]. This study shows that AGE- and RVAGE-associated hospitalizations and deaths are high in this setting, with children aged 6–11 months bearing the greatest burden. These findings support the introduction of a vaccine that would potentially provide protection to young children before the disease peaks at 6–11 months of age in this setting. While improvements in drinking water, sanitation and hygiene can effectively prevent other forms of diarrhoea, such interventions do not adequately prevent the spread of rotavirus, thus leaving vaccines as the best alternative in preventing AGE and RVAGE in settings such as ours [5]. Continued surveillance will be important for measuring the impact of rotavirus vaccine introduction in Kenya. AGE: Acute gastroenteritis CDC: US Centers for Disease Control and Prevention CGHR: Center for Global Health Research EIA: Enzyme immunoassay GEMS: Global Enteric Multi-Center Study HDSS: Health Demographic Surveillance System KEMRI: Kenya Medical Research Institute MoH: Ministry of Health PYO: Person-years of observation RVAGE: Rotavirus AGE SAS: Statistical Analysis Software SCRH: Siaya County Referral Hospital VA: Verbal autopsy Parashar UD, et al. Health impact of rotavirus vaccination in developing countries: progress and way forward. Clin Infect Dis. 2016;62(Suppl 2):S91–5. Kotloff KL, et al. Burden and aetiology of diarrhoeal disease in infants and young children in developing countries (the global enteric multicenter study, GEMS): a prospective, case-control study. Lancet. 2013;382(9888):209–22. Tate JE, et al. Global, regional, and national estimates of rotavirus mortality in children <5 years of age, 2000-2013. Clin Infect Dis. 2016;62(Suppl 2):S96–S105. Tate JE, et al. Rotavirus disease burden and impact and cost-effectiveness of a rotavirus vaccination program in Kenya. J Infect Dis. 2009;200(Suppl 1):S76–84. WHO. Rotavirus vaccines: WHO position paper, January 2013 - recommendations. Vaccine. 2013;31(52):6170–1. Gastanaduy PA, et al. Effect of rotavirus vaccine on diarrhea mortality in different socioeconomic regions of Mexico. Pediatrics. 2013;131(4):e1115–20. Constenla DO, et al. Economic impact of a rotavirus vaccine in Brazil. J Health Popul Nutr. 2008;26(4):388–96. Lamberti LM, et al.
A systematic review of the effect of rotavirus vaccination on diarrhea outcomes among children younger than 5 years. Pediatr Infect Dis J. 2016;35(9):992–8. Armah GE, et al. Efficacy of pentavalent rotavirus vaccine against severe rotavirus gastroenteritis in infants in developing countries in sub-Saharan Africa: a randomised, double-blind, placebo-controlled trial. Lancet. 2010;376(9741):606–14. Bar-Zeev N, et al. Effectiveness of a monovalent rotavirus vaccine in infants in Malawi after programmatic roll-out: an observational and case-control study. Lancet Infect Dis. 2015;15(4):422–8. Gurgel RG, et al. Incidence of rotavirus and all-cause diarrhea in Northeast Brazil following the introduction of a national vaccination program. Gastroenterology. 2009;137(6):1970–5. Armah G, et al. Impact and effectiveness of monovalent rotavirus vaccine against severe rotavirus diarrhea in Ghana. Clin Infect Dis. 2016;62(Suppl 2):S200–7. Ngabo F, et al. Can routinely collected national data on childhood morbidity and mortality from diarrhea be used to monitor health impact of rotavirus vaccination in Africa? Examination of pre-vaccine baseline data from Rwanda. Pediatr Infect Dis J. 2014;33(Suppl 1):S89–93. Rheingans RD, et al. Economic costs of rotavirus gastroenteritis and cost-effectiveness of vaccination in developing countries. J Infect Dis. 2009;200(Suppl 1):S16–27. Khagayi S, et al. High burden of rotavirus gastroenteritis in young children in rural western Kenya, 2010-2011. Pediatr Infect Dis J. 2014;33(Suppl 1):S34–40. Odhiambo FO, et al. Profile: the KEMRI/CDC health and demographic surveillance system--Western Kenya. Int J Epidemiol. 2012;41(4):977–87. Mwenda JM, et al. Preparing for the scale-up of rotavirus vaccine introduction in Africa: establishing surveillance platforms to monitor disease burden and vaccine impact. Pediatr Infect Dis J. 2014;33(Suppl 1):S1–5. Ruuska T, Vesikari T. Rotavirus disease in Finnish children: use of numerical scores for clinical severity of diarrhoeal episodes. Scand J Infect Dis. 1990;22(3):259–67. Omore R, et al. Health care-seeking behavior during childhood diarrheal illness: results of health care utilization and attitudes surveys of caretakers in western Kenya, 2007-2010. Am J Trop Med Hyg. 2013;89(1 Suppl):29–40. Abramson JH. WINPEPI updated: computer programs for epidemiologists, and their teaching potential. Epidemiol Perspect Innov. 2011;8(1):1. Xu J, Long JS. Confidence intervals for predicted outcomes in regression models for categorical outcomes. 2005 (Accessed online on November 16, 2016). WHO. Generic protocol for monitoring impact of rotavirus vaccination on gastroenteritis disease burden and viral strains. Geneva: World Health Organization; 2008 (Accessed online on November 13, 2016). Amek NO, et al. Childhood cause-specific mortality in rural Western Kenya: application of the InterVA-4 model. Glob Health Action. 2014;7:25581. Byass P, et al. Strengthening standardised interpretation of verbal autopsy data: the new InterVA-4 tool. Glob Health Action. 2012;5:1–8. Breiman RF, et al. Use of population-based surveillance to determine the incidence of rotavirus gastroenteritis in an urban slum and a rural setting in Kenya. Pediatr Infect Dis J. 2014;33(Suppl 1):S54–61. Mustafa A, et al. Baseline burden of rotavirus disease in Sudan to monitor the impact of vaccination. Pediatr Infect Dis J. 2014;33(Suppl 1):S23–7. Nokes DJ, et al.
Rotavirus genetic diversity, disease association, and temporal change in hospitalized rural Kenyan children. J Infect Dis. 2010;202(Suppl):S180–6. Plenge-Bonig A, et al. Breastfeeding protects against acute gastroenteritis due to rotavirus in infants. Eur J Pediatr. 2010;169(12):1471–6. Temu A, et al. Prevalence and factors associated with group A rotavirus infection among children with acute diarrhea in Mwanza, Tanzania. J Infect Dev Ctries. 2012;6(6):508–15. Abebe A, et al. Hospital-based surveillance for rotavirus gastroenteritis in children younger than 5 years of age in Ethiopia: 2007-2012. Pediatr Infect Dis J. 2014;33(Suppl 1):S28–33. Ministry of Public Health and Sanitation. Kenya Demographic and Health Survey (KDHS). Nairobi, Kenya; 2014. Kovacs SD, et al. Deconstructing the differences: a comparison of GBD 2010 and CHERG's approach to estimating the mortality burden of diarrhea, pneumonia, and their etiologies. BMC Infect Dis. 2015;15:16. Omore R, et al. Epidemiology, seasonality and factors associated with rotavirus infection among children with moderate-to-severe diarrhea in rural Western Kenya, 2008-2012: the global enteric multicenter study (GEMS). PLoS One. 2016;11(8):e0160060. Bonkoungou IJ, et al. Epidemiology of rotavirus infection among young children with acute diarrhoea in Burkina Faso. BMC Pediatr. 2010;10:94. WHO. The treatment of diarrhoea: a manual for physicians and other senior health workers, 4th revision. World Health Organization (WHO) Press; 2005. WHO. External review of burden of disease attributable to rotavirus. 2005. Ouedraogo N, et al. Temporal distribution of gastroenteritis viruses in Ouagadougou, Burkina Faso: seasonality of rotavirus. BMC Public Health. 2017;17(1):274. Tate JE, et al. Sustained decline in rotavirus detections in the United States following the introduction of rotavirus vaccine in 2006. Pediatr Infect Dis J. 2011;30(1 Suppl):S30–4. Colston JM, et al. Seasonality and within-subject clustering of rotavirus infections in an eight-site birth cohort study. Epidemiol Infect. 2018;146(6):688–97. Patel MM, et al. Global seasonality of rotavirus disease. Pediatr Infect Dis J. 2013;32(4):e134–47. Ijaz MK, et al. Effect of relative humidity, atmospheric temperature, and suspending medium on the airborne survival of human rotavirus. Can J Microbiol. 1985;31(8):681–5. Cook SM, et al. Global seasonality of rotavirus infections. Bull World Health Organ. 1990;68(2):171–7. Sattar SA, et al. Effect of relative humidity on the airborne survival of rotavirus SA11. Appl Environ Microbiol. 1984;47(4):879–81. Ruiz-Palacios GM, et al. Safety and efficacy of an attenuated vaccine against severe rotavirus gastroenteritis. N Engl J Med. 2006;354(1):11–22. Groome MJ, et al. Effectiveness of monovalent human rotavirus vaccine against admission to hospital for acute rotavirus diarrhoea in South African children: a case-control study. Lancet Infect Dis. 2014;14(11):1096–104. Tsolenyanu E, et al. Early evidence of impact of monovalent rotavirus vaccine in Togo. Clin Infect Dis. 2016;62(Suppl 2):S196–S199. This study includes data generated by the jointly operated KEMRI and CDC HDSS, which is a member of the International Network for the Demographic Evaluation of Populations and their Health (INDEPTH). We acknowledge the contributions of and extend our thanks to the WHO country office in Kenya and the WHO-AFRO office based in Brazzaville; the Ministry of Health staff for supporting the data collection and processing; and the entire KEMRI and CDC study team at SCRH.
Special thanks to Dr. Daniel R. Feikin and Dr. Kayla F. Laserson (CDC), Dr. Amek Nyagwara (KEMRI-CGHR), Charles Mwitherero (WHO), Caroline Maina (MoH, Kenya), Alice Ngereso (WHO), Prof. Kirsi Lumbe Sandat, Tiina Kangasala and Catarina Stahle-Nieminen (University of Tampere), Linet Aluoch Sewe, Collins Okello, Pamela Kanga, Peter Jaron and Ken Ruttoh (KEMRI-CGHR) for supporting the study operations. We are grateful to the caretakers in the Karemo, Asembo and Gem communities who participated in this work. This manuscript is published with the approval of the Director, KEMRI. The study was funded and supported by grants from the GAVI alliance through WHO and by the CDC Division of Viral Diseases, US Centers for Disease Control and Prevention, Atlanta, GA, USA, with branch program funds through the Kenya Medical Research Institute. Richard Omore was supported in part by the International Doctoral Programme in Epidemiology and Public Health (IPPE), University of Tampere, Finland. The funders did not play any role in the study or the interpretation of its outcome. Data were obtained with permission of the KEMRI and CDC HDSS Steering committee. Any data requests may be sent to the above steering committee, through the corresponding author. The findings and conclusions in this report are the findings and conclusions of the authors and do not necessarily represent the official position of the Kenya Medical Research Institute or the US Centers for Disease Control and Prevention. Kenya Medical Research Institute, Center for Global Health Research (KEMRI-CGHR), Kisumu, Kenya: Richard Omore, Sammy Khagayi, Billy Ogwel, Reuben Onkoba, John B. Ochieng, Jane Juma, Stephen Munga & Frank Odhiambo. Health Sciences Unit, Faculty of Social Sciences, University of Tampere, Tampere, Finland: Richard Omore & J. Pekka Nuorti. Division of Disease Surveillance and Response, Ministry of Public Health and Sanitation, Nairobi, Kenya: Collins Tabu. WHO Country Office for Kenya, Nairobi, Kenya: Sergon Kibet. WHO Regional Office for Africa (WHO/AFRO), Brazzaville, Congo: Jason M. Mwenda. Division of Viral Diseases, US Centers for Disease Control and Prevention, Atlanta, GA, USA: Umesh D. Parashar & Jacqueline E. Tate. Global Health Institute, Emory University, Atlanta, GA, USA: Robert F. Breiman. Conceived and designed the study: RO1, SK1, RO2, JPN, RFB, JMW, UDP, JT. Performed the study: RO1, SK1, BO, RO2, JBO, JJ, SM, CT, SK2, FO, RFB, UDP, JT. Analyzed the data: RO1, SK1, BO, RO2, JBO, JT. Contributed reagents, materials/analysis tools: ALL. Wrote the paper: RO1, SK1, JT. Reviewed the manuscript: All authors. Interpretation of data and critical revision of the manuscript for important intellectual content: RO1, SK1, JPN, RFB, UDP, JT. All authors read and approved the final manuscript. Correspondence to Richard Omore. Written informed consent was obtained from all the guardians or caretakers of the children before enrolment into the study. This study was approved as part of the HDSS by both the Ethical Review Committee of the Kenya Medical Research Institute and CDC-Atlanta. Omore, R., Khagayi, S., Ogwel, B. et al. Rates of hospitalization and death for all-cause and rotavirus acute gastroenteritis before rotavirus vaccine introduction in Kenya, 2010–2013. BMC Infect Dis 19, 47 (2019). https://doi.org/10.1186/s12879-018-3615-6
Persistent homology of unweighted complex networks via discrete Morse theory Harish Kannan1, Emil Saucan2,3, Indrava Roy1 & Areejit Samal ORCID: orcid.org/0000-0002-6796-96041,4 Scientific Reports volume 9, Article number: 13817 (2019) Cite this article Topological data analysis can reveal higher-order structure beyond pairwise connections between vertices in complex networks. We present a new method based on discrete Morse theory to study topological properties of unweighted and undirected networks using persistent homology. Leveraging the features of discrete Morse theory, our method not only captures the topology of the clique complex of such graphs via the concept of critical simplices, but also achieves close to the theoretical minimum number of critical simplices in several analyzed model and real networks. This yields a reduced filtration scheme based on the subsequence of the corresponding critical weights, thereby leading to a significant increase in computational efficiency. We have employed our filtration scheme to explore the persistent homology of several model and real-world networks. In particular, we show that our method can detect differences in the higher-order structure of networks, and the corresponding persistence diagrams can be used to distinguish between different model networks. In summary, our method based on discrete Morse theory further increases the applicability of persistent homology to investigate the global topology of complex networks. In recent years, the field of topological data analysis (TDA) has rapidly grown to provide a set of powerful tools to analyze various important features of data1. In this context, persistent homology has played a key role in bringing TDA to the fore of modern data analysis. It not only gives a way to visualize data efficiently, but also a way to extract relevant information from both structured and unstructured datasets. This crucial aspect has been used effectively in various applications from astrophysics (e.g., determination of inter-galactic filament structures2) to imaging analysis (e.g., feature detection in 3D gray-scale images3) to biology (e.g., detection of breast cancer type with high survival rates4). Informally, the essence of the theory is its power to extract the shape of data, as well as infer higher-order correlations between various parts of the data at hand which are missed by other classical techniques1. The basic mathematical theory used in this subject is that of algebraic topology, and in particular the study of homology, developed by the French mathematician Henri Poincaré at the turn of the 20th century. The origins of persistent homology lie in the ideas of Morse theory5, which gives a powerful tool to detect the topological features of a given space through the computation of homology using real-valued functions on the space. We refer the reader to the survey article6 for further details. On the other hand, the discretized version of Morse theory developed by Robin Forman7,8,9 gives a way to characterize the homology group of a simplicial complex in terms of a real-valued function with certain properties, known as a discrete Morse function.
Examples of such simplicial complexes associated with discrete spaces are the Vietoris-Rips complex corresponding to a discrete metric space, or the clique complex of a graph. Forman8,9 showed that given such a function, the so-called critical simplices completely determine the Euler characteristic of the space, which is a fundamental topological invariant. The study of complex networks in the last few decades has also significantly enhanced our ability to understand various kinds of interactions arising in both natural and artificial realms10,11,12,13. Understanding how different parts of networks behave and influence each other is therefore an important problem10,11,12,13. However, for large networks, detecting higher-order structures remains a difficult task14. Moreover, recent studies15,16,17 indicate that these higher-order correlations are not captured by usual network measures such as clustering coefficients. While a graph representation captures binary relationships among vertices of a network, simplicial complexes also reflect higher-order relationships in a complex network15,16,18,19,20,21,22,23,24,25. In this context, persistent homology has been employed to explore the topological properties of complex networks18,19,20,21,23,26. In this work, we present a systematic method to study the persistent homology of unweighted and undirected graphs or networks. Previous work has investigated the persistent homology of weighted and undirected networks by creating a filtration of the clique complexes corresponding to threshold graphs obtained via a decreasing sequence of edge weights20,23. However, the lack of edge weights in unweighted networks does not permit a filtration based on threshold graphs20,23. Thus, for unweighted networks, Horak et al.19 propose a filtration scheme based on the dimension of the simplices in the clique complex corresponding to the unweighted network. Horak et al.19 do not assign weights to vertices, edges or higher-dimensional simplices in the clique complex corresponding to an unweighted graph. An unexplored filtration scheme involves transforming an unweighted network into a weighted network by assigning edge weights based on some network property, such as edge betweenness centrality27,28 or discrete edge curvature29,30, and then employing the filtration scheme based on threshold graphs20,23. As an alternative, we here use discrete Morse theory7,8,9 to create a filtration scheme for unweighted networks by assigning weights to vertices, edges, triangles and higher-dimensional simplices in the clique complex of the graph. In our method, the weight of a simplex is chosen such that it reflects the degree of the vertices which constitute the simplex while simultaneously satisfying the conditions for the weighting function to be a discrete Morse function. Moreover, as explained in the Results section, an equally important intuition behind the choice of these weights is based on the goal of reducing the number of so-called critical simplices. In the context of TDA, classical Morse theory, which involves smooth functions defined on topological spaces that admit a smooth structure, has been used to compute persistent homology, e.g., in statistical topology31 and astrophysics2. Since the clique complex of a weighted or unweighted graph does not permit a smooth structure in general, applying classical Morse theory is not possible. However, discrete Morse theory7,8,9 provides an efficient way of computing persistent homology.
A discrete Morse function not only captures higher-order topological information of the underlying space; in addition, a "preprocessing" step with respect to a suitable discrete Morse function leads to significant simplification of the topological structure. This makes computation of persistent homology groups or homology groups of filtered complexes much more efficient; see, e.g.,32. This is especially useful for large datasets where computationally efficient methods are key to computing their persistent homology. We have applied discrete Morse theory to compute persistent homology of unweighted simple graphs. This is done by using the values given by the discrete Morse function to pass from an unweighted graph to a weighted simplicial complex (Fig. 1). This transformation automatically produces a filtration that is needed for the computation of persistent homology, through the so-called level subcomplexes associated with weights of critical simplices (See Theory section and Fig. 2). Moreover, this filtration is consistent with the topology of the underlying space and reveals finer topological features than the dimensional filtration scheme used in Horak et al.19 The combination of these techniques has been used3,33 with applications in image processing. However, to the best of our knowledge, this method has not been used for studying persistent homology in unweighted complex networks to date. Discrete Morse theory gives a theoretical lower bound on the number of critical simplices which can be attained by an optimal choice of the function on a simplicial complex. Interestingly, our method achieves close to the theoretical minimum number of critical simplices in several model and real networks analyzed here (See Results section). Furthermore, our algorithm for computing the discrete Morse function is easy to implement for complex networks. An illustration of the construction of a discrete Morse function f on a clique complex K corresponding to an unweighted and undirected graph G using our algorithm. (a) A simple example of an unweighted and undirected graph G containing 9 vertices and 11 edges. (b) The clique simplicial complex K corresponding to the simple graph G shown in (a). The clique complex K consists of 9 vertices or 0-simplices, 11 edges or 1-simplices and 2 triangles or 2-simplices. The figure also displays the orientation of the 1- and 2-simplices using arrows. (c) Generation of a discrete Morse function f on the clique complex K shown in (b) using our algorithm. The figure lists the state of the Flag variable in algorithm 1 and the IsCritical variable in algorithm 2 (See SI Appendix) for each simplex in K. In this example, the clique complex has 4 critical simplices and their respective critical weights correspond to the filtration steps. The figure also lists the FiltrationWeight for each simplex in K obtained using algorithm 3 (See SI Appendix). Filtration based on the entire sequence of weights satisfying the discrete Morse function is equivalent to filtration based only on the subsequence of critical weights in terms of persistent homology. (a) Filtration of the network shown in Fig. 1 based on weights of the 4 critical simplices. There is a 0-hole (or connected component) that persists across the 4 stages of the filtration, while another 0-hole is born at stage 2 on addition of critical vertex v2 but dies at stage 3, which corresponds to the weight of the critical edge \([{v}_{1},{v}_{7}]\). Moreover, a 1-hole is born at stage 4 on addition of the critical edge \([{v}_{5},{v}_{7}]\).
(b) Five intermediate stages during the filtration between critical weights 1.1 (stage 2) and 2.35 (stage 3). (c) Four intermediate stages during the filtration between critical weights 2.35 (stage 3) and 3.48 (stage 4). It is seen that the homology of the clique complex remains unchanged during the intermediate stages of the filtration, whereby the birth and death of holes occur only at stages which correspond to critical weights. Our results underline the power of persistent homology to detect inherent topological features of networks which are not directly captured by homology alone. For instance, the p-Betti numbers of the clique complexes corresponding to small-world10 and scale-free11 networks with similar size and average degree, respectively, are of comparable magnitude, and thus homology reveals no deep insight into the differences between the topological features of these two model networks. On the other hand, our observations on the persistent homology of these two networks indicate a clear demarcation with respect to the evolution of topological features in the clique complexes corresponding to these model networks during the filtration process. This dissimilarity in the evolution of topological characteristics, which resonates across dimensions and the average degree of the underlying network, indicates an inherent disparity in the persistent homology of small-world and scale-free graphs. In addition to unravelling higher-order relationships in networks, this ability to capture inherent topological differences between two dissimilar networks thus motivates the application of our methods to study the persistent homology of real-world networks. The remainder of the paper is organized as follows. We begin with a Theory section which gives a brief overview of concepts in persistent homology and discrete Morse theory. We then proceed to describe the model networks and real-world networks that have been studied in this work in the Network datasets section. In the subsequent section on Results and Discussion, we present our algorithm to construct a discrete Morse function on a simplicial complex associated with a network. In the same section, we present our results for model networks and real-world networks. The final section on Conclusions gives a summary and outlook of our findings. In Supplementary Information (SI) Appendix, we give a brief review of the mathematical theory of homology groups of a simplicial complex. In SI Appendix, we also provide a rigorous proof of concept for our algorithm to construct a discrete Morse function. We then follow up with two algorithms, both of which illustrate key procedures that are essential to construct the filtration of a simplicial complex associated with the investigated networks. In SI Appendix, the time complexity of our algorithms as well as some theoretical results on the stability of persistent homology and persistence diagrams are also given. Graphs and simplicial complexes Consider a finite simple graph \(G({\mathscr{V}}, {\mathcal E} )\) having vertex set \({\mathscr{V}}=\{{v}_{0},{v}_{1},\ldots ,{v}_{n}\}\) and edge set \( {\mathcal E} \). Note that a simple graph does not contain self-loops or multi-edges34. Such a simple graph G can be viewed as a clique complex K35. A clique simplicial complex K is a collection of simplices where a p-dimensional simplex (or p-simplex) in K is a set of \(p+1\) vertices that form a complete subgraph.
In other words, vertices correspond to 0-simplices, edges to 1-simplices, and triangles to 2-simplices in the clique complex of a graph. Note that the dimension p of simplices contained in K is restricted to the range 0 to \((|{\mathscr{V}}|-1)\) in the graph G. The dimension d of the clique complex K is given by the maximum dimension of its constituent simplices. A face γ of a p-simplex α is a subset of α with cardinality less than \(p+1\). Note that, by definition, a face γ of a p-simplex α is an l-simplex where \(0\le l < p\), and this relationship is denoted as \({\gamma }^{l} < {\alpha }^{p}\). Formally, the clique complex K corresponding to the simple graph G satisfies the following condition which defines an abstract simplicial complex, namely, K is a collection of non-empty finite sets or simplices such that if α is an element (simplex) of K then so is every non-empty subset of α. For additional details, the interested reader is referred to a standard text in algebraic topology36. Figure 1 displays an example of the correspondence between a simple graph and its clique complex. The ordering of the vertex set \(\{{v}_{0},{v}_{1},{v}_{2},\ldots ,{v}_{p}\}\) of a p-simplex α determines its orientation. Moreover, two orderings of the vertex set of α are considered to be equivalent if and only if they differ by an even permutation. If the dimension of a p-simplex is at least 1, then all possible orderings of its vertex set fall under two equivalence classes, with each class being assigned an orientation36. An exception is the 0-simplex with one vertex, which has exactly one equivalence class and orientation. An oriented p-simplex α specifies the orientation of its \(p+1\) vertices and is represented by \([{v}_{0},{v}_{1},{v}_{2},\ldots ,{v}_{p}]\)36. In Fig. 1, the oriented 2-simplices \([{v}_{2},{v}_{3},{v}_{4}]\) and \([{v}_{2},{v}_{4},{v}_{3}]\) have opposite orientations, i.e., \([{v}_{2},{v}_{3},{v}_{4}]=-\,[{v}_{2},{v}_{4},{v}_{3}]\). Persistent homology of a simplicial complex In SI Appendix, we give a brief review of the mathematical theory of homology groups of a simplicial complex. In particular, we define the p-chain group, p-boundary operator, p-boundary, p-cycle, p-hole, p-homology group and p-Betti number. A subset Ki of a simplicial complex K is called a subcomplex of K if Ki by itself is an abstract simplicial complex. Then a filtration of a simplicial complex K is defined as a nested sequence of subcomplexes Ki of K where: $$\varnothing \subseteq K^0 \subseteq K^1 \subseteq \ldots \subseteq K^q = K$$ Note that each subcomplex Ki has an associated index i in the filtration. Moreover, each subcomplex Ki in the filtration has corresponding p-chain groups \({C}_{p}^{i}\), p-boundary operators \({\partial }_{p}^{i}\), p-boundaries \({B}_{p}^{i}\) and \(p\)-cycles \({Z}_{p}^{i}\). The j-persistent p-homology group of Ki, denoted as \({H}_{p}^{i,j}\), is defined as: $$H_p^{i,j} = Z_p^i / (B_p^{i+j} \cap Z_p^i).$$ In the above equation, \({B}_{p}^{i+j}\) is the subgroup of \({C}_{p}^{i+j}\) which constitutes the p-boundaries of the subcomplex \({K}^{i+j}\). The j-persistent p-Betti number of Ki, denoted as \({\beta }_{p}^{i,j}\), is defined as: $$\beta_p^{i,j} = \dim(H_p^{i,j}).$$ An intuitive explanation of the above definitions of the j-persistent p-homology group and the corresponding Betti number is as follows.
A p-hole of the subcomplex Ki can potentially become the boundary of a \((p+1)\)-chain of a later subcomplex \({K}^{i+j}\) with \(j > 0\), and thus no longer constitute a p-hole of Ki+j. The j-persistent p-Betti number of Ki represents the number of p-holes at the filtration index i that persist at the filtration index i + j. Therefore, each p-hole that appears across the filtration has unique indices corresponding to its birth and its death, and the persistence of such a p-hole can thus be characterized by these birth and death indices. Studying persistent homology allows us to quantify the longevity of such p-holes during filtration, and thus to measure the importance of these topological features which appear and disappear across the filtration.
Discrete Morse theory
Recall from the preceding section that, to study the persistent homology corresponding to the clique complex K of a simple graph G, the primary requirement is a filtration of K. We here present a systematic method to study the persistent homology of unweighted and undirected networks by utilizing a refined filtration of the clique complex K based on discrete Morse theory7,8,9. It is important to note that the order in which the simplices of the clique complex K are added during the filtration affects the evolution of topological features which are observed by studying the persistent homology. Our proposed scheme, which is based on the discrete Morse theory developed by Forman7,8,9, tackles this by assigning weights to 0-simplices (vertices), 1-simplices (edges), 2-simplices (triangles) and higher-dimensional simplices appearing in the clique complex corresponding to an unweighted and undirected network. Assigning weights to higher-dimensional simplices captures important higher-order correlations in addition to those captured by edges or 1-simplices. Moreover, we leverage the following important features of the framework of discrete Morse theory in our new scheme. Firstly, the framework enables the assignment of weights to p-simplices which are concordant with the weights of \((p-1)\)-simplices. Secondly, it captures the topology of a simplicial complex via the concept of critical simplices described below. Most importantly, the framework provides a natural way to create a filtration scheme to study persistent homology based upon the weights of the aforementioned critical simplices, as will be described below. We next provide the fundamental definitions in discrete Morse theory8,9. We remark that a p-dimensional simplex α in a simplicial complex K is denoted by \({\alpha }^{p}\in K\). Also, if a p-simplex αp in K is a face of a \((p+1)\)-simplex βp+1 in K, then this is represented as \({\alpha }^{p} < {\beta }^{p+1}\) in the sequel. Given a function \(f:K\to {\mathbb{R}}\), for each simplex \({\alpha }^{p}\in K\), two sets \({U}_{\alpha }^{f}\) and \({V}_{\alpha }^{f}\) are defined as follows: $${U}_{\alpha }^{f}=\{{\beta }^{p+1}|{\alpha }^{p} < {\beta }^{p+1}\,{\rm{and}}\,f(\beta )\le f(\alpha )\}$$ $${V}_{\alpha }^{f}=\{{\gamma }^{p-1}|{\gamma }^{p-1} < {\alpha }^{p}\,{\rm{and}}\,f(\alpha )\le f(\gamma )\}$$ Simply stated, the set \({U}_{\alpha }^{f}\) contains any \((p+1)\)-simplex βp+1 of which αp is a face and on which the function value is less than or equal to the function value on α. The set \({V}_{\alpha }^{f}\) contains any \((p-1)\)-simplex γp−1 which is a face of αp and whose function value is greater than or equal to the function value on α.
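As a concrete illustration of the two sets defined in Eqs. 4 and 5, the short self-contained sketch below computes them for a toy complex; the complex, the function values and the helper names are hypothetical and chosen purely for illustration.

```python
# Toy illustration of the sets U and V from Eqs. 4-5 for a filled triangle
# together with all of its faces; the function values f are hypothetical.
from itertools import combinations

complex_K = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
f = {(0,): 0.0, (1,): 1.0, (2,): 2.0,
     (0, 1): 0.9, (0, 2): 2.5, (1, 2): 2.4, (0, 1, 2): 3.0}

def faces(simplex):
    """Codimension-1 faces of a simplex, as sorted tuples."""
    return [tuple(sorted(c)) for c in combinations(simplex, len(simplex) - 1)]

def U_set(alpha):
    """(p+1)-simplices beta having alpha as a face with f(beta) <= f(alpha)."""
    return [b for b in complex_K
            if len(b) == len(alpha) + 1 and alpha in faces(b) and f[b] <= f[alpha]]

def V_set(alpha):
    """(p-1)-faces gamma of alpha with f(alpha) <= f(gamma)."""
    if len(alpha) == 1:                      # a vertex has no such faces here
        return []
    return [g for g in faces(alpha) if f[alpha] <= f[g]]

for a in complex_K:
    print(a, "U:", U_set(a), "V:", V_set(a))
# With these values every |U| <= 1 and |V| <= 1, so f satisfies the discrete
# Morse condition stated next (Eq. 6); the vertex (1,) and the edge (0, 1)
# are the only non-critical simplices in the sense of Eq. 7.
```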
A function \(f:K\to {\mathbb{R}}\) is a discrete Morse function8,9 if and only if for each simplex \({\alpha }^{p}\in K\): $$|{U}_{\alpha }^{f}|\le 1\,{\rm{and}}\,|{V}_{\alpha }^{f}|\le 1.$$ Given a discrete Morse function f on the simplicial complex K, a simplex \({\alpha }^{p}\in K\) is critical8,9 if and only if: $$|{U}_{\alpha }^{f}|=0\,{\rm{and}}\,|{V}_{\alpha }^{f}|=0.$$ Simply stated, a p-simplex \({\alpha }^{p}\in K\) is critical if the following two conditions are simultaneously satisfied. The first condition is that if βp+1 is any \((p+1)\)-simplex in K of which αp is a face, then \(f(\alpha ) < f(\beta )\). The second condition is that if γp−1 is any \((p-1)\)-simplex in K which is a face of αp, then \(f(\alpha ) > f(\gamma )\). The concept of critical simplices in discrete Morse theory is in spirit a discrete analogue of the concept of critical points in classical Morse theory, wherein the critical points corresponding to a smooth real-valued function on \(X\subseteq {{\mathbb{R}}}^{d}\) are the points where the gradient of the function vanishes. We remark that once a discrete Morse function f on a simplicial complex K is fixed, the sets \({U}_{\alpha }^{f}\) and \({V}_{\alpha }^{f}\) are denoted by Uα and Vα, respectively, to simplify the notation. A simple example of a discrete Morse function on a simplicial complex K is the dimension function used in Horak et al.19. The value of the dimension function on a given simplex α is the dimension of the simplex α. As a direct consequence of the definitions presented above in this section, for every simplex \(\alpha \in K\), the sets Uα and Vα corresponding to the dimension function are empty. Thus, the dimension function is indeed a discrete Morse function and every simplex in K is critical. In the results section, we present our new scheme and algorithm 1 to assign a discrete Morse function f to a clique complex K of an unweighted graph G. We next describe the filtration of the clique simplicial complex K based on the discrete Morse function f. Given a discrete Morse function f on a simplicial complex K and a real number r, a level subcomplex K(r) is defined8,9 as follows: $$K(r)=\mathop{\cup }\limits_{f(\beta )\le r}\,\mathop{\cup }\limits_{\alpha \le \beta }\,\alpha $$ Simply stated, K(r) contains all simplices β in K with the value of the discrete Morse function or assigned weight \(f(\beta )\le r\), along with every face α of such a β. Note that a face α of β is included in \(K(r)\) even if the weight assigned to the face α is greater than r. Let \({\{f(\sigma )\}}_{\sigma \in K}\) denote the entire set of values assigned to simplices in K using the discrete Morse function f. Then, let \({\{{w}_{k}\}}_{k=0,\ldots ,n}\) denote the finite increasing sequence of the unique values in the set \({\{f(\sigma )\}}_{\sigma \in K}\) associated with the finite simplicial complex considered here. We now have a sequence of inclusions of level subcomplexes corresponding to this increasing sequence \(\{{w}_{k}\}\) as follows: $$\varnothing \subseteq K({w}_{0})\subseteq K({w}_{1})\subseteq \cdots \subseteq K({w}_{n-1})\subseteq K({w}_{n})=K.$$ This nested sequence gives a filtration of the simplicial complex K which enables the study of persistent homology in the context of unweighted networks. According to Lemma 2.6 by Forman9, if there are no critical simplices α with \(f(\alpha )\in (a,b]\), then K(b) is homotopy equivalent to K(a). The implications of this Lemma are as follows.
Let \(\{f({\sigma }_{c})\}\) denote the set of values assigned to critical simplices σc in K by the discrete Morse function f, and let \({\{{w}_{{c}_{k}}\}}_{k=0,\ldots ,m}\) denote the increasing sequence of the unique values in \(\{f({\sigma }_{c})\}\). We refer to the function values \(\{f({\sigma }_{c})\}\) assigned to the critical simplices σc in K as critical weights. Note that the set \(\{f({\sigma }_{c})\}\) defined for critical simplices is a subset of \(\{f(\sigma )\}\) defined for all simplices in K, and the increasing sequence \({\{{w}_{{c}_{k}}\}}_{k=0,\ldots ,m}\) is a subsequence of \({\{{w}_{k}\}}_{k=0,\ldots ,n}\) with \(m\le n\). The above definition implies that there are no critical simplices α with \(f(\alpha )\in ({w}_{{c}_{i}},{w}_{{c}_{i+1}})\). As homology is invariant under homotopy equivalence, Forman's Lemma 2.6 gives us that for any x and y belonging to the real number interval \(({w}_{{c}_{i}},{w}_{{c}_{i+1}})\), the homology groups of K(x) and K(y) are isomorphic. Thus, in order to observe the changes in homology as the filtration proceeds, it suffices to study the persistent homology of a filtration which corresponds to the subsequence \({\{{w}_{{c}_{k}}\}}_{k=0,\ldots ,m}\) of \({\{{w}_{k}\}}_{k=0,\ldots ,n}\), where \(m\le n\), and this results in a potential decrease in the required number of filtration steps. The new filtration sequence can be represented as: $$\varnothing \subseteq K({w}_{{c}_{0}})\subseteq K({w}_{{c}_{1}})\subseteq \cdots \subseteq K({w}_{{c}_{m-1}})\subseteq K({w}_{{c}_{m}})\subseteq K.$$ Note that each simplex α in the clique complex K is first introduced as part of a certain level subcomplex \(K({w}_{{c}_{i}})\) in the above nested filtration sequence. Therefore, each simplex α in K can be associated with a unique weight \({w}_{{c}_{i}}\), referred to as the filtration weight of α. In the SI Appendix, we present algorithms 2 and 3, which depict the procedure to compute the filtration weights of simplices in the clique complex K of a graph G. In the SI Appendix, we also give a sufficient condition for two discrete Morse functions to induce the same filtration and thus the same persistent homology groups. Using an example network in Fig. 2, we also show that the persistent homology observed using the filtration based on the entire sequence of weights satisfying the discrete Morse function is equivalent to that observed using the filtration based on the subsequence of critical weights. Let mp represent the number of critical p-simplices in a simplicial complex K and let βp denote the p-Betti number of K. Then Theorem 2.11 by Forman9 can be stated as follows. (i) For each p = 0, 1, 2, …, d (where d is the dimension of K), mp ≥ βp. (ii) m0 − m1 + m2 − \(\cdots \) + (−1)dmd = β0 − β1 + β2 − \(\cdots \) + (−1)dβd. In other words, the above theorem gives the p-Betti number βp of K as a lower bound on the number of critical p-simplices mp for each dimension p. In the results section, we present our algorithm 1 to assign weights satisfying the definition of a discrete Morse function to the simplices in the clique complex K of a graph G. Our choice of the function in algorithm 1 to assign weights to simplices in the clique complex K tries to minimize the number of critical simplices (which has a lower bound given by Forman's Theorem 2.11 above9), and thus reduces the number of filtration steps required to compute the persistent homology without loss of information.
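A minimal sketch of the filtration by level subcomplexes described above is given below; it assumes that a dictionary of simplex weights (for example, as produced by algorithm 1) and the list of critical simplices are already available, and the helper names are illustrative rather than taken from the paper.

```python
# Sketch: level subcomplexes K(r) and the reduced filtration indexed by
# critical weights (Eq. 10). `weights` maps each simplex (a sorted tuple of
# vertices) to its value under the discrete Morse function.
from itertools import combinations

def proper_faces(simplex):
    """All non-empty proper faces of a simplex."""
    out = []
    for k in range(1, len(simplex)):
        out.extend(tuple(sorted(c)) for c in combinations(simplex, k))
    return out

def level_subcomplex(weights, r):
    """K(r): every simplex with weight <= r, together with all of its faces."""
    sub = {s for s, w in weights.items() if w <= r}
    for s in list(sub):
        sub.update(proper_faces(s))   # faces enter even if their weight exceeds r
    return sub

def filtration_by_critical_weights(weights, critical_simplices):
    """Nested level subcomplexes indexed by the increasing critical weights."""
    crit_weights = sorted({weights[s] for s in critical_simplices})
    return [(w, level_subcomplex(weights, w)) for w in crit_weights]
```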
In the results section, we will show that our algorithm achieves a near-optimal number of critical simplices in the clique complexes corresponding to many of the model and real networks analyzed here.
Comparing persistence diagrams
Given a discrete Morse function f and its associated filtration \(\{K({w}_{{c}_{k}})\}\) of the clique complex K of a graph G (Eq. 10), each p-hole has a critical weight \({w}_{{c}_{birth}}\) which corresponds to its birth index and \({w}_{{c}_{death}}\) which corresponds to its death index, with \({w}_{{c}_{birth}} < {w}_{{c}_{death}}\). The persistence diagram D(f) for a d-dimensional simplicial complex K is the collection of points in \({{\mathbb{R}}}^{2}\) whose first and second coordinates, x and y, respectively, correspond to the birth weight and death weight of a p-hole where \(0\le p\le d\)37. Since two different holes can have the same birth and death weights, each point in the persistence diagram has a corresponding multiplicity; we refer the reader to the SI Appendix for more details. Thus, the persistence diagram is a multiset of points in \({{\mathbb{R}}}^{2}\). The persistence of a p-hole which has birth and death weights \({w}_{{c}_{birth}}\) and \({w}_{{c}_{death}}\), respectively, is defined as \({w}_{{c}_{death}}-{w}_{{c}_{birth}}\). Thus, the persistence diagram for a clique complex K corresponding to a graph G is a compact representation of the persistent homology of the network. Given two persistence diagrams X and Y (which may correspond to two different networks), the \(\infty \)-Wasserstein distance between X and Y, also known as the bottleneck distance37, is defined as follows: $${W}_{\infty }(X,Y)=\mathop{{\rm{\inf }}}\limits_{\eta :X\to Y}\,{{\rm{\sup }}}_{x\in X}\,\parallel x-\eta (x){\parallel }_{\infty }.$$ Similarly, given two persistence diagrams X and Y, the q-Wasserstein distance38 between X and Y is defined as follows: $${W}_{q}(X,Y)={[\mathop{{\rm{\inf }}}\limits_{\eta :X\to Y}\sum _{x\in X}\parallel x-\eta (x){\parallel }_{\infty }^{q}]}^{\frac{1}{q}}.$$ In the above equations, \(\eta \) ranges over all bijective maps from X to Y, and given \((a,b)\in {{\mathbb{R}}}^{2}\), \(\parallel (a,b){\parallel }_{\infty }=\,{\rm{\max }}\,\{|a|,|b|\}\) is the \({L}_{\infty }\) norm. In this work, we use the Dionysus 2 package (http://www.mrzv.org/software/dionysus2/) to compute the Wasserstein distance between two persistence diagrams corresponding to two different model networks (See results section). Note that it is not generally true that two persistence diagrams X and Y have the same number of off-diagonal points, i.e., features with non-zero persistence; we refer the reader to Kerber et al.38 for details on circumventing this issue and for further information regarding how the computation of the Wasserstein distance is reduced to a bipartite graph matching problem in the Dionysus 2 package. We remark that the bottleneck distance between two persistence diagrams which are subsets of the unit square is in the range 0 to 1. Stability of persistence diagrams with respect to small changes in the discrete Morse function is a key property of persistent homology. The first such stability theorem was given by Cohen-Steiner et al.37, and later refined by Chazal et al.39. In the SI Appendix, using results from Chazal et al.39, we give a stability result for persistence diagrams of discrete Morse functions with respect to the bottleneck distance.
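As a usage sketch, the snippet below compares two hypothetical persistence diagrams with the Dionysus 2 package mentioned above, assuming its Python bindings expose a Diagram class constructible from (birth, death) pairs together with the bottleneck_distance and wasserstein_distance functions.

```python
# Hypothetical example of comparing two persistence diagrams with Dionysus 2.
import dionysus as d

dgm_X = d.Diagram([(0.0, 0.4), (0.1, 0.9), (0.2, 0.25)])
dgm_Y = d.Diagram([(0.0, 0.5), (0.15, 0.8)])

print("bottleneck :", d.bottleneck_distance(dgm_X, dgm_Y))        # W_infinity
print("Wasserstein:", d.wasserstein_distance(dgm_X, dgm_Y, q=2))  # W_q with q = 2
```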
Network Datasets
Model networks
We have investigated the following models of unweighted and undirected networks, namely, the Erdös-Rényi (ER)40, the Watts-Strogatz (WS)10, the Barabási-Albert (BA)11 and the Hyperbolic Graph Generator (HGG)41 models. The ER model40 is characterized by the property that the probability p of the existence of each possible edge between any two vertices among the n vertices in the graph G is constant. The existence of each edge in the ER model is independent of the other edges, and thus the model produces random graphs \(G(n,p)\) with average vertex degree \(p(n-1)\). The WS model10 produces small-world graphs as follows. The WS model starts with an initial regular graph with n vertices where each vertex is connected to its k nearest neighbours. Next, each edge in the initial regular graph of the WS model is considered for rewiring with a fixed rewiring probability p, and a rewired edge has one endpoint reconnected to another vertex in the graph chosen with uniform probability. The BA model11 produces scale-free graphs which are characterized by a degree distribution that follows a power-law decay. The BA model utilizes a preferential attachment scheme to produce scale-free graphs. The BA model generates an initial graph of m0 vertices, and then, at each successive iteration, a new vertex is added with edges to m already existing vertices which are chosen with probability proportional to their degree at that particular iteration. The iterations in the BA model cease when the graph has attained the requisite number n of vertices. The HGG model41,42 produces a random graph of n vertices by initially fixing the n vertices to n points on a hyperbolic disk. In the HGG model, the probability of the existence of an edge between two vertices is a decreasing function of the hyperbolic distance between the two points on the hyperbolic disk that correspond to these two vertices. By tuning the input parameter γ, the HGG model can produce either a hyperbolic or a spherical random graph41,42. Specifically, the HGG model produces hyperbolic random graphs for \(\gamma \in [2,\infty )\) and spherical random graphs for \(\gamma =\infty \).
Real networks
We have also studied seven real-world networks which are represented as unweighted and undirected graphs. We have considered two biological networks, namely, the Yeast protein interaction network43 with 1870 vertices and 2277 edges, and the Human protein interaction network44 with 3133 vertices and 6726 edges. In both biological networks, each vertex represents a protein and an edge represents an interaction between the two proteins. We have considered two infrastructure networks, namely, the US Power Grid network45 and the Euro road network46. In the US Power Grid network, the 4941 vertices represent the generators, transformers and substations in the Western states of the USA, and the 6594 edges represent power links between them. The 1174 vertices of the Euro road network correspond to cities in Europe and the 1417 edges correspond to roads linking the cities. We have also studied the Email network47 of the University of Rovira i Virgili, with 1133 vertices representing users and 5451 edges, each representing the existence of at least one email communication between the two users corresponding to the vertices anchoring the edge. We have also studied the Route views network45, which has 6474 autonomous systems as vertices and 13895 edges representing communication between these systems.
We have considered a social network, the Hamsterster friendship network48, containing 1858 vertices which represent the users and 12534 edges which represent friendships between the users. Note that we omit self-loops while constructing the clique complex K corresponding to the undirected graph G of a real-world network.
Algorithm to construct discrete Morse function on a simplicial complex
From an unweighted and undirected graph \(G({\mathscr{V}}, {\mathcal E} )\) with vertex set \({\mathscr{V}}\) and edge set \( {\mathcal E} \), it is straightforward to construct a clique simplicial complex K with dimension d (See Theory section). Figure 1 shows the construction of a clique complex starting from an example network. Given a simplicial complex K, its dimension d and a non-negative real-valued function g on the 0-simplices of K, algorithm 1 assigns weights to every simplex in K, producing a discrete Morse function f as defined in Eq. 6. In the pseudocode of algorithm 1, lines 2–6 initialize a variable Flag[α] for every simplex α in the clique complex K with the value 0. We remark that the variable Flag[α] associated with a simplex α in K serves as a counter for the size of the set Uα defined in Eq. 4. Lines 7–9 assign weights to every 0-simplex in K based on the input non-negative function g. Lines 10–24 assign weights to 1- or higher-dimensional simplices in K in a manner which is consistent with the definition in Eq. 6 of a discrete Morse function. In summary, algorithm 1 outputs a discrete Morse function f on K, and in the SI Appendix, we present a rigorous proof for the following theorem which states the same. Theorem. Algorithm 1 produces a discrete Morse function f on any simplicial complex K of finite dimension d.
Algorithm 1. Algorithm to construct a discrete Morse function on a d-dimensional simplicial complex K.
Given a simplicial complex K, its dimension d and a discrete Morse function f on K, algorithm 2 in the SI Appendix determines the weights of the critical simplices in K. Given an unweighted and undirected graph G, we restrict the construction of the clique complex K by including simplices up to a maximum dimension d. Then, algorithm 3 in the SI Appendix creates the filtration of the clique complex K based on the weights of critical simplices as described in the Theory section. In SI Table S1, we describe the role of key variables which appear in algorithms 1, 2 and 3. In the SI Appendix, we also give a time complexity analysis for these algorithms.
Rationale for the choice of function on vertices
In order to construct a discrete Morse function f on the clique complex K corresponding to a graph G using our algorithm 1, a real-valued function g has to be fixed on the 0-simplices of K (See lines 7–9 in algorithm 1). Let degmax denote the maximum degree of a vertex in the graph \(G({\mathscr{V}}, {\mathcal E} )\). Our choice for the function value on the vertices or 0-simplices, \(g:{\mathscr{V}}\to {\mathbb{R}}\), is as follows: $$g(v)={\deg }_{\max }-{\rm{degree}}(v)+\varepsilon $$ where degree(v) is the degree of the vertex \(v\in G\) and the \(\varepsilon \) corresponding to each vertex is a random number (noise) generated using the uniform distribution on the interval (0, 0.5). In the Theory section, we highlighted Theorem 2.11 by Forman9, which gives the p-Betti number βp as a lower bound on the number of critical p-simplices, mp, in a simplicial complex K.
The choice of the real-valued function g in algorithm 1 plays a key role in determining whether mp is close to the theoretical minimum βp stated above. In the Theory section, we have shown that the number of critical simplices determines the effective number of filtration weights used to study the persistent homology of a clique complex (See Eq. 10). This motivated our choice for the real-valued function (Eq. 13) which determines the weights of 0-simplices, and the rationale for this choice is as follows. Ignoring the noise term \(\varepsilon \) in Eq. 13, the reader can discern our intuition for choosing the function \(g(v)={\deg }_{\max }-{\rm{degree}}(v)\) for any vertex v in G with the following example. Consider the simple example of the clique complex K corresponding to a graph G in Fig. 1. Here, we would like to obtain a discrete Morse function f on K such that the number of critical simplices is close to the theoretical minimum. This requirement applies to simplices of any dimension in K, and in the context of this example, we would like the number of critical 1-simplices (edges) to be as close as possible to the 1-Betti number β1 of K. Note that \({\beta }_{1}=1\) for the example clique complex K in Fig. 1. Let us now examine lines 11–23 in algorithm 1. Consider any edge \({e}_{vw}=[v,w]\) such that \(g(v) > g(w)\). While assigning the function value to the edge evw in algorithm 1, the edge evw and the vertex v are guaranteed not to be critical provided that the if condition in line 16 is satisfied. This is a consequence of the definition of a critical simplex (See Eq. 7). Thus, we would like to force this if condition to be True for as many edges as possible. Moreover, once the function value of the 1-simplex evw is set, we set the variable Flag[v] to 1 in line 18, and this subsequently forces the if condition in line 16 to fail for all other edges \({e}_{vz}=[v,z]\) in the graph that contain v and have function value \(g(v) > g(z)\). Let us now examine the edge \({e}_{78}=[{v}_{7},{v}_{8}]\) in Fig. 1, which is anchored by the vertices v7 and v8 with degree 5 and 1, respectively. As the degree of a vertex gives the number of edges that contain the vertex, v7 is part of 4 other edges apart from e78 while v8 is part of only the edge e78. Suppose e78 is the first edge chosen for the function assignment in line 11 of algorithm 1 and both Flag[v7] and Flag[v8] for the anchoring vertices are 0. We would then prefer that the if condition in line 16 is satisfied for e78, and as v7 is part of 4 other edges apart from e78 while v8 is part of only e78, ideally Flag[v8] is set to 1 instead of Flag[v7]; in other words, we need the function value \(g({v}_{8}) > g({v}_{7})\). We emphasize that this choice of v8 over v7 prevents the forced failure (described in the previous paragraph) of the if condition for the 4 other edges apart from e78 that contain v7. The above example suggests the need for a function g on the vertices that has an inverse relationship with the degree of the vertices. Hence, our choice \(g(v)={\deg }_{\max }-{\rm{degree}}(v)\) provides a simple and effective solution for the above requirement. As a consequence of this choice for the function g, the weights assigned to the simplices by algorithm 1 reflect the degrees of their constituent vertices. The degree of a vertex can be thought of as a measure of its importance in the network.
Hence, the intuition behind the assignment of weights to simplices by our method is to have an inverse or opposite relationship between 'weight' and 'importance' of a simplex while simultaneously satisfying the definition of a discrete Morse function. This inverse relationship instead of a proportional relationship between 'importance' and 'weight' plays a key role in our filtration scheme which is based on the sequence of level subcomplexes corresponding to the increasing sequence of critical weights (See Theory section and algorithm 3 in SI Appendix). Thus, within the constraint of ensuring that the definition for a simplicial complex is satisfied at each stage of the filtration, our scheme prioritizes the addition of simplices with higher 'importance' at earlier stages of the filtration due to their lower weights. We now provide a rationale for the addition of a random noise \(\varepsilon \) in Eq. 13. As reasoned above, we would like to force the if condition in line 16 of algorithm 1 to be True for as many edges as possible. Consider an edge \({e}_{vw}=[v,w]\) such that \(degree(v)=degree(w)\). The absence of a random noise \(\varepsilon \) in Eq. 13 forces \(g(v)=g(w)\). Thus, irrespective of the state of Flag[v] and Flag[w], the if condition fails. This implies that the set \({V}_{{e}_{vw}}\) (See Eq. 5) corresponding to the edge evw is forced to be empty since the only other possibility is \(|{V}_{{e}_{vw}}|=2\) which cannot be the case because algorithm 1 produces a discrete Morse function (See Theorem and Eq. 6). Thus, provided evw is not a face of any higher dimensional simplex, evw would be a critical simplex irrespective of the state of Flag[v] and Flag[w]. Hence, we would like \(g(v)\ne g(w)\) while also retaining the inverse relationship of the function with the degree. Thus generating a small random noise \(\varepsilon \) in the range \((0,0.5)\) for each vertex as in Eq. 13 provides a simple resolution. We remark that the above argument can be generalized to higher-dimensional simplices, and thus, provides the intuition for the addition of noise \(\varepsilon \) in line 21 of algorithm 1. We remind the readers that our initial motivation was not to develop a scheme to construct the optimal discrete Morse function on a clique complex corresponding to a graph. Rather, our main goal is to develop a systematic filtration scheme to study persistent homology in unweighted and undirected networks. In fact, constructing an optimal discrete Morse function in the general case has been shown to be MAX-SNP Hard49. The primary utility of our scheme is to create a filtration by assigning weights to simplices in the clique complex K of a graph G. However, we next report our empirical results from an exploration of model and real-world networks which underscore the following. Although our scheme is not optimal in the sense of minimizing the number of critical simplices, in practice, it achieves near-optimal results in several model and real-world networks (Table 1). Hence, our scheme based on discrete Morse theory reduces the number of filtration steps and increases the applicability of persistent homology to study complex networks. Table 1 The table lists the number of p-simplices (np), the number of critical p-simplices (mp) and the p-Betti number βp for clique complexes corresponding to model and real networks. 
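As a concrete illustration of Eq. 13 and the degree-based rationale above, the following minimal sketch assigns the vertex weights g(v). It assumes the networkx library is available, and it does not reproduce algorithm 1, which subsequently extends these values to higher-dimensional simplices.

```python
# Vertex weights per Eq. 13: g(v) = deg_max - degree(v) + eps, with eps drawn
# uniformly from (0, 0.5) for each vertex. Algorithm 1 (not reproduced here)
# would then extend these values to a discrete Morse function on the clique
# complex.
import random
import networkx as nx

def vertex_weights(G, seed=0):
    rng = random.Random(seed)
    deg_max = max(d for _, d in G.degree())
    return {v: deg_max - G.degree(v) + rng.uniform(0.0, 0.5) for v in G.nodes()}

# Example on a scale-free BA graph of the kind described above.
G = nx.barabasi_albert_graph(1000, 2, seed=1)
g = vertex_weights(G)
```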
Application to model and real networks
Given a model or real network G, we limit our study of persistent homology to the 3-dimensional clique simplicial complex K corresponding to G. In other words, during the construction of the clique complex, we only include p-simplices which have dimension \(0\le p\le 3\) (See Theory section). On this 3-dimensional clique complex K, we create the corresponding filtration based on the weights assigned to simplices using discrete Morse theory. In the SI Appendix, we present algorithm 3 which outlines the procedure to compute the filtration weights of simplices. Thereafter, we make use of GUDHI50, a C++ based library for Topological Data Analysis (http://gudhi.gforge.inria.fr/), to study the persistent homology of this filtration of K. Each hole of any dimension in K has a corresponding birth filtration weight and a death filtration weight (See Theory section). We normalize the birth and death filtration weights of all holes in K by dividing by \({w}_{N}=1+\,{\rm{\max }}\,\{f(\alpha )\,|\,\alpha \in K\}\). In other words, wN is 1 plus the maximum value among the weights assigned to the simplices in K. We also adopt the convention that a normalized death filtration weight of 1 for a hole in K indicates that the particular hole never dies. A Hp-barcode diagram corresponding to a filtration of the clique complex K is a graphical representation containing horizontal line segments, each of which represents a p-hole in K, plotted against the x-axis ranging from 0 to 1, which corresponds to the normalized filtration weights of simplices in K51. A horizontal line in the Hp-barcode diagram of K is referred to as a barcode. Thus, a barcode in the Hp-barcode diagram of K which begins at an x-axis value of w1 and ends at an x-axis value of w2 represents a p-hole in K whose birth and death weights are w1 and w2, respectively. In Figs 3, 4 and 5, we display the barcode diagrams for the model and real-world networks analyzed here.
Figure 3. Barcode diagrams for H0 and H1 in model networks. (a) ER model with \(n=1000\) and \(p=0.004\). (b) WS model with \(n=1000\), \(k=4\) and \(p=0.5\). (c) BA model with \(n=1000\) and \(m=2\). (d) Spherical random graphs produced from the HGG model with \(n=1000\), \(T=0\), \(k=4\) and \(\gamma =\infty \). (e) Hyperbolic random graphs produced from the HGG model with \(n=1000\), \(T=0\), \(k=4\) and \(\gamma =2\).
Figure 4. Barcode diagrams for H0 and H1 in real networks. (a) US Power Grid. (b) Email communication. (c) Route views. (d) Yeast protein interaction. (e) Hamsterster friendship.
Figure 5. Barcode diagrams for H2 in model and real networks. (a) Spherical random graphs produced from the HGG model with \(n=1000\), \(T=0\), \(k=4\) and \(\gamma =\infty \). (b) Hyperbolic random graphs produced from the HGG model with \(n=1000\), \(T=0\), \(k=4\) and \(\gamma =2\). (c) US Power Grid. (d) Email communication. (e) Route views. (f) Yeast protein interaction. (g) Hamsterster friendship.
In this work, we have investigated the persistent homology of unweighted and undirected graphs corresponding to five model networks, namely, ER, WS, BA, hyperbolic random graphs and spherical random graphs. We have considered model networks with 1000 vertices and expected average degree 4, 6 and 8. In the main text, we report results for model networks with expected average degree 4, and in the SI, those with expected average degree 6 and 8. The H0 barcode diagram of BA networks indicates a low number of 0-holes in BA networks across the entire filtration (Fig. 3 and SI Figs S1 and S2).
A standard result in algebraic topology36 gives that the 0-Betti number of a simplicial complex K is equal to the number of connected components in K. In other words, the above observation indicates that the scale-free BA network has a strong tendency to maintain a low number of connected components during the filtration (Fig. 3 and SI Figs S1 and S2). In contrast, both the random ER network and the small-world WS network have a relatively high number of connected components at the initial phase of the filtration and then progress towards a more connected network at later stages of the filtration (Fig. 3 and SI Figs S1 and S2). This indicates that the simplices in the clique complex which are key to the connectivity of the model networks are introduced very early into the filtration for the scale-free BA network, while this is not the case for the random ER network or the small-world WS network. The H1 barcode diagram of BA networks also indicates a late introduction of 1-holes during the filtration, in contrast to both ER and WS networks where 1-holes appear across a wider range of the filtration (Fig. 3 and SI Figs S1 and S2). Moreover, in ER and WS networks, it is interesting to observe that the 1-holes start to appear at roughly the same stage of the filtration which corresponds to a sharp reduction in the number of connected components (Fig. 3 and SI Figs S1 and S2). In contrast to ER, WS and BA networks, the spherical and hyperbolic networks are characterized by a relatively high 0-Betti number β0 and a low 1-Betti number β1 (Table 1 and SI Table S2). Simply stated, this observation on the magnitude of β0 indicates that both hyperbolic and spherical networks have a higher number of connected components in comparison to the ER, WS and BA networks of similar size, i.e., number of vertices, and average vertex degree (Table 1 and SI Table S2). Although both spherical and hyperbolic networks exhibit a higher number of connected components, they differ from each other with respect to the evolution of these connected components during the filtration (Fig. 3 and SI Figs S1 and S2). The hyperbolic model maintains a relatively low number of connected components until very late in the filtration, wherein there is a sharp increase in the number of connected components (Fig. 3 and SI Figs S1 and S2). This is in contrast with the behavior of the H0 barcode diagram of the spherical model, which exhibits a more distributed evolution of connected components during the filtration (Fig. 3 and SI Figs S1 and S2). In addition, the low β1 for spherical and hyperbolic networks conveys the lack of 1-holes in these networks. A possible reason for this observation is the incidence of a higher number of 2-simplices in the clique complex K of the spherical and hyperbolic networks in comparison to the ER, WS and BA networks (Table 1 and SI Table S2). Note that the formation of a 2-simplex can potentially fill in a 1-hole, and thus result in a low value for β1 (See Theory section). Such a behaviour is also seen in the H2 barcode diagrams of spherical and hyperbolic networks, wherein the 2-holes have very short persistence since the addition of 3-simplices successively fills in the 2-holes (Fig. 5 and SI Fig. S3). The H3 barcode diagrams of spherical and hyperbolic networks (See SI Figs S3 and S4) also indicate a clear difference in the evolution of their corresponding topological features during the filtration.
The H2 and H3 barcode diagrams of ER, WS and BA networks do not provide any insight into network structure, primarily because these model networks lack the higher-order correlations that are essential for the formation of 2-holes and 3-holes. A visual inspection of the barcode diagrams for the five model networks (Figs 3 and 5 and SI Figs S1–S4) suggests that the different models can be distinguished based on their persistent homology. In the Theory section, we introduced the bottleneck distance, which can be employed to quantify the differences between persistence diagrams obtained from filtrations of the clique complexes corresponding to different model networks. Recall that the persistence diagram of a d-dimensional simplicial complex K is a compact representation of the persistent homology of K which encompasses topological information across all d dimensions. Figure 6 and SI Table S3 give the bottleneck distance between different model networks with the same number of vertices and similar average vertex degree. For each of the five model networks, 10 random samples are generated by fixing the number of vertices n and the other parameters of the model. In Fig. 6 and SI Table S3, we report the distance between two different models as the average of the distance between each of the possible pairs of the 10 sample networks corresponding to the two models, along with the standard error. We find a relatively higher distance between a random instance of a BA network and a random instance of an ER network with the same number of vertices and similar average degree (Fig. 6 and SI Table S3). Similarly, we observe a relatively higher distance between a random instance of a BA network and a random instance of a WS network with similar size and average degree. In contrast, a relatively lower average distance is observed between a random instance of an ER network and a random instance of a WS network with similar size and average degree (Fig. 6 and SI Table S3). These observations indicate a similarity between networks generated by the ER and WS models in terms of their corresponding persistence diagrams, and also show that the BA model exhibits topological properties that are different from those of the other two model networks with the same number of vertices and similar average degree. Moreover, the average distance between a random instance of a spherical network and a random instance of a hyperbolic network with similar size and average degree is very high (Fig. 6 and SI Table S3). The last observation is a reflection of the differences in the persistent homology of the clique complexes corresponding to spherical and hyperbolic networks. Finally, the nature of the differences observed between persistence diagrams of different model networks using the bottleneck distance, as shown in Fig. 6, remains consistent if the 1-Wasserstein or 2-Wasserstein distance is employed in place of the bottleneck distance (data not shown).
Figure 6. Bottleneck distance between persistence diagrams of model networks, namely, the ER model with \(n=1000\) and \(p=0.004\), the WS model with \(n=1000\), \(k=4\) and \(p=0.5\), the BA model with \(n=1000\) and \(m=2\), spherical random graphs produced from the HGG model with \(n=1000\), \(T=0\), \(k=4\) and \(\gamma =\infty \), and hyperbolic random graphs produced from the HGG model with \(n=1000\), \(T=0\), \(k=4\) and \(\gamma =2\). For each of the five model networks, 10 random samples are generated by fixing the number of vertices n and other parameters of the model.
We report the distance (rounded to two decimal places) between two different models as the average of the distance between each of the possible pairs of the 10 sample networks corresponding to the two models, along with the standard error.
Table 1 and SI Table S2 list the empirical data on the number of p-simplices np, the number of critical p-simplices mp that our algorithm achieves, and the p-Betti number βp of the clique complexes corresponding to the five model networks. In Table 2, we report a value μ which indicates the optimality of our algorithm with respect to reducing the number of critical simplices for each model network that has been analyzed. The definition of μ is as follows. $$\mu =\frac{{\sum }_{p=0}^{d}\,{n}_{p}-{\sum }_{p=0}^{d}\,{m}_{p}}{{\sum }_{p=0}^{d}\,{n}_{p}-{\sum }_{p=0}^{d}\,{\beta }_{p}}$$ Here d is the dimension of the corresponding clique complex.
Table 2. The table lists the value of the optimality indicator μ for various model and real networks analyzed here.
The value μ corresponding to a particular discrete Morse function on the clique complex of a given network is an indicator of its optimality with respect to minimizing the number of critical simplices. The value of μ ranges from 0 to 1. \(\mu =1\) indicates that the discrete Morse function achieves exactly the minimum number of critical simplices, thereby corresponding to the most optimal situation, while \(\mu =0\) indicates that all simplices are critical, thereby corresponding to the least optimal case. Note that the value of μ for a particular network increases linearly with a decrease in the number of critical simplices. In Table 2, for model networks, the value reported is the average of μ across 10 samples of each model network for a chosen set of parameter values, along with the corresponding standard deviations. Based on the data presented in Table 2, we report that our algorithm achieves near-optimal results in terms of reducing the number of critical simplices for each of the five model networks analyzed. Moreover, for these model networks, the data presented in SI Table S2 underlines the close proximity of mp to the theoretical lower bound βp across all dimensions \(p=0,1,2,3\) (See Theory section). We remark that the worst case scenario corresponds to all simplices in the clique complex being critical, which is never attained by our algorithm in the analyzed networks.
Comparison of our method with dimension-based filtration in model networks
In SI Figs S5 and S6, we show the H0, H1, H2 and H3 barcode diagrams for model networks with expected average degree 4 obtained using the dimensional filtration scheme used in Horak et al.19. We restrict our investigation to the three-dimensional clique complex while computing the barcode diagrams for model networks using the dimensional filtration scheme of Horak et al.19. In SI Figs S5 and S6, we normalize the filtration index to be in the range 0 to 1, and a normalized filtration index of 1 for a p-hole indicates that it never dies. Moreover, we also report in SI Table S4 and SI Fig. S7 the bottleneck distances between persistence diagrams of model networks obtained by the dimensional filtration scheme used in Horak et al.19, and we find that the resultant barcode diagrams and the bottleneck distances between the persistence diagrams are inconclusive for distinguishing between the five model networks, which have similar size and average vertex degree.
The discrete Morse function produced by our algorithm is empirically near-optimal for the model networks analyzed here, in the sense of minimizing the number of critical simplices (See Table 2). On the other hand, it should be noted that the dimension function used in Horak et al.19 is the most non-optimal in this regard. Moreover, in terms of persistent homology, our methods based on the discrete Morse function constructed using algorithms 1, 2 and 3 have the distinct feature of being able to distinguish between various model networks with an efficient filtration. Although the dimension function used in Horak et al.19 is algorithmically efficient in terms of having a low number of filtration steps, unlike our method it does not conclusively distinguish between different model networks based on the barcode diagrams or the bottleneck distances between the corresponding persistence diagrams. Thus, our discrete Morse function is a good candidate for applications in both the computational aspects of discrete Morse theory and the persistent homology of unweighted networks, and achieves a tradeoff between efficiency and applicability. In this work, we have investigated the persistent homology of unweighted and undirected graphs corresponding to seven real-world networks, and the barcode diagrams for five of these real networks are shown in Figs 4, 5 and SI Fig. S4. Based on the H0 barcode diagrams, the behavior of most real networks considered here can be broadly classified into two categories. Real networks such as the Email communication, the Hamsterster friendship and the Route views exhibit a relatively low number of connected components across the entire range of the filtration (Fig. 4). On the other hand, the two biological networks, namely, the Yeast protein interaction and the Human protein interaction, exhibit a sharp increase in the number of connected components at the later stages of the filtration. The H0 barcode diagrams for the US Power Grid and Euro road networks do not conform to either of the above characterizations. The H0 barcode diagram of the US Power Grid network reveals that though there exists only a single connected component at the end of the filtration, there are a considerable number of non-persisting connected components that appear and subsequently disappear during the filtration (Fig. 4). The H0 barcode diagram of the Euro road network shows a more distributed increase in the number of connected components (data not shown). In the context of the H1 barcode diagrams, the real networks considered here exhibit similar properties, with 1-holes appearing late in the filtration (Fig. 4). The H2 barcode diagrams reveal a lack of 2-holes with long persistence in both biological networks, as well as in the Route views network, the Euro road network and the US Power Grid network (Fig. 5). In contrast, from the H2 and H3 barcode diagrams (Fig. 5, SI Fig. S4) we find that the social network, Hamsterster friendship, and the Email communication network exhibit a relatively high number of 2-holes and 3-holes with longer persistence. Table 2 lists the value of the optimality indicator μ (See Eq. 14) for each of the seven real networks analyzed here. This data indicates near-optimal performance of our algorithm with respect to minimizing the number of critical simplices for each of these seven real networks.
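The μ values reported in Table 2 follow directly from Eq. 14; a small helper is sketched below, using hypothetical per-dimension counts for illustration.

```python
# Optimality indicator mu (Eq. 14) from per-dimension counts of simplices
# (n_p), critical simplices (m_p) and Betti numbers (beta_p).
def optimality_mu(n, m, beta):
    """n, m, beta are lists indexed by dimension p = 0, ..., d."""
    return (sum(n) - sum(m)) / (sum(n) - sum(beta))

# Hypothetical counts; mu = 1 would mean m_p = beta_p in every dimension.
print(optimality_mu(n=[100, 250, 60, 5], m=[3, 10, 4, 1], beta=[1, 8, 2, 0]))
```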
Table 1 also lists the empirical data on the number of critical p-simplices, mp, that our algorithm achieves, and the p-Betti number βp, across each dimension p of the clique complexes corresponding to the seven real networks analyzed here. To conclude, we have proposed a systematic scheme based on discrete Morse theory to study the persistent homology of unweighted and undirected networks. Our methods leverage the concept of critical simplices to permit a reduced filtration scheme while simultaneously admitting a finer inspection of the changes in topology across the filtration of a clique complex corresponding to an unweighted network. Moreover, our proposed algorithm to construct a discrete Morse function on the clique complex of a simple graph achieves a close-to-optimal number of critical simplices for several model and real networks that have been studied here. Furthermore, based on visual representations of persistent homology such as the barcode diagrams, as well as quantitative information in the form of distances between persistence diagrams, our methods successfully distinguish various model networks that exhibit inherently different properties. This motivates the application of our methods to real-world networks. We report the results obtained for seven real-world networks that are well studied in the network science community and observe certain patterns in the evolution of their topological features across the filtration. For instance, both biological networks, namely the Yeast protein interaction network and the Human protein interaction network, exhibit similar characteristics with respect to the H0, H1, H2 and H3 barcode diagrams. Similarly, both the Email network and the Hamsterster friendship network exhibit shared features with respect to the H0, H1, H2 and H3 barcode diagrams that differ from the characteristics of the two biological networks considered here. Our observations hint at the ability and possible applications of our methods to detect and classify real-world networks that are inherently different. Future directions and ongoing work include examining the significance of critical simplices in the context of real-world networks. In other words, we aim to determine whether a critical edge in the context of discrete Morse theory holds any key significance when it is viewed as a link between two real entities in a real-world network. We also intend to explore the presence or absence of a correlation between the notion of critical simplices and network curvature. Since discrete Morse theory captures information about the Euler characteristic of the clique complex corresponding to a graph, the presence of such a correlation could potentially signify a close relationship between the discrete curvature of a graph and its topology, much like in the case of smooth, compact surfaces wherein the Gauss-Bonnet theorem relates the Gaussian curvature of a surface to its Euler characteristic.
All data generated or analysed during this study are included in this article or are available upon request from the corresponding author.
Carlsson, G. Topology and data. Bulletin of the American Mathematical Society 46, 255–308 (2009).
Pranav, P. et al. The topology of the cosmic web in terms of persistent Betti numbers. Monthly Notices of the Royal Astronomical Society 465, 4281–4310 (2016).
Günther, D., Reininghaus, J., Hotz, I. & Wagner, H. Memory-efficient computation of persistent homology for 3d images using discrete Morse theory. In 2011 24th SIBGRAPI Conference on Graphics, Patterns and Images, 25–32 (IEEE, 2011).
Nicolau, M., Levine, A. & Carlsson, G. Topology based data analysis identifies a subgroup of breast cancers with a unique mutational profile and excellent survival. Proceedings of the National Academy of Sciences USA 108, 7265–7270 (2011).
Morse, M. The calculus of variations in the large, vol. 18 (American Mathematical Society, 1934).
Edelsbrunner, H. & Harer, J. Persistent homology-a survey. Contemporary Mathematics 453, 257–282 (2008).
Forman, R. A discrete Morse theory for cell complexes. In Yau, S.-T. (ed.) Geometry, Topology and Physics for Raoul Bott (International Press of Boston, 1995).
Forman, R. Morse theory for cell complexes. Advances in Mathematics 134, 90–145 (1998).
Forman, R. A user's guide to discrete Morse theory. Sém. Lothar. Combin. 48, 1–35 (2002).
Watts, D. J. & Strogatz, S. H. Collective dynamics of small-world networks. Nature 393, 440–442 (1998).
Barabási, A. L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
Albert, R. & Barabási, A. L. Statistical mechanics of complex networks. Reviews of Modern Physics 74, 47–97 (2002).
Newman, M. E. J. Networks: An Introduction (Oxford University Press, 2010).
Bianconi, G. Interdisciplinary and physics challenges of network theory. Europhysics Letters 111, 56001 (2015).
Kartun-Giles, A. P. & Bianconi, G. Beyond the clustering coefficient: A topological analysis of node neighbourhoods in complex networks. Chaos, Solitons and Fractals: X 1(1), 100004 (2019).
Iacopini, I., Petri, G., Barrat, A. & Latora, V. Simplicial models of social contagion. Nature Communications 10(1), 2485 (2019).
Ritchie, M., Berthouze, L. & Kiss, I. Generation and analysis of networks with a prescribed degree sequence and subgraph family: higher-order structure matters. Journal of Complex Networks 5(1), 1–31 (2017).
De Silva, V. & Ghrist, R. Homological sensor networks. Notices of the American Mathematical Society 54 (2007).
Horak, D., Maletić, S. & Rajković, M. Persistent homology of complex networks. Journal of Statistical Mechanics: Theory and Experiment P03034 (2009).
Petri, G., Scolamiero, M., Donato, I. & Vaccarino, F. Topological strata of weighted complex networks. PloS One 8, e66506 (2013).
Petri, G. et al. Homological scaffolds of brain functional networks. Journal of The Royal Society Interface 11, 20140873 (2014).
Wu, Z., Menichetti, G., Rahmede, C. & Bianconi, G. Emergent complex network geometry. Scientific Reports 5, 10073 (2015).
Sizemore, A., Giusti, C. & Bassett, D. Classification of weighted networks through mesoscale homological features. Journal of Complex Networks 5, 245–273 (2016).
Courtney, O. & Bianconi, G. Weighted growing simplicial complexes. Physical Review E 95, 062301 (2017).
Courtney, O. & Bianconi, G. Dense power-law networks and simplicial complexes. Physical Review E 97, 052303 (2018).
Lee, H., Kang, H., Chung, M., Kim, B.-N. & Lee, D. Persistent brain network homology from the perspective of dendrogram. IEEE transactions on medical imaging 31, 2267–2277 (2012).
Freeman, L. C. A set of measures of centrality based on betweenness. Sociometry 40, 35–41 (1977).
Girvan, M. & Newman, M. Community structure in social and biological networks. Proceedings of the National Academy of Sciences USA 99, 7821–7826 (2002).
Sreejith, R. P., Mohanraj, K., Jost, J., Saucan, E. & Samal, A. Forman curvature for complex networks. Journal of Statistical Mechanics: Theory and Experiment P063206 (2016).
Samal, A. et al. Comparative analysis of two discretizations of Ricci curvature for complex networks. Scientific Reports 8, 8650 (2018).
Bubenik, P., Carlsson, G., Kim, P. & Luo, Z. Statistical topology via Morse theory persistence and nonparametric estimation. Algebraic methods in statistics and probability II 516, 75–92 (2010).
Mischaikow, K. & Nanda, V. Morse theory for filtrations and efficient computation of persistent homology. Discrete & Computational Geometry 50(2), 330–353 (2013).
Delgado-Friedrichs, O., Robins, V. & Sheppard, A. Morse theory and persistent homology for topological analysis of 3d images of complex materials. In 2014 IEEE International Conference on Image Processing (ICIP), 4872–4876 (IEEE, 2014).
Bollobas, B. Modern Graph Theory (Springer, 1998).
Zomorodian, A. & Carlsson, G. Computing persistent homology. Discrete & Computational Geometry 33, 249–274 (2005).
Munkres, J. Elements of algebraic topology (CRC Press, 2018).
Cohen-Steiner, D., Edelsbrunner, H. & Harer, J. Stability of persistence diagrams. Discrete & Computational Geometry 37, 103–120 (2007).
Kerber, M., Morozov, D. & Nigmetov, A. Geometry helps to compare persistence diagrams. J. Exp. Algorithmics 22, 1.4:1–1.4:20 (2017).
Chazal, F., Cohen-Steiner, D., Guibas, L. J. & Oudot, S. Stability of persistence diagrams revisited. INRIA Research report RR-6568, available at: https://hal.inria.fr/inria-00292566v1/ (2008).
Erdös, P. & Rényi, A. On the evolution of random graphs. Bull. Inst. Internat. Statist 38, 343–347 (1961).
Krioukov, D., Papadopoulos, F., Kitsak, M., Vahdat, A. & Boguná, M. Hyperbolic geometry of complex networks. Physical Review E 82, 036106 (2010).
Aldecoa, R., Orsini, C. & Krioukov, D. Hyperbolic graph generator. Computer Physics Communications 196, 492–496 (2015).
Jeong, H., Mason, S. P., Barabási, A. L. & Oltvai, Z. N. Lethality and centrality in protein networks. Nature 411, 41–42 (2001).
Rual, J. F. et al. Towards a proteome-scale map of the human protein–protein interaction network. Nature 437, 1173–1178 (2005).
Leskovec, J., Kleinberg, J. & Faloutsos, C. Graph evolution: Densification and shrinking diameters. ACM Transactions on Knowledge Discovery from Data (TKDD) 1, 2 (2007).
Šubelj, L. & Bajec, M. Robust network community detection using balanced propagation. European Physical Journal B 81, 353–362 (2011).
Guimera, R., Danon, L., Diaz-Guilera, A., Giralt, F. & Arenas, A. Self-similar community structure in a network of human interactions. Physical Review E 68, 065103 (2003).
Kunegis, J. Konect: The Koblenz network collection. In Proceedings of the 22nd International Conference on World Wide Web companion, 1343–1350 (ACM, New York, NY, USA, 2013).
Lewiner, T., Lopes, H. & Tavares, G. Toward optimality in discrete Morse theory. Experimental Mathematics 12, 271–285 (2003).
Maria, C., Boissonnat, J.-D., Glisse, M. & Yvinec, M. The GUDHI Library: Simplicial complexes and persistent homology. In International Congress on Mathematical Software, 167–174 (Springer, 2014).
Ghrist, R. Barcodes: the persistent topology of data. Bulletin of the American Mathematical Society 45, 61–75 (2008).
We thank Amritanshu Prasad for fruitful discussions. A.S. would like to acknowledge support from the Max Planck Society, Germany, through the award of a Max Planck Partner Group in Mathematical Biology, and I.R.
from the Science and Engineering Research Board (SERB) of the Department of Science and Technology (DST) India through the award of a MATRICS grant [MTR/2017/000835]. We thank the anonymous reviewers for their comments which have helped improve this manuscript.
The Institute of Mathematical Sciences (IMSc), Homi Bhabha National Institute (HBNI), Chennai, 600113, India: Harish Kannan, Indrava Roy & Areejit Samal
Department of Applied Mathematics, ORT Braude College, Karmiel, 2161002, Israel: Emil Saucan
Department of Electrical Engineering, Technion, Israel Institute of Technology, Haifa, 3200003, Israel
Max Planck Institute for Mathematics in the Sciences, Leipzig, 04103, Germany: Areejit Samal
H.K., I.R. and A.S. designed the study. H.K. performed the simulations. H.K., E.S., I.R. and A.S. analyzed results. H.K., I.R. and A.S. wrote the manuscript. All authors reviewed and approved the manuscript.
Correspondence to Indrava Roy or Areejit Samal.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Figures
Supplementary Tables
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Kannan, H., Saucan, E., Roy, I. et al. Persistent homology of unweighted complex networks via discrete Morse theory. Sci Rep 9, 13817 (2019). https://doi.org/10.1038/s41598-019-50202-3
High resolution hemodynamic profiling of murine arteriovenous fistula using magnetic resonance imaging and computational fluid dynamics Daniel Pike1,2, Yan-Ting Shiu1,2, Maheshika Somarathna3, Lingling Guo3, Tatyana Isayeva3, John Totenhagen4 & Timmy Lee ORCID: orcid.org/0000-0002-5305-72683,5 The Correction to this article has been published in Theoretical Biology and Medical Modelling 2019 16:8 Arteriovenous fistula (AVF) maturation failure remains a major cause of morbidity and mortality in hemodialysis patients. The two major etiologies of AVF maturation failure are early neointimal hyperplasia development and persistent inadequate outward remodeling. Although hemodynamic changes following AVF creation may impact AVF remodeling and contribute to neointimal hyperplasia development and impaired outward remodeling, detailed AVF hemodynamics are not yet fully known. Since murine AVF models are valuable tools for investigating the pathophysiology of AVF maturation failure, there is a need for a new approach that allows the hemodynamic characterization of murine AVF at high resolutions. This methods paper presents a magnetic resonance imaging (MRI)-based computational fluid dynamic (CFD) method that we developed to rigorously quantify the evolving hemodynamic environment in murine AVF. The lumen geometry of the entire murine AVF was reconstructed from high resolution, non-contrast 2D T2-weighted fast spin echo MRI sequence, and the flow rates of the AVF inflow and outflow were extracted from a gradient echo velocity mapping sequence. Using these MRI-obtained lumen geometry and inflow information, CFD modeling was performed and used to calculate blood flow velocity and hemodynamic factors at high resolutions (on the order of 0.5 μm spatially and 0.1 ms temporally) throughout the entire AVF lumen. We investigated both the wall properties (including wall shear stress (WSS), wall shear stress spatial gradient, and oscillatory shear index (OSI)) and the volumetric properties (including vorticity, helicity, and Q-criterion). Our results demonstrate increases in AVF flow velocity, WSS, spatial WSS gradient, and OSI within 3 weeks post-AVF creation when compared to pre-surgery. We also observed post-operative increases in flow disturbances and vortices, as indicated by increased vorticity, helicity, and Q-criterion. This novel protocol will enable us to undertake future mechanistic studies to delineate the relationship between hemodynamics and AVF development and characterize biological mechanisms that regulate local hemodynamic factors in transgenic murine AVF models. Hemodialysis vascular access dysfunction remains the Achilles heel of the hemodialysis procedure. The arteriovenous fistula (AVF) is the preferred choice of vascular access for hemodialysis patients, but AVF maturation failure remains a critically important clinical problem in end stage renal disease (ESRD) patients on hemodialysis. Up to 60% of newly created AVFs did not successfully mature to become usable [1]. The most common angiographic lesion present in AVF maturation failure is stenosis at the juxta-anastomotic region of the AVF. The two main etiologies of AVF maturation failure are early aggressive neointimal hyperplasia (NH) development and persistent inadequate outward remodeling of the AVF, both of which contribute to formation of venous stenosis at the juxta-anastomotic region of the AVF [2, 3]. AVF maturation failure results in dialysis therapy with a tunneled dialysis catheter or a synthetic AV graft (AVG). 
Mortality in patients dialyzing with catheters has been reported to be 1.5 times greater than that in patients dialyzing with AVF [4, 5]. When compared to matured AVFs, synthetic AVGs have high failure rates due to stenosis, reported to be 50 and 75% at 1 and 2 years after implantation, respectively [6]. Thus, there is an unmet clinical need to improve our understanding of AVF maturation failure and devise a strategy to improve maturation. Local wall hemodynamic factors likely play a key role in successful or failed AVF development. The creation of the arteriovenous anastomosis (connection of low-pressure vein to the high-pressure arterial system) results in an immediate increase in blood flow and wall shear stress (WSS) through the AVF inflow artery and outflow vein. Computational fluid dynamic (CFD) modeling of WSS has previously been reported in porcine AVF models [7, 8] and recently in human AVF [9, 10], using non-contrast MRI, computed tomography scans, or three-dimensional ultrasound imaging protocols to obtain information needed for CFD modeling. These previous human and porcine CFD studies had spatial and temporal resolutions on the order of 5 μm and 1 ms, respectively. A murine model has the advantage of readily available genetic manipulation by knockout and overexpression to investigate the mechanisms of AVF maturation failure. This genetic modification cannot be readily performed in porcine or other large animal models. Since murine AVF models are valuable tools for investigating the pathophysiology of AVF maturation failure, there is a need for a new approach that allows the hemodynamic characterization of murine AVF at higher resolutions (on the order of 0.5 μm and 0.1 ms) than those described in the literature. In this manuscript, we report the technical development of non-contrast MRI imaging of a murine AVF model and the subsequent CFD modeling. Our goal is to develop a technique that can be utilized in future transgenic AVF rodent studies in order to elucidate causal mechanisms of AVF failure focusing on local hemodynamic factors that change following AVF creation. To our knowledge, this methods paper is the first to date to present both non-contrast MRI scans and CFD modeling in a murine AVF model. Surgical arteriovenous fistula creation All animal studies and experiments were approved by the University of Alabama at Birmingham Institutional Animal Care and Use Committee (IACUC) and performed in accordance with National Institutes of Health guidelines. Our studies utilized male C57BL/6J mice (n = 3, Taconic Biosciences, Hudson, NY) aged 8-12 weeks. After mice (n = 2) with AVF were anesthetized with isoflurane, buprenorphine, xylazine, and ketamine, a midline incision of the surgical area was performed. Using a surgical microscope, the right carotid artery and jugular vein were then exposed. Using 10-0 monofilament microsurgical sutures, a side-to-end anastomosis was created using the carotid artery (side) and jugular vein (end) (Fig. 1a). After unclamping, dilation of the vein and patency were confirmed visually. The mice were maintained on a warming blanket following surgery and buprenorphine was administered twice, 12 hours apart. NH was consistently observed by day 21 post-op (Fig. 1b). The control blood vessels were the pre-surgical carotid artery and jugular vein (n = 1), and the contralateral non-surgery carotid artery and jugular vein in the AVF mice at day 7 (n = 1) and day 21 (n = 1) post-operatively.
Surgical procedure and histology: (a) Arteriovenous fistula (AVF) mouse model using jugular vein (end) to carotid artery (side) configuration. Asterisk (*) depicts the arteriovenous anastomosis. The white arrow indicates the direction of blood flow in the venous outflow tract. (b) Representative histology of AVF dysfunction (Movat's stain). Neointimal hyperplasia (NH) was present at 21 Days MRI imaging acquisition: time of flight angiography, black blood imaging, and velocity mapping Mice were studied with anatomical imaging and angiography velocity mapping techniques to determine AVF lumen geometry and flow characteristics. Each MRI session lasted approximately 2 hours per animal. During this time, the animal was anesthetized with isoflurane gas at a concentration of 1.5%, and respiration and electrocardiogram (ECG) signals were monitored during MRI acquisition with a physiological monitoring system (SA Instruments Inc., Stony Brook, NY). Mice were imaged in supine or prone position in an animal bed system with integrated tubing for the circulation of heated water (Bruker Biospin, Billerica, MA) to maintain the mouse at 36-38 °C (Fig. 2b). Representative magnetic resonance images of an AVF mouse. (a) and (b) 9.4 T MRI Imaging of rodents. (c) Turbo spin-echo, (d) phase contrast, and (e) time of flight images MRI scanning was conducted using a 9.4 Tesla Bruker Biospec horizontal 20 cm bore instrument with Paravision 5.1 software (Bruker Biospin, Billerica, MA) (Fig. 2a). A 72 mm internal diameter birdcage volume coil was used for signal excitation and a 12 mm diameter surface coil used for reception (Doty Scientific Inc., Columbia, SC). Scout images were acquired in the coronal, sagittal, and axial dimensions to verify surface coil placement and the location and orientation of the AVF vessel. The scout images were acquired with a T2-weighted RARE (Rapid Acquisition with Relaxation Enhancement) sequence and the following imaging parameters: TR (Repetition Time) 2000 ms, TE (Echo Time) 24 ms, RARE factor 4, 1 average, matrix 128×256, FOV (Field Of View) 25.6 mm×51.2 mm for an in-plane resolution of 0.2 mm. 13, 19, and 25 contiguous 1 mm thick slices were acquired, for coronal, sagittal, and axial orientations respectively. A 2D time-of-flight angiography sequence based on the FLASH method was used to obtain a global view of the AVF geometry (Fig. 2e) in order to orient the 2D T2-weighted fast spin echo sequence. The following imaging parameters were used: TR 18 ms, TE 4 ms, 8 averages, matrix 171 × 171, FOV 25.6 mm × 25.6 mm for an in-plane resolution of 0.15 mm. 50 overlapping axial slices were acquired with a thickness of 0.5 mm and between-slice spacing of 0.35 mm. A 2D T2-weighted fast spin echo sequence was used with a black-blood double inversion preparation to reduce signal from the blood within the vessels and allow for better visualization of the vessel lumen. The imaging parameters used were: TR 10000 ms, TE 33 ms, 4 averages, matrix 256 × 256, FOV 25.6 mm × 25.6 mm for an in-plane resolution of 0.1 mm. 35 contiguous 0.5 mm thick axial slices were acquired. This scan was used to create the 3D geometry for CFD simulation, and an example is shown in Fig. 2c. 
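As a quick consistency check on these acquisition parameters, the in-plane resolution of each 2D sequence is simply the field of view divided by the matrix size. The snippet below is purely illustrative (it is not part of the Paravision acquisition protocol); the sequence labels and dictionary layout are assumptions made for this sketch.

```python
# In-plane resolution = FOV / matrix size, per encoding direction.
# Parameter values are those quoted above for the three anatomical sequences.
sequences = {
    "RARE scout":          {"fov_mm": (25.6, 51.2), "matrix": (128, 256)},
    "2D time-of-flight":   {"fov_mm": (25.6, 25.6), "matrix": (171, 171)},
    "T2w fast spin echo":  {"fov_mm": (25.6, 25.6), "matrix": (256, 256)},
}

for name, p in sequences.items():
    res = [fov / n for fov, n in zip(p["fov_mm"], p["matrix"])]
    print(f"{name}: {res[0]:.2f} mm x {res[1]:.2f} mm")
# The T2-weighted scan works out to 0.10 mm x 0.10 mm, matching the stated
# 0.1 mm in-plane resolution of the images used to build the 3D geometry.
```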
A gradient echo velocity mapping sequence based on the use of bipolar gradient pulses to produce a flow-dependent signal phase was used to obtain quantitative measures of the blood flow at 3 locations in the vicinity of the fistula (the feeding and draining artery, and fistula vein), and at multiple time points within the cardiac cycle. Respiratory and ECG gating were used to minimize image artifacts due to motion. The imaging parameters used were: TR 15 ms, TE 6 ms, 20 averages, matrix 150 × 256, FOV 15.0 mm × 25.6 mm for an in-plane resolution of 0.1 mm. Velocity maps were acquired at 8 frames during the cardiac cycle, with a 16 ms period between frames. A single 1.5 mm thick slice was collected at each of 3 locations in the vicinity of the fistula. An example was shown in Fig. 2d. AVF lumen segmentation, reconstruction, and meshing 3D geometric lumen reconstructions were created from multislice 2D T2-weighted fast spin echo sequences with a black-blood double inversion preparation (as detailed in MRI Imaging Acquisition above) using Amira 5.2.1 (Visage Imaging, Inc., San Diego, CA). Image data was segmented manually by intensity thresholding using the blowout tool. The resulting 2D sections were reconstructed to generate a 3D surface in STL format, then smoothed in Amira. The AVF mice (n = 2) used for developing this MRI-based CFD approach each had a side branch approximately 4 - 5 mm away from the anastomosis (see "AVF vein branch" in Fig. 2e), and this branch was included in CFD modeling (see CFD Modeling below). A high-resolution tetrahedral mesh was created from the STL geometry using Ansys ICEM 15.0, with the number of tetrahedra in the final meshes approximately 1.5 million, with an average length of 0.37 μm. This mesh density was determined as described in the CFD Modeling section below. Measurement of AVF lumen area Centerlines of the entire murine AVF lumen, from the final smoothed volumetric mesh mentioned in the previous section, were calculated at 0.1 mm intervals using Vascular Modeling Toolkit (VMTK, www.vmtk.org). Next, lumen areas perpendicular to the centerline of the vessel lumen were calculated using a MATLAB script [9]. The average cross-sectional area for the first 4 mm of AVF vein and inflow artery (starting at the anastomosis and moving toward the heart), as well as the area averaged over a 4 mm segment of the pre-surgical and contralateral non-surgery vessels was calculated along the centerline of the vessel lumen and is used to standardize the regions between vessels for comparison. We chose a 4 mm length because the side branch starts between 4 and 5 mm downstream to the anastomosis, and here we focus on the main and proximal AVF vein. AVF blood flow extraction AVF blood flow velocities were extracted from the gradient echo velocity mapping sequence (as detailed in MRI Imaging Acquisition above) using Segment 1.9 R2761 (http://segment.heiberg.se) [11] at three locations in the vicinity of the fistula: the feeding (proximal, inflow) and draining (distal, outflow) artery, and fistula vein. CFD Modeling CFD simulations were performed in ANSYS Fluent 15.0 (ANSYS, Inc., Canonsburg, PA), using the 3D volumetric meshes created previously in the AVF Lumen Segmentation, Reconstruction, and Meshing section. The entire simulation domain includes the inflow artery, the outflow artery, the main and proximal AVF vein (the segment of the AVF vein upstream to the side branch), the distal AVF vein (the segment of the AVF vein downstream to the side branch) and the side branch. 
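As an aside on the lumen-area measurement described above (cross-sections taken perpendicular to the VMTK centerline at 0.1 mm intervals and averaged over the first 4 mm of vessel), a minimal sketch of that calculation is given below. This is not the MATLAB script cited in the text [9]; it assumes that each cross-section has already been extracted as a closed planar contour, and all function and variable names are illustrative.

```python
import numpy as np

def cross_section_area(contour_xy):
    """Area of one closed, planar lumen cross-section via the shoelace formula.

    contour_xy : (n_points, 2) array of in-plane coordinates (mm) of a contour
    sampled perpendicular to the centerline. Assumed to be pre-extracted; the
    array layout is an assumption for this sketch.
    """
    x, y = contour_xy[:, 0], contour_xy[:, 1]
    return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def mean_area_first_4mm(centerline_dist_mm, contours):
    """Average cross-sectional area over the first 4 mm from the anastomosis.

    centerline_dist_mm : distance of each section along the centerline
    (0.1 mm spacing, anastomosis at 0); contours : matching list of (n, 2) arrays.
    """
    areas = np.array([cross_section_area(c) for c in contours])
    mask = (centerline_dist_mm >= 0.0) & (centerline_dist_mm <= 4.0)
    return areas[mask].mean()
```

Areas computed this way in mm2 correspond to the μm2 values reported in the Results after multiplying by a factor of one million.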
The inlet boundary condition (inflow artery) and two outlet boundary conditions (the distal AVF vein and the side branch) were prescribed as the pulsatile cross-sectional average blood flow velocity calculated from the AVF Blood Flow Extraction section. The remaining outlet boundary condition (outflow artery) was set to zero stress. No-slip boundary conditions were prescribed at the vessel wall. The Reynolds numbers ranged between 3 and 60 in our simulation domain and therefore, the laminar flow assumption was used. The first cell height in our models was prescribed to be 0.1 μm, which was 0.4 – 5.9% of the boundary layer thickness. We also assumed that blood was incompressible and Newtonian, and that the vessel wall was rigid and immobile [9]. With these assumptions, the Navier-Stokes equations reduced to the form in Eq. 1, and the Equation of Continuity reduced to the form in Eq. 2, where ρ is blood density (1050 kg/m3), u is blood velocity (m/s), t represents time (s), p is blood pressure (100 mmHg), and μ is blood dynamic viscosity (0.0035 Pa · s).
$$ \rho \left( \frac{\partial u}{\partial t} + u \cdot \nabla u \right) = -\nabla p + \mu \nabla^{2} u $$
$$ \nabla \cdot u = 0 $$
Similarly to a previous study [9], time-dependent terms were discretized implicitly with second-order accuracy, while the Navier-Stokes equations were discretized with a second-order upwind scheme. A segregated solver was used to solve the Navier-Stokes and continuity equations, with pressure-velocity coupling defined using the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithm. All simulations were performed as pulsatile, with approximately 1200 time-steps being used over the length of the cardiac cycle (~120 ms, for a step size of 0.1 ms). Each CFD simulation was run for at least 3 cardiac cycles. All results are from the third cycle. Convergence criteria were set as x-, y-, and z-residuals of 1x10-5, and a total residual of 1x10-5. To determine the mesh independence, we performed the simulations of the mouse AVF model (Day 7) at 0.5×106 tetrahedra (average length 0.52 μm), 1.5×106 tetrahedra (average length 0.37 μm), and 15×106 tetrahedra (average length 0.23 μm); the time step was 0.1 ms for all. To determine the time step independence, we performed the simulations of the mouse AVF model (Day 7) at three time steps: 1, 0.1, and 0.01 ms; the mesh density was 1.5×106 tetrahedra for all. Results were considered independent when the differences in the simulation results were <5% between two consecutive simulations. This was achieved between 1.5×106 and 15×106 tetrahedra and between 0.1 and 0.01 ms. Therefore, 1.5×106 tetrahedra and 0.1 ms were chosen for all simulations. As compared to previous CFD simulations for human or pig AVF, our step size is an order of magnitude smaller (0.1 ms vs. 1 ms), and so is our length scale (0.5 μm vs. 5 μm). This difference is critically important for investigating the murine geometry and hemodynamics, which are much different from porcine models or human patients studied previously in the literature [7–10]. Post-CFD processing for hemodynamic parameters All hemodynamic parameters were calculated in Tecplot 360 (Tecplot, Inc., Bellevue, WA). The hemodynamic parameters of interest were wall shear stress (WSS) magnitude (Eq. 3), spatial wall shear stress gradient (WSSg) (Eq. 4), oscillatory shear index (OSI) (Eq. 5), vorticity (Ω) (Eq. 6), helicity (Eq. 7), and Q-criterion (Eq. 8),
where τw,x, τw,y and τw,z represent WSS in the x, y, and z direction, respectively, and V represents the volume in a closed surface. Shear stress was calculated in Tecplot 360 as a function of the velocity at each volumetric mesh node. WSS was then calculated as the root-mean-square average of shear stress at the vessel wall. OSI was calculated from WSS over the cardiac cycle. Vorticity is a derived measurement of the local rotation of fluid flow at a point in a flow field, reported in 1/s and calculated from the spatial velocity gradient at any location in the flow field [12] (Eq. 6). Helicity is a derived measurement of the local 'twisting' of flow (Eq. 7). Specifically, helicity measures the local linkage or knottedness of streamlines, either in a right-handed helical pattern (positive) or a left-handed helical pattern (negative) [13]. Q-criterion is a derived measurement used to identify local vortices in a flow field. Q-criterion was calculated from both the vorticity and strain rate (S) within the flow field (Eq. 8). Strain rate is a measurement of the change in deformation (without changing volume) with respect to time. A vortex is defined as a location with positive Q-criterion. This indicates that the magnitude of vorticity (rotation) is greater than shear forces (deformation) at that region. This is characteristic of vortices, which tend to have much greater rotational motion as compared to linear motion (deformation). Negative Q-criterion indicates a non-vortex region; this is not necessarily laminar or undisturbed, but has low vorticity compared to strain rate [14].
$$ \mathrm{WSS} = \left( \tau_{w,x}^{2} + \tau_{w,y}^{2} + \tau_{w,z}^{2} \right)^{\frac{1}{2}} $$
$$ \mathrm{WSSg} = \left( \left( \frac{\partial \tau_{w,x}}{\partial x} \right)^{2} + \left( \frac{\partial \tau_{w,y}}{\partial y} \right)^{2} + \left( \frac{\partial \tau_{w,z}}{\partial z} \right)^{2} \right)^{\frac{1}{2}} $$
$$ \mathrm{OSI} = 0.5 \left( 1 - \frac{ \left| \int_{0}^{t} \tau_{w} \, dt \right| }{ \int_{0}^{t} \left| \tau_{w} \right| \, dt } \right) $$
$$ \mathrm{Vorticity}\ (\Omega) = \nabla \times u $$
$$ \mathrm{Helicity} = \int_{V} u \cdot \left( \nabla \times u \right) dV $$
$$ Q\text{-criterion} = \frac{1}{2} \left( \Omega^{2} - S^{2} \right) $$
Each wall surface hemodynamic parameter (WSS, WSSg, OSI) is a variable along the vessel wall both axially and circumferentially; the cardiac-cycle average was calculated for WSS and WSSg, and their values at systole and diastole of the cardiac cycle were also given in the text or figure legends. Values of WSS, WSSg, and OSI were averaged over the first 4 mm of the AVF vein and inflow artery, starting from the anastomosis, as well as averaged over 4-mm segments of contralateral non-surgery vessels. Each lumen volumetric hemodynamic parameter (vorticity, helicity, Q-criterion) is a variable throughout the vessel lumen axially; isosurfaces were calculated at three discrete values (indicated in each figure). The single-color 3D surface (the isosurface) in the vessel lumen represents all points in the vessel lumen with a value equal to the specified number.
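To make two of these definitions concrete, a minimal NumPy sketch is given below. It is not the Tecplot 360 post-processing used in the study; the inputs (a time series of WSS vectors at a single wall node and a 3 × 3 velocity-gradient tensor at a single interior node) and all names are illustrative assumptions, and the Q-criterion is written in the common tensor form 0.5(|Ω|² − |S|²), which may differ by a constant factor from the vorticity-vector form of Eq. 8.

```python
import numpy as np

def osi(tau_w, dt):
    """Oscillatory shear index (Eq. 5) for one wall node.

    tau_w : (n_steps, 3) array, WSS vector at each solver time step over one
    cardiac cycle; dt : time step in seconds. Illustrative inputs only.
    """
    num = np.linalg.norm(tau_w.sum(axis=0) * dt)       # |integral of tau_w dt|
    den = (np.linalg.norm(tau_w, axis=1) * dt).sum()   # integral of |tau_w| dt
    return 0.5 * (1.0 - num / den)

def q_criterion(grad_u):
    """Q-criterion (cf. Eq. 8) from the velocity-gradient tensor at one node.

    grad_u : 3x3 array with grad_u[i, j] = du_i / dx_j.
    """
    S = 0.5 * (grad_u + grad_u.T)    # strain-rate tensor (symmetric part)
    W = 0.5 * (grad_u - grad_u.T)    # rotation-rate tensor (antisymmetric part)
    # Positive Q marks regions where rotation dominates strain, i.e. vortices.
    return 0.5 * (np.sum(W * W) - np.sum(S * S))
```

A steady, unidirectional WSS history gives OSI = 0, whereas shear that spends part of the cycle reversed pushes OSI towards its maximum of 0.5, consistent with the venous peaks of about 0.4 reported below.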
Figure 3 shows the cross-sectional area of a mouse AVF at day 7 and day 21 post-operatively, as a function of distance from the AVF anastomosis. The cross-sectional area of control vessels (pre-surgery and contralateral non-surgery vessels) is also shown in Fig. 3. We found that, after 21 days, the lumen size of the contralateral non-surgery vessels in the AVF mouse (for the first 4 mm) is similar to pre-surgical size (0.32 ± 0.05×106 μm2 pre-surgical artery vs. 0.30 ± 0.04×106 μm2 contralateral artery; 2.02 ± 0.29×106 μm2 pre-surgical vein vs. 1.77 ± 0.30×106 μm2 contralateral vein). On the AVF side (for the first 4 mm), the inflow artery area at both 7 and 21 days post-operatively (0.51 ± 0.15×106 μm2 on day 7; 0.31 ± 0.12×106 μm2 on day 21) is similar to control arteries, whereas the AVF vein has a trend of increased cross-sectional area within 4 mm proximal to the anastomosis (3.39 ± 0.80×106 μm2 on day 7; 3.90 ± 1.44×106 μm2 on day 21). Further, this increase was not homogeneous along the main and proximal AVF vein (Fig. 3). The increase adjacent to the anastomosis was small, which was likely a result of the restriction of the suture. Lumen expansion at 1 mm was the largest, and then the expansion became smaller. This heterogeneous lumen expansion is also common in human AVFs [9]. Cross-sectional area of mouse AVF vessels. (a) The 3D reconstruction of a representative mouse AVF. Black arrows indicate the direction of blood flow. (b) The anastomosis is used as the anatomical landmark and set to be 0. The positive direction is toward the heart, and the negative direction is away from the heart. The blue line is the average area of control veins (i.e., pre-surgery and contralateral non-surgery veins): 1.55 ± 0.42×106 μm2. The red line is the average area of control arteries (i.e., pre-surgery and contralateral non-surgery arteries): 0.29 ± 0.05×106 μm2 Figure 4 shows velocity streamlines, Fig. 5 shows the resulting cardiac-cycle averaged WSS, and Fig. 6 shows the spatial gradient of WSS (WSSg), all of which are averaged over a cardiac cycle. Additional files 1, 2 and 3 show the videos of the time course of velocity streamlines, WSS, and WSSg, respectively, over a cardiac cycle. Figure 7 shows OSI, and Additional file 4 shows the rotated video of OSI. Velocity streamlines for the AVF and contralateral non-surgery controls. The color bars are adjusted to emphasize the velocity distributions in the vein (a) and artery (b). The velocity was averaged over a cardiac cycle. Black arrows indicate the direction of overall blood flow. Black arrow heads indicate flow jet into the AVF vein, with velocity of this flow jet increasing from day 7 to day 21. Red arrow heads indicate recirculating disturbed flow in the AVF vein near the anastomosis WSS color maps for the AVF and contralateral non-surgery controls. The color bars are adjusted to emphasize the WSS distributions in the vein (a) and artery (b). The WSS on the maps was averaged over a cardiac cycle. Peak WSS in the AVF vein (black arrow heads) was approximately 200 dyne/cm2, while peak WSS in the inflow artery (red arrow heads) was approximately 500 dyne/cm2 Spatial WSSg color maps for the AVF and contralateral non-surgery controls. The color bars are adjusted to emphasize the WSSg distributions in the vein (a) and artery (b). The WSSg was averaged over a cardiac cycle. Peak WSSg in the AVF vein (black arrow heads) and in the inflow artery (red arrow head) was approximately 2×106 dyne/cm3 OSI color maps for the AVF and contralateral non-surgery controls. The color bars are adjusted to emphasize the OSI distributions in the vein (a) and artery (b).
Peak OSI in the AVF vein (black arrow heads) was approximately 0.4, while peak OSI in the inflow artery (red arrow head) was approximately 0.01 Additional file 1: Video of day 21 AVF velocity streamlines over a cardiac cycle. The color bars are adjusted to emphasize the velocity distribution in the vein. See Fig. 4a for the labeling of the fistula vein and anastomosis. The velocity plot in the lower left indicates the velocity boundary conditions used in the CFD simulations. (MOV 1698 kb) Additional file 2: Video of day 21 AVF WSS color map over a cardiac cycle. The color bars are adjusted to emphasize the WSS distribution in the vein. See Fig. 5a for the labeling of the fistula vein and anastomosis. The velocity plot in the lower left indicates the velocity boundary conditions used in the CFD simulations. (MOV 622 kb) Additional file 3: Video of day 21 AVF WSSg color map over a cardiac cycle. The color bars are adjusted to emphasize the WSSg distribution in the vein. See Fig. 6a for the labeling of the fistula vein and anastomosis. The velocity plot in the lower left indicates the velocity boundary conditions used in the CFD simulations. (MOV 917 kb) Additional file 4: Video of day 21 AVF OSI color map rotated. The color bars are adjusted to emphasize the OSI distribution in the vein. See Fig. 7a for the labeling of the fistula vein and anastomosis. (MOV 8938 kb) The AVF vein had increased WSS, WSSg, and OSI as compared to their respective contralateral non-surgery veins (Figs. 5, 6 and 7). Note that this increase is not homogeneous throughout the AVF vein. Specifically, at 21 days post-operatively, the velocity, WSS, and WSSg reach peaks of approximately 50 cm/s, 200 dyne/cm2, and 2×106 dyne/cm3, respectively, in a flow jet along the outer AVF vein wall (Figs. 4, 5 and 6), as compared to peaks of approximately 4 cm/s, 10 dyne/cm2, and 2.5×104 dyne/cm3 in the contralateral non-surgery veins. Additionally, the expanded AVF vein has distinct recirculating flow (Fig. 4, red arrow head), coinciding with the smaller flow velocity and increased OSI (black arrows in Fig. 7, peaks of 0.4) in that region. At 21 days post-operatively, when we compared the inflow artery (peaks of approximately 140 cm/s, 500 dyne/cm2, 2×106 dyne/cm3 and 0.01 for velocity, WSS, WSSg and OSI, respectively) and the contralateral non-surgery artery (peaks of approximately 6 cm/s, 20 dyne/cm2, 2.5×104 dyne/cm3 and 1x10-4 for velocity, WSS, WSSg and OSI, respectively), we found much greater values of all four wall surface hemodynamic factors in the inflow artery. The velocity in the inflow artery is the highest near the AV anastomosis (Fig. 4). When compared to the contralateral non-surgery artery, the averaged WSS and WSSg were higher in the inflow artery (Figs. 5 and 6). Figures 8, 9 and 10 show lumen volumetric hemodynamic parameters (vorticity, helicity, Q-criterion) that are used to describe flow disturbances and vortices. Additional files 5, 6 and 7 show the rotated videos of vorticity, helicity, and Q-criterion, respectively. At 21 days post-operatively, increased vorticity is clear in the AVF vein (peaks of 500 1/s, black arrow heads in Fig. 8) as compared to the contralateral non-surgery vein (20 1/s). In the same vein region, helicity (±200 cm/s2 in Fig. 9) and Q-criterion (4000 1/s2 in Fig. 10) are also elevated as compared to the contralateral non-surgery vein (helicity 0 cm/s2; Q-criterion 200 1/s2). Vorticity isosurfaces for the AVF and contralateral non-surgery controls.
The colors are adjusted to emphasize the vorticity distributions in the vein (a) and artery (b). The AVF vein had substantial regions where vorticity was 500 1/s (black arrow heads), and the inflow artery had substantial regions where vorticity was 5000 1/s (red arrow heads) Helicity isosurfaces for the AVF and contralateral non-surgery controls. The colors are adjusted to emphasize the helicity distributions in the vein (a) and artery (b). Helicity in the AVF vein (black arrow heads) and inflow artery (red arrow head) had extensive regions equal to ±200 cm/s2 Q-criterion for the AVF and contralateral non-surgery controls. The colors are adjusted to emphasize the Q-criterion distributions in the vein (a) and artery (b). The AVF vein had regions where Q-criterion was 4000 1/s2 (black arrow heads), and the inflow artery had regions where Q-criterion was 5×105 1/s2 (red arrow head) Additional file 5: Video of day 21 AVF time-averaged vorticity isosurfaces rotated. The isosurfaces are selected to emphasize the vorticity distribution in the vein. See Fig. 8a for the labeling of the fistula vein and anastomosis. (MOV 8810 kb) Additional file 6: Video of day 21 AVF time-averaged helicity isosurfaces rotated. The isosurfaces are selected to emphasize the helicity distribution in the vein. See Fig. 9a for the labeling of the fistula vein and anastomosis. (MOV 8693 kb) Additional file 7: Video of day 21 AVF time-averaged Q-criterion isosurfaces rotated. The isosurfaces are selected to emphasize the Q-criterion distribution in the vein. See Fig. 10a for the labeling of the fistula vein and anastomosis. (MOV 8783 kb) At 21 days post-operatively, when we compared the inflow artery (vorticity peaks of 5000 1/s; helicity peaks of ±200 cm/s2; Q-criterion peaks of 5x105 1/s2) and the contralateral non-surgery artery (vorticity peaks of 500 1/s; helicity peaks of 0 cm/s2; Q-criterion peaks of 200 1/s2), we found markedly increased values of vorticity, helicity, and Q-criterion in the inflow artery. Nearly 70% of end stage renal disease patients utilize hemodialysis as their renal replacement modality of choice [15]. This population of patients requires a functional vascular access to obtain successful long-term dialysis therapy. The recommended vascular access for hemodialysis patients is an AVF [16]. However, up to 60% of AVFs created in the United States fail to mature successfully for dialysis use from a published multicenter randomized controlled trial [1]. The pathophysiology of AVF maturation failure remains poorly understood. In a few small clinical studies, hemodynamic changes following AVF creation have been suggested to play an important role in vascular wall remodeling and AVF development, but have not been characterized in fine spatial or temporal details [17, 18]. Murine AVF models allow the opportunity for detailed mechanistic studies of specific signaling pathways involved in AVF development and maturation. Non-contrast MRI-based CFD modeling allows for analysis of WSS and other hemodynamic measures in AVFs, and currently there are no published techniques regarding CFD modeling of AVF blood flow in small animal models. Here we report a detailed protocol and proof of concept for serial assessment of WSS parameters and lumen area change using an MRI-based CFD approach. In patients, vein side branches (accessory veins) in the AVF vein are common, and may be ligated during AVF creation surgery [19, 20]. 
We have found that side branches are also common in our murine AVF model, and our MRI-CFD method can characterize hemodynamics in the side branch as well (data not shown). The venous side branch is not the focus of our paper, but in the future, our approach could be used to investigate the effect of side branches on AVF flow and development. Because the segment of the AVF vein between the anastomosis and the branch is the most important part of AVF maturation, here we focus our paper on the AVF vein segment proximal to the anastomosis, before the branch. In the present study, velocity and resulting hemodynamic parameters (WSS, WSSg, OSI) are substantially elevated (as compared to contralateral non-surgery controls) in both the AVF inflow artery and the main AVF vein (Figs. 5, 6 and 7). We also quantitatively described the disturbed patterns (recirculation, vortices) of the flow paths through the lumen by vorticity, helicity, and Q-criterion (Figs. 8, 9 and 10). Disturbed flow patterns have been linked to the development of atherosclerosis [21] and neointimal hyperplasia in AVF [22–24]. Previous AVF CFD studies in the literature have focused primarily on wall hemodynamics and velocity streamlines; we have expanded the present analysis to include the flow patterns. Regarding wall hemodynamics and velocity streamlines, previous studies have evaluated the hemodynamics and WSS throughout the arterial or venous tree in mice using various models, including aortocaval AVF, carotid-jugular AVF, carotid artery stenosis by external cast, transverse aortic constriction between the right and left carotid arteries, and partial carotid ligation [25–30]. These studies either calculated WSS using an analytical approximation (such as Poiseuille flow) or used CFD at a much larger spatial interval, such as 1 cm averages. In these studies, the magnitude of WSS in the pre-surgical artery and vein ranges from approximately 10-240 dyne/cm2 and 8-18 dyne/cm2, respectively; the magnitude of WSS in the post-surgical artery and vein ranges from approximately 100-320 dyne/cm2 and 15-180 dyne/cm2, respectively [25–30]. Since these studies are in various pathological models, the range of WSS is large. However, our results fit within this range, while providing an order of magnitude smaller spatial resolution (0.5 μm vs. 5 μm in human/pig models) for WSS and other hemodynamic parameters that are necessary for murine AVFs. In addition, our 0.1 ms time step over a 120 ms cardiac cycle in mice distinguishes our study from large animal models with much longer cardiac cycles, such as pigs (cardiac cycle ~800-1000 ms, time step 1 ms) [9]. Currently, there is no consensus on the physiologically relevant level of OSI with respect to causing damage to the endothelium, but OSI peaks similar to those in our mouse AVF vein at Day 7 and Day 21 (0.4, Fig. 7) were seen in the vein wall of a porcine AVF model (>0.3) [8]. In addition to wall hemodynamics, studying blood flow patterns may be important in helping us better understand the pathophysiology of AVF dysfunction. Previous studies [22] have suggested that the locations of stenosis and NH development in AVFs were associated with low and/or oscillating flow. Defining and characterizing this disturbed flow can be achieved in a more quantitative manner by using volumetric flow measurements, such as vorticity, helicity, and Q-criterion.
A study using an in vitro model of human AVG reported directional vorticity of ±550 1/s at the AVG anastomosis [31], and a CFD simulation of intracranial arterial aneurysms reported directional vorticity of -1000 to +1600 1/s [12]. In addition, in a CFD analysis of blood flow in an outflow cannula for cardiopulmonary bypass, helicity magnitude was reported up to 1.5x105 cm/s2 [13]. Increasing positive values of Q-criterion (indicating vortices) have been reported in a CFD model of arterial stenosis, with greater regions of positive Q-criterion throughout the artery volume as the stenosis becomes more constrictive [14]. Comparing these previous findings to our work, we find that similar vorticity was seen in our mouse AVF in 4 mm averages (349 ± 768 1/s at Day 21) compared to human AVF and AVG in the vein (550-1600 1/s) [12, 31]. In our mouse AVF vein at Day 21, we calculated increased helicity (96.4 ± 5460 cm/s2) and positive Q-criterion (1.21 ± 100×103 1/s2) when compared to non-surgery veins, indicating helical, disturbed flow, and the formation of vortices in the AVF veins, as seen in vascular studies of helical, disturbed flow in cannulated cardiopulmonary bypass and arterial stenosis [13, 14, 32]. The simulation results have been validated against velocity measurements in this study, but not against pressure measurements. Future studies can involve intravenous pressure probes to validate the pressure in addition to the current validated velocity measurements. In this study, our sample size of mice is small, so we did not perform statistical analysis to identify any association between the CFD results and the formation of neointimal hyperplasia, which will need a larger group of mice. However, the main purpose of this study was to describe and report the novel MRI-CFD methodology in a murine AVF model. Our future studies will implement this methodology in a larger number of wild-type mice to investigate the relationships between the CFD results and the formation of neointimal hyperplasia. Transgenic mice in which AVFs are created will be used in concert to delineate the mechanisms. To the best of our knowledge, no previous techniques have been published on MRI-based CFD modeling in a murine AVF model; thus, this study provides an important technology and tool to advance the study of the pathobiology of AVF development. Using this non-contrast MRI sequence as the basis for CFD modeling allows for greater spatial and temporal characterization of the resulting hemodynamics. This detail allows for greater insights into the successful or unsuccessful maturation of AVF, particularly in the hemodynamic conditions that lead to pathologic changes such as NH development and resulting stenosis. In the future, our CFD studies will be enhanced in the setting of transgenic AVF mice and complemented with the addition of histological analysis, which can be used to correlate hemodynamic parameters at early time points with NH development and wall thickness changes in the regions of lumen narrowing at later time points. We have developed a novel approach and method for imaging AVFs created in mice and characterizing AVF flow at high resolutions. Our high spatial and temporal resolution MRI imaging and CFD modeling protocol allows for calculations of WSS, WSS gradients, OSI, as well as quantitation of the complex blood flow patterns by using vorticity, helicity, and Q-criterion.
These high quality protocols and tools are currently being used to study hemodynamic wall changes in an expanded research study of transgenic mice with AVFs created to elucidate causal mechanistic changes related to pathways that regulate AVF remodeling in murine AVF models. Abbreviations: AVF: Arteriovenous fistula; AVG: Arteriovenous graft; CFD: Computational fluid dynamics; ECG: Electrocardiogram; ESRD: End stage renal disease; FOV: Field of view; IACUC: Institutional Animal Care and Use Committee; MRI: Magnetic resonance imaging; NH: Neointimal hyperplasia; NIDDK: National Institutes of Diabetes, Digestive and Kidney Diseases; OSI: Oscillatory shear index; RARE: Rapid acquisition with relaxation enhancement; SIMPLE: Semi-Implicit Method for Pressure-Linked Equations; VMTK: Vascular Modeling Toolkit; WSS: Wall shear stress; WSSg: Spatial wall shear stress gradient. Dember LM, Beck GJ, Allon M, Delmez JA, Dixon BS, Greenberg A, Himmelfarb J, Vazquez MA, Gassman JJ, Greene T, Radeva MK, Braden GL, Ikizler TA, Rocco MV, Davidson IJ, Kaufman JS, Meyers CM, Kusek JW, Feldman HI. Effect of clopidogrel on early failure of arteriovenous fistulas for hemodialysis: a randomized controlled trial. JAMA. 2008;299:2164–71. Lee T. Novel paradigms for dialysis vascular access: downstream vascular biology--is there a final common pathway? Clin J Am Soc Nephrol. 2013;8:2194–201. Rothuizen TC, Wong C, Quax PH, van Zonneveld AJ, Rabelink TJ, Rotmans JI. Arteriovenous access failure: more than just intimal hyperplasia? Nephrol Dial Transplant. 2013;28:1085–92. Astor BC, Eustace JA, Powe NR, Klag MJ, Fink NE, Coresh J. Type of vascular access and survival among incident hemodialysis patients: the Choices for Healthy Outcomes in Caring for ESRD (CHOICE) Study. J Am Soc Nephrol. 2005;16:1449–55. Dhingra RK, Young EW, Hulbert-Shearon TE, Leavey SF, Port FK. Type of vascular access and mortality in U.S. hemodialysis patients. Kidney Int. 2001;60:1443–51. Gibson KD, Gillen DL, Caps MT, Kohler TR, Sherrard DJ, Stehman-Breen CO. Vascular access survival and incidence of revisions: a comparison of prosthetic grafts, simple autogenous fistulas, and venous transposition fistulas from the United States Renal Data System Dialysis Morbidity and Mortality Study. J Vasc Surg. 2001;34:694–700. Krishnamoorthy MK, Banerjee RK, Wang Y, Zhang J, Roy AS, Khoury SF, Arend LJ, Rudich S, Roy-Chaudhury P. Hemodynamic wall shear stress profiles influence the magnitude and pattern of stenosis in a pig AV fistula. Kidney Int. 2008;74:1410–9. Rajabi-Jagahrgh E, Roy-Chaudhury P, Wang Y, Al-Rjoub M, Campos-Naciff B, Choe A, Dumoulin C, Banerjee RK. New techniques for determining the longitudinal effects of local hemodynamics on the intima-media thickness in arteriovenous fistulae in an animal model. Semin Dial. 2014;27:424–35. He Y, Terry CM, Nguyen C, Berceli SA, Shiu YT, Cheung AK. Serial analysis of lumen geometry and hemodynamics in human arteriovenous fistula for hemodialysis using magnetic resonance imaging and computational fluid dynamics. J Biomech. 2013;46:165–9. McGah PM, Leotta DF, Beach KW, Eugene Zierler R, Aliseda A. Incomplete restoration of homeostatic shear stress within arteriovenous fistulae. J Biomech Eng. 2013;135:011005. Heiberg E, Sjogren J, Ugander M, Carlsson M, Engblom H, Arheden H. Design and validation of Segment--freely available software for cardiovascular image analysis. BMC Med Imaging. 2010;10:1. Pereira VM, Brina O, Marcos Gonzales A, Narata AP, Bijlenga P, Schaller K, Lovblad KO, Ouared R.
Evaluation of the influence of inlet boundary conditions on computational fluid dynamics for intracranial aneurysms: a virtual experiment. J Biomech. 2013;46:1531–9. Neidlin M, Jansen S, Moritz A, Steinseifer U, Kaufmann TA. Design modifications and computational fluid dynamic analysis of an outflow cannula for cardiopulmonary bypass. Ann Biomed Eng. 2014;42:2048–57. Keshavarz-Motamed Z, Kadem L. 3D pulsatile flow in a curved tube with coexisting model of aortic stenosis and coarctation of the aorta. Med Eng Phys. 2011;33:315–24. Saran R, Li Y, Robinson B, Ayanian J, Balkrishnan R, Bragg-Gresham J, Chen JT, Cope E, Gipson D, He K, Herman W, Heung M, Hirth RA, Jacobsen SS, Kalantar-Zadeh K, Kovesdy CP, Leichtman AB, Lu Y, Molnar MZ, Morgenstern H, Nallamothu B, O'Hare AM, Pisoni R, Plattner B, Port FK, Rao P, Rhee CM, Schaubel DE, Selewski DT, Shahinian V, Sim JJ, Song P, Streja E, Kurella Tamura M, Tentori F, Eggers PW, Agodoa LY, Abbott KC. US renal data system 2014 annual data report: epidemiology of kidney disease in the United States. Am J Kidney Dis. 2015;66:Svii, S1–305. Vascular Access Work Group. Clinical practice guidelines for vascular access. Am J Kidney Dis. 2006;48 Suppl 1:S176–247. Corpataux JM, Haesler E, Silacci P, Ris HB, Hayoz D. Low-pressure environment and remodelling of the forearm vein in Brescia-Cimino haemodialysis access. Nephrol Dial Transplant. 2002;17:1057–62. Ene-Iordache B, Mosconi L, Antiga L, Bruno S, Anghileri A, Remuzzi G, Remuzzi A. Radial artery remodeling in response to shear stress increase within arteriovenous fistula for hemodialysis access. Endothelium. 2003;10:95–102. Faiyaz R, Abreo K, Zaman F, Pervez A, Zibari G, Work J. Salvage of poorly developed arteriovenous fistulae with percutaneous ligation of accessory veins. Am J Kidney Dis. 2002;39:824–7. Palder SB, Kirkman RL, Whittemore AD, Hakim RM, Lazarus JM, Tilney NL. Vascular access for hemodialysis. Patency rates and results of revision. Ann Surg. 1985;202:235–9. Paszkowiak JJ, Dardik A. Arterial wall shear stress: observations from the bench to the bedside. Vasc Endovascular Surg. 2003;37:47–57. Dixon BS. Why don't fistulas mature? Kidney Int. 2006;70:1413–22. Sivanesan S, How TV, Black RA, Bakran A. Flow patterns in the radiocephalic arteriovenous fistula: an in vitro study. J Biomech. 1999;32:915–25. Van Tricht I, De Wachter D, Tordoir J, Verdonck P. Hemodynamics and complications encountered with arteriovenous fistulas and grafts as vascular access for hemodialysis: a review. Ann Biomed Eng. 2005;33:1142–57. Castier Y, Brandes RP, Leseche G, Tedgui A, Lehoux S. p47phox-dependent NADPH oxidase regulates flow-induced vascular remodeling. Circ Res. 2005;97:533–40. Cheng C, Tempel D, van Haperen R, van der Baan A, Grosveld F, Daemen MJ, Krams R, de Crom R. Atherosclerotic lesion size and vulnerability are determined by patterns of fluid shear stress. Circulation. 2006;113:2744–53. Li YH, Hsieh CY, Wang DL, Chung HC, Liu SL, Chao TH, Shi GY, Wu HL. Remodeling of carotid arteries is associated with increased expression of thrombomodulin in a mouse transverse aortic constriction model. Thromb Haemost. 2007;97:658–64. Nam D, Ni CW, Rezvan A, Suo J, Budzyn K, Llanos A, Harrison D, Giddens D, Jo H. Partial carotid ligation is a model of acutely induced disturbed flow, leading to rapid endothelial dysfunction and atherosclerosis. Am J Physiol Heart Circ Physiol. 2009;297:H1535–43. 
Yamamoto K, Protack CD, Kuwahara G, Tsuneki M, Hashimoto T, Hall MR, Assi R, Brownson KE, Foster TR, Bai H, Wang M, Madri JA, Dardik A. Disturbed shear stress reduces Klf2 expression in arterial-venous fistulae in vivo. Physiol Rep. 2015;3:e12348. Yamamoto K, Protack CD, Tsuneki M, Hall MR, Wong DJ, Lu DY, Assi R, Williams WT, Sadaghianloo N, Bai H, Miyata T, Madri JA, Dardik A. The mouse aortocaval fistula recapitulates human arteriovenous fistula maturation. Am J Physiol Heart Circ Physiol. 2013;305:H1718–25. Heise M, Schmidt S, Kruger U, Pfitzmann R, Scholz H, Neuhaus P, Settmacher U. Local haemodynamics and shear stress in cuffed and straight PTFE-venous anastomoses: an in-vitro comparison using particle image velocimetry. Eur J Vasc Endovasc Surg. 2003;26:367–73. Bozzetto M, Ene-Iordache B, Remuzzi A. Transitional flow in the venous side of patient-specific arteriovenous fistulae for hemodialysis. Ann Biomed Eng. 2015;44(8):2388–401. The authors would like to thank the UAB-UCSD O'Brien Core Center for Acute Injury Research (P30 DK079337) for performing all mouse AVF surgeries. Dr. Lee is supported by an American Society of Nephrology Carl W. Gottschalk Scholar grant, the University of Alabama at Birmingham Nephrology Research Center Anderson Innovation award, the University of Alabama at Birmingham Center for Clinical and Translational Science Multidisciplinary Pilot award 1UL1TR001417-01, and grant 1R43DK109789-01 from National Institutes of Diabetes, Digestive and Kidney Diseases (NIDDK). Dr. Shiu is supported by the grant R01 DK100505 from NIDDK. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. DP performed AVF model reconstruction, CFD simulation, initial development, and revision of the manuscript. YTS contributed to data interpretation, initial development, and revision of the manuscript. MS and TI contributed with animal surgery and histology. LG contributed animal surgery. JT performed all MRI scans and contributed to interpretation of these scans. TL was the primary investigator for this study and contributed to the AVF surgery, MRI scans, initial development, and revision of the manuscript. All authors read and approved the final manuscript. All animal studies and experiments were approved by the University of Alabama at Birmingham Institutional Animal Care and Use Committee (IACUC) and performed in accordance with National Institutes of Health guidelines. Department of Bioengineering, University of Utah, Salt Lake City, UT, USA Daniel Pike & Yan-Ting Shiu Division of Nephrology and Hypertension, Department of Internal Medicine, University of Utah, Salt Lake City, UT, USA Department of Medicine and Division of Nephrology, University of Alabama at Birmingham, 1720 2nd Ave South, Birmingham, AL, 35294-0007, USA Maheshika Somarathna, Lingling Guo, Tatyana Isayeva & Timmy Lee Department of Radiology, University of Alabama at Birmingham, Birmingham, AL, USA John Totenhagen Veterans Affairs Medical Center, Birmingham, AL, USA Timmy Lee Correspondence to Timmy Lee. Pike, D., Shiu, Y., Somarathna, M. et al. High resolution hemodynamic profiling of murine arteriovenous fistula using magnetic resonance imaging and computational fluid dynamics.
Theor Biol Med Model 14, 5 (2017) doi:10.1186/s12976-017-0053-x Hemodialysis fistula
Quantum field theory, axioms for The mathematical axiom systems for quantum field theory (QFT) grew out of Hilbert's sixth problem [a6], that of stating the problems of quantum theory in precise mathematical terms. There have been several competing mathematical systems of axioms, and below those of A.S. Wightman [a5], and of K. Osterwalder and R. Schrader [a4] are given, stated in historical order. They are centred around group symmetry, relative to unitary representations of Lie groups in Hilbert space. Wightman axioms. Wightman's axioms involve: a unitary representation $U$ of $G = \operatorname{SL} ( 2 , {\bf C} ) \rtimes {\bf R} ^ { 4 }$ as a covering of the Poincaré group of relativity, and a vacuum state vector $\psi_0$ fixed by the representation. Quantum fields $\varphi _ { 1 } ( f ) , \dots , \varphi _ { n } ( f )$, say, as operator-valued distributions, $f$ running over a specified space of test functions, and the operators $\varphi _ { i } ( f )$ defined on a dense and invariant domain $D$ in $\mathcal{H}$ (the Hilbert space of quantum states), and $\psi _ { 0 } \in D$. A transformation law which states that $U ( g ) \varphi_j ( f ) U ( g ^ { - 1 } )$ is a finite-dimensional representation $R$ of the group $G$ (cf. also Representation of a group) acting on the fields $\varphi _ { i } ( f )$, i.e., $\sum _ { i } R _ { j i } ( g ^ { - 1 } ) \varphi _ { i } ( g [ f ] )$, $g$ acting on space-time and $g [ f ] ( x ) = f ( g ^ { - 1 } x )$, $x \in \mathbf{R} ^ { 4 }$. The fields $\varphi_j ( f )$ are assumed to satisfy locality and one of the two canonical commutation relations of $[ A , B ] _ { \pm } = A B \pm B A$, for fermions, respectively bosons. Finally, it is assumed that there is scattering with asymptotic completeness, in the sense $\mathcal{H} = \mathcal{H} ^ { \text{in} } = \mathcal{H} ^ { \text{out} }$. Osterwalder–Schrader axioms. The Wightman axioms were the basis for many of the spectacular developments in QFT in the 1970s, see, e.g., [a1], [a2], and the Osterwalder–Schrader axioms [a3], [a4] came in response to the dictates of path-space measures. The constructive approach involved some variant of the Feynman measure. But the latter has mathematical divergences that can be resolved with an analytic continuation, so that the mathematically well-defined Wiener measure becomes instead the basis for the analysis. Two analytical continuations were suggested in this connection: in the mass-parameter, and in the time-parameter, i.e., $t \mapsto \sqrt { - 1 }t$. With the latter, the Newtonian quadratic form on space-time turns into the form of relativity, $x _ { 1 } ^ { 2 } + x _ { 2 } ^ { 2 } + x _ { 3 } ^ { 2 } - t ^ { 2 }$. One gets a stochastic process $\mathcal{X} _ { t }$ that is: symmetric, i.e., ${\cal X} _ { t } \sim {\cal X}_{ - t }$; stationary, i.e., $\mathcal{X} _ { t + s } \sim \mathcal{X} _ { s }$; and Osterwalder–Schrader positive, i.e., $\int _ { \Omega } f _ { 1 } \circ \mathcal{X} _ { t _ { 1 } } \ldots f _ { n } \circ \mathcal{X} _ { t _ { n } } d P \geq 0$, $f _ { 1 } , \ldots , f _ { n }$ test functions, $- \infty < t _ { 1 } \leq \ldots \leq t _ { n } < \infty$, and $P$ denoting a path space measure.
Specifically: If $- t / 2 < t _ { 1 } \leq \ldots \leq t _ { n } < t / 2$, then \begin{equation*} = \operatorname { lim } _ { t \rightarrow \infty } \int \prod _ { k = 1 } ^ { n } A _ { k } ( q ( t _ { k } ) ) d \mu _ { t } ( q ( \cdot ) ). \end{equation*} By Minlos' theorem, there is a measure $\mu$ on ${\cal D} ^ { \prime }$ such that \begin{equation} \tag{a2} \operatorname { lim } _ { t \rightarrow \infty } \int e ^ { i q ( f ) } d \mu _ { t } ( q ) = \int e ^ { i q ( f ) } d \mu ( q ) = : S ( f ) \end{equation} for all $f \in \mathcal{D}$. Since $\mu$ is a positive measure, one has \begin{equation*} \sum _ { k } \sum _ { l } \overline { c } _ { k } c _ { l } S ( f _ { k } - \overline { f } _ { l } ) \geq 0 \end{equation*} for all $c_1 , \ldots , c_n \in \mathbf{C}$, and all $f _ { 1 } , \dots , f _ { n } \in \mathcal{D}$. When combining (a1) and (a2), one can note that this limit-measure $\mu$ then accounts for the time-ordered $n$-point functions which occur on the left-hand side in (a1). This observation is further used in the analysis of the stochastic process $\mathcal{X} _ { t }$, $\mathcal{X} _ { t } ( q ) = q ( t )$. But, more importantly, it can be checked from the construction that one also has the following reflection positivity: Let $( \theta f ) ( s ) : = f ( - s )$, $f \in \mathcal{D}$, $s \in \mathbf{R}$, and set \begin{equation*} \mathcal{D} _ { + } = \{ f \in \mathcal{D} : f \ \text { real valued, } f ( s ) = 0 \text { for } s < 0 \}. \end{equation*} \begin{equation*} \sum _ { k } \sum _ { l } \overline { c } _ { k } c_{ l} S ( \theta (\, f _ { k } ) - f _ { l } ) \geq 0 \end{equation*} for all $c_1 , \ldots , c_n \in \mathbf{C}$ and all $f _ { 1 } , \dots , f _ { n } \in \mathcal{D} _ { + }$, which is one version of Osterwalder–Schrader positivity. Relation to unitary representations of Lie groups. Since the Killing form of Lie theory may serve as a finite-dimensional metric, the Osterwalder–Schrader idea [a4] turned out also to have implications for the theory of unitary representations of Lie groups. In [a3], P.E.T. Jorgensen and G. Ólafsson associate to Riemannian symmetric spaces $G / K$ of tube domain type (cf. also Symmetric space), a duality between complementary series representations of $G$ on one side, and highest-weight representations of a $c$-dual $G ^ { c }$ on the other side. The duality $G \leftrightarrow G ^ { c }$ involves analytic continuation, in a sense which generalizes $t \mapsto \sqrt { - 1 }t$, and the reflection positivity of the Osterwalder–Schrader axiom system. What results is a new Hilbert space, where the new representation of $G ^ { c }$ is "physical" in the sense that there is positive energy and causality, the latter concept being defined from certain cones in the Lie algebra of $G$. A unitary representation $\pi$ acting on a Hilbert space $\mathcal{H} ( \pi )$ is said to be reflection symmetric if there is a unitary operator $J : \mathcal{H} ( \pi ) \rightarrow \mathcal{H} ( \pi )$ such that R1) $J ^ { 2 } = \operatorname{id}$; R2) $J \pi ( g ) = \pi ( \tau ( g ) ) J$, $g \in G$. Here, $\tau \in \operatorname { Aut } ( G )$, $\tau ^ { 2 } = \operatorname{id}$, and $H = \{ g \in G : \tau ( g ) = g \}$. A closed convex cone $C \subset \text{q}$ is hyperbolic if $C ^ { o } \neq \emptyset$, and if $\operatorname { ad } X$ is semi-simple (cf. also Semi-simple representation) with real eigenvalues for every $X \in C ^ { o }$. Assume the following, for $( G , \pi , \tau , J )$: PR1) $\pi$ is reflection symmetric with reflection $J$. 
PR2) There is an $H$-invariant hyperbolic cone $C \subset \text{q}$ such that $S ( C ) = H \operatorname { exp } C$ is a closed semi-group and $S ( C ) ^ { o } = H \operatorname { exp } C ^ { o }$ is diffeomorphic to $H \times C ^ { o }$. PR3) There is a subspace $0 \neq \mathcal{K} _ { 0 } \subset \mathcal{H} ( \pi )$, invariant under $S ( C )$, satisfying the positivity condition. Assume that $( \pi , C , \mathcal{H} , J )$ satisfies PR1)–PR3). Then the following hold: $S ( C )$ acts via $s \mapsto \widetilde{\pi} ( s )$ by contractions on $\mathcal{K}$ (the Hilbert space obtained by completion of $\mathcal{K} _ { 0 }$ in the norm from PR3)). Let $G ^ { c }$ be the simply-connected Lie group with Lie algebra $\mathfrak { g } ^ { c }$. Then there exists a unitary representation $\tilde{\pi} ^ { c }$ of $G ^ { c }$ such that $d \tilde { \pi } ^ { c } ( X ) = d \tilde { \pi } ( X )$ for $X \in \mathfrak { h }$ and for $Y \in C$, where $\mathfrak { h } = \{ X \in \mathfrak { g } : \tau ( X ) = X \}$. The representation $\tilde{\pi} ^ { c }$ is irreducible if and only if $\tilde{\pi}$ is irreducible. [a1] J. Glimm, A. Jaffe, "Quantum field theory and statistical mechanics (a collection of papers)", Birkhäuser (1985) [a2] J. Glimm, A. Jaffe, "Quantum physics", Springer (1987) (Edition: Second) [a3] P.E.T. Jorgensen, G. Ólafsson, "Unitary representations of Lie groups with reflection symmetry" J. Funct. Anal., 158 (1998) pp. 26–88 [a4] K. Osterwalder, R. Schrader, "Axioms for Euclidean Green's functions" Comm. Math. Phys., 31/42 (1973/75) pp. 83–112;281–305 [a5] R.F. Streater, A.S. Wightman, "PCT, spin and statistics, and all that", Benjamin (1964) [a6] A.S. Wightman, "Hilbert's sixth problem: Mathematical treatment of the axioms of physics" F.E. Browder (ed.), Mathematical Developments Arising from Hilbert's Problems, Proc. Symp. Pure Math., 28:1, Amer. Math. Soc. (1976) pp. 241–268 Quantum field theory, axioms for. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Quantum_field_theory,_axioms_for&oldid=50601 This article was adapted from an original article by Palle E.T. Jorgensen and Gestur Ólafsson (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Culicoides species composition and molecular identification of host blood meals at two zoos in the UK Marion E. England1, Paul Pearce-Kelly2, Victor A. Brugman3, Simon King1, Simon Gubbins1, Fiona Sach2, Christopher J. Sanders1, Nic J. Masters2, Eric Denison1 & Simon Carpenter1 Parasites & Vectors volume 13, Article number: 139 (2020) Cite this article Culicoides biting midges are biological vectors of arboviruses including bluetongue virus (BTV), Schmallenberg virus (SBV) and African horse sickness virus (AHSV). Zoos are home to a wide range of 'at risk' exotic and native species of animals. These animals have a high value both in monetary terms, conservation significance and breeding potential. To understand the risk these viruses pose to zoo animals, it is necessary to characterise the Culicoides fauna at zoos and determine which potential vector species are feeding on which hosts. Light-suction traps were used at two UK zoos: the Zoological Society of London (ZSL) London Zoo (LZ) and ZSL Whipsnade Zoo (WZ). Traps were run one night each week from June 2014 to June 2015. Culicoides were morphologically identified to the species level and any blood-fed Culicoides were processed for blood-meal analysis. DNA from blood meals was extracted and amplified using previously published primers. Sequencing was then carried out to determine the host species. A total of 11,648 Culicoides were trapped and identified (n = 5880 from ZSL WZ; n = 5768 from ZSL LZ), constituting 25 different species. The six putative vectors of BTV, SBV and AHSV in northern Europe were found at both zoos and made up the majority of the total catch (n = 10,701). A total of 31 host sequences were obtained from blood-fed Culicoides. Culicoides obsoletus/C. scoticus, Culicoides dewulfi, Culicoides parroti and Culicoides punctatus were found to be biting a wide range of mammals including Bactrian camels, Indian rhinoceros, Asian elephants and humans, with Culicoides obsoletus/C. scoticus also biting Darwin's rhea. The bird-biting species, Culicoides achrayi, was found to be feeding on blackbirds, blue tits, magpies and carrion crows. To our knowledge, this is the first study to directly confirm blood-feeding of Culicoides on exotic zoo animals in the UK and shows that they are able to utilise a wide range of exotic as well as native host species. Due to the susceptibility of some zoo animals to Culicoides-borne arboviruses, this study demonstrates that in the event of an outbreak of one of these viruses in the UK, preventative and mitigating measures would need to be taken. Culicoides biting midges (Diptera: Ceratopogonidae) transmit economically important arboviruses (arthropod-borne viruses), including bluetongue virus (BTV), Schmallenberg virus (SBV) and African horse sickness virus (AHSV) [1]. In the past two decades, BTV and SBV have inflicted unprecedented epidemics of disease in northern Europe, where there were no previous records of Culicoides-borne virus incursion, with major impacts on the health, productivity and trade of susceptible ruminant livestock hosts (cattle, sheep and goats) [2, 3]. AHSV is also viewed as a threat to equine hosts in Europe, although the degree to which sustained transmission could occur in northern Europe in the absence of widespread donkey or zebra reservoir hosts is unclear [4, 5]. In addition to livestock and companion animals, a range of wildlife can also become infected with BTV, SBV and AHSV, including species only distantly related to primary livestock hosts. 
In North America, outbreaks of both BTV and epizootic haemorrhagic disease virus (EHDV) cause severe clinical disease in cervids and impact on farming of the white-tailed deer (Odocoileus virginianus) [6, 7]. During the recent outbreaks of BTV and SBV in northern Europe, the potential for cervids to act as a reservoir host for these viruses was also considered, although this potential mechanism of virus persistence and re-emergence remains poorly understood [8,9,10]. Antibodies to BTV, SBV or AHSV have also been found in a wide range of other mammals including dogs (Canis lupis familiaris), wild dogs (Lycaon pictus), jackals (Canis spp.), lions (Panthera leo), spotted hyenas (Crocuta crocuta), black bears (Ursus americanus), African elephants (Loxodonta africana), white rhinoceros (Ceratotherium simum) and a range of antelope species (Bovidae) [11,12,13,14,15]. Herbivorous species are thought to become infected with BTV primarily through biological transmission by Culicoides spp., while infection of carnivores is ascribed to feeding on meat from infected mammals. However, specifying these transmission routes is challenging and evidence is largely anecdotal [11, 14]. Culicoides spp. have been shown to feed on domestic dogs [16] and AHSV has been detected in domestic dogs with no history of ingestion of horse meat [17]. For BTV, the potential for animals of these species to develop a transmissible viraemia remains unclear in all species with the exception of deer. Disease outcomes of infection in wildlife also remain unpredictable for all three viruses, particularly in areas of emergence or re-emergence [11, 12]. In the UK, seroconversion to SBV was detected in a range of exotic animals screened at the Zoological Society of London (ZSL) London Zoo, ZSL Whipsnade Zoo and Chester Zoo using a cELISA and positive samples were confirmed and quantified using a plaque reduction neutralisation test [18]. Species testing positive for exposure to SBV on both assay systems included Asian elephants (Elephas maximus), reticulated giraffes (Giraffa camelopardalis reticulata), red river hogs (Potamochoerus porcus), deer (hog deer, Axis porcinus; reindeer, Rangifer tardinus), antelopes (greater kudu, Tragelaphus strepsiceros; blackbuck, Antelope cervicapra) and bovids (yak, Bos grunniens; gaur, Bos gaurus). No clinical disease was reported in any of these hosts, but some UK zoos have previously carried out precautionary vaccination of high-value animals against BTV-8 (T. Woodfine, personal communication., NM, unpublished data). At present for BTV all post-import testing of susceptible exotic ruminants (restricted to the Cervidae, Camelidae, Giraffidae, Antilocapridae and Bovidae) and movements are facilitated on a case-by-case basis through bilateral agreements based on Article 8.1(b) of Commission Regulation (EC) No. 1266/2007. Equine hosts susceptible to AHSV are also subject to stringent pre- and post-movement testing procedures. Infection with SBV is not routinely examined in these species. Globally, just three studies have investigated Culicoides populations in zoos. One study carried out trapping for Culicoides over a two-year period at the National Zoological Gardens in Pretoria, South Africa using Onderstepoort Veterinary Institute (OVI) light-suction traps [19]. These detected the presence and high abundance of the main afrotropical vector of BTV and AHSV, Culicoides imicola Kieffer, 1913, alongside 36 other species known to utilise mammalian and avian hosts [19]. 
In the USA, a study used Centre for Disease Control (CDC) traps baited with ultraviolet light (model 1212) and ABC traps baited with incandescent light, to survey Culicoides populations in two zoos in South Carolina [20]. These surveys detected 16 species of Culicoides including putative vectors of BTV. In the UK, trapping for Culicoides was carried out at Chester Zoo as part of a preliminary experiment in June 2008, at five sites over four consecutive nights using OVI traps [21]. Over 35,000 Culicoides were collected, and 25 species recorded, including all species implicated in the recent outbreaks of SBV and BTV in northern Europe [1, 3]. Interestingly, large catches greater than 1000 individuals in a single trap night were made from within enclosures containing white rhinoceros and zebra, indicating a high local abundance of potential vector species [21]. However, there was no attempt to examine feeding history or preference of the collected Culicoides. Following outbreaks of Culicoides-borne arboviruses across Europe, a series of studies across the region have used identification of blood meals via molecular assays to define host range [22]. These studies have demonstrated that while Culicoides usually exhibit a preference for either avian or mammalian hosts, they blood-feed on a wide variety of species within these classes. Within those species that feed on mammals, those that have been implicated as primary vectors in transmission of SBV and BTV demonstrate broad host range, including ruminants, equids, camelids, lagomorphs and rodents [23, 24]. This is despite significant variation in the degree to which these are reliant on livestock for larval development sites (Culicoides obsoletus (Meigen, 1818) and Culicoides scoticus Downes & Kettle, 1952 develop in a wide range of organically enriched substrates while in contrast Culicoides dewulfi Goetghebuer, 1936 and Culicoides chiopterus (Meigen, 1830) develop in animal dung) [25, 26]. Within the UK, there have been very few studies that have carried out blood-meal analysis of Culicoides. One study confirmed that potential UK Culicoides vector species of AHSV were blood-feeding on horses, proving a direct host-vector interaction [27]. Another study found that Culicoides impunctatus Goetghebuer, 1920, a species that is generally considered to have a very minor or no role in disease transmission, had fed on cows, sheep, deer and humans in Scotland [28]. It is important to establish host preferences of vector species and the extent of opportunistic biting behaviour as these have implications for disease spread and may affect disease dynamics in an outbreak scenario [5, 29]. In this study we used DNA sequencing of a mitochondrial-derived marker to directly link Culicoides populations with blood-feeding on exotic animals in zoological gardens for the first time. Additionally, we examined the seasonality of adult flying populations at these sites in order to understand how transmission risk fluctuates across seasons within these environments. We compared these results with standard surveillance schemes, with particular reference to the seasonal vector-free period (SVFP) as defined by the collection of < 5 pigmented female vector Culicoides [30]. These data are important for understanding and quantifying the risk of Culicoides-borne viruses to susceptible, valuable and in some cases, endangered, zoo animals. They also provide insight into the utilisation of hosts to which these species of Culicoides have not been exposed previously. 
Trapping and identification of Culicoides Onderstepoort Veterinary Institute (OVI) 220V light-suction traps were used to monitor Culicoides populations at two zoological gardens using standard surveillance approaches (Fig. 1) [31]. The zoos chosen were ZSL London Zoo (ZSL LZ) and ZSL Whipsnade Zoo (ZSL WZ) sites. ZSL LZ (51°32′6.2268′′N, 0°9′13.0824′′W) is located in an urban setting on the edge of Regent's Park in central London. In contrast, ZSL WZ (51°50′39.1236′′N, 0°32′27.8772′′W) is located in a rural area surrounded by countryside, at a higher altitude (216.72 m above sea level compared to 35.88 m above sea level at ZSL LZ). The vegetation within the exhibits at ZSL LZ is mostly made up of exotic species of shrubs and trees, with large areas of paving and small lawns in between exhibits. The zoo is adjacent to Regent's Park, characterised by lawns with native trees and hedgerows. There are many large open paddocks with native trees at ZSL WZ, with exotic planting close to animal housing and in smaller exhibits. ZSL LZ holds a collection of 60 different species of mammal, 97 species of bird, 49 species of reptile and 20 species of amphibian, constituting 2125 individuals excluding fish and invertebrates while ZSL WZ holds a collection of 56 species of mammal, 64 species of bird and 17 species of reptile, constituting 1364 individuals excluding fish and invertebrates [32]. Two OVI light-suction traps located at ZSL Whipsnade Zoo. Trap EL1 located next to the elephant enclosure and Trap BG1 located in the bird garden Five trap locations were used at each zoo, with site selection based on targeting a range of host species, including those susceptible to bluetongue and clinical signs of SBV (Figs. 2, 3). Trapping was conducted from June 2014 to June 2015. OVI 8w light-suction traps were run overnight for one night each week by a volunteer and collections made into water with a drop of detergent were sieved and transferred into 70% ethanol for identification and storage. Culicoides were morphologically identified to species level under a dissecting microscope using published keys [33,34,35]. Females of C. obsoletus and C. scoticus were grouped together as C. obsoletus complex. Female Culicoides were further classified as unpigmented, pigmented, gravid or blood-fed based on the morphology of their abdomen [36]. Map of ZSL London Zoo showing trap locations. The red lines indicate the distance between the traps where blood-fed Culicoides were caught and the location of the respective host (if known) Map of ZSL Whipsnade Zoo showing trap locations. The red lines indicate the distance between the traps where blood-fed Culicoides were caught and the location of the respective host (if known) The Corine Land Cover (CLC) 2018 map (available from https://land.copernicus.eu/) was used to compare the land cover at each site. The percentage of each land cover class was extracted from under a buffer zone of radius 3125 m from the centre of each zoo, using ArcGIS Pro 2.3.1 (ESRI, Redlands, CA, USA). The radius of the buffer zone was set according to the maximum dispersal distance identified by a previous study on Culicoides in the south of England [37]. The percentage of each Corine land cover class that fell within the buffer zones is summarised in Table 1. ZSL LZ and the surrounding area is dominated by urban fabric (86.4%), whereas ZSL WZ and the surrounding area is dominated by arable land and pastures (82.2%), with urban fabric constituting just 3.1% of the land cover. 
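The land-cover percentages above were extracted in ArcGIS Pro; an equivalent open-source calculation can be sketched in Python with geopandas. The file name and the 'CODE_18' class column are assumptions for illustration, while the zoo coordinates and the 3125 m buffer radius are those given above.

```python
import geopandas as gpd

# Assumed input: a CLC 2018 vector layer with a land-cover class column "CODE_18"
# (file name and column name are illustrative, not from the original analysis).
clc = gpd.read_file("clc2018_vector.gpkg")

zoos = gpd.GeoDataFrame(
    {"zoo": ["ZSL LZ", "ZSL WZ"]},
    geometry=gpd.points_from_xy([-0.15363, -0.54108], [51.53506, 51.84420]),
    crs="EPSG:4326",
)

# Re-project to a metric CRS (British National Grid) so the 3125 m buffer is in metres.
clc_m = clc.to_crs(epsg=27700)
zoos_m = zoos.to_crs(epsg=27700)
zoos_m["geometry"] = zoos_m.geometry.buffer(3125)

# Clip the land-cover polygons to each buffer and tabulate class percentages by area.
for _, row in zoos_m.iterrows():
    clipped = gpd.clip(clc_m, row.geometry)
    class_area = clipped.dissolve(by="CODE_18").geometry.area
    pct = (100 * class_area / class_area.sum()).round(1)
    print(row["zoo"], pct.to_dict())
```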
Table 1 Percentage of each land cover classification at ZSL London Zoo and ZSL Whipsnade Zoo Blood-meal molecular analysis After species identification, Culicoides that contained blood were transferred into individual 1.5 ml Eppendorf tubes containing 200 µl phosphate-buffered saline (PBS) and homogenised with a pellet pestle for 30 s. Following the addition of 20 µl proteinase K and 200 µl buffer AL, each sample was incubated at 56 °C for 30 min. Host DNA was then extracted from the sample using the Qiagen DNeasy Blood and Tissue Kit (Qiagen, Manchester, UK), following the manufacturer's instructions (see Additional file 1: Text S1). Samples were stored at −20 °C until further analysis. A 685 bp region of the cytochrome c oxidase subunit 1 (cox1) gene was targeted using a combination of existing primers [38]. The polymerase chain reaction (PCR) was performed in a final volume of 25 µl comprising 2.5 µl of PCR buffer, 0.75 µl of 1.5 mM MgCl2, 0.5 µl of 200 µM dNTP, 0.25 µl of 0.1 µM primers VF1_t1, VF1d_t1, VR1_t1 and VR1d_t1, 0.5 µl of 0.2 µM primers VF1i_t1 and VR1i_t1, 5 µg bovine serum albumin, 0.1 µl of 1 U Platinum Taq DNA polymerase (Invitrogen, Paisley, UK), 13.9 µl of nuclease-free water and 5 µl extract. The PCR cycling conditions were an initial denaturation of 94 °C for 2 min, followed by 40 cycles of (i) 94 °C for 30 s; (ii) 54 °C for 30 s; and (iii) 72 °C for 1 min. A final elongation step of 72 °C for 10 min was used. PCR products were visualised on a 1.5% agarose gel and samples of the correct size were purified using the Illustra GFX PCR purification kit (GE Healthcare, Amersham, UK), following the manufacturer's instructions (Additional file 2: Text S2). Purified PCR products were sequenced bi-directionally using M13 primers by Source Biosciences (Cambridge, UK). Sequence electropherograms were checked manually and assembled using SeqMan Pro v14 (DNAStar, Madison, USA). DNA sequences derived from blood meals were compared against all available sequences on GenBank using BLAST and assigned to a host vertebrate species with a match of > 98% [39]. Temperature data Temperature data for 2014 and 2015 were obtained from the UK Climate Projections (UKCP09) gridded observation datasets (Additional file 3: Figure S1). These cover the UK at 5 × 5 km resolution with the data for each zoo extracted for the grid square in which it is located. The daily trap catches for each species/group were analysed using generalised linear models with a log link function and either a Poisson or negative binomial distribution. In the model for each species, the expected catch ($\mu_{jk}$) for the $j$th collection at trap $k$ (taken on day $t_{jk}$) was given by: $$ \log (\mu_{jk}) = a_{k} + \sum_{n = 1}^{2} \left[ b_{1n} \sin \left( \frac{2n\pi}{365} t_{jk} \right) + b_{2n} \cos \left( \frac{2n\pi}{365} t_{jk} \right) \right] + c_{y_{jk}} + d \, T_{k}(t_{jk}), $$ where $a_k$ is the baseline catch for trap $k$, the summation describes seasonality in the Culicoides population [using sine and cosine functions with periods of 12 ($n = 1$) and 6 ($n = 2$) months], $c_y$ captures the difference in catch between years and $T_k(t)$ is the daily mean temperature for trap $k$ on day $t$. Model construction proceeded by stepwise deletion of non-significant (P > 0.05) terms (as judged by a likelihood ratio test), starting from a model including sine and cosine functions with twelve and six month periods (i.e. describing seasonality), daily mean temperature, year (2014 or 2015) and trap location.
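A minimal Python sketch of the model just described (negative binomial GLM with log link, first- and second-order harmonic seasonal terms, plus temperature, year and trap effects) is given below. The input column names and the fixed dispersion parameter are illustrative assumptions, not part of the original analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed input: one row per trap-night with columns
#   count (Culicoides caught), doy (day of year), year, temp (daily mean, deg C), trap (id).
catches = pd.read_csv("nightly_catches.csv")

# First- and second-order harmonics with 12- and 6-month periods.
for n in (1, 2):
    catches[f"s{n}"] = np.sin(2 * n * np.pi * catches["doy"] / 365)
    catches[f"c{n}"] = np.cos(2 * n * np.pi * catches["doy"] / 365)

# Negative binomial GLM with a log link; the dispersion parameter (alpha) is fixed here,
# unlike MASS::glm.nb in R, which estimates it by maximum likelihood.
fit = smf.glm(
    "count ~ s1 + c1 + s2 + c2 + temp + C(year) + C(trap)",
    data=catches,
    family=sm.families.NegativeBinomial(alpha=1.0),
).fit()
print(fit.summary())
```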
The statistical models were implemented using the MASS package [40] in R (version 3.4.3) [41]. Models were fitted to data for total Culicoides, C. obsoletus/C. scoticus females, C. chiopterus females, C. pulicaris females and C. punctatus females. However, there were too few females of C. dewulfi or males of any species collected to allow robust models to be fitted. A total of 11,648 Culicoides (5880 from WZ and 5768 from LZ) were collected and identified (Additional file 4: Dataset S1) from a total of 280 trap catches (118 from WZ and 162 from LZ, Table 2). Twenty species of Culicoides were caught at ZSL LZ and 18 different species of Culicoides were caught at ZSL WZ. The majority of specimens caught (92%, n = 10,701) were of the species described as putative vectors of BTV and SBV in northern Europe. i.e. members of the subgenus Avaritia (C. obsoletus, C. scoticus, C. dewulfi and C. chiopterus), C. pulicaris and C. punctatus. Of these, C. obsoletus and C. scoticus alone constituted 71.5% of collections (Table 3). A total of five Culicoides (0.02%) could not be identified due to damage. Table 2 Trapping sites used at London and Whipsnade Zoological Gardens and associated Culicoides collections Table 3 Species and abundance of Culicoides trapped at each zoo across the sampling sites For most trap locations at both ZSL LZ and ZSL WZ, the catches were dominated by members of the subgenus Avaritia. The main exception is the trap located at the Snowdon Aviary (B1) at ZSL LZ, where a wide range of species contributed to a relatively small catch. At ZSL LZ, the trap located next to the Bactrian camels (AA1) caught the most Culicoides, with the trap located next to the giraffe (MN2) collecting the most individuals in a single night (1584 individuals, Table 2). Traps MS1 and MN1 were not run during 2015 due to damage and, hence, collected lower numbers in total. At ZSL WZ, the trap located next to the Asian elephants (EL1) caught the most Culicoides and collected the most in a single night (1393 individuals, Table 2). Traps AA1 and EL1 also caught the greatest variety of species (n = 17 and n = 16, respectively, Table 2). Female Culicoides were categorised as unpigmented, pigmented, gravid or blood-fed (Table 4). Unpigmented Culicoides constituted 63% of the total identifiable catch, with pigmented constituting 25.5%, gravid constituting 5.9% and blood-fed constituting 0.61%. A total of 571 males were trapped, constituting 4.9% of the total identifiable catch. At LZ, the seasonal vector-free period (SVFP) started on 6th November 2014, compared to 15th October 2014 at WZ. The SVFP ended on 15th April 2015 at both zoos. Table 4 Number of male and age-graded female Culicoides caught in each trap Blood-meal analysis In total, 71 Culicoides contained a blood meal (0.61% of total individuals caught, Additional file 5: Table S1). Of these, three blood-fed individuals were not processed for blood meal analysis as they contained only a partial blood meal at an advanced stage of digestion. Sequences which could be matched to vertebrate hosts were obtained from 31 Culicoides (46% of processed blood-fed individuals, Additional file 6: Table S2). Most sequences aligned to mammalian species (n = 24, 77%), with the remaining comprising avian species (n = 7, 23%). A total of 13 different host species were identified (Fig. 4). Most host blood meals identified were from exotic zoo animals (n = 22, 71%), demonstrating opportunistic feeding behaviour of C. obsoletus/C. scoticus, C. dewulfi and C. punctatus. 
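The host-assignment rule used above (best BLAST hit against GenBank, accepted only at > 98% identity) can be sketched as follows. The sketch assumes BLAST has already been run with tabular output and that the subject title carries the host species name, which is an illustrative simplification.

```python
import pandas as pd

# Assumed tabular BLAST output, e.g. -outfmt "6 qseqid sseqid pident length evalue bitscore stitle".
cols = ["qseqid", "sseqid", "pident", "length", "evalue", "bitscore", "stitle"]
hits = pd.read_csv("bloodmeal_blast.tsv", sep="\t", names=cols)

# Keep the best-scoring hit per blood-meal sequence and require > 98% identity,
# mirroring the assignment threshold described in the Methods.
best = hits.sort_values("bitscore", ascending=False).drop_duplicates("qseqid")
assigned = best[best["pident"] > 98.0][["qseqid", "stitle", "pident"]]
unassigned = best[best["pident"] <= 98.0]["qseqid"]

print(assigned.to_string(index=False))
print(f"{len(unassigned)} sequences could not be assigned at the 98% threshold")
```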
Host blood meals identified from Culicoides trapped at ZSL London Zoo and ZSL Whipsnade Zoo Culicoides obsoletus/C. scoticus fed on four different exotic mammalian species: Asian elephants (Elephas maximus), alpaca/llama (Vicugna pacos/Lama glama), Bactrian camels (Camelus bactrianus) and Przewalski's horse (Equus przewalskii). A single blood-fed C. punctatus had fed on an Indian rhinoceros (Rhinoceros unicornis) and four C. dewulfi had fed on Asian elephants. Additionally, we were unable to discriminate between domestic pig (Sus scrofa domesticus) and wild boar (Sus scrofa scrofa) as the host of a blood-fed C. obsoletus/C. scoticus. Both wild boar and domestic pigs were present at ZSL WZ at the time of trapping. Blood meals from Culicoides achrayi Kettle & Lawson, 1955 were all identified as being from indigenous bird species. The only exotic bird species identified was Darwin's rhea (Rhea pennata) in blood-fed C. obsoletus/C. scoticus. Some blood-fed Culicoides were collected from traps that were not close to the animal that they had fed on, suggesting some level of post-feeding dispersal by female Culicoides (Figs. 2, 3). For each species full details of model selection are provided in Additional file 7: Table S3 and the coefficients for the final models are presented in Additional file 8: Table S4; here we summarise the results across all species/groups. There was significant (P < 0.001) seasonal variation in trap catches for all five species/groups analysed, with peaks in spring (May-June) and autumn (September-October) (Additional files 9: Figures S2, Additional files 10: Figure S3, Additional file 11: Figure S4, Additional file 12: Figure S5, Additional file 13: Figure S6). Specifically, the spring peak was in mid-May for total Culicoides, C. obsoletus/C. scoticus females and C. punctatus females, in late May for C. chiopterus females and in late June for C. pulicaris females. The autumn peak was in mid-September for total Culicoides, C. obsoletus/C. scoticus females, C. pulicaris females and C. punctatus females and in early October for C. chiopterus females. Abundance was greatest at the spring peak for total Culicoides, C. obsoletus/C. scoticus females, C. pulicaris females and C. punctatus females and at the autumn peak for C. chiopterus females. In addition, trap catches differed significantly (P < 0.001) between years, with around a 10-fold reduction in numbers caught in 2015 compared with 2014 for each species/group. Significantly (P < 0.001) higher trap catches were associated with warmer temperatures for all species/groups (Additional file 8: Table S4). There were significant (P < 0.001) differences amongst traps in the numbers of Culicoides collected and these differences were broadly similar for all species/groups (Additional file 8: Table S4, Additional files 9: Figures S2, Additional files 10: Figure S3, Additional file 11: Figure S4, Additional file 12: Figure S5, Additional file 13: Figure S6). Significantly more Culicoides were caught in the trap located at the elephant enclosure (EL1) at ZSL WZ than any of the other trap locations at either zoo. Significantly more Culicoides were caught in the trap AA1 compared with any of the other traps at ZSL LZ, with the exception of trap MN2 that caught similar numbers to trap AA1 for total Culicoides and C. obsoletus/C. scoticus females. The numbers of Culicoides caught in the remaining traps (B1, MN1 and MS1) did not differ significantly. 
Comparing only traps at ZSL WZ, significantly more midges were caught in trap EL1 than any of the other traps (BG1, HUL1, PR1 and RO1). The numbers caught in these traps did not differ significantly. This study has demonstrated seasonal activity of adult Culicoides in proximity to exotic animals in two UK zoos and provided direct evidence of blood-feeding behaviour by vectors of BTV and SBV on exotic zoo animals in northern Europe for the first time. Most blood meals identified in UK Culicoides species were taken from exotic animals, rather than native wildlife, demonstrating opportunistic feeding on hosts to which these species of Culicoides have not been previously exposed. Divergence in host selection was observed between avian (C. achrayi) and mammalian (C. dewulfi) feeders, but C. obsoletus/C. scoticus was found to feed on both classes of host. Further investigation is required to understand the drivers of host location involved in these observations which may include a wide range of visual, thermal and olfactory cues [22, 42]. The diversity of species of Culicoides collected in the two zoological gardens was similar, with 20 different species caught at ZSL LZ and 18 different species of Culicoides caught at ZSL WZ from a total of 46 species recorded in Britain [35]. Within these collections, five species were found only at ZSL WZ and seven species were found only at ZSL LZ. The observed difference in faunal composition of Culicoides at each zoo may be related to trap placement within each zoo or may reflect genuine differences in the Culicoides community at each zoo. Notably, all putative vectors of BTV and SBV in northern Europe were identified at both zoos. All species collected had been previously identified within the UK and the species composition at each zoo is typical of that found at livestock farms across northern Europe [31, 34, 43]. The findings are also consistent with the species composition found at Chester Zoo previously [21], where 25 species of Culicoides were identified. It is not known the extent to which the artificial diet of the exotic zoo animals can influence the composition of their dung and thus the suitability thereof for Culicoides larval development. There have been no studies to date in zoological gardens that have identified whether Culicoides can adapt to breeding in the dung of exotic ruminant or equine hosts or in habitats enriched from a source that may be substantially different in constitution and microfauna compared to that from domestic livestock. Within the two zoos examined, dung is cleared from paddocks and animal houses on a daily basis. At ZSL LZ, the waste material is loaded straight into a compactor. At ZSL WZ, there is a dung pile close to the elephant enclosure where all the collected material from across the zoo is deposited. This is then removed from the site on an ad hoc basis (M. Shillingford, personal communication) providing a temporary abundance of potential larval development sites. The differences in the land cover between sites would suggest that ZSL WZ would support a greater diversity of Culicoides species and in greater abundance as a primarily rural site. Indeed, livestock are present on farms within 1 km of the zoo. This is not, however, borne out in the collections made during this study, with two more species being found at ZSL LZ than ZSL WZ and the small difference in number of individuals caught between the sites. 
This suggests that Culicoides are able to adapt to an urban landscape and colonise the small pockets of suitable habitat that are available in addition to being able to disperse significant distances to find hosts [37]. Inferring host preference within Culicoides is challenging and cannot be quantified without direct collections on the animals themselves which is very challenging to impose on wildlife hosts [44,45,46]. Within the present study, relatively clear demarcation was found in C. achrayi feeding solely on avian hosts, as expected from previous studies [22, 47, 48]. In a rare, direct, investigation of the blood-feeding behaviour of Culicoides circumscriptus Kieffer, 1918 (a bird-biting species) in Spain, this species was identified at the nests of cavity-nesting birds, where it was found to contain haemosporidian parasites [49]. Infection with these malaria-like parasites causes chronic infections in wild birds, although their impact on condition and survival remain significant, but poorly understood, including in relation to the impact of infection of exotic birds with European lineages [50, 51]. In Culicoides blood-feeding on mammals, there was no evidence that feeding had occurred on livestock or wildlife that were not contained within the zoo. Domestic pig and wild boar could not be separated based on the cox1 gene that we targeted for blood-meal analysis. At the time the study was carried out, ZSL WZ had 14 wild boar and five domestic pigs within its collection (H. Jenkins, personal communication). Therefore, it is not possible to tell if the Culicoides had fed on the wild boar that are in an enclosure some distance from the trap in question (HUL1) or the domestic pigs that were close to the trap. The nearest pig farm was located approximately 2 km from the trap and, therefore, may have been a source, although less likely than those animals within the zoo boundary. Culicoides found to have fed on an animal species that was not held in close proximity of the trap that they were caught in, are indicated in Figs. 2 and 3 by red lines, showing the distance from host location to trap. The furthest flight documented was 600 m in the case of a C. obsoletus/C. scoticus female that had fed on a Przewalski's horse and was subsequently caught in the trap located in the bird garden, BG1. The dispersal of blood-fed Culicoides has not previously been quantified in this way and the presence of readily identifiable and static hosts presents an ideal environment to investigate this aspect of their ecology. Previous studies have had a higher success rate with the number of blood meals that they have been able to amplify. In this study, our success rate was approximately 50%, whilst previous studies have had up to 90% success rate [23]. This could be due to a combination of blood being partially digested by the time the Culicoides was trapped, DNA degradation from time spent in storage, or samples not being transferred to ethanol quickly enough following collection. Improper storage has been noted previously as a reason for lower success rates of blood meal identification [52]. Twenty-four of the 68 blood-fed specimens processed for blood meal analysis contained only partial blood meals and of these, a total of 15 failed to amplify. Some previous studies only processed fully engorged specimens, or a subset of blood-fed Culicoides, and as such have achieved high success rates [23, 53]. However, we processed all fully and partially blood-fed specimens, with the exception of three C. obsoletus/C. 
scoticus which contained a very small amount of blood in an advanced stage of digestion and were deemed unsuitable for analysis. There were no mixed blood meals identified. Within the zoos, the trap that caught the most Culicoides was AA1 at ZSL LZ and EL1 at ZSL WZ. These traps were located next to Bactrian camels and Asian elephants, respectively. A previous study conducted at the National Zoological Gardens of South Africa, also found that the trap closest to the elephants collected the most Culicoides [19]. A total of 13 Culicoides were found to have fed on Asian elephants and four of these Culicoides were C. dewulfi (30%), with the remaining nine being C. obsoletus/C. scoticus. Culicoides dewulfi represented just 3.5% of total catches of all adult forms of Culicoides in light-traps but represented 11.8% of the total blood-fed collections made. This disproportionate abundance could be due to host preference towards this host, a tendency for this species to be more actively flying in the blood-fed state than other species, or a greater attraction to light in the blood-fed state than other species, although there is no indication of the latter explanation in studies carried out previously in the UK [31, 54, 55]. Seroconversion to SBV was observed in Asian elephants in the UK previously [18], and this observation demonstrates the potential for transmission of arboviruses to this species. Future studies should look to examine the host preference and utilisation of elephant dung for larval development by C. dewulfi as it is one of only two species where larval habitat is considered to be restricted to dung [25, 35]. Similarly, the trap located next to the Bactrian camels at ZSL LZ caught by far the most Culicoides at this zoo. Furthermore, two C. obsoletus/C. scoticus had fed on the Bactrian camels suggesting that these animals are at a relatively higher risk of vector-borne disease compared to other animals. This is supported by the fact that a single C. obsoletus/C. scoticus had been feeding on a Bactrian camel at ZSL WZ, despite them being kept in a large, wind-exposed outdoor paddock, which would be assumed to be less favourable to Culicoides. Bactrian camels are susceptible to BTV [56, 57] and a fatal clinical case was identified in a European zoo during the BTV-8 outbreak in northern Europe in 2006 [58]. In addition to susceptible animals, zoos may be home to a number of animals that could act as reservoir hosts for vector-borne diseases. For example, all nine serotypes of AHSV have been isolated from plains zebra in the Republic of South Africa [4] and ZSL LZ has four plains zebra (two Equus quagga chapmani and two Equus quagga burchelli). Zebra kept in zoos in the UK are unlikely to be of great epidemiological significance, but there may be risk associated with importation of these animals due to their role as reservoir hosts. For example, an outbreak of AHSV occurred in Spain in 1987 due to the importation of zebra from Namibia [4]. The study conducted at Chester Zoo in 2008, concluded that there needs to be pre-import testing of zoo animals arriving to the UK from BTV-endemic areas, due to the potential for onward transmission by UK vectors present at zoos [21]. This was supported by detection of SBV antibodies in three yaks a week after importation from the Netherlands and in a greater kudu, prior to import from France [18]. The seasonal profile for Culicoides observed in this study has been previously demonstrated in the UK [31, 59]. 
These studies have shown peaks in abundance in May/June and again in September and our data also show this bimodal pattern. Variation in the SVFP was driven primarily by two large catches on 16th (n = 137 pigmented Culicoides) and 30th October (n = 63 pigmented Culicoides) in the trap AA1 at ZSL LZ. The SVFP ended on 15th April 2015 at both zoos. The mean daily temperature (°C) was higher at ZSL LZ than at ZSL WZ (mean = 12.3 °C and 10.6 °C, respectively, Additional file 3: Figure S1). At a national scale, the SVFP began on 26th November 2014 and ended on 14th April 2015 (M. England, unpublished data). The national scale end date is very close to that found at both zoos, whilst the start date is later at the national scale. This highlights the fact that the measurement of the SVFP is to a significant degree dependent upon the sensitivity of surveillance measures employed. A previous study found that, on average, the end date of the SVFP was early May for years 2006 to 2010 [31]. However, national scale surveillance has reported this date as occurring during April every year from 2014 to 2019, suggesting a trend towards a shorter SVFP in more recent years. Indeed, a recent study identified a long-term shift in abundance and seasonality of Culicoides associated with climate change [59]. The significant difference observed in total catch between years is likely to be largely due to the fact that several traps were not operational in 2015 due to damage. Additionally, there were significant differences observed between trap locations for all Culicoides species, across both zoos. The greater performance of one trap over another is likely due to a range of factors such as proximity to potential hosts, the level of wind exposure, the relative size and density of hosts surrounding the trap and the availability of larval habitat close to the traps. The dynamics of Culicoides at the local scale were examined in a previous study, showing host proximity and exposure were significant factors affecting spatial clustering and abundance [60]. Zoo animals have a very high value, both financially and as part of international breeding programmes for species conservation. It is, therefore, very important to understand the risk that they face from Culicoides-borne arboviruses. Here, we have shown for the first time to our knowledge, through blood-meal analysis, that the putative vectors of SBV and BTV in the UK are feeding on exotic zoo animals. We have also highlighted the need for vaccination and/or mitigating measures for susceptible animals within zoos in the event of an outbreak to protect these endangered species. All data generated or analysed during this study are included in this published article and its additional files. Purse BV, Carpenter S, Venter GJ, Bellis G, Mullens BA. Bionomics of temperate and tropical Culicoides midges: knowledge gaps and consequences for transmission of Culicoides-borne viruses. Annu Rev Entomol. 2015;60:373–92. Carpenter S, Wilson A, Mellor PS. Culicoides and the emergence of bluetongue virus in northern Europe. Trends Microbiol. 2009;17:172–8. Balenghien T, Pages N, Goffredo M, Carpenter S, Augot D, Jacquier E, et al. The emergence of Schmallenberg virus across Culicoides communities and ecosystems in Europe. Prev Vet Med. 2014;116:360–9. Carpenter S, Mellor PS, Fall AG, Garros C, Venter GJ. African horse sickness virus: history, transmission, and current status. Annu Rev Entomol. 2017;62:343–58. Lo Iacono G, Robin CA, Newton JR, Gubbins S, Wood JLN. 
Where are the horses? With the sheep or cows? Uncertain host location, vector-feeding preferences and the risk of African horse sickness transmission in Great Britain. J R Soc Interface. 2013;10:20130194. Stallknecht DE, Allison AB, Park AW, Phillips JE, Goekjian VH, Nettles VF, et al. Apparent increase of reported hemorrhagic disease in the midwestern and northeastern USA. J Wildl Dis. 2015;51:348–61. Gaydos JK, Davidson WR, Elvinger F, Mead DG, Howerth EW, Stallknecht DE. Innate resistance to epizootic hemorrhagic disease in white-tailed deer. J Wildl Dis. 2002;38:713–9. Lorca-Oró C, López-Olvera JR, Ruiz-Fons F, Acevedo P, García-Bocanegra I, Oleaga Á, et al. Long-term dynamics of bluetongue virus in wild ruminants: relationship with outbreaks in livestock in Spain, 2006–2011. PLoS ONE. 2014;9:e100027. Talavera S, Muñoz-Muñoz F, Verdún M, Pujol N, Pagès N. Revealing potential bridge vectors for BTV and SBV: a study on Culicoides blood feeding preferences in natural ecosystems in Spain. Med Vet Entomol. 2018;32:35–40. Falconi C, López-Olvera JR, Gortázar C. BTV infection in wild ruminants, with emphasis on red deer: a review. Vet Microbiol. 2011;151:209–19. Alexander KA, Maclachlan NJ, Kat PW, House C, O'brien SJ, Lerche NW, et al. Evidence of natural bluetongue virus infection among African carnivores. Am J Trop Med Hyg. 1994;51:568–76. Simpson VR. Bluetongue antibody in Botswana domestic and game animals. Trop Anim Health Prod. 1979;11:43–9. Dunbar MR, Cunningham MW, Roof JC. Seroprevalence of selected disease agents from free-ranging black bears in Florida. J Wildl Dis. 1998;34:612–9. Oura CAL, El Harrak M. Midge-transmitted bluetongue in domestic dogs. Epidemiol Infect. 2011;139:1396–400. Barnard BJH. Antibodies against some viruses of domestic animals in southern African wild animals. Onderstepoort J Vet Res. 1997;64:95–110. Slama D, Haouas N, Mezhoud H, Babba H, Chaker E. Blood meal analysis of Culicoides (Diptera: Ceratopogonidae) in central Tunisia. PLoS ONE. 2015;10:e0120528. van Sittert SJ, Drew TM, Kotze JL, Strydom T, Weyer CT, Guthrie AJ. Occurrence of African horse sickness in a domestic dog without apparent ingestion of horse meat. J South Afr Vet Assoc. 2013;84:1–5. Molenaar FM, La Rocca SA, Khatri M, Lopez J, Steinbach F, Dastjerdi A. Exposure of Asian elephants and other exotic ungulates to Schmallenberg virus. PLOS ONE. 2015;10:e0135532. Labuschagne K, Gerber LJ, Espie I, Carpenter S. Culicoides biting midges at the National Zoological Gardens of South Africa. Onderstepoort J Vet Res. 2007;74:343–7. Nelder MP, Swanson DA, Adler PH, Grogan WL. Biting midges of the genus Culicoides in South Carolina Zoos. J Insect Sci. 2010;10:55. Vilar MJ, Guis H, Krzywinski J, Sanderson S, Baylis M. Culicoides vectors of bluetongue virus in Chester Zoo. Vet Rec. 2011;168:242. Martínez-de la Puente J, Figuerola J, Soriguer R. Fur or feather? Feeding preferences of species of Culicoides biting midges in Europe. Trends Parasitol. 2015;31:16–22. Lassen SB, Nielsen SA, Skovgård H, Kristensen M. Molecular identification of bloodmeals from biting midges (Diptera: Ceratopogonidae: Culicoides Latreille) in Denmark. Parasitol Res. 2011;108:823–9. Pettersson E, Bensch S, Ander M, Chirico J, Sigvald R, Ignell R. Molecular identification of bloodmeals and species composition in Culicoides biting midges. Med Vet Entomol. 2012;27:107–22. Kettle DS, Lawson JWH. The early stages of the British biting midges Culicoides Latreille (Diptera, Ceratopogonidae) and allied genera. Bull Entomol Res. 
1952;43:421–67. Harrup LE, Purse BV, Golding N, Mellor PS, Carpenter S. Larval development and emergence sites of farm-associated Culicoides in the United Kingdom. Med Vet Entomol. 2013;27:441–9. Robin M, Archer D, Garros C, Gardès L, Baylis M. The threat of midge-borne equine disease: investigation of Culicoides species on UK equine premises. Vet Rec. 2014;174:301. Blackwell A, Mordue AJ, Mordue W. Identification of bloodmeals of the Scottish biting midge, Culicoides impunctatus, by indirect enzyme-linked immunosorbent assay (ELISA). Med Vet Entomol. 1994;8:20–4. Bessell PR, Auty HK, Searle KR, Handel IG, Purse BV, de Bronsvoort BMC. Impact of temperature, feeding preference and vaccination on Schmallenberg virus transmission in Scotland. Sci Rep. 2014;4:5746. Official Journal of the European Union. Commission Regulation (EC) No 1266/2007 of 26 October 2007 on implementing rules for Council Directive 2000/75/EC as regards the control, monitoring, surveillance and restrictions on movements of certain animals of susceptible species in relation to bluetongue. Luxembourg: The Publications Office of the European Union; 2007. p. 37–52. Searle KR, Barber J, Stubbins F, Labuschagne K, Carpenter S, Butler A, et al. Environmental drivers of Culicoides phenology: how important is species-specific variation when determining disease policy? PLoS ONE. 2014;9:e111876. ZSL: ZSL Animal Inventory Summary 2018; 2018. https://www.zsl.org/about-us/animal-inventory. Accessed 8 May 2018. Mathieu B, Cêtre-Sossah C, Garros C, Chavernac D, Balenghien T, Carpenter S, et al. Development and validation of IIKC: an interactive identification key for Culicoides (Diptera: Ceratopogonidae) females from the Western Palaearctic region. Parasit Vectors. 2012;5:137. Campbell JA, Pelham-Clinton ECX. A taxonomic review of the British species of Culicoides Latreille (Diptera, Ceratopogonidæ). Proc R Soc Edinb Biol. 1960;67:181–302. Boorman J. British Culicoides (Diptera: Ceratopogonidae): notes on distribution and biology. Entomol Gaz. 1986;37:253–66. Dyce A. The recognition of nulliparous and parous Culicoides (Diptera: Ceratopogonidae) without dissection. J Aust Entomol Soc. 1969;8:11–5. Sanders CJ, Harrup LE, Tugwell LA, Brugman VA, England M, Carpenter S. Quantification of within- and between-farm dispersal of Culicoides biting midges using an immunomarking technique. J Appl Ecol. 2017;54:1429–39. Ivanova NV, Zemlak TS, Hanner RH, Herbert PDN. Universal primer cocktails for fish DNA barcoding. Mol Ecol Notes. 2007;7:544–8. Brugman VA, Hernández-Triana LM, England ME, Medlock JM, Mertens PPC, Logan JG, et al. Blood-feeding patterns of native mosquitoes and insights into their potential role as pathogen vectors in the Thames estuary region of the United Kingdom. Parasit Vectors. 2017;10:163. Venables WN, Ripley BD. Modern applied statistics with S. 4th ed. New York: Springer; 2002. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2017. https://www.R-project.org/. Augot D, Hadj-Henni L, Strutz SE, Slama D, Millot C, Depaquit J, et al. Association between host species choice and morphological characters of main sensory structures of Culicoides in the Palaeartic region. PeerJ. 2017;5:e3478. Cuellar AC, Kjaer LJ, Kirkeby C, Skovgard H, Nielsen SA, Stockmarr A, et al. Spatial and temporal variation in the abundance of Culicoides biting midges (Diptera: Ceratopogonidae) in nine European countries. Parasit Vectors. 2018;11:18. 
Viennet E, Garros C, Lancelot R, Allene X, Gardes L, Rakotoarivony I, et al. Assessment of vector/host contact: comparison of animal-baited traps and UV-light/suction trap for collecting Culicoides biting midges (Diptera: Ceratopogonidae), vectors of Orbiviruses. Parasit Vectors. 2011;4:119. Elbers ARW, Gonzales JL, Meiswinkel R. Comparing Culicoides biting rates in horses and cattle in The Netherlands: potential relevance to African horse sickness epidemiology. Entomol Exp Appl. 2018;166:535–44. Elbers ARW, Meiswinkel R. Culicoides (Diptera: Ceratopogonidae) and livestock in the Netherlands: comparing host preference and attack rates on a Shetland pony, a dairy cow, and a sheep. J Vector Ecol. 2015;40:308–17. Braverman Y, Frish K, Reis M, Mumcuoglu KY. Host preference of Culicoides spp. from Israel based on sensory organs and morphometry (Diptera: Ceratopogonidae). Entomol Gen. 2012;34:97–110. Martínez-de la Puente J, Merino S, Tomás G, Moreno J, Morales J, Lobato E, et al. Factors affecting Culicoides species composition and abundance in avian nests. Parasitology. 2009;136:1033–41. Veiga J, Martínez-de la Puente J, Václav R, Figuerola J, Valera F. Culicoides paolae and C circumscriptus as potential vectors of avian haemosporidians in an arid ecosystem. Parasit Vectors. 2018;11:524. Hellgren O, Pérez-Tris J, Bensch S. A jack-of-all-trades and still a master of some: prevalence and host range in avian malaria and related blood parasites. Ecology. 2009;90:2840–9. Asghar M, Hasselquist D, Bensch S. Are chronic avian haemosporidian infections costly in wild birds? J Avian Biol. 2011;42:530–7. Hopken MW, Ryan BM, Huyvaert KP, Piaggio AJ. Picky eaters are rare: DNA-based blood meal analysis of Culicoides (Diptera: Ceratopogonidae) species from the United States. Parasit Vectors. 2017;10:169. Bartsch S, Bauer B, Wiemann A, Clausen P-H, Steuber S. Feeding patterns of biting midges of the Culicoides obsoletus and Culicoides pulicaris groups on selected farms in Brandenburg, Germany. Parasitol Res. 2009;105:373–80. Carpenter S, Szmaragd C, Barber J, Labuschagne K, Gubbins S, Mellor P. An assessment of Culicoides surveillance techniques in northern Europe: have we underestimated a potential bluetongue virus vector? J Appl Ecol. 2008;45:1237–45. Sanders CJ, Shortall CR, Gubbins S, Burgin L, Gloster J, Harrington R, et al. Influence of season and meteorological parameters on flight activity of Culicoides biting midges. J Appl Ecol. 2011;48:1355–64. Batten CA, Harif B, Henstock MR, Ghizlane S, Edwards L, Loutfi C, et al. Experimental infection of camels with bluetongue virus. Res Vet Sci. 2011;90:533–5. Chandel BS, Chauhan HC, Kher HN. Comparison of the standard AGID test and competitive ELISA for detecting bluetongue virus antibodies in camels in Gujarat, India. Trop Anim Health Prod. 2003;35:99–104. Spickler AR. Bluetongue; 2015. http://www.cfsph.iastate.edu/DiseaseInfo/factsheets.php. Accessed 16 July 2019. Sanders CJ, Shortall CR, England M, Harrington R, Purse B, Burgin L, et al. Long-term shifts in the seasonal abundance of adult Culicoides biting midges and their impact on potential arbovirus outbreaks. J Appl Ecol. 2019;56:1649–60. Kirkeby C, Bødker R, Stockmarr A, Lind P. Spatial abundance and clustering of Culicoides (Diptera: Ceratopogonidae) on a local scale. Parasit Vectors. 2013;6:43. We thank Dr Emma Howson (The Pirbright Institute) for assistance with molecular techniques and Nick Lindsay (ZSL) for assistance with project development. 
The UK climate projections data have been made available by the Department for Environment, Food and Rural Affairs (Defra) and Department for Energy and Climate Change (DECC) under licence from the Met Office, Newcastle University, University of East Anglia and Proudman Oceanographic Laboratory. These organisations accept no responsibility for any inaccuracies or omissions in the data, nor for any loss or damage directly or indirectly caused to any person or body by reason of, or arising out of, any use of these data. Maps throughout this paper were created using ArcGIS® software by Esri. ArcGIS® and ArcMap™ are the intellectual property of Esri and are used herein under license. Copyright © Esri. ME and SC are funded by the Defra national Culicoides laboratory. SC, SG and CS are funded through strategic grants BBS/E/I/00007039; BBS/E/I/00007033; BBS/E/I/00007038 awarded by the Biotechnology and Biological Sciences Research Council, UK. This study was supported by "Department for Environment, Food and Rural Affairs". The Pirbright Institute, Ash Road, Woking, Surrey, GU24 0NF, UK Marion E. England, Simon King, Simon Gubbins, Christopher J. Sanders, Eric Denison & Simon Carpenter Zoological Society of London, Outer Circle, Regent's Park, London, NW1 4BJ, UK Paul Pearce-Kelly, Fiona Sach & Nic J. Masters Department of Disease Control, London School of Hygiene and Tropical Medicine, Keppel St, London, WC1E 7HT, UK Victor A. Brugman Marion E. England Paul Pearce-Kelly Simon King Simon Gubbins Fiona Sach Christopher J. Sanders Nic J. Masters Eric Denison Simon Carpenter MEE performed studies, wrote and approved the submission. SG carried out statistical analyses and edited and approved submission. SC discussed the experimental design, supervised the study and edited and approved the submission. VAB and SK developed and carried out laboratory work. PP-K, CJS, FS and NJM contributed to development of study design and data acquisition. ED contributed to data acquisition and provided taxonomic expertise. All authors read and approved the final manuscript. Correspondence to Marion E. England. Not applicable. No technique used during the trial required ethical approval. Additional file 1: Text S1. Qiagen DNeasy Blood and Tissue Kit (Qiagen) protocol. Illustra GFX PCR Purification Kit (GE Healthcare) protocol. Additional file 3: Figure S1. Daily mean temperature (°C) for ZSL London Zoo and ZSL Whipsnade Zoo. Additional file 4: Dataset 1. Full data set of collected Culicoides including site, trap location, collection date and morphological identification. Species composition of blood-fed Culicoides caught in traps. Results of blood meal analysis of Culicoides collected from ZSL London Zoo and Whipsnade Zoo. Comparison of different models for the number of Culicoides biting midges caught at London and Whipsnade zoos. Effect of seasonality, year, temperature and trap location on the number of Culicoides collected. Observed and expected daily trap catches for Culicoides obsoletus/C. scoticus females for trap locations at both zoos. Additional file 10: Figure S3. Observed and expected daily trap catches for Culicoides chiopterus females for trap locations at both zoos. Observed and expected daily trap catches for Culicoides pulicaris females for trap locations at both zoos. Observed and expected daily trap catches for Culicoides punctatus females for trap locations at both zoos. Observed and expected daily trap catches for total Culicoides for trap locations at both zoos. 
England, M.E., Pearce-Kelly, P., Brugman, V.A. et al. Culicoides species composition and molecular identification of host blood meals at two zoos in the UK. Parasites Vectors 13, 139 (2020). https://doi.org/10.1186/s13071-020-04018-0 Received: 12 August 2019 Culicoides Bluetongue virus Arbovirus Vector-borne disease
August 2011, 5(3): 505-520. doi: 10.3934/amc.2011.5.505 On optimal ternary linear codes of dimension 6 Tatsuya Maruta and Yusuke Oya, Department of Mathematics and Information Sciences, Osaka Prefecture University, Sakai, Osaka 599-8531, Japan Received November 2010 Revised April 2011 Published August 2011 We prove that $[g_3(6,d),6,d]_3$ codes for $d=253$-$267$ and $[g_3(6,d)+1,6,d]_3$ codes for $d=302, 303, 307$-$312$ exist and that $[g_3(6,d),6,d]_3$ codes for $d=175, 200, 302, 303, 308, 309$ and a $[g_3(6,133)+1,6,133]_3$ code do not exist, where $g_3(k,d)=\sum_{i=0}^{k-1} \lceil d / 3^i \rceil$. These determine $n_3(6,d)$ for $d=133, 175, 200, 253$-$267, 302, 303, 308$-$312$, where $n_q(k,d)$ is the minimum length $n$ for which an $[n,k,d]_q$ code exists. The updated $n_3(6,d)$ table is also given. Keywords: optimal codes, projective geometry, ternary linear codes. Mathematics Subject Classification: Primary: 94B27, 94B05; Secondary: 51E20, 05B2. Citation: Tatsuya Maruta, Yusuke Oya. On optimal ternary linear codes of dimension 6. Advances in Mathematics of Communications, 2011, 5 (3) : 505-520. doi: 10.3934/amc.2011.5.505
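For context, $g_q(k,d)=\sum_{i=0}^{k-1}\lceil d/q^i\rceil$ is the Griesmer lower bound on the length of any $[n,k,d]_q$ linear code, so the results above pin $n_3(6,d)$ to either $g_3(6,d)$ or $g_3(6,d)+1$ for the listed values of $d$. The short helper below simply evaluates the bound; it is an illustrative addition, not part of the paper.

```python
from math import ceil

def griesmer_bound(q: int, k: int, d: int) -> int:
    """Griesmer lower bound g_q(k, d) = sum_{i=0}^{k-1} ceil(d / q^i)
    on the length n of a linear [n, k, d]_q code."""
    return sum(ceil(d / q**i) for i in range(k))

# Example values for the ternary, dimension-6 case discussed in the abstract.
for d in (133, 175, 200, 253, 267, 302, 312):
    print(d, griesmer_bound(3, 6, d))
```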
Deconvoluting the diversity of within-host pathogen strains in a multi-locus sequence typing framework Guo Liang Gan1 na1, Elijah Willie1 na1, Cedric Chauve2,3 & Leonid Chindelevitch1 BMC Bioinformatics volume 20, Article number: 637 (2019) Cite this article Bacterial pathogens exhibit an impressive amount of genomic diversity. This diversity can be informative of evolutionary adaptations, host-pathogen interactions, and disease transmission patterns. However, capturing this diversity directly from biological samples is challenging. We introduce a framework for understanding the within-host diversity of a pathogen using multi-locus sequence types (MLST) from whole-genome sequencing (WGS) data. Our approach consists of two stages. First we process each sample individually by assigning it, for each locus in the MLST scheme, a set of alleles and a proportion for each allele. Next, we associate to each sample a set of strain types using the alleles and the strain proportions obtained in the first step. We achieve this by using the smallest possible number of previously unobserved strains across all samples, while using those unobserved strains which are as close to the observed ones as possible, at the same time respecting the allele proportions as closely as possible. We solve both problems using mixed integer linear programming (MILP). Our method performs accurately on simulated data and generates results on a real data set of Borrelia burgdorferi genomes suggesting a high level of diversity for this pathogen. Our approach can apply to any bacterial pathogen with an MLST scheme, even though we developed it with Borrelia burgdorferi, the etiological agent of Lyme disease, in mind. Our work paves the way for robust strain typing in the presence of within-host heterogeneity, overcoming an essential challenge currently not addressed by any existing methodology for pathogen genomics. The study of bacterial pathogens has revealed an impressive genetic diversity that had not been fully suspected prior to the advent of genome sequencing technologies. This diversity may indicate an adaptive response to challenges such as the variability in host genetics, environmental conditions, and, in the case of pathogens affecting humans, the introduction of antibacterial drugs [1–4]. One bacterial pathogen that is particularly well-known for its genetic diversity is Borrelia burgdorferi, the etiological agent of Lyme disease. It has been found that up to six genetically different strains can affect a single host [5, 6]. Furthermore, this diversity may result from both clonal evolution within the host as well as multiple infection events [7]. Unfortunately, techniques such as bacterial culture are difficult to apply to reveal the whole range of diversity in bacteria like B. burgdorferi, a situation common to many bacterial pathogens. Next-generation sequencing (NGS) techniques such as whole-genome sequencing (WGS) with short reads have revolutionized our ability to investigate the genomic diversity of bacteria and other organisms [8]. Recently, an adaptation of WGS technology to B. burgdorferi, called whole-genome capture, has been proposed which is able to reliably filter out irrelevant DNA (such as host DNA) [9]. This novel approach for generating sequence data for B. burgdorferi nicely complements a highly reproducible strain-typing scheme known as multi-locus sequence typing (MLST), which has been developed and found to be useful for different pathogens in a number of contexts [10]. 
MLST is a summary of the bacterial genotype using the alleles of several (typically 6 to 9) housekeeping genes, which may be further grouped into closely related strain types. In the case of B. burgdorferi, several hundred strain types have been characterized using the MLST scheme developed in [11], while only 111 fully sequenced B. burgdorferi genomes (Footnote 1) are currently available in the NCBI databases. MLST strain types thus provide a finer-grained picture of the strain diversity of this pathogen, which motivates the need for developing novel diversity estimation methods that combine NGS data and the wealth of strain types already characterized by MLST. In principle, this problem is a special instance of estimating the diversity and abundance of microbial strains from metagenomics data, a problem for which several accurate methods have recently been developed (e.g. [12–14]). De novo methods, such as DESMAN [12], cannot take advantage of known reference strains or alleles and are likely to be confounded by the high similarity observed between strain types. Other methods such as strainEST [13] are able to consider a large set of reference genomes, which in our case can be defined by the concatenated allele sequences of the known B. burgdorferi strain types, but again, their diversity models are not well adapted to handle the very high similarity between strain types. Moreover, none of the reference-based methods consider the detection of novel strain types. We introduce the first paradigm for extracting MLST information in the presence of within-host heterogeneity, which is also able to simultaneously take multiple samples into account and detect novel strains. Our method is based on mixed integer linear programming (MILP), and consists of two main stages. It starts by filtering the short reads in each sample, selecting those that closely match known alleles in at least one of the housekeeping genes in the MLST scheme, and then assigns fractional abundances to each allele of each gene, ensuring that as few such alleles as possible are used to explain the data. In the second stage, it assigns combinations of these alleles, with corresponding proportions, to each sample, while maximizing the usage of known strains and minimizing the number of novel strains, a parsimony-based approach that has been shown to perform well in related contexts [15]. We evaluate our approach on simulated samples and find that it is accurate in identifying both the fractional allele composition at each housekeeping gene, as well as the complete strain types present in each sample. We then apply it to a dataset of 24 real tick samples containing B. burgdorferi extracted via whole-genome capture, and find a substantial amount of diversity, as well as a number of new strains. In conclusion, our work provides a robust and reproducible pipeline for accurate strain typing via MLST from WGS data even in the presence of substantial within-host heterogeneity.
Terminology. An MLST scheme is composed of a set of loci together with a database of known alleles for each locus [16]. An allele distribution for a given locus is a set of alleles for this locus together with a proportion assigned to each allele; the proportions must be non-negative and add up to 1. A strain type is an assignment of a specific allele to each gene of the MLST scheme. A strain type distribution is a set of strain types together with a proportion assigned to each strain type; the proportions must once again be non-negative and add up to 1. A sample is a WGS dataset obtained from a single host that contains the sequence data from one or several pathogen strains present in the host (see Fig. 1).
Fig. 1 A dataset with two samples and an MLST scheme of three loci (genes clpA, clpX, nifS). The strain type distributions require 5 different strains as the strain (clpA_1, clpX_1, nifS_7) appears in both distributions
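For readers who prefer to see the terminology in code, the definitions above map naturally onto small Python structures. This is only an illustrative convention (it is not the data model used by the authors), and the allele names other than those shown in Fig. 1 are made up:

# An allele distribution for one locus: alleles of that locus mapped to proportions summing to 1
allele_distribution = {"clpA_1": 0.7, "clpA_5": 0.3}

# A strain type assigns one allele to each locus of the MLST scheme (here, 3 loci as in Fig. 1)
strain_type = ("clpA_1", "clpX_1", "nifS_7")

# A strain type distribution: strain types mapped to proportions summing to 1
strain_type_distribution = {
    ("clpA_1", "clpX_1", "nifS_7"): 0.6,
    ("clpA_1", "clpX_3", "nifS_2"): 0.4,
}

def is_distribution(d, tol=1e-9):
    # Proportions must be non-negative and add up to 1
    return all(p >= 0 for p in d.values()) and abs(sum(d.values()) - 1.0) < tol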
Data. In the present work we use the traditional B. burgdorferi MLST scheme [11] composed of 8 housekeeping genes having a combined total of 1726 known alleles. For each locus, the various known alleles differ from one another primarily by single nucleotide polymorphisms (SNPs), with small indels also appearing in 4 out of the 8 genes. The number of known strain types is 753.
Problems and contribution overview. The problems we address in this work take as input (1) an MLST scheme together with databases of known alleles and strain types and (2) WGS data for a set of samples that are mapped using a short-read mapper of choice onto the database of known alleles for the provided MLST scheme. It then proceeds in two stages, each addressing a specific problem:
The Allele Diversity Problem. For a given sample and a given locus of the MLST scheme, given the mappings of DNA reads onto the known alleles for this locus, detect the alleles present in the sample and the corresponding allele distribution.
The Strain Diversity Problem. Given a set of samples and an allele distribution for each locus at each sample, compute a strain type distribution per sample that requires the smallest number of novel strain types among all considered samples, which are as similar as possible to known strains.
The Allele Diversity Problem
We formulate the problem of allele detection as a variant of the Set Cover problem as follows. The input of the Allele Diversity Problem (ADP) is composed of a set of m reads \(\mathcal{R} = \{r_{1},\dots,r_{m}\}\), a set of n alleles \(\mathcal{A} = \{a_{1},\dots,a_{n}\}\) for the chosen locus, and a set of mappings of the reads onto the alleles, encoded by a matrix M, where \(m_{ij}\) is the sum of the normalized Phred scores of the mismatched bases in the mapping of read \(r_{i}\) onto allele \(a_{j}\) (we set it to ∞ if \(r_{i}\) does not map onto \(a_{j}\)). For instance, assuming that the range of acceptable Phred scores is from 33 to 126, if read \(r_{i}\) maps to allele \(a_{j}\) with 2 mismatches with base quality scores of 60 and 80, respectively, then \(m_{ij}=\frac{60-33}{126-33} + \frac{80-33}{126-33} = 0.796\). Each allele \(a_{j}\) implicitly defines a subset of \(\mathcal{R}\) (the reads aligning with the allele), with each read \(r_{i}\) being weighted by \(m_{ij}\). Informally, we then aim at selecting a subset of alleles covering the set of reads, while minimizing the sum of the number of required alleles and the sum of the corresponding weights. The ADP is thus very similar to the Uncapacitated Facility Location Problem, and we discuss this observation in Additional file 1. Formally, we define an edge-weighted bipartite graph whose vertex set is \(\mathcal{R} \cup \mathcal{A}\) and whose weighted incidence matrix is M. A read cover is a subset of edges of this graph such that each read belongs to exactly one edge; the cost of a read cover is the number of allele vertices it is incident to plus the sum of the weights of the edges in the cover. The ADP aims at finding a read cover of minimum weight, the allele vertices incident on the edges of the cover representing the selected alleles.
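As a small illustration of how the weight matrix M just defined could be assembled in practice, the sketch below builds \(m_{ij}\) from per-mapping lists of mismatch base qualities. The input format (a dictionary keyed by read and allele indices) is a hypothetical convention chosen only for the example, not the authors' implementation:

import numpy as np

PHRED_MIN, PHRED_MAX = 33, 126  # range of acceptable Phred scores assumed in the text

def mismatch_weight(qualities):
    # Sum of the normalized Phred scores of the mismatched bases of one read-to-allele mapping
    return sum((q - PHRED_MIN) / (PHRED_MAX - PHRED_MIN) for q in qualities)

def build_weight_matrix(mappings, n_reads, n_alleles):
    # mappings: {(read_index, allele_index): [Phred scores of the mismatched bases]}
    # Pairs without a mapping keep weight infinity, as in the definition of M
    M = np.full((n_reads, n_alleles), np.inf)
    for (i, j), quals in mappings.items():
        M[i, j] = mismatch_weight(quals)
    return M

# The worked example from the text: two mismatches with quality scores 60 and 80
print(round(mismatch_weight([60, 80]), 3))  # 0.796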
Theorem 1 The Allele Diversity Problem is NP-hard.
The proof of Theorem 1 relies on a reduction from the 3-dimensional matching problem and is provided in Additional file 1. Before describing our ILP we comment on the relevance of our formulation for selecting a set of alleles from short reads. Our objective function aims to minimize the sum of the number of alleles and the weight of each read based on the Phred scores; the latter part aims at explaining the data (reads) using as few errors/mismatches as possible, accounting for the base quality score of the mismatches, while the former part ensures that an allele is not introduced unnecessarily to reduce the contribution of the mismatches and their quality for a small number of reads. Our experiments on simulated data show that this objective function leads to extremely accurate results.
An Integer Linear Program for the Allele Diversity Problem. First we introduce the following notation: \(R_{j}=\{r_{i} : m_{ij} \neq \infty\}\) represents the set of reads mapping onto allele \(a_{j}\) (i.e. covered by allele \(a_{j}\)), and \(M_{i} = \{m_{ij} \mid 1 \leq j \leq n\} \setminus \{\infty\} = \{q_{i1},\dots,q_{i|M_{i}|}\}\) represents the distinct summed Phred scores for read \(r_{i}\). The decision variables of the ILP are:
\(x_{j}=1\) if allele \(a_{j}\) is chosen, and 0 otherwise.
\(y_{ik}=1\) if a mapping of read \(r_{i}\) with score \(q_{ik}\) is chosen, and 0 otherwise.
The objective function is \(\min\left(\sum_{i=1}^{|\mathcal{R}|} \sum_{k=1}^{|M_{i}|} q_{ik} \cdot y_{ik} + \sum_{j=1}^{n} x_{j}\right)\). Finally, the constraints of the ILP are the following ones: If \(y_{ik}=1\), there exists some allele \(a_{j}\) onto which \(r_{i}\) maps with score \(q_{ik}\). There is a unique score with which read \(r_{i}\) is mapped onto the selected alleles. These constraints can be represented as follows: $$\sum_{\{j\ |\ r_{i} \in R_{j}, m_{ij} = q_{ik}\}} x_{j} \geq y_{ik} \, \forall \, i,k \hspace{1cm}\sum_{k=1}^{|M_{i}|} y_{ik} = 1 \, \forall \, i. $$
Post-processing. If the above 0-1 ILP has multiple optimal solutions, we resort to a likelihood-based method to select one, namely GAML [17], a probabilistic model for genome assembly. Given a set of solutions where each solution represents a set of alleles, we measure the likelihood of observing the set of reads given a solution and pick the solution which maximizes the likelihood criterion. If there are multiple solutions maximizing the likelihood criterion, we pick one arbitrarily.
Computing allele proportions. Finally, once the alleles have been identified for a given locus, we compute the proportion of each allele. The principle is to assign a weight to each allele based on the read mappings (edges) selected by the ILP, and to normalize these weights to obtain proportions. First, we filter out any read that maps equally well (i.e. with the same score) onto all selected alleles. Then every chosen allele gets an initial weight of 0. Next, for every non-discarded read, say \(r_{i}\), we consider all the alleles it maps onto with optimal score (say \(q_{ik}\) if \(y_{ik}=1\)); assuming there are h such alleles, we increase the weight of each by 1/h. We then normalize the weights of the alleles to define their respective proportions.
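The authors solve this 0-1 program exactly with CPLEX (see the implementation details later in this section). Purely for illustration, the same ILP can be restated with the open-source PuLP modeller and the CBC solver; the sketch below assumes the weight matrix M (entries \(m_{ij}\), infinite when read i does not map to allele j, with every read mapping to at least one allele) has already been computed, and it is not the authors' code:

import numpy as np
import pulp

def solve_adp(M):
    # M: (n_reads x n_alleles) array of summed normalized Phred scores, np.inf where unmapped
    n_reads, n_alleles = M.shape
    # Distinct finite scores q_ik for each read i
    scores = [sorted({M[i, j] for j in range(n_alleles) if np.isfinite(M[i, j])})
              for i in range(n_reads)]
    prob = pulp.LpProblem("ADP", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{j}", cat="Binary") for j in range(n_alleles)]   # allele j chosen
    y = {(i, k): pulp.LpVariable(f"y_{i}_{k}", cat="Binary")                  # read i explained at score q_ik
         for i in range(n_reads) for k in range(len(scores[i]))}
    # Objective: total mismatch weight of the chosen mappings plus the number of alleles used
    prob += pulp.lpSum(scores[i][k] * y[i, k] for (i, k) in y) + pulp.lpSum(x)
    for i in range(n_reads):
        # each read is explained at exactly one score level
        prob += pulp.lpSum(y[i, k] for k in range(len(scores[i]))) == 1
        for k, q in enumerate(scores[i]):
            # if score level q is used for read i, an allele achieving that score must be chosen
            prob += pulp.lpSum(x[j] for j in range(n_alleles)
                               if np.isfinite(M[i, j]) and M[i, j] == q) >= y[i, k]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in range(n_alleles) if x[j].value() > 0.5]

The returned allele indices would then feed the post-processing and proportion-estimation steps described above.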
The Strain Diversity Problem
Once the alleles present in each sample and their proportions have been identified, this information is passed to the second stage of the pipeline. Its goal is to compute strain types and proportions in all samples jointly, minimizing the number of novel strains required to explain the given allele distributions plus an error term measuring the total discrepancy between each given allele proportion and the proportions of strains having this allele. The rationale behind minimizing the number of new strains is driven by parsimony considerations; we would like to explain the data present in all samples using known strains as much as possible. The error terms allow some flexibility to modify the allele proportions by bounding each error to be ≤ ε (in our analysis we set the bound to ε = 0.1, or 10%).
The Strain Diversity Problem: problem definition and tractability. The Strain Diversity Problem (SDP) can be defined as follows. It takes as input four elements: (1) the set \(G_{ij}=\{g_{ij1},g_{ij2},\dots\}\) of all alleles selected for locus j in sample i, (2) the set \(P_{ij}=\{p_{ij1},p_{ij2},\dots\}\) of proportions of these alleles, (3) a database Ω of known strain types, and (4) an error bound ε ∈ [0,1]. From now on, we assume that there are ℓ loci and m samples. From this input, we generate the set of all possible strain types for each sample i, defined as the Cartesian product \(G_{i1}\times G_{i2}\times \dots \times G_{i\ell}\), which we denote by \(V_{i} = \{V_{i1}, V_{i2}, \dots, V_{iH_{i}}\}\) with \(H_{i} = \prod_{j=1}^{\ell}|G_{ij}|\). We also denote by K the number of strain types that appear in at least one \(V_{i}\) and we define the set \(\mathcal{S}=\{S_{1}, \dots, S_{K}\}\) of all such strain types. We assign a weight \(w_{j}\) to each \(\mathcal{S}_{j} \in \mathcal{S}\), where \(w_{j} = N \cdot \min_{\{s \in \Omega\}} d(s, \mathcal{S}_{j})\), where d is the edit distance metric and N is a normalization constant that rescales the weights to the interval [0,1]. These weights measure the distance to the closest known strain; the strains in Ω are assigned a weight of 0. A solution to the SDP is fully described by assigning to every strain type \(V_{ih}\) from \(V_{i}\) a proportion \(\pi_{ih}\) for this strain type in sample i (where \(\pi_{ih}\) is 0 if the strain type is deemed to be absent from sample i). A strain type from \(\mathcal{S} \setminus \Omega\) is said to be present in a solution if it is given a non-zero proportion in at least one sample; we denote by \(\mathcal{S}_{n}\) the set of such novel strain types. The cost of a solution is then defined as $$ \sum_{\{h | \mathcal{S}_{h} \in \mathcal{S}_{n} \}} w_{h} + \sum_{i,j} e_{ij} $$ where the latter term of the cost represents the deviation from the input allele proportions for sample i at locus j. This cost function penalizes the introduction of novel strains that are very different from known strains and the error introduced in the proportions of the selected alleles. The SDP aims at finding a solution of minimum cost, i.e. one that explains the provided allele distributions as much as possible with known strains and novel strains which are close to the known strains, and also adheres to the desired proportions as closely as possible. As expected, this problem is intractable; its decision version is proven to be NP-complete in Additional file 1, by a reduction from the 3-partition problem.
The Strain Diversity Problem is NP-hard.
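Purely as an illustration of how the inputs to this problem can be materialized, the fragment below enumerates the candidate strain types \(V_i\) of one sample as a Cartesian product and computes an (unnormalized) weight as the edit distance to the closest known strain type. Concatenating the per-locus allele sequences before computing the edit distance is one possible convention assumed here, and the python-Levenshtein package is assumed to be available; this is not the authors' implementation:

from itertools import product
import Levenshtein  # assumed to be installed (python-Levenshtein)

def candidate_strain_types(alleles_per_locus):
    # alleles_per_locus: list over the l loci of the allele sequences selected for one sample
    # Each candidate strain type picks one allele per locus (the Cartesian product G_i1 x ... x G_il)
    return list(product(*alleles_per_locus))

def unnormalized_weight(candidate, known_strain_types):
    # Edit distance from a candidate strain type to its closest known strain type;
    # known strain types receive weight 0, and all weights are later rescaled by N to lie in [0, 1]
    if candidate in known_strain_types:
        return 0
    cand_seq = "".join(candidate)
    return min(Levenshtein.distance(cand_seq, "".join(s)) for s in known_strain_types)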
An MILP for the Strain Diversity Problem. We now describe an MILP that solves the SDP. The decision variables of the MILP are the following:
Binary variables \(a_{k}\), \(1 \leq k \leq K\), where \(a_{k}=1\) if strain type \(S_{k}\) is chosen to explain the observed allele distribution in at least one sample, and 0 otherwise.
Proportion variables \(\pi_{ih}\) encoding the proportion of strain type \(V_{ih}\) in sample i; their values are constrained to be in [0,1].
Variables \(e_{ijk} \in [0,\varepsilon]\) encoding the absolute error of the observed proportion \(p_{ijk}\) of allele \(g_{ijk}\) for locus j in sample i from the assigned proportions, in sample i, of the strain types containing this allele.
The objective function of the MILP is $$ \min\left(\sum_{\{k\ |\ S_{k} \notin \Omega\}} w_{k} a_{k} + \sum_{i,j,k} e_{ijk}\right) $$ Finally the constraints of the MILP are the following:
For any allele \(g_{ijk} \in G_{ij}\), the sum of the proportions of the strain types from \(V_{i}\) that contain this allele, denoted \(\nu_{ijk}\), belongs to \([p_{ijk}-\varepsilon, p_{ijk}+\varepsilon]\).
For each sample i, the strain type proportions must form a distribution: \(\sum_{h=1}^{H_{i}}\pi_{ih} = 1\).
If the assigned proportion for some strain type \(V_{ih}=S_{k}\) in a sample i is non-zero, then \(S_{k}\) must be chosen: \(a_{k} \geq \pi_{ih}\). Conversely, if a strain is chosen, it must be assigned a non-zero proportion: $$0 \leq a_{k} - \frac{1}{|\{\pi_{ih}\ |\ V_{ih} = S_{k}\}|} \cdot \sum_{\{ (i,h) | V_{ih} = S_{k}\}} \pi_{ih} \leq 1 - \delta$$ where δ is a tolerance chosen to match the smallest allowed proportion; we use δ = 0.001. This constraint is needed because the binary decision variables for the usage of existing strains have coefficient 0 in the objective function, so setting these variables to 1 will not incur any cost in the objective function. If we do not impose such a constraint, we could end up with an incorrect solution where some existing strains have zero proportions, while the strain usage variables are set to 1, which would then need to be post-processed. Including this constraint eliminates the possibility of such a spurious solution.
The absolute error between the input proportion and the assigned proportion for allele \(g_{ijk}\) for locus j in sample i is \(e_{ijk}=|p_{ijk}-\nu_{ijk}|\). This is encoded by the following 2 constraints: \(e_{ijk} \geq T_{ijk}-p_{ijk}\) and \(e_{ijk} \geq p_{ijk}-T_{ijk}\), where \(T_{ijk}=\sum_{\{h\ |\ g_{ijk} \in V_{ih}\}}\pi_{ih}\). Note that since \(e_{ijk}\) is part of the objective function to be minimized, it will be equal to the error in any optimal solution.
All scripts are written in Python 2.7. Both ILPs are formulated and solved using the Python API of IBM's CPLEX 12.6.3.0. For the ADP, each sample and each locus may require a different number of variables in the ILP. To evaluate the practical resource requirements of our ILP, we choose the sample SRR2034336, which has the largest number of reads among our samples. The average number of variables across each gene for this sample is 20,112, the maximum RAM usage is ∼1.5 GB, and the time taken for all 8 genes is ∼33 min on a 4-CPU Intel® Xeon® machine. The total time taken for each sample is presented in Additional file 1. For the MILP solving the SDP on all 30 samples, there are a total of 21,885 variables, with 10,682 strain type variables, 10,795 proportion variables and 408 error variables. Due to the computational complexity of the MILP, we output a solution as long as the relative gap tolerance is within 10% and after a time limit of 24 h. Our code is publicly available at https://github.com/WGS-TB/MLST.
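As with the ADP, the structure of this MILP can be illustrated with an open-source modeller. The sketch below uses PuLP and assumes that the candidate strain types per sample, the observed allele proportions and the (already rescaled) weights have been precomputed; the argument names and input layout are chosen only for the example, and the fragment is a simplified illustration rather than the authors' CPLEX-based implementation:

import pulp

def solve_sdp(samples, V, P, weights, known, eps=0.1, delta=1e-3):
    # samples : list of sample identifiers
    # V[i]    : list of candidate strain types of sample i (tuples with one allele per locus)
    # P[i]    : list over loci of {allele: observed proportion} dictionaries
    # weights : {strain type: rescaled edit distance to the closest known strain type (0 if known)}
    # known   : set of known strain types (the database Omega)
    strains = sorted({s for i in samples for s in V[i]})
    prob = pulp.LpProblem("SDP", pulp.LpMinimize)
    a = {s: pulp.LpVariable(f"a_{k}", cat="Binary") for k, s in enumerate(strains)}
    pi = {(i, s): pulp.LpVariable(f"pi_{i}_{h}", 0, 1)
          for i in samples for h, s in enumerate(V[i])}
    e = {(i, j, g): pulp.LpVariable(f"e_{i}_{j}_{g}", 0, eps)
         for i in samples for j, props in enumerate(P[i]) for g in props}
    # Objective: weights of the novel strain types that are used, plus the total proportion error
    prob += (pulp.lpSum(weights[s] * a[s] for s in strains if s not in known)
             + pulp.lpSum(e.values()))
    for i in samples:
        prob += pulp.lpSum(pi[i, s] for s in V[i]) == 1          # proportions form a distribution
        for j, props in enumerate(P[i]):
            for g, p in props.items():
                nu = pulp.lpSum(pi[i, s] for s in V[i] if s[j] == g)
                prob += e[i, j, g] >= nu - p                      # encodes |p - nu| together with
                prob += e[i, j, g] >= p - nu                      # the upper bound eps on e
    for s in strains:
        uses = [pi[i, t] for i in samples for t in V[i] if t == s]
        for u in uses:
            prob += a[s] >= u                                     # a used strain must be switched on
        # a switched-on strain must carry some proportion (the delta-tolerance constraint)
        prob += len(uses) * a[s] - pulp.lpSum(uses) <= (1 - delta) * len(uses)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {(i, s): pi[i, s].value() for (i, s) in pi if (pi[i, s].value() or 0) > delta}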
Data simulation
Given the absence of benchmarks available for estimating diversity at the level of precision considered in this work, we conducted several simulations. All reads are simulated using ART [18], following the characteristics of the reads from the real data set described in the "Application to real data" section.
ADP simulation. For each locus of the Borrelia MLST scheme, we drew a random number k ∈ [2,7], selected a random allele from the database and selected k−1 other alleles, each at edit distance at most d (a given parameter) from the first one chosen. Next, we randomly assigned proportions to each selected allele, which sum up to 1, then generated reads with coverage c. To align the simulated reads to the alleles of the database, we used Bowtie v0.12.7 [19]. We used parameters c ∈ {30,100,300} and d ∈ {5,10,15,20,25} and we ran 40 simulations for each combination of these parameters. For this experiment, we compared our results with the results obtained with Kallisto [20], a recent method for isoform abundance estimation that has also been applied to metagenomics.
SDP simulation. For this simulation we selected random strain type distributions and tested the ability of our SDP method to recover the true diversity given perfect allele calls. We considered 5 different mechanisms to generate strain type distributions.
EvoMod1: We select a random existing strain S, which is then mutated m = 2 times to obtain a new strain S′, where each mutation results in an allele which has edit distance at most d = 15 from the original allele in S. The total number of strains simulated is 2 (1 existing and 1 novel).
EvoMod2: We repeat EvoMod1 in parallel from two starting existing strains. The total number of strains simulated is 4 (2 existing and 2 novel).
EvoMod2e/EvoMod2n: We apply EvoMod2 then remove a random existing/novel strain.
EvoMod3: We apply EvoMod2, then apply a recombination (allele exchange) event on two randomly chosen strains out of the 4 available strains.
For all experiments, we assigned random proportions to the chosen strains.
Full pipeline simulation. We generated strain type distributions as in the SDP simulations above, then generated reads as in the ADP simulations. The generated reads were then fed to the ADP solver, and the ADP results were provided as input to the SDP solver. We compared our pipeline with strainEST [13], a recent method to estimate the strain composition and abundance in metagenomics datasets. However, strainEST does not predict novel strain types. Hence, to complement EvoMod1, 2, 2e and 2n, we added an additional simulation where we randomly pick k ∈ {1,2} existing strains and assign them random proportions.
Statistics. For each experiment, we recorded the following statistics: Precision, Recall and Total Variation Distance. Precision and recall are defined as \(\frac{TP}{TP+FP}\) and \(\frac{TP}{TP+FN}\), where TP, FP, FN are the number of true positive calls, false positive calls, and false negative calls, respectively. The Total Variation Distance (TVD) [21, p. 50] is defined as \(TVD = \frac{1}{2}\sum_{a \in S}|Pred(a) - True(a)|\), where Pred and True are the predicted distribution and the true distribution, respectively, and S is the set of all possible outcomes. The TVD describes the amount of probability mass that needs to be "moved" to turn Pred into True, or vice versa. The statistics described above rely on a stringent measure of accuracy in calling alleles, strain types or proportions. For example, a called novel strain type that differs from the true simulated strain type by a single SNP would be considered a False Positive. To account for this, we considered 3 additional statistics: Earth-Mover's distance (EMD), soft-precision and soft-recall. Soft precision and soft recall are similar to precision and recall, except that a strain is considered a TP if it differs from the true strain type by at most 5 SNPs. The EMD [22] is similar in principle to the TVD, but is more refined as it considers the edit distances between strains and is commonly used in genomics to evaluate haplotype reconstruction methods [23]. We provide a full definition in Additional file 1.
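The exact-match statistics above are straightforward to compute; the following is a minimal illustrative sketch (not the authors' evaluation code), with calls represented as sets and distributions as dictionaries of proportions:

def precision_recall(predicted, truth):
    # predicted, truth: sets of called and true items (alleles or strain types)
    tp = len(predicted & truth)
    fp = len(predicted - truth)
    fn = len(truth - predicted)
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    return precision, recall

def total_variation_distance(pred, true):
    # pred, true: {outcome: proportion} dictionaries over the same universe of outcomes
    outcomes = set(pred) | set(true)
    return 0.5 * sum(abs(pred.get(a, 0.0) - true.get(a, 0.0)) for a in outcomes)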
Simulated data
We describe several sets of experiments based on simulated data. In the first one we evaluate our method for the ADP problem and compare it with Kallisto. In the second experiment, we evaluate our method for the SDP, using simulated allele frequencies, i.e. perfect input to the SDP, and 4 different evolutionary models explaining the diversity within a sample, from a simple model based on within-host mutations to a complex model based on co-infection and recombination. We then repeat the same experiment using simulated short reads, to evaluate our pipeline on ADP + SDP. Finally, we compare our method to strainEST using simulated datasets with no novel strains (the ideal case for strainEST) and then datasets simulated using evolutionary models identical to the ones in the previous experiment.
ADP simulation. Table 1 shows the performance of our method. Overall, our method obtained very high precision and recall statistics. Compared to Kallisto, our method performs better in terms of precision and comparable in terms of TVD, while Kallisto performs better in terms of recall. Gene-by-gene boxplots for our method and Kallisto are available in Additional file 1.
Table 1 Average and standard deviation of precision, recall and TVD for each gene of the Borrelia MLST scheme (B-MLST) and Kallisto, across all parameter combinations
SDP and full pipeline simulation. The results are presented in Table 2. Given perfect input data, our SDP algorithm performed extremely well for each mechanism, maintaining a precision and recall of almost 75% with EvoMod3, the model that involves recombination. For the full pipeline simulation, our pipeline performs extremely well on the ADP, which is consistent with our observations in the ADP simulation. However, the full pipeline's performance suffered in the SDP. Soft precision and recall are still high, but exact precision and recall are much lower. We can observe a dramatic impact on the SDP from relatively small errors in the ADP (i.e. wrong allele identification or discrepancy in the allele proportion estimation).
Table 2 Average and standard deviation of different statistics for each evolutionary mechanism
Comparison to strainEST. We compared our method to strainEST in the full pipeline simulation with 2 sets of experiments: (1) a benchmark simulation where only existing strains are simulated, and (2) the 4 different evolutionary mechanisms, where novel strains are involved. Our method outperforms strainEST in all situations. We refer the reader to Additional file 1 for the detailed results.
Application to real data
The sequencing data we analyzed are from 24 tick samples infected with B. burgdorferi, collected using the standard tick dragging method [24] in 2007 from 8 different sites in Vermont, New York, Massachusetts and Connecticut. For each tick sample, the B. burgdorferi genome was captured as described in [9]. The sequencing data is composed of 2×76 bp paired-end reads and the number of read pairs ranges from \(2.7\cdot 10^{4}\) to \(2.7\cdot 10^{6}\) over all tick samples (coverages ranging from 5X to 500X).
Based on the output of the pipeline, 60 novel and 10 existing strains were inferred to be potential candidates for explaining the strain diversity in this large sample of ticks. The total error component of the objective function of the MILP solving the SDP amounts to 1.258, or an average of 0.05 per sample. The total proportion of new strains is 14.67 in these 24 samples, for an average of 61%. For each sample having novel strains, 76% of its genotype is composed of novel strains. Figure 2 further illustrates the diversity, showing a wide range of strain composition in each of the 30 samples, with an average of 3 strains and a maximum of 9 strains infecting each sample, consistent with previous reports [5]. This suggests that the diversity of the B. burgdorferi strain types might be much larger than what was known so far. To further refine our analysis, Fig. 3 illustrates the distribution of strain types in the 30 tick samples and the respective contribution to the total diversity of each strain type. Although we observe that 2 of the 10 detected existing strains are present in more than one sample, only 5 out of the 60 novel strains appear in more than one sample.
Fig. 2 Distribution of the number of existing and novel strains per tick sample
Fig. 3 (Left) Cumulative proportion of the 10 existing strains in all 24 samples (within each bar, different colors represent different samples). (Right) Similar graph for the 60 novel strains
It is striking to observe that most strain types appear in exactly one tick sample each. We can also observe that for 11 of the 24 samples, we do not detect any existing strains. This suggests that some of these strain types could have been improperly called, and that the correct call should have been another strain type, extremely close to this one in terms of sequence similarity; a reasonable cause for such errors could be a mistake while solving the ADP, in which case a wrongly called allele could be very similar to the correct allele. Due to the possibility of wrong allele calls leading to introducing novel strains, we also computed a minimum spanning tree (MST) of the 70 strains found in these 24 samples, with edges weighted by the edit distance between the sequences of the alleles over the 8 genes of the MLST scheme. The MST figures are provided in Additional file 1. We can observe clusters of predicted strains that are very close to each other, such as, for example, a cluster of 8 novel strains and 2 existing strains that are all within edit distance 5 from each other. This suggests, in line with the level of precision and recall we observe in our simulations, that some of these strains might result from a limited level of erroneous allele calls, off by a couple of SNPs from the correct call, that result in this apparent high level of diversity.
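An MST of this kind can be recomputed in a few lines with standard libraries. The sketch below assumes the 70 inferred strain types are available as concatenated sequences of their 8 MLST alleles and that the networkx and python-Levenshtein packages are installed; it is an illustration, not the authors' analysis script:

import networkx as nx
import Levenshtein

def strain_mst(strain_seqs):
    # strain_seqs: {strain_id: concatenated sequence of the 8 MLST alleles of that strain}
    G = nx.Graph()
    ids = list(strain_seqs)
    for idx, u in enumerate(ids):
        for v in ids[idx + 1:]:
            G.add_edge(u, v, weight=Levenshtein.distance(strain_seqs[u], strain_seqs[v]))
    return nx.minimum_spanning_tree(G)

# Strains joined by very small MST edge weights (e.g. at most 5) form the clusters discussed
# in the text and are candidates for being artifacts of near-miss allele calls.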
We presented an optimization-based pipeline for estimating the within-host strain diversity of a pathogen from WGS data analyzed in the MLST framework. This is a specific instance of estimating the diversity of a bacterial pathogen from metagenomics data, focusing on within-host diversity and taking advantage of the availability of a large database of known MLST strain types. Our approach is composed of two main steps, each of a different nature; the first step detects the alleles present in a sample from the sequence data, while the second step estimates the strain diversity based on the output of the first one. In both steps we follow a parsimonious approach that aims at explaining the input using as few alleles or novel strains as possible. The main contribution of our work is the formulation and the solution of the Strain Diversity Problem for a group of samples. The main challenge of this problem is the need to consider a potentially large set of samples at once. While this leads to a relatively complex MILP, with a large number of variables (whose number is determined by the number of potentially present novel strain types), we believe that the ability to consider a large set of samples at once is an important part of the model, for example for analyzing sequencing data from pathogen hosts originating from a single geographical area. Our work shows that this problem, despite its complexity, can actually be solved to a good accuracy using reasonable amounts of computational resources. Our experiments on real data suggest avenues for future research; in particular, the multiplicity of optimal solutions is obviously problematic, as calling a wrong allele in a single sample during the first step might force the MILP computing the strain types to introduce a new strain type. We can observe in our results on real data several groups of very closely related strain types, sometimes differing by a single SNP, which likely results from this issue. At the moment, our approach to this problem is to post-process the result of our pipeline to identify clusters of closely related strains, but other more principled approaches should be explored. Notwithstanding the aforementioned issues, our experiments suggest a strikingly high diversity in our dataset of 24 tick samples. This is not altogether surprising since the library of known strains might be limited, and within-host (or, more precisely, within-vector) evolution might result in the presence of a number of strains that only differ by a small number of SNPs in one or two loci of the MLST scheme. Our work is, to our knowledge, the first comprehensive approach to the problem of reference-based detection of pathogen diversity in a collection of related samples that considers novel strain types. Our two-step pipeline, based on the principle of parsimony implemented through mixed integer linear programming, appears to perform extremely well on simulated data and produces reasonable results on a real dataset. We expect that both our approach and our publicly available pipeline will contribute to the development of accurate and efficient tools for quantifying the within-host diversity of bacterial pathogens.
Footnote 1: https://www.ncbi.nlm.nih.gov/genome/genomes/738, accessed June 25, 2019.
Abbreviations: ADP: Allele Diversity Problem; EMD: Earth-Mover's Distance; FN: False Negative; FP: False Positive; ILP: Integer Linear Programming; MILP: Mixed Integer Linear Programming; MLST: Multi-Locus Sequence Typing; MST: Minimum Spanning Tree; NGS: Next-Generation Sequencing; SDP: Strain Diversity Problem; SNP: Single-Nucleotide Polymorphism; TN: True Negative; TP: True Positive; TVD: Total Variation Distance
Didelot X, Walker AS, Peto TE, Crook DW, Wilson DJ. Within-host evolution of bacterial pathogens. Nat Rev Microbiol. 2016; 14(3):150–62. Cadena AM, Fortune SM, Flynn JL. Heterogeneity in tuberculosis. Nat Rev Immunol. 2017; 17:691. https://doi.org/10.1038/nri.2017.69. Tyler AD, Randell E, Baikie M, Antonation K, Janella D, Christianson S, Tyrrell GJ, Graham M, Van Domselaar G, Sharma MK. Application of whole genome sequence analysis to the study of Mycobacterium tuberculosis in Nunavut, Canada. PLoS ONE. 2017; 12(10):0185656.
https://doi.org/10.1371/journal.pone.0185656. Alizon S, de Roode J. C, Michalakis Y. Multiple infections and the evolution of virulence. Ecol Lett. 2013; 16(4):556–67. https://doi.org/10.1111/ele.12076. Strandh M, Råberg Lars. Within-host competition between Borrelia afzelii ospC strains in wild hosts as revealed by massively parallel amplicon sequencing. Philos Trans R Soc Lond B Biol Sci. 2015; 370(1675). https://doi.org/10.1098/rstb.2014.0293. Brisson D, Baxamusa N, Schwartz I, Wormser GP. Biodiversity of Borrelia burgdorferi strains in tissues of Lyme disease patients. PLoS ONE. 2011; 6(8):22926. https://doi.org/10.1371/journal.pone.0022926. Walter KS, Carpi G, Evans BR, Caccone A, Diuk-Wasser MA. Vectors as epidemiological sentinels: Patterns of within-tick Borrelia burgdorferi diversity. PLoS Pathog. 2016; 12(7):1005759. URL https://doi.org/10.1371/journal.ppat.1005759. Lynch T, Petkau A, Knox N, Graham M, Domselaar GV. A primer on infectious disease bacterial genomics. Clin Microbiol Rev. 2016; 29(4):881–913. https://doi.org/10.1128/cmr.00001-16. Carpi G, Walter KS, Bent SJ, Hoen AG, Diuk-Wasser M, Caccone A. Whole genome capture of vector-borne pathogens from mixed DNA samples: a case study of Borrelia burgdorferi. BMC Genomics. 2015; 16(1). https://doi.org/10.1186/s12864-015-1634-x. Maiden MC, Bygraves JA, Feil E, Morelli G, Russell JE, Urwin R, Zhang Q, Zhou J, Zurth K, Caugant DA, Feavers IM, Achtman M, Spratt BG. Multilocus sequence typing: a portable approach to the identification of clones within populations of pathogenic microorganisms. PNAS. 1998; 95(6):3140–5. Margos G, Gatewood AG, Aanensen DM, Hanincova K, Terekhova D, Vollmer SA, Cornet M, Piesman J, Donaghy M, Bormane A, Hurn MA, Feil EJ, Fish D, Casjens S, Wormser GP, Schwartz I, Kurtenbach K. MLST of housekeeping genes captures geographic population structure and suggests a european origin of Borrelia burgdorferi. PNAS. 2008; 105(25):8730–35. https://doi.org/10.1073/pnas.0800323105. Quince C, Delmont TO, Raguideau S, Alneberg J, Darling AE, Collins G, Eren AM. DESMAN: a new tool for de novo extraction of strains from metagenomes. Genome Biol. 2017; 18(1):181. https://doi.org/10.1186/s13059-017-1309-9. Albanese D, Donati C. Strain profiling and epidemiology of bacterial species from metagenomic sequencing. Nat Commun. 2017; 8(1):2260. https://doi.org/10.1038/s41467-017-02209-5. Li J, Du P, Ye AY, Zhang Y, Song C, Zeng H, Chen C. GPA: A microbial genetic polymorphisms assignments tool in metagenomic analysis by bayesian estimation. Genomics Proteomics Bioinforma. 2019; 17(1):106–17. https://doi.org/10.1016/j.gpb.2018.12.005. Chindelevitch L, Colijn C, Moodley P, Wilson D, Cohen T, Else E. ClassTR: Classifying within-host heterogeneity based on tandem repeats with application to Mycobacterium tuberculosis infections. PLOS Comput Biol. 2016; 12(2):1–16. https://doi.org/10.1371/journal.pcbi.1004475. Page AJ, Alikhan N-F, Carleton HA, Seemann T, Keane JA, Katz LS. Comparison of Multi-Locus Sequence Typing software for Next Generation Sequencing data. Microb Genom. 2017; 3:000124. URL https://doi.org/10.1099/mgen.0.000124. Boža V, Brejová B, Vinař T. GAML: genome assembly by maximum likelihood. Algorithm Mol Biol. 2015; 10(1):18. URL https://doi.org/10.1186/s13015-015-0052-6. Huang W, Li L, Myers JR, Marth GT. ART: a Next-Generation Sequencing read simulator. Bioinformatics. 2012; 28(4):593–4. https://doi.org/10.1093/bioinformatics/btr708. Langmead B, Trapnell C, Pop M, Salzberg SL. 
Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009; 10(3):25. URL https://doi.org/10.1186/gb-2009-10-3-r25. Bray NL, Pimentel H, Melsted P, Pachter L. Near-optimal probabilistic RNA-seq quantification. Nat Biotech. 2016; 34(5):525–7. https://doi.org/10.1038/nbt.3519. Levin DA, Peres Y, Wilmer EL. Markov chains and mixing times. Am Math Soc. 2009. https://doi.org/10.1090/mbk/058. Peleg S, Werman M, Rom H. A unified approach to the change of resolution: space and gray-level. IEEE Trans Pattern Anal Mach Intell. 1989; 11(7):739–42. https://doi.org/10.1109/34.192468. Knyazev S, Tsyvina V, Melnyk A, Artyomenko A, Malygina T, Porozov YB, Campbell E, Switzer WM, Skums P, Zelikovsky A. CliqueSNV: Scalable reconstruction of intra-host viral populations from NGS reads. bioRxiv. 2018. https://doi.org/10.1101/264242. Falco RC, Fish D. A comparison of methods for sampling the deer tick, Ixodes dammini, in a Lyme disease endemic area. Exp Appl Acarol. 1992; 14(2):165–73. https://doi.org/10.1007/BF01219108. The authors would like to thank Maria Diuk-Wasser, Katharine Walter and Ben Adams for suggesting the problem as well as helpful discussions with regards to the data provenance and analysis. About this supplement This article has been published as part of BMC Bioinformatics Volume 20 Supplement 20, 2019: Proceedings of the 17th Annual Research in Computational Molecular Biology (RECOMB) Comparative Genomics Satellite Workshop: Bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-20-supplement-20. LC acknowledges support from NSERC, CIHR, Genome Canada and the Sloan Foundation. CC acknowledges support from NSERC. GLG was partially funded by an NSERC CREATE scholarship. EW was partially funded by an SFU KEY fellowship. Publication costs are funded by the SFU Central Open Access Fund. Guo Liang Gan and Elijah Willie contributed equally to this work. School of Computing Science, Simon Fraser University, 8888 University Drive, Burnaby (BC), V5A 1S6, Canada: Guo Liang Gan, Elijah Willie & Leonid Chindelevitch. Department of Mathematics, Simon Fraser University, 8888 University Drive, Burnaby (BC), V5A 1S6, Canada: Cedric Chauve. LaBRI, Université de Bordeaux, 351 Cours de la Libération, Talence, 33405, France. LC designed the project, LC, CC, GLG, EW designed the methods, GLG and EW implemented the methods and ran the experiments, LC, CC, GLG, EW analyzed the results and wrote the paper. All authors read and approved the final manuscript. Correspondence to Leonid Chindelevitch. Additional file 1 Supplementary methods, figures and tables. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Gan, G., Willie, E., Chauve, C. et al.
Deconvoluting the diversity of within-host pathogen strains in a multi-locus sequence typing framework. BMC Bioinformatics 20, 637 (2019) doi:10.1186/s12859-019-3204-8 Bacterial diversity
A computational modular approach to evaluate $ {\mathrm{NO_{x}}} $ emissions and ozone production due to vehicular traffic
Caterina Balzotti 1, Maya Briani 2, Barbara De Filippo 2 and Benedetto Piccoli 3
1. Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Sapienza Università di Roma, Rome, 00161, Italy
2. Istituto per le Applicazioni del Calcolo "M. Picone", Consiglio Nazionale delle Ricerche, Rome, 00185, Italy
3. Department of Mathematical Sciences, Rutgers University, Camden, NJ 08102, USA
* Corresponding author: Caterina Balzotti
Received November 2020; Revised June 2021; Early access July 2021
Fund Project: C. B., M. B. and B. D. F. were supported by the Italian Ministry of Instruction, University and Research (MIUR) under PRIN Project 2017 No. 2017KKJP4X, SMARTOUR Project No. B84G14000580008, and by the CNR TIRS Project FOE 2020. B. P.'s work was supported by the National Science Foundation under Cyber-Physical Systems Synergy Grant No. CNS-1837481.
The societal impact of traffic is a long-standing and complex problem. We focus on the estimation of ground-level ozone production due to vehicular traffic. We propose a comprehensive computational approach combining four consecutive modules: a traffic simulation module, an emission module, a module for the main chemical reactions leading to ozone production, and a module for the diffusion of gases in the atmosphere. The traffic module is based on a second-order traffic flow model, obtained by choosing a special velocity function for the Collapsed Generalized Aw-Rascle-Zhang model. A general emission module is taken from the literature and tuned on NGSIM data together with the traffic module. The last two modules are based on reaction-diffusion partial differential equations. The system of partial differential equations describing the main chemical reactions of nitrogen oxides presents a source term given by the general emission module applied to the output of the traffic module. We use the proposed approach to analyze the ozone impact of various traffic scenarios and to describe the effect of traffic light timing. The numerical tests show the negative effect of vehicle restarts on emissions, and the consequent increase in pollutants in the air, suggesting an increase in the length of the green phase of traffic lights.
Keywords: Road traffic modeling, second-order traffic models, emissions, ground-level ozone production.
Mathematics Subject Classification: Primary: 35L65, 62P12; Secondary: 90B20.
Citation: Caterina Balzotti, Maya Briani, Barbara De Filippo, Benedetto Piccoli. A computational modular approach to evaluate $ {\mathrm{NO_{x}}} $ emissions and ozone production due to vehicular traffic. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021192
Figure 1. A schematic representation of the four computational modules
Figure 2. Top: Flow-density relationship (left) and velocity-density relationship (right) from the NGSIM dataset. Bottom: Family of flux functions (7) (left) and family of velocity functions (8) (right) for the calibrated parameters
Figure 3. Comparison between ground-truth emission rate and modeled emission rate computed using discrete acceleration (15) on density and speed via kernel density estimation (left). Comparison of emission rate computed with the discrete (15) and analytical (9) acceleration (right). Both results refer to 500 meters of road and 13 minutes of simulation (data from 4:01 pm - 4:14 pm of the NGSIM dataset)
Figure 4. Comparison of modeled (black-dotted), modeled with correction factors $ r_{j} $ (red-circles) and ground-truth (blue-solid) emission rates along 500 meters of road during 13 minutes of simulation for the three time periods of the NGSIM dataset. The top row is computed for $ r_{1} = 1.42 $, the central row for $ r_{2} = 1.35 $ and the bottom row for $ r_{3} = 1.15 $
Figure 5. Numerical grid and adaptive time steps (black crosses) required by the solver
Figure 6. Flowchart of the complete procedure
Figure 7. Traffic dynamic 1: Variation of density (a), speed (b), analytical acceleration (c) and $ {\mathrm{NO_{x}}} $ emissions (d) in space and time
Figure 8. Traffic dynamic 1: $ {\mathrm{NO_{x}}} $ emission rate ($ {\mathrm{g}}{/}{\mathrm{h}} $) as a function of speed and acceleration (left); variation in time of the total emission rate ($ {\mathrm{g}}{/}{\mathrm{h}} $) along the entire road (right)
Figure 10. Traffic dynamic 2: $ {\mathrm{NO_{x}}} $ emission rate ($ {\mathrm{g}}{/}{\mathrm{h}} $) as a function of speed and acceleration (left); variation in time of the total emission rate ($ {\mathrm{g}}{/}{\mathrm{h}} $) along the entire road (right)
Figure 11. Traffic dynamic 2.1: Variation in time of the total $ {\mathrm{NO_{x}}} $ emission rate ($ {\mathrm{g}}{/}{\mathrm{h}} $) along the entire road with $ r = 3/2 $ and varying the traffic light duration $ t_c $ in minutes: $ t_c = 7.5 $ with $ t_r = 3 $ (left); $ t_c = 5 $ with $ t_r = 2 $ (center); $ t_c = 2.5 $ with $ t_r = 1 $ (right)
Figure 12. Traffic dynamic 2.2: Variation in time of the total emission rate ($ {\mathrm{g}}{/}{\mathrm{h}} $) along the entire road by varying the ratio $ r $
Figure 13. Variation in time of the total concentration ($ {\mathrm{g}}/{\mathrm{k}}{\mathrm{m}}^3 $) of $ {\mathrm{O_{3}}} $ (left) and $ {\mathrm{O_{2}}} $ (right), in the case of dynamics with (red-circles) and without (blue-solid) traffic light
Vertical diffusion of ozone concentration ($ {\mathrm{g}}{/}{\mathrm{km}}^{3} $) in $ \Omega $ at different times with (bottom) and without (top) traffic lights Figure 15. Diffusion of ozone concentration ($ {\mathrm{g}}{/}{\mathrm{km}}^{3} $) in time at $ 1\, {\mathrm{m}} $ height with (right) and without (left) traffic lights Figure 16. Horizontal diffusion of ozone concentration ($ {\mathrm{g}}{/}{\mathrm{km}}^{3} $) in $ \Omega $ at different times with (bottom) and without (top) traffic lights Table 1. Parameters for CGARZ model (1) calibrated on NGSIM dataset $ {V^{\mathrm{max}}} $ $ \rho_f $ $ {\rho^{\mathrm{max}}} $ $ \rho_{c} $ $ {w_{L}} $ $ {w_{R}} $ $ {65}\, {{\mathrm{k}}{\mathrm{m}}{/}{\mathrm{h}}} $ $ {110}\, {\mathrm{veh}{/}{\mathrm{k}}{\mathrm{m}}} $ $ {800}\, {\mathrm{veh}{/}{\mathrm{k}}{\mathrm{m}}} $ $ {\rho^{\mathrm{max}}}/2 $ $ 5687 $ $ 13000 $ Table 2. $ {\mathrm{NO_{x}}} $ parameters in emission rate formula (16) for an internal combustion engine car, where $ {\mathrm{g}} $ denotes gram, $ {\mathrm{m}} $ meter and $ {\mathrm{s}} $ second Vehicle mode $ f_{1} $ $ f_{2} $ $ f_{3} $ $ f_{4} $ $ f_{5} $ $ f_{6} $ $ \left[{\mathrm{g}}/{\mathrm{s}}\right] $ $ \left[{\mathrm{g}}/{\mathrm{m}}\right] $ $ \left[{\mathrm{g}}\, {\mathrm{s}}/{\mathrm{m}}^{2}\right] $ $ \left[{\mathrm{g}}\, {\mathrm{s}}/{\mathrm{m}}\right] $ $ \left[{\mathrm{g}}\, {\mathrm{s}}^{3}/{\mathrm{m}}^{2}\right] $ $ \left[{\mathrm{g}} \, {\mathrm{s}}^{2}/{\mathrm{m}}^{2}\right] $ If $ a_i (t) \geq -0.5\, {\mathrm{m}}{/}{\mathrm{s}}^2 $ 6.19e-04 8e-05 -4.03e-06 -4.13e-04 3.80e-04 1.77e-04 If $ a_i (t)<-0.5\, {\mathrm{m}}{/}{\mathrm{s}}^2 $ 2.17e-04 0 0 0 0 0 Table 3. Errors given by (21) for the three slots of the NGSIM dataset and different correction factor $ r_{1} = 1.42 $, $ r_{2} = 1.35 $ and $ r_{3} = 1.15 $ Period $ \mathrm{Error}(r_{1}) $ $ \mathrm{Error}(r_{2}) $ $ \mathrm{Error}(r_{3}) $ 4:01 pm - 4:14 pm 0.1604 0.1666 0.2204 Table 4. Parameters $ k_{1} $, $ k_{2} $, and $ k_{3} $ of system (26), where $ {\mathrm{c}}{\mathrm{m}} $ denotes centimeter, $ {\mathrm{s}} $ second and $ \mathrm{molecule} $ the number of molecules Parameter Value $ k_{1} $ $ {0.02}\, {\, {\mathrm{s}}^{-1}} $ $ k_{2} $ $ {6.09\times 10^{-34}}\, {{\mathrm{c}}{\mathrm{m}}^6}\, \mathrm{ molecule}^{-2}\, {\mathrm{s}}^{-1} $ $ k_{3} $ $ {1.81\times10^{-14}}\, {{\mathrm{c}}{\mathrm{m}}^3}\, \mathrm{molecule}^{-1}\, {\mathrm{s}}^{-1} $ Table 5. Variation of the total amount of $ {\mathrm{O_{3}}} $, $ {\mathrm{NO}} $, $ {\mathrm{NO_{2}}} $ and $ {\mathrm{O}} $ concentration ($ {\mathrm{g}}/{\mathrm{k}}{\mathrm{m}}^3 $) computed with three different traffic light duration (Traffic dynamic 2.1) with respect the total amount of concentrations without traffic light (Traffic dynamic 1) $ t_c=t_r+t_g $ $ {(3+4.5)}\, {\mathrm{min}} $ $ {(2+3)}\, {\mathrm{min}} $ $ {(1+1.5)}\, {\mathrm{min}} $ $ {\mathrm{O_{3}}} $ 2.95e+07 3.54e+07 3.91e+07 $ {\mathrm{NO}} $ 1.09e+09 1.28e+09 1.43e+09 $ {\mathrm{NO_{2}}} $ 1.55e+08 1.81e+08 2.02e+08 $ {\mathrm{O}} $ 7.00e+01 8.21e+01 9.13e+01 Bertrand Haut, Georges Bastin. A second order model of road junctions in fluid models of traffic networks. Networks & Heterogeneous Media, 2007, 2 (2) : 227-253. doi: 10.3934/nhm.2007.2.227 Paola Goatin. Traffic flow models with phase transitions on road networks. Networks & Heterogeneous Media, 2009, 4 (2) : 287-301. doi: 10.3934/nhm.2009.4.287 Michael Burger, Simone Göttlich, Thomas Jung. Derivation of second order traffic flow models with time delays. 
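Formula (16) referenced in Table 2 is not reproduced in this excerpt. The coefficients and the two acceleration regimes match the instantaneous emission model of Int Panis et al. (2006), in which the per-vehicle emission rate is a polynomial in speed and acceleration clipped at a non-negative floor. The sketch below only illustrates how coefficients of this kind are typically applied; the function name and the zero floor are assumptions, not code from the paper.

```python
# Illustrative sketch (not the paper's code): NOx emission rate for one car,
# following the instantaneous model of Int Panis et al. (2006),
#   E(v, a) = max(E0, f1 + f2*v + f3*v**2 + f4*a + f5*a**2 + f6*v*a),
# with the Table 2 coefficients (v in m/s, a in m/s^2, result in g/s).

def nox_emission_rate(v, a, floor=0.0):
    """Approximate NOx emission rate in g/s for an internal combustion car."""
    if a >= -0.5:   # acceleration / cruising regime
        f = (6.19e-04, 8e-05, -4.03e-06, -4.13e-04, 3.80e-04, 1.77e-04)
    else:           # strong deceleration regime
        f = (2.17e-04, 0.0, 0.0, 0.0, 0.0, 0.0)
    f1, f2, f3, f4, f5, f6 = f
    rate = f1 + f2*v + f3*v**2 + f4*a + f5*a**2 + f6*v*a
    return max(floor, rate)

# Example: a car at 15 m/s (54 km/h) accelerating at 1 m/s^2
print(nox_emission_rate(15.0, 1.0))   # ~3.5e-03 g/s
```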
Networks & Heterogeneous Media, 2019, 14 (2) : 265-288. doi: 10.3934/nhm.2019011 Paola Goatin, Elena Rossi. Comparative study of macroscopic traffic flow models at road junctions. Networks & Heterogeneous Media, 2020, 15 (2) : 261-279. doi: 10.3934/nhm.2020012 Oliver Kolb, Simone Göttlich, Paola Goatin. Capacity drop and traffic control for a second order traffic model. Networks & Heterogeneous Media, 2017, 12 (4) : 663-681. doi: 10.3934/nhm.2017027 Benjamin Seibold, Morris R. Flynn, Aslan R. Kasimov, Rodolfo R. Rosales. Constructing set-valued fundamental diagrams from Jamiton solutions in second order traffic models. Networks & Heterogeneous Media, 2013, 8 (3) : 745-772. doi: 10.3934/nhm.2013.8.745 Xiaoping Wang. Ground state homoclinic solutions for a second-order Hamiltonian system. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 2163-2175. doi: 10.3934/dcdss.2019139 Nicolas Forcadel, Wilfredo Salazar, Mamdouh Zaydan. Homogenization of second order discrete model with local perturbation and application to traffic flow. Discrete & Continuous Dynamical Systems, 2017, 37 (3) : 1437-1487. doi: 10.3934/dcds.2017060 Xiaoni Chi, Zhongping Wan, Zijun Hao. Second order sufficient conditions for a class of bilevel programs with lower level second-order cone programming problem. Journal of Industrial & Management Optimization, 2015, 11 (4) : 1111-1125. doi: 10.3934/jimo.2015.11.1111 Alexandre Bayen, Rinaldo M. Colombo, Paola Goatin, Benedetto Piccoli. Traffic modeling and management: Trends and perspectives. Discrete & Continuous Dynamical Systems - S, 2014, 7 (3) : i-ii. doi: 10.3934/dcdss.2014.7.3i Lino J. Alvarez-Vázquez, Néstor García-Chan, Aurea Martínez, Miguel E. Vázquez-Méndez. Optimal control of urban air pollution related to traffic flow in road networks. Mathematical Control & Related Fields, 2018, 8 (1) : 177-193. doi: 10.3934/mcrf.2018008 Ying Lv, Yan-Fang Xue, Chun-Lei Tang. Ground state homoclinic orbits for a class of asymptotically periodic second-order Hamiltonian systems. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1627-1652. doi: 10.3934/dcdsb.2020176 Michael Herty, Gabriella Puppo, Sebastiano Roncoroni, Giuseppe Visconti. The BGK approximation of kinetic models for traffic. Kinetic & Related Models, 2020, 13 (2) : 279-307. doi: 10.3934/krm.2020010 Marco Di Francesco, Simone Fagioli, Massimiliano D. Rosini. Many particle approximation of the Aw-Rascle-Zhang second order model for vehicular traffic. Mathematical Biosciences & Engineering, 2017, 14 (1) : 127-141. doi: 10.3934/mbe.2017009 Rinaldo M. Colombo, Andrea Corli. Dynamic parameters identification in traffic flow modeling. Conference Publications, 2005, 2005 (Special) : 190-199. doi: 10.3934/proc.2005.2005.190 Nicola Bellomo, Abdelghani Bellouquid, Juanjo Nieto, Juan Soler. On the multiscale modeling of vehicular traffic: From kinetic to hydrodynamics. Discrete & Continuous Dynamical Systems - B, 2014, 19 (7) : 1869-1888. doi: 10.3934/dcdsb.2014.19.1869 Emiliano Cristiani, Smita Sahu. On the micro-to-macro limit for first-order traffic flow models on networks. Networks & Heterogeneous Media, 2016, 11 (3) : 395-413. doi: 10.3934/nhm.2016002 Alberto Bressan, Khai T. Nguyen. Conservation law models for traffic flow on a network of roads. Networks & Heterogeneous Media, 2015, 10 (2) : 255-293. doi: 10.3934/nhm.2015.10.255 Johanna Ridder, Wen Shen. Traveling waves for nonlocal models of traffic flow. Discrete & Continuous Dynamical Systems, 2019, 39 (7) : 4001-4040. 
doi: 10.3934/dcds.2019161 Tong Li. Qualitative analysis of some PDE models of traffic flow. Networks & Heterogeneous Media, 2013, 8 (3) : 773-781. doi: 10.3934/nhm.2013.8.773 Caterina Balzotti Maya Briani Barbara De Filippo Benedetto Piccoli
Evolution Equations & Control Theory (EECT), September 2020, 9(3): 635-672. doi: 10.3934/eect.2020027

Semiglobal exponential stabilization of nonautonomous semilinear parabolic-like systems

Sérgio S. Rodrigues, Johann Radon Institute for Computational and Applied Mathematics, ÖAW, Altenbergerstraße 69, A-4040 Linz, Austria

Received April 2019; Revised June 2019; Published December 2019

It is shown that an explicit oblique projection nonlinear feedback controller is able to stabilize semilinear parabolic equations, with time-dependent dynamics and with a polynomial nonlinearity. The actuators are typically modeled by a finite number of indicator functions of small subdomains. No constraint is imposed on the sign of the polynomial nonlinearity. The norm of the initial condition can be arbitrarily large, and the total volume covered by the actuators can be arbitrarily small. The number of actuators depends on the operator norm of the oblique projection, on the polynomial degree of the nonlinearity, on the norm of the initial condition, and on the total volume covered by the actuators. The range of the feedback controller coincides with the range of the oblique projection, which is the linear span of the actuators. The oblique projection is performed along the orthogonal complement of a subspace spanned by a suitable finite number of eigenfunctions of the diffusion operator. For rectangular domains, it is possible to explicitly construct/place the actuators so that the stability of the closed-loop system is guaranteed. Simulations are presented, which show the semiglobal stabilizing performance of the nonlinear feedback.

Keywords: Semiglobal exponential stabilization, nonlinear feedback, nonlinear nonautonomous parabolic systems, finite-dimensional controller, oblique projections.

Mathematics Subject Classification: 93D15, 93C10, 93B52.

Citation: Sérgio S. Rodrigues. Semiglobal exponential stabilization of nonautonomous semilinear parabolic-like systems. Evolution Equations & Control Theory, 2020, 9 (3) : 635-672. doi: 10.3934/eect.2020027
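The abstract describes the setting only in words; the toy numerical sketch below is added purely to illustrate that setting (a semilinear parabolic equation, a few actuators modeled by indicator functions of small subintervals, a feedback supported on those actuators, and a large initial state). It is not the paper's oblique-projection controller: the equation, the naive localized-damping feedback, the gain and the actuator layout are all illustrative assumptions, and whether such a simple feedback stabilizes depends entirely on the chosen numbers.

```python
# Toy sketch only -- NOT the paper's controller or code.
#   u_t = nu*u_xx + c*u - u**3 + control,   x in (0, 1),   u(0) = u(1) = 0,
# with c > nu*pi**2, so the zero state is unstable without control.
# The control acts only through M small "actuator" subintervals.
import numpy as np

N, M = 200, 4                       # grid intervals, number of actuators
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]
nu, c, gain = 1.0, 15.0, 60.0       # c > nu*pi^2 (about 9.87)
dt = 0.4 * dx**2 / nu               # explicit Euler stability

chi = np.zeros_like(x)              # union of actuator supports (width 0.04 each)
for ctr in (np.arange(M) + 0.5) / M:
    chi[np.abs(x - ctr) < 0.02] = 1.0

u = 5.0 * np.sin(np.pi * x)         # a "large" initial condition
steps = int(1.0 / dt)
for k in range(steps):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    feedback = -gain * chi * u      # control supported on the actuators only
    u += dt * (nu * lap + c * u - u**3 + feedback)
    u[0] = u[-1] = 0.0
    if k % (steps // 5) == 0:
        print(f"t = {k*dt:6.3f}   max|u| = {np.abs(u).max():.4f}")
print(f"t =  1.000   max|u| = {np.abs(u).max():.4f}")
```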
Figure 1. Uncontrolled solutions. Linear and nonlinear systems
Figure 2. Linear systems and linear feedback
Figure 3. Nonlinear systems and linear feedback
Figure 4. Nonlinear systems and nonlinear feedback
Figure 5. Nonlinear systems and nonlinear feedback. Bigger initial condition
Figure 6. Nonlinear systems and nonlinear feedback. Increasing the number of actuators
Figure 7. Nonlinear systems and linear feedback. Increasing the number of actuators
Figure 8. Nonlinear systems and nonlinear feedback. $y(0) = c_{\rm ic}\sin(8\pi x)\in E_{{\mathbb M}_\sigma}^\perp$
Physically based rendering: informal introduction

07 Dec 2017 · 5 min read

In this post I will give you an informal introduction (and my personal understanding) of physically based rendering. Physically Based Rendering (PBR) is one of the latest and most exciting trends in computer graphics. PBR is "everywhere" in computer graphics. But wait, what is PBR? PBR uses physically correct lighting and shading models to treat light as it behaves in the real world. Since what you see in a computer graphics application is determined by how light is represented, with PBR it is possible to reach a new level of realism. But wait, what do we mean by "physically correct"? Before giving an answer and a detailed definition of PBR, we need to understand some important concepts.

What is light? Light is a form of electromagnetic radiation. Specifically, it is a small subset of the entire electromagnetic spectrum, with wavelengths between 400 nm and 700 nm. The set of studies and techniques that describe and measure how the electromagnetic radiation of light is propagated, reflected and transmitted is called radiometry.

What are the fundamental quantities described by radiometry? The first one is called flux: it describes the amount of radiant energy emitted, reflected or transmitted from a surface per unit time. Radiant energy is the energy of an electromagnetic radiation. The unit of measure of flux is joules per second $\frac{J}{s}$, and it is usually denoted with the Greek letter $\phi$.

Two other important quantities of radiometry are irradiance and radiant exitance. The first one describes flux arriving at a surface per unit area. The second one describes flux leaving a surface per unit area (Pharr et al., 2010 [1]). Formally, irradiance is described with the following equation:

$$E = \frac{d\phi}{dA}$$

where the differential flux $d\phi$ is computed over the differential area $dA$. It is measured in watts per square meter.

Before proceeding to the last radiometric quantity, it is useful to give the definition of solid angle. A solid angle is an extension of a 2D angle to 3D on a unit sphere. It is the total area projected by an object on a unit sphere centered at a point $p$. It is measured in steradians. The entire unit sphere corresponds to a solid angle of $4\pi$ (the surface area of the unit sphere). A solid angle is usually indicated as $\Omega$, but it is also possible to represent it with $\omega$, that is, the set of all direction vectors anchored at $p$ that point toward the area on the unit sphere and the object (Pharr et al., 2010 [1]).

Now it is possible to give the definition of radiance, that is, flux density per unit solid angle per unit area:

$$L=\frac{d\phi}{d\omega \, dA^{\perp}}$$

In this case $dA^{\perp}$ is the projection of the area $dA$ on a surface perpendicular to $\omega$. So radiance describes the limit of the measurement of incident light at the surface as the cone of incident directions of interest $d\omega$ becomes very small, and as the local area of interest on the surface $dA$ also becomes very small (Pharr et al., 2010 [1]).

It is useful to make a distinction between radiance arriving at a point, usually called incident radiance and indicated with $L_{i}(p,\omega)$, and radiance leaving a point, called exitant radiance and indicated with $L_{o}(p,\omega)$. This distinction will be used in the equations described below.
It is also important to note another useful property that connects the two types of radiance:

$$L_{i}(p,\omega) \neq L_{o}(p,\omega)$$

The rendering equation

The rendering equation was introduced by James Kajiya in 1986 [2]. Sometimes it is also called the LTE, Light Transport Equation. It is the equation that describes the equilibrium distribution of radiance in a scene (Pharr et al., 2010 [3]). It gives the total reflected radiance at a point as a sum of emitted and reflected light from a surface. This is the formula of the rendering equation:

$$L_{o}(p,\omega_{o}) = L_{e}(p,\omega_{o}) + \int_{\Omega}f_{r}(p,\omega_{i},\omega_{o})L_{i}(p,\omega_{i})\cos\theta_{i}\,d\omega_{i}$$

In this formula the meaning of each symbol is:
$p$ is a point on a surface in the scene
$\omega_{o}$ is the outgoing light direction
$\omega_{i}$ is the incident light direction
$L_{o}(p,\omega_{o})$ is the exitant radiance at the point $p$
$L_{e}(p,\omega_{o})$ is the emitted radiance at the point $p$
$\Omega$ is the unit hemisphere centered around the normal at the point $p$
$\int_{\Omega}\dots\,d\omega_{i}$ is the integral over the unit hemisphere
$f_{r}(p,\omega_{i},\omega_{o})$ is the Bidirectional Reflectance Distribution Function, and we will talk about it in a few moments
$L_{i}(p,\omega_{i})$ is the incident radiance arriving at the point $p$
$\cos\theta_{i}$ is given by the dot product between $\omega_{i}$ and the normal at the point $p$, and is the attenuation factor of the irradiance due to the incident angle

BRDF

One of the main components of the rendering equation previously described is the Bidirectional Reflectance Distribution Function (BRDF). This function describes how light is reflected from a surface. It represents a constant of proportionality between the differential exitant radiance and the differential irradiance at a point $p$ (Pharr et al., 2010 [1]). The parameters of this function are: the incident light direction, the outgoing light direction and a point on the surface. The formula for this function in terms of radiometric quantities is the following:

$$f_{r}(p,\omega_{i},\omega_{o}) = \frac{dL_{o}(p,\omega_{o})}{dE(p,\omega_{i})}$$

The BRDF has two important properties:
it is a symmetric function, so for all pairs of directions $$f_{r}(p,\omega_{i},\omega_{o}) = f_{r}(p,\omega_{o},\omega_{i})$$
it satisfies the energy conservation principle: the light reflected is less than or equal to the incident light.

A lot of models have been developed to describe the BRDF of different surfaces. In particular, in the last few years the microfacet models have gained attention. In these kinds of models the surface is represented as composed of infinitely small microfacets that model in a more realistic way the vast majority of surfaces in the real world. Each one of these microfacets has its own geometric definition (in particular its normal).

Some material surfaces, for example glass, reflect and transmit light at the same time, so a fraction of the light goes through the material. For this reason there is another function, the Bidirectional Transmittance Distribution Function, BTDF, defined in the same way as the BRDF, but with the directions $\omega_{i}$ and $\omega_{o}$ placed in opposite hemispheres around $p$ (Pharr et al., 2010 [1]). It is usually indicated as $f_{t}(p,\omega_{i},\omega_{o})$.

The Fresnel equations describe the behaviour of light at the boundary between different materials. They also tell us how the balance between the different kinds of reflection changes based on the angle at which you view the surface.
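To make the reflection integral a bit more concrete, here is a small Monte Carlo sketch (an added illustration, not code from the references or from any particular engine): it estimates the reflected part of the rendering equation for the simplest BRDF, the Lambertian one $f_{r}=\frac{\rho}{\pi}$, under a made-up incident radiance. The "sky" function, the albedo value and the sample count are assumptions chosen only for this example.

```python
# Monte Carlo estimate of the reflected radiance
#   L_o(p, w_o) = integral over the hemisphere of f_r * L_i(p, w_i) * cos(theta_i) dw_i
# for a Lambertian BRDF f_r = albedo/pi, using uniform hemisphere sampling
# around the normal n = (0, 0, 1), whose pdf is 1/(2*pi).
import math
import random

def sample_hemisphere():
    """Uniform direction on the unit hemisphere around n = (0, 0, 1)."""
    u1, u2 = random.random(), random.random()
    z = u1                               # cos(theta), uniform in [0, 1]
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def incident_radiance(w_i):
    """Made-up sky: incoming radiance grows with the elevation of w_i."""
    return 1.0 + 4.0 * w_i[2]

def reflected_radiance(albedo, samples=100_000):
    f_r = albedo / math.pi               # Lambertian BRDF, direction independent
    pdf = 1.0 / (2.0 * math.pi)          # uniform hemisphere sampling
    total = 0.0
    for _ in range(samples):
        w_i = sample_hemisphere()
        cos_theta = w_i[2]               # dot(w_i, n) with n = (0, 0, 1)
        total += f_r * incident_radiance(w_i) * cos_theta / pdf
    return total / samples

# For this particular sky the integral can be done by hand:
# L_o = (albedo/pi) * (pi + 8*pi/3) = albedo * 11/3, i.e. about 2.93 for albedo 0.8.
print(reflected_radiance(albedo=0.8))
```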
Physically Based Rendering

So let's go back to our original question: what is PBR? PBR is a model that encloses a set of techniques that try to simulate how light behaves in the real world. Taking an extract from the Wikipedia definition:

PBR is often characterized by an approximation of a real, radiometric bidirectional reflectance distribution function (BRDF) to govern the essential reflections of light, the use of reflection constants such as specular intensity, gloss, and metallicity derived from measurements of real-world sources, accurate modeling of global illumination in which light bounces and/or is emitted from objects other than the primary light sources, conservation of energy which balances the intensity of specular highlights with dark areas of an object, Fresnel conditions that reflect light at the sides of objects perpendicular to the viewer, and accurate modeling of roughness resulting from microsurfaces.

You can see from the definition that PBR is a model that uses all the concepts we saw previously in this article to try to get the most accurate results in terms of realism in computer graphics applications. PBR engines and asset pipelines let the artist define materials in terms of more realistic components, instead of tweaking ad-hoc parameters based on the type of the surface. Usually in these kinds of engines/asset pipelines the main parameters used to specify a surface's features are:

albedo/diffuse: this component controls the base color/reflectivity of the surface
metallic: this component specifies whether the surface is metallic or not
roughness: this component specifies how rough a surface is on a per-texel basis
normal: this component is a classical normal map of the surface

What results can you achieve using PBR? These are two example images: the first one is taken from my physically based spectral path tracing engine Spectral Clara Lux Tracer and the second one is taken from PBRT, the physically based engine described in the book "Physically based rendering: from theory to implementation" by M. Pharr, W. Jakob, G. Humphreys.

Some PBR scenes generated using PBRT and Spectral Clara Lux Tracer

How cool are these images???? We are at the end of the introduction. I hope it is now at least clear what PBR is!! See you for other stuff about computer graphics and PBR.

[1] M. Pharr and G. Humphreys, "Color and radiometry," in Physically based rendering: from theory to implementation, 2nd ed., Burlington, Massachusetts: Morgan Kaufmann, 2010, ch. 5, pp. 261-297.
[2] J. T. Kajiya, "The Rendering Equation," in SIGGRAPH '86, Dallas, 1986, pp. 143-150.
[3] M. Pharr and G. Humphreys, "Light transport I: surface reflection," in Physically based rendering: from theory to implementation, 2nd ed., Burlington, Massachusetts: Morgan Kaufmann, 2010, ch. 15, pp. 760-770.

computer graphics physically based rendering
Functions of several variables 1 Overview of functions 2 Linear functions and planes in ${\bf R}^3$ 3 An example of a non-linear function 4 Graphs 5 Limits 6 Continuity 7 The partial differences and difference quotients 8 The average and the instantaneous rates of change 9 Linear approximations and differentiability 10 Partial differentiation and optimization 11 The second difference quotient with respect to a repeated variable 12 The second difference and the difference quotient with respect to mixed variables 13 The second partial derivatives Let's review multidimensional functions. We have two axes: the dimension of the domain and the dimension of the range: We covered the very first cell in Chapter 7. In Chapter 17, we made a step in the vertical direction and explored the first column of this table. It is now time to move to the right. We retreat to the first cell because the new material does not depend on the material of Chapter 17 -- or vice versa, even though they do interact via compositions. We will not jump diagonally! We need to appreciate however the different challenges these two steps present. Every two numerical functions make a planar parametric curve and, conversely, every planar parametric curve is just a pair numerical functions. On the other hand, we can see that the surface that is the graph of a function of two variables produces -- through cutting by vertical planes -- infinitely many graphs of numerical functions. Note that the first cell has curves but not all of them because some of them fail the vertical line test -- such as the circle -- and can't be represented by graphs of numerical functions. Hence the need for parametric curves. Similarly, the first cell of the second column has surfaces but not all of them because some of them fail the vertical line test -- such as the sphere -- and can't be represented by graphs of functions of two variables. Hence the need for parametric surfaces, shown higher in this column. They are presented in Chapter 19. We represent a function diagrammatically as a black box that processes the input and produces the output of whatever nature: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccc} \text{input} & & \text{function} & & \text{output} \\ x & \mapsto & \begin{array}{|c|}\hline\quad f \quad \\ \hline\end{array} & \mapsto & y \end{array}$$ Let's compare the two: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{rcccl} \text{} & & \text{parametric } & & \text{} \\ \text{input} & & \text{curve} & & \text{output} \\ t & \mapsto & \begin{array}{|c|}\hline\quad F \quad \\ \hline\end{array} & \mapsto & X\\ {\bf R}& &&&{\bf R}^m\\ \text{number}& &&&\text{point or vector} \end{array}\quad \begin{array}{rcccl} \text{} & & \text{function of} & & \text{} \\ \text{input} & & \text{two variables} & & \text{output} \\ X & \mapsto & \begin{array}{|c|}\hline\quad f \quad \\ \hline\end{array} & \mapsto & z\\ {\bf R}^m& &&&{\bf R}\\ \text{point or vector}& &&&\text{number} \end{array}$$ They can be linked up and produce a composition... 
But our interest is the latter, starting with dimension $m=2$: The main metaphor for a function of two variables will remain terrain: Here, every pair $X=(x,y)$ represents a location on a map and $z=f(x,y)$ is the elevation of the terrain at that location. This is how it is plotted by a spreadsheet:

Now calculus. One subject of calculus is change and motion and, among others, we will address the question: if a drop of water lands on this surface, in what direction will it flow? We will also consider the issue of the rate of change of the function -- in any direction. Secondly, calculus studies tangency and linear approximations and, among others, we will address the question: if we zoom in on a particular location on the surface, what does it look like? The short answer is: like a plane. It is discussed in the next section.

Examples of this issue have been seen previously. Indeed, recall that the Tangent Problem asks for a tangent line to a curve at a given point. It has been solved for parametric curves in Chapter 17. However, in real life we see surfaces rather than curves. The examples are familiar.

Example. In which direction will a radar signal bounce off a plane when the surface of the plane is curved? In what direction will light bounce off a curved mirror? What if it is a whole building?

Recall the $1$-dimensional case. The difference quotient of a function $y=f(x)$ at $x=a$ is defined as the slope of the line that connects $(a,f(a))$ to the next point $(x,f(x))$: $$\frac{\Delta f}{\Delta x}=\frac{f(x)-f(a)}{x-a}.$$ Now, let's see how this plan applies to functions of two variables $z=f(x,y)$. If we are interested in the point $(a,b,f(a,b))$ on the graph of $f$, we still plot the line that connects this point to the next point on the grid. There are two such points this time; they lie in the $x$- and the $y$-directions from $(a,b)$, i.e., $(x,b)$ and $(a,y)$ with $x\ne a$ and $y\ne b$. The two slopes in these two directions are the two difference quotients, with respect to $x$ and with respect to $y$: $$\frac{\Delta f}{\Delta x}=\frac{f(x,b)-f(a,b)}{x-a} \text{ and } \frac{\Delta f}{\Delta y}=\frac{f(a,y)-f(a,b)}{y-b}.$$ When done with every pair of nodes on the graph, the result is a mesh of triangles: Furthermore, if the surface is the graph of a continuous function and we zoom in closer and closer on a particular point, we might expect the surface to start to look more and more straight, like a plane.

Linear functions and planes in ${\bf R}^3$

We will approach the issue in a manner analogous to that for lines. The standard, slope-intercept, form of the equation of a line in the $xy$-plane is: $$y=mx+p.$$ A similar, also in some sense slope-intercept, form of the equation of a plane in ${\bf R}^3$ is: $$z=mx+ny+p.$$ Indeed, if we substitute $x=y=0$ we have $z=p$. Then $p$ is the $z$-intercept! In what sense are $m$ and $n$ slopes? Let's substitute $y=0$ first. We have $z=mx+p$, an equation of a line -- in the $xz$-plane. Its slope is $m$. Now we substitute $x=0$. We have $z=ny+p$, an equation of a line -- in the $yz$-plane. Its slope is $n$.
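For instance (a concrete example added here for illustration), consider the plane $z=2x+3y+1$: $$\begin{array}{lll} y=0 &\Longrightarrow\ z=2\cdot x+1, &\text{a line with slope }2=m;\\ x=0 &\Longrightarrow\ z=3\cdot y+1, &\text{a line with slope }3=n;\\ x=y=0 &\Longrightarrow\ z=1, &\text{so the }z\text{-intercept is }p=1. \end{array}$$ Every non-vertical plane carries one slope per independent variable, read off from the coefficients.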
Now, if we cut the plane with any plane parallel to the $xz$-plane, or respectively the $yz$-plane, the resulting line has the same slope; for example: $$y=1\ \Longrightarrow\ z=mx+n+p;\quad x=1\ \Longrightarrow\ z=m+ny+p.$$ Therefore, we are justified in saying that $m$ and $n$ are the slopes of the plane with respect to the variables $x$ and $y$ respectively: $$\begin{array}{llllllll} z=&m&\cdot x&+&n&\cdot y&+&p\\ &x\text{-slope}&&&y\text{-slope}&&&z\text{-intercept} \end{array}$$ Cutting with a horizontal plane also produces a line: $$z=1\ \Longrightarrow\ 1=mx+ny+p.$$

Next, the point-slope form of the equation of a line in the $xy$-plane is: $$y-b=m(x-a).$$ This is how we can plot this line. We start with the point $(a,b)$ in ${\bf R}^2$. Then we make a step along the $x$-axis with the slope $m$, i.e., we end up at $(a+1,b+m)$ or $(a+1/m,b+1)$, etc. These two points determine the line. There is a similar, also in some sense point-slope, form of the equation of a plane. This is the step we make: $$\begin{array}{r|l} \text{in }{\bf R}^2&\text{in }{\bf R}^3\\ \hline \begin{array}{rrr} \text{point }(a,b)\\ \text{slope }m \end{array}& \begin{array}{lll} \text{point }(a,b,c)\\ \text{slopes }m,n \end{array} \end{array}$$ This analogy produces the following formula: $$z-c=m(x-a)+n(y-b).$$ Expanding it takes us back to the slope-intercept formula.

This is how we can plot this plane. We start by plotting the point $(a,b,c)$ in ${\bf R}^3$. Now we treat one variable at a time. First, we fix $y$ and change $x$. From the point $(a,b,c)$, we make a step along the $x$-axis with the $x$-slope, i.e., we end up at $(a+1,b,c+m)$ or $(a+1/m,b,c+1)$, etc. The equation of this line is: $$z-c=m(x-a),\ y=b.$$ Second, we fix $x$ and change $y$. We make a step along the $y$-axis with the $y$-slope, i.e., we end up at $(a,b+1,c+n)$ or $(a,b+1/n,c+1)$, etc. The equation of this line is: $$x=a,\ z-c=n(y-b).$$ These three points (or those two lines through the same point) determine the plane. Below, we have $$(a,b,c)=(2,4,1),\ m=-1,\ n=-2.$$

What is a plane anyway? The planes we have considered so far are the graphs of (linear) functions of two variables: $$z=f(x,y)=mx+ny+p.$$ They have to satisfy the Vertical Line Test. Therefore, the vertical planes -- even the two $xz$- and $yz$-planes -- are excluded. This is very similar to the situation with lines and the impossibility of representing all lines in the standard form $y=mx+p$, and we need the general (implicit) equation of a line in ${\bf R}^2$: $$m(x-a)+n(y-b)=0.$$ The general (implicit) equation of a plane in ${\bf R}^3$ is: $$m(x-a)+n(y-b)+k(z-c)=0.$$ Let's take a careful look at the left-hand side of this expression. There is a symmetry here over the three coordinates: $$\begin{array}{lll} m&\cdot&(x-a)+\\ n&\cdot&(y-b)+\\ k&\cdot&(z-c)=0 \end{array}\quad\leadsto\quad\left[ \begin{array}{lll}m\\n\\k\end{array}\right]\cdot \left[ \begin{array}{lll}x-a\\y-b\\z-c\end{array}\right]=0.$$ This is the dot product! The equation becomes: $$<m,n,k>\cdot <x-a,y-b,z-c>=0,$$ or even better: $$<m,n,k>\cdot \bigg( (x,y,z)-(a,b,c) \bigg) =0.$$ Finally, a coordinate-free version: $$N\cdot (P-P_0)=0 \text{ or } N\cdot P_0P=0.$$ Here we have in ${\bf R}^3$: $P$ is the variable point, $P_0$ is the fixed point, and $N$ is any vector that somehow represents the slope of the plane. The meaning of the vector $N$ is revealed once we remember that the dot product of two vectors is $0$ if and only if they are perpendicular. These vectors are like a bicycle's spokes to the hub $N$.
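Here is a quick numerical check of the relation $N\cdot(P-P_0)=0$ (an added illustration, using the sample point and slopes from the plotting example above). For the plane $z-c=m(x-a)+n(y-b)$ the implicit form is $m(x-a)+n(y-b)+k(z-c)=0$ with $k=-1$, so we can take $N=<m,n,-1>$.

```python
# Check (illustration): points generated from the slope form of the plane
#   z - c = m*(x - a) + n*(y - b),  with (a, b, c) = (2, 4, 1), m = -1, n = -2,
# all satisfy N . (P - P0) = 0 for the normal vector N = (m, n, -1).
import numpy as np

a, b, c = 2.0, 4.0, 1.0
m, n = -1.0, -2.0
P0 = np.array([a, b, c])
N = np.array([m, n, -1.0])

def plane_point(x, y):
    """A point of the plane, built from its explicit (slope) form."""
    return np.array([x, y, c + m * (x - a) + n * (y - b)])

for x, y in [(2, 4), (3, 4), (2, 5), (7, -1), (-2.5, 0.25)]:
    P = plane_point(x, y)
    print(P, np.dot(N, P - P0))   # ~0 every time (exact up to rounding)
```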
This idea gives us our definition.

Definition. Suppose a point $P_0$ and a non-zero vector $N$ are given. Then the plane through $P_0$ with normal vector $N$ is the collection of all points $P$ -- and $P_0$ itself -- that satisfy: $$P_0P\perp N.$$

This definition gives us different results in different dimensions: $$\begin{array}{llll} \text{dimension}&\text{ambient space}&\text{"hyperplane"}&\\ \hline 2&{\bf R}^2&{\bf R}^1&\text{line}\\ 3&{\bf R}^3&{\bf R}^2&\text{plane}\\ 4&{\bf R}^4&{\bf R}^3&\text{--}\\ ...&...&...&...\\ \end{array}$$ A hyperplane is something very "thin" relative to the whole space but not as thin as, say, a curve. This wide applicability shows that learning the dot product really pays off! We will need this formula to study parametric surfaces later.

In this chapter we limit ourselves to functions of several variables and, therefore, non-vertical planes. What makes a plane non-vertical? A non-zero vertical component of the normal vector. Since lengths don't matter here (only the angles), we can simply rescale so that this component is equal to $-1$: $$N=<m,n,-1>.$$ Then the equation of a plane simplifies: $$0=N\cdot (P-P_0)=<m,n,-1>\cdot<x-a,y-b,z-c>=m(x-a)+n(y-b)-(z-c),$$ or the familiar $$z=c+m(x-a)+n(y-b).$$ In the vector notation, we have: $$z=c+M\cdot (Q-Q_0).$$ In the case of dimension $2$ we have here: $Q=(x,y)$ is the variable point, $Q_0=(a,b)$ is the fixed point, and $M=<m,n>$ is the vector the components of which are the two slopes of the plane. This is a linear function: $$z=f(x,y)=c+m(x-a)+n(y-b)=p+mx+ny,$$ and $M=<m,n>$ is called the gradient of $f$.

An example of a non-linear function

What makes a function linear? The implicit answer has been: the variable can only be multiplied by a constant and added to a constant. The first part of the answer still applies, even to functions of many variables, as it prohibits multiplication by another variable. You can still add them. The non-linearity of a function of two variables is seen as soon as it is plotted -- it's not a plane -- but it also suffices to limit its domain. So, this function isn't linear: $$f(x,y)=xy.$$ In fact, $xy=x^1y^1$ is seen as quadratic if we add the powers of $x$ and $y$: $1+1=2$. We come to the same conclusion when we limit the domain of the function to the line $y=x$ in the $xy$-plane; using $y=x$ as a substitution we arrive at a function of one variable: $$g(x)=f(x,x)=x\cdot x=x^2.$$ So, the part of the graph of $f$ that lies exactly above the line $y=x$ is a parabola. And so is the part that lies above the line $y=-x$; it's just open down instead of up: $$h(x)=f(x,-x)=x\cdot (-x)=-x^2.$$ These two parabolas have a single point in common and therefore make up the essential part of a saddle point: The former parabola gives room for the horse's front and back while the latter for the horseman's legs.

A simpler way to limit the domain is to fix one independent variable at a time. We fix $y$ first: $$\begin{array}{lrl} \text{plane}&\text{equation}&\text{curve}\\ \hline y=2&z=x\cdot 2&\text{line with slope }2\\ y=1&z=x\cdot 1&\text{line with slope }1\\ y=0&z=x\cdot 0=0&\text{line with slope }0\\ y=-1&z=x\cdot (-1)&\text{line with slope }-1\\ y=-2&z=x\cdot (-2)&\text{line with slope }-2\\ \end{array}$$ The view shown below is from the direction of the $y$-axis: The data for each line comes from the $x$-column of the spreadsheet and one of the $z$-columns. These lines give the lines of elevation of this terrain in a particular, say, east-west direction.
This is equivalent to cutting the graph by a vertical plane parallel to the $xz$-plane. We fix $x$ second: $$\begin{array}{lrl} \text{plane}&\text{equation}&\text{curve}\\ \hline x=2&z=2\cdot y&\text{line with slope }2\\ x=1&z=1\cdot y&\text{line with slope }1\\ x=0&z=0\cdot y=0&\text{line with slope }0\\ x=-1&z=(-1)\cdot y&\text{line with slope }-1\\ x=-2&z=(-2)\cdot y&\text{line with slope }-2\\ \end{array}$$ This is equivalent to cutting the graph by a vertical plane parallel to the $yz$-plane. The view shown below is from the direction of the $x$-axis: The data for each line comes from the $y$-row of the spreadsheet and one of the $z$-rows. These lines give the lines of elevation of this terrain in a particular, say, north-south direction. Thus the surface of this graph is made of these straight lines! It's a potato chip:

Another way to analyze the graph is to limit the range instead of the domain. We fix the dependent variable this time: $$\begin{array}{lrl} \text{elevation}&\text{equation}&\text{curve}\\ \hline z=2&2=x\cdot y&\text{hyperbola}\\ z=1&1=x\cdot y&\text{hyperbola}\\ z=0&0=x\cdot y&\text{the two axes}\\ z=-1&-1=x\cdot y&\text{hyperbola}\\ z=-2&-2=x\cdot y&\text{hyperbola} \end{array}$$ The result is a family of curves: Each is labelled with the corresponding value of $z$, two branches for each. These lines are the lines of equal elevation of this terrain. They come from cutting the graph by a horizontal plane (parallel to the $xy$-plane). We can use them to reassemble the surface by lifting each to the elevation indicated by its label: In the meantime, the colored parts of the graph correspond to intervals of outputs. The surface presented here is called the hyperbolic paraboloid. (We will reproduce these curves with a short computer sketch at the end of the next section.)

Graphs

Definition. The graph of a function of one variable $z=f(x)$ is the set of all points in ${\bf R}^{2}$ of the form $(x,f(x))$.

In spite of a few exceptions, the graphs of functions of one variable have been curves.

Definition. The graph of a function of two variables $z=f(x,y)$ is the set of all points in ${\bf R}^{3}$ of the form $(x,y,f(x,y))$.

In spite of possible exceptions, the graphs of functions of two variables we encounter will probably be surfaces. It is important to remember that the theory of functions of several variables includes that of the functions of one variable! The formula for $f$, for example, might make no mention of $y$, such as, for example, $z=f(x,y)=x^3$. The graph of such a function will be feature-less, i.e., constant, in the $y$-direction. It will look as if made of planks like a park bench: In fact, the graph can be acquired from the graph of $z=x^3$ (the curve) by shifting it in the $y$-direction, producing the surface:

In the last section we fixed one independent variable at a time, making a function of two variables a function of one variable subject to the familiar methods. The idea applies to higher dimensions.

Definition. Suppose $z=f(x_1,...,x_n)$ is a function of several variables, i.e., $x_1,...,x_n$ and $z$ are real numbers. Then for each value of $k=1,2,...,n$ and any collection of numbers $x_i=a_i,\ i\ne k$, the numerical function defined by: $$z=h(x)=f(a_1,...,a_{k-1},x,a_{k+1},...,a_n),$$ is called a variable function of $f$. When $n=2$, its graph is called a variable curve.

We thus fix all variables but one, making the function of $n$ variables a function of a single variable.

Exercise. What happens when $n=1$?

Meanwhile, fixing the value of the dependent variable doesn't have to produce a new function.
The result is instead an implicit relation. The idea applies to higher dimensions too.

Definition. Suppose $z=f(X)$ is a function of several variables, i.e., $X$ belongs to some ${\bf R}^n$ and $z$ is a real number. Then for each value of $z=c$, the subset $$\{X:\ f(X)=c\}$$ of ${\bf R}^n$ is called a level set of $f$. When $n=2$, it is informally called a level curve or a contour curve.

In general, a level set doesn't have to be a curve, as the example of $f(x,y)=1$ shows.

Theorem. Level sets don't intersect.

Proof. It follows from the Vertical Line Test. $\blacksquare$

Exercise. Provide the proof.

Example. Let's consider: $$f(x,y)=\sqrt{x^2+y^2}.$$ The level curves are implicit curves and they are plotted point by point... ...unless we recognize them: $$c=\sqrt{x^2+y^2}\ \Longrightarrow\ c^2=x^2+y^2.$$ They are circles! This surface is a half of a cone. How do we know it's a cone and not another shape? We limit the domain to this line on the $xy$-plane: $$y=mx\ \Longrightarrow\ z=\sqrt{x^2+(mx)^2}=\sqrt{1+m^2}\,|x|.$$ These are V-shaped curves. $\square$

Example. Consider this function of two variables: $$f(x,y)=y^2-x^2.$$ Its level curves are hyperbolas again just as in the first example. This is the familiar saddle point from the last section. Its variable curves are different simply because the angle of the graph relative to the axes is different. $\square$

Example. If we replace $-$ with $+$, the function of two variables becomes $$f(x,y)=y^2+x^2$$ with a very different graph; it has an extreme point. The variable curves are still parabolas but both point upward this time: The level curves $$y^2+x^2=c$$ are circles except when $c=0$ (a point) or $c<0$ (empty). Unlike with the cone, however, their radii don't grow uniformly with $c$. The surface is called the paraboloid of revolution. One of the parabolas that passes through zero is rotated around the $z$-axis, producing this surface. $\square$

The method of level curves has been used for centuries to create actual maps, i.e., $2$-dimensional visualizations of $3$-dimensional terrains. The collection of all level curves is called the contour map of the function. We will see that zooming in on any point of the contour map shows either a generic point with parallel level curves or a singular point exemplified by a saddle point and an extreme point. The singular points are little islands in the sea of generic points...

Example. Suppose $z=f(x,y)$ is a function of two variables, then for each value of $z=c$, the subset $$\{(x,y):\ f(x,y)\le c\}$$ of the plane is called a sub-level set of $f$. These sets are used to convert gray-scale images to binary: This operation is called "thresholding". $\square$

Thus, the variable curves are the result of restricting the domain of the function while the level curves are the result of restricting the image. Either method is applied in the hope of simplifying the function to the degree that will make it subject to the tools we already have. However, the original graph is made of infinitely many of those...

Informally, the meaning of the domain is the same: the set of all possible inputs.

Definition. The natural domain of a function $z=f(X)$ is the set of all $X$ in ${\bf R}^n$ for which $f(X)$ makes sense.

Just as before, the issue is that of division by zero, square roots, etc. The difference comes from the complexity of the space of inputs: the plane (and further ${\bf R}^n$) vs. the line.

Example. The domain of the function $$f(x,y)=\frac{1}{xy}$$ is the whole plane minus the axes.

Example. The domain of the function $$f(x,y)=\sqrt{x-y}$$ is the half of the plane given by the inequality $y\le x$.

Example. The domain of the function that represents the magnitude of the gravitational force is all points but $0$: It is a multiple of the function: $$f(X)=\frac{1}{d(X,0)}.$$ $\square$

Exercise. Provide the level and the variable curves for the function that represents the magnitude of the gravitational force.

Definition. The graph of a function $z=f(X)$, where $X$ belongs to ${\bf R}^{n}$, is the set of all points in ${\bf R}^{n+1}$ so that the first $n$ coordinates are those of $X$ and the last is $f(X)$.

We have used the restriction of the image and the level curves that come from it as a tool of reduction. We reduce the study of a complex object -- the graph of $z=f(x,y)$ -- to a collection of simpler ones -- implicit curves $c=f(x,y)$. The idea is even more useful in the higher dimensions. In fact, we can't simply plot the graph of a function of three variables anymore -- it is located in ${\bf R}^4$ -- as we did for functions of two variables. The level sets -- level surfaces -- are the best way to visualize it. We can also restrict the domain instead by fixing one variable at a time and plot the graphs of the variable functions of two variables. The domains of functions of three variables are in our physical space. They may represent: the temperature or the humidity, the air or water pressure, the magnitude of a force (such as gravitation), etc.

Example. The level sets of a linear function $$f(x,y,z)=A+mx+ny+kz$$ are planes: $$d=A+mx+ny+kz.$$ These planes located in $xyz$-space are all parallel to each other because they have the same normal vector $<m,n,k>$. The variable functions aren't that different; let's fix $z=c$: $$d=f(x,y,c)=h(x,y)=A+mx+ny+kc.$$ For all values of $c$, these planes located in $xyu$-space are also parallel to each other because they have the same normal vector $<m,n,1>$. $\square$

Example. We start with a familiar function of two variables, $$f(x,y)=\sin(xy),$$ and just subtract $z$, the third variable, producing a new function of three variables: $$h(x,y,z)=\sin(xy)-z.$$ Then each of its level sets is given by: $$d=\sin(xy)-z,$$ for some real $d$. What is this? We can make the relation explicit: $$z=\sin(xy)-d.$$ This is nothing but the graph of $f$ shifted down (and up) by $d$! In this sense, they are parallel to each other. The function is decreasing as we move in the direction of the $z$-axis. Now, if we fix an independent variable of $h$, say $z=c$, we have a function of two variables: $$g(x,y)=\sin(xy)-c.$$ The graphs are the same. $\square$

Example. Let's consider this function of three variables: $$f(x,y,z)=\sqrt{x^2+y^2+z^2}.$$ Then each of its level sets is given by this implicit equation: $$d^2=x^2+y^2+z^2,$$ for some real $d$. Each is an equation of the sphere of radius $|d|$ centered at $0$ (and the origin itself when $d=0$). They are concentric! The radii also grow uniformly with $d$ and, therefore, these surfaces are, in this sense, parallel to each other. So, the function is growing -- and at a constant rate -- as we move in any direction away from $0$. What is this function? It's the distance function itself: $$f(X)=f(x,y,z)=\sqrt{x^2+y^2+z^2}=||X||.$$ The graph of the magnitude of the gravitational force also has concentric spheres as level surfaces but the radii do not change uniformly with $d$. The value of the function is decreasing. $\square$

This is the best we can do with functions of three variables. Beyond $3$ variables, we are out of dimensions...
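Before moving on, here is the short computational sketch promised earlier (an added illustration; the plotting library, grid size and chosen levels are my own assumptions). It reproduces the two ways of slicing the hyperbolic paraboloid $z=xy$: the variable curves obtained by fixing $y$, and the level curves $c=xy$ that make up its contour map.

```python
# Slicing the graph of f(x, y) = x*y two ways:
# 1) variable curves: fix y = const and plot z = x*y as a function of x;
# 2) level curves: draw the contour map x*y = c for several values of c.
import numpy as np
import matplotlib.pyplot as plt

f = lambda x, y: x * y
x = np.linspace(-2.0, 2.0, 201)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Variable curves (cutting by vertical planes y = const).
for y0 in [-2, -1, 0, 1, 2]:
    ax1.plot(x, f(x, y0), label=f"y = {y0}")
ax1.set_xlabel("x"); ax1.set_ylabel("z"); ax1.legend()
ax1.set_title("variable curves of z = xy")

# Level curves (cutting by horizontal planes z = const).
X, Y = np.meshgrid(x, x)
cs = ax2.contour(X, Y, f(X, Y), levels=[-2, -1, 0, 1, 2])
ax2.clabel(cs)                    # label each curve with its elevation
ax2.set_xlabel("x"); ax2.set_ylabel("y")
ax2.set_title("contour map of z = xy")

plt.tight_layout()
plt.show()
```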
The approach via fixing the independent variables is considered later. We now study small scale behavior of functions; we zoom in on a single point of the graph. Just as in the lower dimensions, one of the most crucial properties of a function is the integrity of its graph: is there a break or a cut or a hole? For example, if we think of the graph as a terrain, is there a vertical drop? We approach the issue via the limits. In spite of all the differences between the functions we have seen -- such as parametric curves vs. functions of several variables -- the idea of limit is identical: as the input approaches a point, the output is also forced to approach a point. This is the context with the arrow "$\to$" to be read as "approaches": $$\begin{array}{cccc} &\text{numerical functions}\\ &y=f(x)\to l\text{ as }x\to a\\ \text{parametric curves} && \text{functions of several variables}\\ Y=F(t)\to L\text{ as }t\to a && z=f(X)\to l\text{ as }X\to A\\ \end{array}$$ We use lower case for scalars and upper case for anything multi-dimensional (point or vectors). We then see how the complexity shifts from the output to the input. And so does the challenge of multi-dimensionality. We are ready though; this is the familiar meaning of convergence of a sequence in ${\bf R}^m$ to be used throughout: $$X_n\to A\ \Longleftrightarrow\ d(X_n,A)\to 0 \text{ or }||X_n-A||\to 0.$$ Definition. The limit of a function $z=f(A)$ at a point $X=A$ is defined to be the limit $$\lim_{n\to \infty} f(X_n)$$ considered for all sequences $\{X_n\}$ within the domain of $f$ excluding $A$ that converge to $A$, $$A\ne X_n\to A \text{ as } n\to \infty,$$ when all these limits exist and are equal to each other. In that case, we use the notation: $$\lim_{X\to A} f(X).$$ Otherwise, the limit does not exist. We use this construction to understand what is happening to $z=f(X)$ when a point $X$ is in the vicinity of a chosen point $X=A$, where $f$ might be undefined. We start with an arbitrary sequence on the $xy$-plane that converges to this point, $X_n\to A$, then go vertically from each of these point to find the corresponding points on the graph of the function, $(X_n,f(X_n))$, and finally plot the output values (just numbers) on the $z$-axis, $z_n=f(X_n)$. Is there a limit of this sequence? What about other sequences? Are all these limits the same? The ability to approach the point from different directions had to be dealt with even for numerical functions: $$\operatorname{sign}(x)\to -1 \text{ as } x\to 0^- \text{ but } \operatorname{sign}(x)\to 1 \text{ as } x\to 0^+.$$ In the multi-dimensional case things are even more complicated as we can approach a point on the plane from infinitely many directions. Example. It is easy to come up with an example of a function that has different limits from different directions. We take one that will be important in the future: $$\lim_{(x,y)\to (0,0)}\frac{|2x+y|}{\sqrt{x^2+y^2}}.$$ By the way, this is how one might try to calculate the "slope" (rise over the run) at $0$ of the plane $z=2x+y$. First, we approach along the $x$-axis, i.e., $y$ is fixed at $0$: $$\lim_{x\to 0}\frac{|2x+0|}{\sqrt{x^2+0^2}}=\lim_{x\to 0}\frac{|2x|}{|x|}=2.$$ Second, we approach along the $y$-axis, i.e., $x$ is fixed at $0$: $$\lim_{y\to 0}\frac{|2\cdot 0+y|}{\sqrt{0^2+y^2}}=\lim_{y\to 0}\frac{|y|}{|y|}=1.$$ There can't be just one slope for a plane... $\square$ Things might be bad in an even more subtle way. Example. Not all functions of two variables have graphs like the one above... 
Let's consider this function: $$f(x,y)=\frac{x^2y}{x^4+y^2}.$$ and its limit $(x,y)\to (0,0)$. Does it exist? It all depends on how $X$ approaches $A=0$: The two green horizontal lines indicate that we get $0$ if we approach $0$ from either the $x$ or the $y$ direction. That's the horizontal cross we see on the surface (just like the one in the hyperbolic paraboloid). In fact, any (linear) direction is OK: $$y=mx\ \Longrightarrow\ f(x,mx)=\frac{x^2mx}{x^4+(mx)^2}=\frac{mx^3}{x^4+m^2x^2}=\frac{m}{x+m^2/x}\to 0 \text{ as }x\to 0.$$ So, the limits from all directions are the same! However, there seems to be two curved cliffs on left and right visible form above... What is we approach $0$ along, instead of a straight line, a parabola $y=x^2$? The result is surprising: $$y=x^2\ \Longrightarrow\ f(x,x^2)=\frac{x^2x^2}{x^4+(x^2)^2}=\frac{x^4}{2x^4}=\frac{1}{2}.$$ This is the height of the cliffs! So, this limit is different from the other and, according to our definition, the limit of the function does not exist: $$\lim_{(x,y)\to (0,0)}\frac{x^2y}{x^4+y^2}\ DNE.$$ In fact, the illustration created by the spreadsheet attempts to make an unbroken surface from these points while in reality there is no passage between the two cliffs as we just demonstrated. So, not only we have to approach the point from all directions at once but also along any possible path. A simpler but very important conclusion is that studying functions of several variables one variable at a time might be insufficient or even misleading. A special note about the limit of a function at a point that lies on the boundary of the domain... It doesn't matter! The case of one-sided limits at $a$ or $b$ of the domain $[a,b]$ is now included in our new definition. The algebraic theory of limits is almost identical to that for numerical functions. There are as many rules because whatever you can do with the outputs you can do with the functions. We will use the algebraic properties of the limits of sequences -- of numbers -- to prove virtually identical facts about limits of functions. Theorem (Algebra of Limits of Sequences). Suppose $a_n\to a$ and $b_n\to b$. Then $$\begin{array}{|ll|ll|} \hline \text{SR: }& a_n + b_n\to a + b& \text{CMR: }& c\cdot a_n\to ca& \text{ for any real }c\\ \text{PR: }& a_n \cdot b_n\to ab& \text{QR: }& a_n/b_n\to a/b &\text{ provided }b\ne 0\\ \hline \end{array}$$ Each property is matched by its analog for functions. Theorem (Algebra of Limits of Functions). Suppose $f(X)\to a$ and $g(X)\to b$ as $X\to A$ . Then $$\begin{array}{|ll|ll|} \hline \text{SR: }& f(X)+g(X)\to a+b & \text{CMR: }& c\cdot f(X)\to ca& \text{ for any real }c\\ \text{PR: }& f(X)\cdot g(X)\to ab& \text{QR: }& f(X)/g(X)\to a/b &\text{ provided }b\ne 0\\ \hline \end{array}$$ Note that there were no PR or QR for the parametric curves... Let's consider them one by one. Now, limits behave well with respect to the usual arithmetic operations. Theorem (Sum Rule). 
If the limits at $A$ of functions $f(X),g(X)$ exist then so does that of their sum, $f(P) + g(P)$, and the limit of the sum is equal to the sum of the limits: $$\lim_{X\to A} \big( f(X) + g(X) \big) = \lim_{X\to A} f(X) + \lim_{X\to A} g(X).$$ Since the outputs and the limits are just numbers, in the case of infinite limits we follow the same rules of the algebra of infinities as in Chapter 5: $$\begin{array}{lll} \text{number } &+& (+\infty)&=+\infty\\ \text{number } &+& (-\infty)&=-\infty\\ +\infty &+& (+\infty)&=+\infty\\ -\infty &+& (-\infty)&=-\infty\\ \end{array}$$ The proofs of the rest of the properties are identical. Theorem (Constant Multiple Rule). If the limit at $X=A$ of function $f(X)$ exists then so does that of its multiple, $c f(X)$, and the limit of the multiple is equal to the multiple of the limit: $$\lim_{X\to A} c f(X) = c \cdot \lim_{X\to A} f(X).$$ Theorem (Product Rule). If the limits at $a$ of functions $f(X) ,g(X)$ exist then so does that of their product, $f(X) \cdot g(X)$, and the limit of the product is equal to the product of the limits: $$\lim_{X\to A} \big( f(X) \cdot g(X) \big) = \left(\lim_{X\to A} f(X)\right)\cdot\left( \lim_{X\to A} g(X)\right).$$ Theorem (Quotient Rule). If the limits at $X=A$ of functions $f(X) ,g(X)$ exist then so does that of their ratio, $f(X) / g(X)$, provided $\lim_{X\to A} g(X) \ne 0$, and the limit of the ratio is equal to the ratio of the limits: $$\lim_{X\to A} \left(\frac{f(X)}{g(X)}\right) = \frac{\lim\limits_{X\to A} f(X)}{\lim\limits_{X\to A} g(X)}.$$ Just as with sequences, we can represent these rules as commutative diagrams: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \begin{array}{ccc} f,g&\ra{\lim}&l,m\\ \ \da{+}&SR &\ \da{+}\\ f+g & \ra{\lim}&\lim(f+g)=l+m \end{array}$$ Example. $$\lim_{(x,y)\to (0,0)}\frac{x^2+y^2}{x+y}=...$$ $\square$ We can take advantage of the fact that the domain ${\bf R}^m$ of $f$ can also be seen as made of vectors and vectors are subject to algebraic operations. Theorem (Alternative formula for limits). The limit of a function $z=f(X)$ at $X=A$ is equal to $l$ if and only if $$\lim_{||H|| \to 0} f(A + H) = l.$$ The next result stands virtually unchanged from Chapter 6. Theorem (Squeeze Theorem). If a function is squeezed between two functions with the same limit at a point, its limit also exists and is equal to the that number; i.e., if $$f(X) \leq h(X) \leq g(X) ,$$ for all $X$ within some distance from $X=A$, and $$\lim_{X\to A} f(X) = \lim_{X\to A} g(X) = l,$$ then the following limit exists and equal to that number: $$\lim_{X\to A} h(X) = l.$$ The easiest way to handle limits is coordinate-wise, when possible. Theorem. If the limit of a function of several variables exists then it exists with respect to each of the variables; i.e., for any function $z=f(X)=f(x_1,...,x_m)$ and any $A=(a_1,...,a_m)$ in ${\bf R}^m$, we have $$\begin{array}{rll} f(X)\to l \text{ as }X\to A\\ \Longrightarrow& f(a_1,...,a_{k-1},x,a_{k+1},...,a_m)\to l \text{ as }x\to a_k \text{ for each } k=1,2,...,m. \end{array}$$ The converse isn't true! Example. Recall that this function has limits at $(0,0)$ with respect to either of the variables: $$f(x,y)=\frac{x^2y}{x^4+y^2},$$ but the limit as $(x,y)\to (0,0)$ does not exist. 
$\square$ So, we have to establish that the limit exists -- in the "omni-directional" sense -- first and only then we can use limits with respect to every variable -- in the "uni-directional" sense -- to find this limit. Now, the asymptotic behavior... Definition. Given a function $z=f(X)$ and a point $A$ in ${\bf R}^n$, we say that $f$ approaches infinity at $A$ if $$\lim_{n\to \infty}f(x)=\pm\infty,$$ for any sequence $X_n\to A$ as $n\to \infty$. Then we use the notation: $$\lim_{X\to A}f(x)=\pm\infty .$$ The line $X=A$ located in ${\bf R}^{n+1}$ is then called a vertical asymptote of $f$. Example (Newton's Law of Gravity). If $z=f(x,y,z)$ represents the magnitude of the force of gravity of an object located at the point $X=(x,y,z)$ relative to another object located at the origin, then we have: $$ \lim_{X \to 0} f (X)= \infty.$$ $\square$ Next, we can approach infinity in a number of ways too: That's why we have to look at the distance to the origin. Definition. For a function of several variables $z=f(X)$, we say that $f$ goes to infinity if $$f(X_n)\to \pm\infty,$$ for any sequence $X_n$ with $||X_n||\to \infty$ as $n\to \infty$. Then we use the notation: $$f(X) \to \pm\infty \text{ as }X\to \infty,$$ or $$\lim_{x\to\infty}f(x)=\pm\infty.$$ Example. We previously demonstrated the following: $$\lim_{x \to -\infty} e^{x} = 0, \quad \lim_{x \to +\infty} e^{x} = +\infty.$$ We won't speak of horizontal asymptotes but the next idea is related. Definition. Given a function $z=f(X)$, we say that $f$ approaches $z=d$ at infinity if $$\lim_{n\to \infty}f(X_n)=d,$$ for any sequence $X_n$ with $||X_n||\to \infty$ as $n\to \infty$. Then we use the notation: $$\lim_{X\to \infty}f(X)=d .$$ Example (Newton's Law of Gravity). If $z=f(x,y,z)$ represents the magnitude of the force of gravity of an object located at the point $X=(x,y,z)$ relative to another object located at the origin, then we have: $$ \lim_{X \to \infty} f (X)= 0.$$ $\square$ The idea of continuity is identical to that for numerical functions or parametric cures: as the input approaches a point, the output is forced to approach the values of the function at that point. The concept flows from the idea of limit just as before. Definition. A function $z=f(X)$ is called continuous at point $X=A$ if $f(X)$ is defined at $X=A$, the limit of $f$ exists at $A$, and the two are equal to each other: $$\lim_{X\to A}f(X)=f(A).$$ Furthermore, a function is continuous if it is continuous at every point of its domain. Thus, the limits of continuous functions can be found by substitution: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{|c|}\hline\quad \lim_{X\to A}f(X)=f(A) \quad \\ \hline\end{array} \text{ or } \begin{array}{|c|}\hline\quad f(X)\to f(A) \text{ as } X\to A.\quad \\ \hline\end{array} $$ Equivalently, a function $f$ is continuous at $a$ if $$\lim_{n\to \infty}f(A_n)=f(A),$$ for any sequence $X_n\to A$. A typical function we encounter is continuous at every point of its domain. Theorem. Suppose $f$ and $g$ are continuous at $X=A$. Then so are the following functions: 1. (SR) $f\pm g$, 2. (CMR) $c\cdot f$ for any real $c$, 3. (PR) $f \cdot g$, and 4. (QR) $f/g$ provided $g(A)\ne 0$. 
As an illustration, we can say that if the floor and the ceiling represented by $f$ and $g$ respectively of a cave are changing continuously then so is its height, which is $g-f$: Or, if the floor and the ceiling ($f$ and $-g$) are changing continuously then so is its height ($g+f$). And so on. There are two ways to compose a function of several variables with another function... Theorem (Composition Rule I). If the limit at $X=A$ of function $z=f(X)$ exists and is equal to $l$ then so does that of its composition with any numerical function $u=g(z)$ continuous at $z=l$ and $$\lim_{X\to A} (g\circ f)(X) = g(l).$$ Proof. Suppose we have a sequence, $$X_n\to A.$$ Then, we also have another sequence, $$b_n=f(X_n).$$ The condition $f(X)\to l$ as $X\to A$ is restated as follows: $$b_n\to l \text{ as } n\to \infty.$$ Therefore, continuity of $g$ implies, $$g(b_n)\to g(l)\text{ as } n\to \infty.$$ In other words, $$(g\circ f)(X_n)=g(f(X_n))\to g(l)\text{ as } n\to \infty.$$ Since sequence $X_n\to A$ was chosen arbitrarily, this condition is restated as, $$(g\circ f)(X)\to g(l)\text{ as } X\to A.$$ $\blacksquare$ We can re-write the result as follows: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{|c|}\hline\quad \lim_{X\to A} (g\circ f)(X) = g(l)\Bigg|_{l=\lim_{X\to A} f(X)} \quad \\ \hline\end{array} $$ Furthermore, we have $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{|c|}\hline\quad \lim_{X\to A} g\big( f(X) \big) = g\left( \lim_{X\to A} f(X) \right) \quad \\ \hline\end{array} $$ Theorem (Composition Rule II). If the limit at $t=a$ of a parametric curve $X=F(t)$ exists and is equal to $L$ then so does that of its composition with any function $z=g(X)$ of several variables continuous at $X=L$ and $$\lim_{t\to a} (g\circ F)(t) = g(L).$$ Proof. Suppose we have a sequence, $$t_n\to a.$$ Then, we also have another sequence, $$B_n=F(t_n).$$ The condition $F(t)\to L$ as $t\to a$ is restated as follows: $$B_n\to L \text{ as } n\to \infty.$$ Therefore, continuity of $F$ implies, $$g(B_n)\to g(L)\text{ as } n\to \infty.$$ In other words, $$(g\circ F)(t_n)=g(F(t_n))\to g(L)\text{ as } n\to \infty.$$ Since sequence $X_n\to A$ was chosen arbitrarily, this condition is restated as, $$(g\circ F)(t)\to g(L)\text{ as } t\to a.$$ $\blacksquare$ We can re-write the result as follows: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{|c|}\hline\quad \lim_{t\to a} (g\circ F)(t) = g(L)\Bigg|_{L=\lim_{t\to a} F(t)} \quad \\ \hline\end{array} $$ Furthermore, we have: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{|c|}\hline\quad \lim_{t\to a} g\big( F(t) \big) = g\left( \lim_{t\to a} F(t) \right) \quad \\ \hline\end{array} $$ Corollary. The composition $g\circ f$ of a function $f$ continuous at $x=a$ and a function $g$ continuous at $y=f(a)$ is continuous at $x=a$. The easiest way to handle continuity is coordinate-wise, when possible. Theorem. If a function of several variables is continuous then it is also continuous with respect to each of the variables. Example. 
Recall that this function is continuous at $(0,0)$ with respect to either of the variables: $$f(x,y)=\frac{x^2y}{x^4+y^2},$$ but the limit as $(x,y)\to (0,0)$ simply does not exist. $\square$ So, we have to establish continuity -- in the sense of the "omni-directional" limit -- first and only then we can use this fact to find facts about the continuity with respect to every variable -- in the "uni-directional" sense. A special note about the continuity of a function at a point that lies on the boundary of the domain... Once again, it doesn't matter! The case of one-sided continuity at $a$ or $b$ of the domain $[a,b]$ is now included in our new definition. The definition of continuity is purely local: only the behavior of the function in the, no matter how small, vicinity of the point matters. If the function is continuous on a whole set, what can we say about its global behavior? Our understanding of continuity of numerical functions has been as the property of having no gaps in their graphs. This idea is more precisely expressed by the following by the Intermediate Value Theorem: if a function $f$ is defined and is continuous on an interval $[a,b]$, then for any $c$ between $f(a)$ and $f(b)$, there is $d$ in $[a,b]$ such that $f(d) = c$. An often more convenient way to state is: if the domain of a continuous function is an interval then so is its image. Now the plane is more complex than a line and we can't limit ourselves to intervals. But what is the analog of an interval in a multidimensional space? Theorem (Intermediate Value Theorem for functions of several variables). Suppose a function $f$ is defined and is continuous on a set that contains the path $C$ of a continuous parametric curve. Then the image of this path is an interval. Proof. It follows from the Composition Rule II and the Intermediate Value Theorem for numerical functions. $\blacksquare$ The theorem says that there are no missing values in the image of such a set. Exercise. Show that that the converse of the theorem isn't true. A convenient re-statement of the theorem is below. Corollary. The image of a path-connected set under a continuous function is an interval. Recall that a function $f$ is called bounded on a set $S$ in ${\bf R}^n$ if its image is bounded, i.e., there is such a real number $m$ that $$|f(X)| \le m$$ for all $X$ in $S$. Theorem. If the limit at $X=A$ of function $z=f(X)$ exists then $f$ is bounded on some open disk that contains $A$: $$\lim_{X\to A}f(X) \text{ exists }\ \Longrightarrow\ |f(X)| \le m$$ for all $X$ with $d(X,A)<\delta$ for some $\delta >0$ and some $m$. The global version of the above theorem guarantees that the function is bounded under certain circumstances. The version of the theorem for numerical functions is simply: a continuous on $[a,b]$ function is bounded. But what is the multi-dimensional analog of a closed bounded interval? We already know that a set $S$ in ${\bf R}^n$ is bounded if it fits in a sphere (or a box) of a large enough size: $$||x|| < Q \text{ for all } x \text{ in } S;$$ and a set in ${\bf R}^n$ is called closed if it contains the limits of all of its convergent sequences. Theorem (Boundedness). A continuous on a closed bounded set function is bounded. Proof. Suppose, to the contrary, that $x=f(X)$ is unbounded on set $S$. Then there is a sequence $\{X_n\}$ in $S$ such that $f(X_n)\to \infty$. Then, by the Bolzano-Weierstrass Theorem, sequence $\{X_n\}$ has a convergent subsequence $\{Y_k\}$: $$Y_k \to Y.$$ This point belong to $S$! 
From the continuity, it follows that $$f(Y_k) \to f(Y).$$ This contradicts the fact that $\{Y_k\}$ is a subsequence of a sequence that diverges to $\infty$. $\blacksquare$ Exercise. Why are we justified to conclude in the proof that the limit $Y$ of $\{Y_k\}$ is in $S$? Is the image of a closed set closed? If it is, the function reaches its extreme values, i.e., the least upper bound $\sup$ and the greatest lower bound $\inf$. Definition. Given a function $z=f(X)$. Then $X=D$ is called a global maximum point of $f$ on set $S$ if $$f(D)\ge f(X) \text{ for all } X \text{ in }S;$$ and $X=C$ is called a global minimum point of $f$ on set $S$ if $$f(C)\le f(X) \text{ for all } X \text{ in }S.$$ (They are also called absolute maximum and minimum points.) Collectively they are all called global extreme points. Just because something is described doesn't mean that it can be found. Theorem (Extreme Value Theorem). A continuous function on a closed bounded set in ${\bf R}^n$ attains its global maximum and global minimum values; i.e., if $z=f(X)$ is continuous on a bounded closed set $S$, then there are $C,D$ in $S$ such that $$f(C)\le f(X) \le f(D),$$ for all $X$ in $S$. Proof. It follows from the Bolzano-Weierstrass Theorem. $\blacksquare$ Definition. Given a function $z=f(X)$. Then $X=M$ is called the global maximum value of $f$ on set $S$ if $$M\ge f(X) \text{ for all } X \text{ in }S;$$ and $y=m$ is called the global minimum value of $f$ on set $S$ if $$m\le f(X) \text{ for all } X \text{ in }S.$$ (They are also called absolute maximum and minimum values.) Collectively they are all called global extreme values. Then the global max (or min) value is reached by the function at any of its global max (or min) points. Note that the reason we need the Extreme Value Theorem is to ensure that the optimization problem we are facing has a solution. We can define limits and continuity of functions of several variables without invoking limits of sequences. Let's re-write what we want to say about the meaning of the limits in progressively more and more precise terms. $$\begin{array}{l|ll} X&z=f(X)\\ \hline \text{As } X\to A, & \text{we have } y\to l.\\ \text{As } X\text{ approaches } A, & y\text{ approaches } l. \\ \text{As } \text{the distance from }X \text{ to } A \text{ approaches } 0, & \text{the distance from }y \text{ to } l \text{ approaches } 0. \\ \text{As } d(X,A)\to 0, & \text{we have } |y-l|\to 0.\\ \text{By making } d(X,A) \text{ as smaller and smaller},& \text{we make } |y-l| \text{ as small as needed}.\\ \text{By making } d(X,A) \text{ less than some } \delta>0 ,& \text{we make } |y-l| \text{ smaller than any given } \varepsilon>0. \end{array}$$ Definition. The limit of function $z=f(X)$ at $X=A$ is a number $l$, if exists, such that for any $\varepsilon >0$ there is such a $\delta> 0$ that $$0<d(X,A)<\delta \ \Longrightarrow\ |f(X)-l|<\varepsilon.$$ This is the geometric meaning of the definition: if $X$ is within $\delta$ from $A$, then $f(X)$ is supposed to be within $\varepsilon$ from $l$. In other words, this part of the graph fits between the two planes $\varepsilon$ away from the plane $z=l$. The partial differences and difference quotients We start with numerical differentiation of functions of several variables. Example. 
We consider the function: $$f(x,y)=-x^2+y^2+xy.$$ For each $x$ in the left-most column and each $y$ in the top row, the corresponding value of the function is computed and placed in this table, just as before: $$\texttt{=R2C3*(R4C^2-RC2^2+R4C*RC2)}.$$ When plotted is recognized as a familiar hyperbolic paraboloid. Below we outline the process of partial differentiation. The variable functions -- with respect to $x$ and $y$ -- are shown. These are the functions we will differentiate. First, for each value of $y$ given in the top row, we compute the difference quotient function with respect to $x$ by going down the corresponding column and then placing these values on right in a new table (it is one row short in comparison to the original): $$\texttt{=(RC[-29]-R[-1]C[-29])/R2C1}.$$ Second, for each value of $x$ given in the left-most column, we compute the difference quotient function with respect to $y$ by going right the corresponding row and then placing these values below in a new table (it is one column short in comparison to the original). $$\texttt{=(R[-29]C-R[-29]C[-1])/R2C1}.$$ This is the summary: The results are all straight lines equally spaced and in both cases they form planes. These planes are the graphs of the two new functions of two variables, the partial derivatives, the tables of which have been constructed. Some things don't change: just as in the one-dimensional case, the derivatives of quadratic functions are linear. $\square$ We deal with the change of the values of a function, $\Delta f$, relative to the change of its input variable. This time, there are two: $\Delta x$ and $\Delta y$. If we know only four values of a function of two variables (left), we can compute the differences $\Delta_x f$ and $\Delta_y f$ of $f$ along both horizontal and vertical edges (right): $$\begin{array}{rcccccccc} y+\Delta y:&f(x,y+\Delta y)&---&f(x+\Delta x,y+\Delta y)&&-\bullet-&\Delta_x f(s,y+\Delta y)&-\bullet-\\ &|&&|&&|&&|\\ t:&|&&|&\leadsto&\Delta_y f(x,t)&&\Delta_y f(x+\Delta x,t)\\ &|&&|&&|&&|\\ y:&f(x,y)&---&f(x+\Delta x,y)&&-\bullet-&\Delta_x f(s,y)&-\bullet-\\ &x&s&x+\Delta x&&x&s&x+\Delta x\\ \end{array}$$ As you can see, we subtract the values at the corners -- vertically and horizontally -- and place them at the corresponding edges. We then acquire the difference quotients by dividing by the increment of the corresponding variable: $$\leadsto\quad\begin{array}{cccccccc} -\bullet-&\frac{\Delta f}{\Delta x}(s,y+\Delta y)&-\bullet-\\ |&&|\\ \frac{\Delta f}{\Delta y}(x,t)&&\frac{\Delta f}{\Delta y}(x+\Delta x,t)\\ |&&|\\ -\bullet-&\frac{\Delta f}{\Delta x}(s,y)&-\bullet-\\ &x&s&x+\Delta x\\ \end{array}$$ Here and below we omit the subscripts in the fractions $\frac{\Delta_x f}{\Delta x}$ and $\frac{\Delta_y f}{\Delta y}$. More generally, we build a partition $P$ of a rectangle $R=[a,b]\times [c,d]$ in the $xy$-plane. It is a combination of partitions of the intervals $[a,b]$ and $[c,d]$: We start with a partition of an interval $[a,b]$ in the $x$-axis into $n$ intervals: $$ [x_{0},x_{1}],\ [x_{1},x_{2}],\ ... ,\ [x_{n-1},x_{n}],$$ with $x_0=a,\ x_n=b$. The increments of $x$ are: $$\Delta x_i = x_i-x_{i-1},\ i=1,2,...,n.$$ Then we do the same for $y$. We partition an interval $[c,d]$ in the $y$-axis into $m$ intervals: $$ [y_{0},y_{1}],\ [y_{1},y_{2}],\ ... ,\ [y_{m-1},y_{m}],$$ with $y_0=c,\ y_n=d$. 
The increments of $y$ are: $$\Delta y_j = y_j-y_{j-1},\ j=1,2,...,m.$$ The lines $y=y_j$ and $x=x_i$ create a partition $P$ of the rectangle $[a,b]\times [c,d]$ into smaller rectangles $[x_{i},x_{i+1}]\times [y_{j},y_{j+1}]$. The points of intersection of these lines, $$X_{ij}=(x_i,y_{j}),\ i=1,2,...,n,\ j=1,2,...,m,$$ will be called the (primary) nodes of the partition. The secondary nodes of $P$ appear on each of the horizontal and vertical edges of the partition; for each pair $i=0,1,2,...,n-1$ and $j=0,1,2,...,m-1$, we have: a point $S_{ij}$ in the segment $[x_{i},x_{i+1}]\times \{y_{j}\}$, and a point $T_{ij}$ in the segment $\{x_{i}\}\times [y_{j},y_{j+1}]$. We can have left- and right-end augmented partitions as well as mid-point ones. For example, we can use the secondary nodes of the augmented partitions of $[a,b]$, say $\{s_i\}$, and $[c,d]$, say $\{t_j\}$: a point $S_{ij}=(s_{i},y_j)$ in the segment $[x_{i-1},x_{i}]\times \{y_{j}\}$, and a point $T_{ij}=(x_i,t_{j})$ in the segment $\{x_{i}\}\times [y_{j-1},y_{j}]$. Suppose a function $f$ is known only at the nodes. When $y=y_j$ is fixed, its difference is computed over each interval of the partition $[x_i,x_{i+1}],\ i=0,1,2...,n-1$ of the segment. This defines a new function on the secondary nodes. Similarly for every fixed $x=x_i$. Suppose a function of two variables $z=f(X)=f(x,y)$ is defined at the nodes $X_{ij},\ i=0,1,2,...,n,\ j=0,1,2,...,m$, of the partition. Definition. The partial difference of $f$ with respect to $x$ is defined at the secondary nodes of the partition by: $$\Delta_x f\, (S_{ij})=f(X_{ij})-f(X_{i-1,j});$$ and the partial difference of $f$ with respect to $y$ is defined at the secondary nodes of the partition by: $$\Delta_y f\, (T_{ij})=f(X_{ij})-f(X_{i,j-1}).$$ Definition. The partial difference quotient of $f$ with respect to $x$ is defined at the secondary nodes of the partition by: $$\frac{\Delta f}{\Delta x}(S_{ij})=\frac{\Delta_x f\, (S_{ij})}{\Delta x_i}=\frac{f(X_{ij})-f(X_{i-1,j})}{x_{i}-x_{i-1}};$$ and the partial difference quotient of $f$ with respect to $y$ is defined at the secondary nodes of the partition by: $$\frac{\Delta f}{\Delta y}(T_{ij})=\frac{\Delta_y f\, (T_{ij})}{\Delta y_j}=\frac{f(X_{ij})-f(X_{i,j-1})}{y_{j}-y_{j-1}}.$$ The last two numbers represent the slopes of the secant lines along the $x$-axis and the $y$-axis over the nodes respectively. Note that both $\frac{\Delta f}{\Delta x}$ and $\frac{\Delta f}{\Delta y}$ are literally fractions. For each pair $(i,j)$, the two difference quotients appear as the coefficients in the equation of the plane through the three points on the graph above the three adjacent nodes in the $xy$-plane: $$\begin{array}{lll} (x_i,&y_{j},&f(x_i,y_{j})),\\ (x_{i+1},&y_{j},&f(x_{i+1},y_{j})),\\ (x_i,&y_{j+1},&f(x_i,y_{j+1})).\\ \end{array}$$ This plane is given by: $$z-f(x_i,y_{j})=\frac{\Delta f}{\Delta x}(S_{ij})(x- x_i)+\frac{\Delta f}{\Delta y}(T_{ij})(y- y_j).$$ This plane restricted to the triangle formed by those three points is a triangle in our $3$-space. For each $(i,j)$, there are four such triangles; just take $(i\pm 1,j\pm 1)$. Exercise. Under what circumstances, when taken over all possible such pairs $(i,j)$, do these triangles form a mesh? In other words, when do they fit together without breaks? 
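The construction above can also be carried out numerically, as an alternative to the spreadsheet used earlier. The following is a small sketch in Python with the numpy library (an assumption of this illustration, not part of the text); it tabulates $f(x,y)=-x^2+y^2+xy$ at the primary nodes of a uniform partition and computes the two partial difference quotients, one per horizontal edge and one per vertical edge:

import numpy as np

# primary nodes of a uniform partition of [-2,2] x [-2,2]
x = np.linspace(-2, 2, 41)
y = np.linspace(-2, 2, 41)
dx = x[1] - x[0]
dy = y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")       # X[i,j] = x_i, Y[i,j] = y_j
F = -X**2 + Y**2 + X*Y                        # values at the primary nodes

dFdx = np.diff(F, axis=0) / dx                # partial difference quotients at the nodes S_ij
dFdy = np.diff(F, axis=1) / dy                # partial difference quotients at the nodes T_ij

Since $f$ is quadratic, each of the two resulting tables is linear in $x$ and $y$, in agreement with the planes seen in the plots described earlier.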
For a simplified notation, we will often omit the indices: $$\frac{\Delta f}{\Delta x}(s,y)=\frac{f(x+\Delta x,y)-f(x,y)}{\Delta x}\text{ and }\frac{\Delta f}{\Delta y}(x,t)=\frac{f(x,y+\Delta y)-f(x,y)}{\Delta y}.$$ From our study of planes, we know that the vector formed by these two numbers is especially important. Definition. The difference of $z=f(x,y)$ is the function defined at each secondary node of the partition and is equal to the corresponding partial difference of $f$ with respect to $x$ or $y$, denoted by: $$\Delta f\, (N)=\begin{cases} \Delta_x f\, (S_{ij})&\text{ if } N=S_{ij},\\ \Delta_y f\, (T_{ij})&\text{ if } N=T_{ij}. \end{cases}$$ Note that when there are no secondary nodes specified, we can think of $S_{ij}$ and $T_{ij}$ as standing for the edges themselves: $S_{ij}=[x_{i},x_{i+1}]\times \{y_{j}\}$, and $T_{ij}=\{x_{i}\}\times [y_{j},y_{j+1}]$. They are the inputs of the difference. Taken as a whole, the difference may look like this: They are real-valued $1$-forms! Why aren't they vectors? They are if you take into account the direction of the corresponding edge. It is then a (real-valued) $1$-form, the difference of a $0$-form $f$. In fact, for an abbreviated notation, we combine the two differences into a single vector: $$\Delta f=\big<\Delta_x f,\Delta_y f\big>.$$ Example (hydraulic analogy). A simple interpretation of this data is water flow, as follows: each edge of the partition represents a pipe; $f(x_i,y_j)$ represents the water pressure at the joint $(x_i,y_j)$; the difference of the pressure between any two adjacent joints causes the water to flow; $\Delta_x f(s_{i},y_j)$ and $\Delta_y f(x_i,t_{j})$ are the flow amounts along these pipes; and $\frac{\Delta f}{\Delta x}(s_{i},y_j)$ and $\frac{\Delta f}{\Delta y}(x_i,t_{j})$ are the flow rates along these pipes. For electric current, substitute "electric potential" for "pressure". $\square$ According to our interpretation, only the flow along the pipe matters while any leakage is ignored. We can choose to take the latter into account and consider vector fields (or vector-valued $1$-forms); there is a vector assigned to each edge: $$\begin{array}{llllll} F(s,y)&=\big<&p(s,y) &,q(s,y)&\big>,\\ F(x,t)&=\big<&r(s,y)&,s(x,t)&\big>. \end{array}$$ In other words, there is also a component perpendicular to the corresponding edge: However, only the projections of these vectors on the edges matter! Definition. A vector field $F$ defined at each secondary node of the partition is called gradient if there is such a function $z=f(x,y)$ defined on the nodes of the partition that $$F(N)\cdot E=\Delta f\, (N),$$ for every edge $E$ and its secondary node $N$. It follows that the horizontal component of $F(N)$ is equal to $\frac{\Delta f}{\Delta x}(N)$ when $N$ is on a horizontal edge, and the vertical component of $F(N)$ is equal to $\frac{\Delta f}{\Delta y}(N)$ when $N$ is a vertical edge. The other components are irrelevant. Next, we continue in the same manner as before: $$\lim_{\Delta X\to 0}\left( \begin{array}{cc}\text{ discrete }\\ \text{ calculus }\end{array} \right)= \text{ calculus }$$ The average and the instantaneous rates of change Recall the $1$-dimensional case. A linear approximation of a function $y=f(x)$ at $x=a$ is function that defines a secant line, i.e., a line on the $xy$-plane through the point of interest $(a,f(a))$ and another point on the graph $(x,f(x))$. 
Its slope is the difference quotient of the function: $$\frac{\Delta f}{\Delta x}=\frac{f(x)-f(a)}{x-a}.$$ Now, let's see how this plan applies to functions of two variables. A linear approximation of a function $z=f(x,y)$ at $(x,y)=(a,b)$ is a function that represents a secant plane, i.e., a plane in the $xyz$-space through the point of interest $(a,b,f(a,b))$ and two other points on the graph. In order to ensure that these points define a plane, they should be chosen in such a way that they aren't on the same line. The easiest way to accomplish that is to choose the last two to lie in the $x$- and the $y$-directions from $(a,b)$, i.e., $(x,b)$ and $(a,y)$ with $x\ne a$ and $y\ne b$. The two secant lines through these points determine the secant plane. Definition. The partial derivatives of $f$ with respect to $x$ and $y$ at $(x,y)=(a,b)$ are defined to be the limits of the difference quotients with respect to $x$ at $x=a$ and with respect to $y$ at $y=b$ respectively, denoted by: $$\frac{\partial f}{\partial x}(a,b) = \lim_{x\to a}\frac{f(x,b)-f(a,b)}{x-a}\text{ and } \frac{\partial f}{\partial y}(a,b)= \lim_{y\to b}\frac{f(a,y)-f(a,b)}{y-b},$$ as well as $$f_x'(a,b)\text{ and } f'_y(a,b).$$ The following is an obvious conclusion. Theorem. The partial derivatives of $f$ at $(x,y)=(a,b)$ are found as the derivatives of $f$ with respect to $x$ and to $y$: $$\frac{\partial f}{\partial x}(a,b) = \frac{d}{dx}f(x,b)\bigg|_{x=a},\ \frac{\partial f}{\partial y}(a,b)= \frac{d}{dy}f(a,y)\bigg|_{y=b}.$$ As a result, the computations are straight-forward. Example. Find the partial derivatives of the function: $$f(x,y)=(x-y)e^{x+y^2}$$ at $(x,y)=(0,0)$: $$\begin{array}{lllll} \frac{\partial f}{\partial x}(0,0) &= \frac{d}{dx}(x-0)e^{x+0^2}\bigg|_{x=0}&= \frac{d}{dx}xe^{x}\bigg|_{x=0}&=\left(e^x+xe^x\right)\bigg|_{x=0}&=1,\\ \frac{\partial f}{\partial y}(0,0) &= \frac{d}{dy}(0-y)e^{0+y^2}\bigg|_{y=0}&= \frac{d}{dy}\left(-ye^{y^2}\right)\bigg|_{y=0}&=\left(-e^{y^2}-2y^2e^{y^2}\right)\bigg|_{y=0}&=-1. \end{array}$$ $\square$ Example. Find the partial derivatives of the function: $$f(x,y)=$$ $\square$ Example. Now in reverse. Find a function $f$ of two variables the partial derivatives of which are these: $$\frac{\partial f}{\partial x}= ,\ \frac{\partial f}{\partial y}=$$ $\square$ Linear approximations and differentiability This is the build-up for the introduction of the derivative of functions of two variables: the slopes of a surface in different directions, secant lines, secant planes, planes and ways to represent them, and finally the tangent plane. Due to the complexity of the problem we will focus on the derivative at a single point for now. Let's review what we know about the derivatives so far. The definition of the derivative of a parametric curve (at a point) is virtually identical to that for a numerical function because the fraction of the difference quotient is allowed by vector algebra: $$\begin{array}{cccc} &\text{numerical functions}\\ &f'(a)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}\\ \text{parametric curves} && \text{functions of several variables}\\ F'(a)=\lim_{t\to a}\frac{F(t)-F(a)}{t-a} && f'(A)=\lim_{X\to A}\frac{f(X)-f(A)}{X-A}\ ???\\ \end{array}$$ The same formula fails for a function of several variables. We can't divide by a vector! This failure is the reason why we start studying multi-dimensional calculus with parametric curves and not functions of several variables. Can we fix the definition? How about we divide by the magnitude of this vector? This is allowed by vector algebra and the result is something similar to the rise over the run definition of the slope. 
Example. Let's carry out this idea for a linear function, the simplest kind of function: $$f(x,y)=2x+y.$$ The limit: $$\lim_{(x,y)\to (0,0)}\frac{|2x+y|}{\sqrt{x^2+y^2}},$$ is to be evaluated along either of the axes: $$\lim_{x\to 0}\frac{|2x+0|}{\sqrt{x^2+0^2}}=\lim_{x\to 0}\frac{|2x|}{|x|}=2\ \ne\ \lim_{y\to 0}\frac{|2\cdot 0+y|}{\sqrt{0^2+y^2}}=\lim_{y\to 0}\frac{|y|}{|y|}=1.$$ The limit doesn't exist because the slopes are different in different directions... $\square$ The line of attack of the meaning of the derivative then shifts -- to finding that tangent plane. Some of the functions shown in the last section have no tangent planes at their points of discontinuity. But such a plane is just a linear function! As such, it is a linear approximation of the function. In the $1$-dimensional case, among the linear approximations of a function $y=f(x)$ at $x=a$ are the functions that define secant lines, i.e., lines on the $xy$-plane through the point of interest $(a,f(a))$ and another point on the graph $(x,f(x))$. Its slope is the difference quotient of the function: $$\frac{\Delta f}{\Delta x}=\frac{f(x)-f(a)}{x-a}.$$ In general, they are just linear functions and their graphs are lines through $(a,f(a))$; in the point-slope form they are: $$l(x)= f(a)+ m(x - a). $$ When you zoom in on the point, the tangent line -- but no other line -- will merge with the graph: We reformulate our theory from Chapter 10 slightly. Definition. Suppose $y=f(x)$ is a defined at $x=a$ and $$l(x)=f(a)+m(x-a)$$ is any of its linear approximations at that point. Then, $y=l(x)$ is called the best linear approximation of $f$ at $x=a$ if the following is satisfied: $$\lim_{x\to a} \frac{ f(x) -l(x) }{|x-a|}=0.$$ The definition is about the decline of the error of the approximation, i.e., the difference between the two functions: $$\text{error } = | f(x) -l(x) | . $$ This, not only the error vanishes, but also it vanishes relative to how close we are to the point of interest, i.e., the run. The following result is from Chapter 10. Theorem. If $$l(x)=f(a)+m(x-a)$$ is the best linear approximation of $f$ at $x=a$, then $$m=f'(a).$$ Therefore, the graph of this function is the tangent line. Now the partial derivatives of $f$ with respect to $x$ and $y$ at $(x,y)=(a,b)$ are: $$\frac{\partial f}{\partial x}(a,b) = \lim_{x\to a}\frac{f(x,b)-f(a,b)}{x-a}\text{ and } \frac{\partial f}{\partial y}(a,b)= \lim_{y\to b}\frac{f(a,y)-f(a,b)}{y-b}.$$ From our study of planes, we know that the vector formed by these functions that is especially important. It is called, again, the gradient and is defined to be (notation): $$\nabla f(a,b)=\bigg< \frac{\partial f}{\partial x}(a,b), \frac{\partial f}{\partial y}(a,b)\bigg>.$$ In general these approximations are just linear functions and their graphs are planes through the point $(a,b,f(a,b))$; in the point-slope form they are: $$l(x,y)= f(a,b)+ m(x - a)+n(y-b). $$ When you zoom in on the point, the tangent plane -- but no other plane -- will merge with the graph: Definition. Suppose $y=f(x,y)$ is defined at $(x,y)=(a,b)$ function and $$l(x,y)=f(a,b)+m(x-a)+n(y-b)$$ is any of its linear approximations at that point. Then, $y=l(x,y)$ is called the best linear approximation of $f$ at $(x,y)=(a,b)$ if the following is satisfied: $$\lim_{(x,y)\to (a,b)} \frac{ f(x,y) -l(x,y) }{||(x,y)-(a,b)||}=0.$$ In that case, the function $f$ is called differentiable at $(a,b)$ and the graph of $z=l(x,y)$ is called the tangent plane. 
In other words, we stick to the functions that look like a plane on a small scale! The definition is about the decline of the error of the approximation: $$\text{error } = | f(x,y) -l(x,y) | . $$ The limit of $\frac{\text{error}}{\text{run}}$ is required to be zero. Theorem. If $$l(x,y)=f(a,b)+m(x-a)+n(y-b)$$ is the best linear approximation of $f$ at $(x,y)=(a,b)$, then $$m=\frac{\partial f}{\partial x}(a,b) \text{ and }n=\frac{\partial f}{\partial y}(a,b) .$$ Proof. The limit in the definition allows us to approach $(a,b)$ in any way we like. Let's start with the direction parallel to the $x$-axis and see how $\frac{\text{error}}{\text{run}}$ is converted into $\frac{\text{rise}}{\text{run}}$: $$\begin{array}{llcc} 0&=\lim_{x\to a^+,\ y=b} \frac{ f(x,y) -l(x,y) }{||(x,y)-(a,b)||}\\ &=\lim_{x\to a^+}\frac{ f(x,b) -(f(a,b) + m(x-a) +n(b-b))}{x-a}\\ &=\lim_{x\to a^+}\frac{ f(x,b) -f(a,b)}{x-a} -m \\ &= \frac{\partial f}{\partial x}(a,b) -m . \end{array}$$ The same computation for the $y$-axis produces the second identity. $\blacksquare$ Example. Let's confirm that the tangent plane to the hyperbolic paraboloid, given by the graph of $f(x,y)=xy$, at $(0,0)$ is horizontal: $$\frac{\partial f}{\partial x}(0,0)= \frac{d}{dx}f(x,0)\bigg|_{x=0}=\frac{d}{dx}(x\cdot 0)\bigg|_{x=0} =0,\ \frac{\partial f}{\partial y}(0,0)= \frac{d}{dy}f(0,y)\bigg|_{y=0}=\frac{d}{dy}(0\cdot y)\bigg|_{y=0} =0.$$ The two tangent lines form a cross and the tangent plane is spanned on this cross. $\square$ Just as before, replacing a function with its linear approximation is called linearization and it can be used to estimate values of new functions. Example. Let's review a familiar example: approximation of $\sqrt{4.1}$. We can't compute $\sqrt{x}$ by hand because, in a sense, the function $f(x)=\sqrt{x}$ is unknown. The best linear approximation of $y=f(x)=\sqrt{x}$ is known and, as a linear function, it can be computed by hand: $$f'(x) = \frac{1}{2\sqrt{x}} \ \Longrightarrow\ f'(4) = \frac{1}{2\sqrt{4}} = \frac{1}{4}.$$ The best linear approximation is: $$l(x) = f(a) + f'(a) (x - a) = 2 + \frac{1}{4} (x - 4).$$ Finally, our approximation of $\sqrt{4.1}$ is $$l(4.1) = 2 + \frac{1}{4}(4.1 - 4) = 2 + \frac{1}{4} \cdot 0.1 = 2 + 0.025 = 2.025.$$ Let's now approximate $\sqrt{4.1}\sqrt[3]{7.8}$. Instead of approximating the two terms separately, we will find the best linear approximation of the product $f(x,y)=\sqrt{x}\sqrt[3]{y}$ at $(x,y)=(4,8)$. Then we compute the partial derivatives, $x$: $$\frac{\partial f}{\partial x}(4,8)= \frac{d}{dx}f(x,8)\bigg|_{x=4}=\frac{d}{dx}\left(\sqrt{x}\sqrt[3]{8}\right)\bigg|_{x=4} =\frac{1}{2\sqrt{x}}\cdot 2\bigg|_{x=4}=\frac{1}{2};$$ and $y$: $$\frac{\partial f}{\partial y}(4,8)= \frac{d}{dy}f(4,y)\bigg|_{y=8}=\frac{d}{dy}\left(\sqrt{4}\sqrt[3]{y}\right)\bigg|_{y=8} =2\cdot\frac{1/3}{\sqrt[3]{y^2}}\bigg|_{y=8}=\frac{2}{3\cdot 4}=\frac{1}{6}.$$ The best linear approximation is: $$l(x,y) = f(a,b) + \frac{\partial f}{\partial x}(a,b)(x - a) +\frac{\partial f}{\partial y}(a,b)(y-b)= 2\cdot 2 + \frac{1}{2} (x - 4)+\frac{1}{6} (y - 8).$$ Finally, our approximation of $\sqrt{4.1}\sqrt[3]{7.8}$ is $$l(4.1,7.8) = 4 + \frac{1}{2}\cdot 0.1+ \frac{1}{6}\cdot(-0.2) = 4+0.05-0.0333...\approx 4.0167.$$ $\square$ Exercise. Find the best linear approximation of $f(x,y)=(xy)^{1/3}$ at $(1,1)$. 
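Here is a quick numerical check of the last example, written as a sketch in Python (the numpy library is assumed; none of this is part of the original computation). It builds the best linear approximation of $f(x,y)=\sqrt{x}\sqrt[3]{y}$ at $(4,8)$ and compares it with the actual value at $(4.1,7.8)$:

import numpy as np

def f(x, y):
    return np.sqrt(x) * np.cbrt(y)

a, b = 4.0, 8.0
fx = 0.5 / np.sqrt(a) * np.cbrt(b)            # partial derivative in x at (4,8): 1/2
fy = np.sqrt(a) / (3.0 * np.cbrt(b)**2)       # partial derivative in y at (4,8): 1/6

def l(x, y):                                  # the best linear approximation at (4,8)
    return f(a, b) + fx * (x - a) + fy * (y - b)

print(l(4.1, 7.8))                            # about 4.0167
print(f(4.1, 7.8))                            # about 4.0157, for comparison

The two numbers agree to about two decimal places, which is what one expects from a linear approximation this close to the point of tangency.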
To summarize, under the limit $x\to a$, the secant line parallel to the $x$-axis turns into a tangent line with the slope $f_x'(a,b)$, and under the limit $y\to b$, the secant line parallel to the $y$-axis turns into a tangent line with the slope $f'_y(a,b)$, provided $f$ is differentiable with respect to these two variables at this point. Meanwhile, the secant plane turns into the tangent plane, provided $f$ is differentiable at this point, with the slopes $f_x'(a,b)$ and $f'_y(a,b)$ in the directions of the coordinate axes. Partial differentiation and optimization We pick one variable and treat the rest of them as parameters. The rules are the same. This is, for example, the Constant Multiple Rule: $$\frac{\partial }{\partial x}(xy)=y\frac{\partial }{\partial x}(x)=...$$ This is the Sum Rule: $$\frac{\partial }{\partial x}(x+y)=\frac{\partial }{\partial x}(x)+\frac{\partial }{\partial x}(y)=...$$ This notion is not to be confused with the "related rates". Here $y$ is a function of $x$: $$\frac{d}{dx}\big( xy^2 \big)=y^2+x\cdot \frac{d}{dx}(y^2)=y^2+x\cdot 2yy',$$ by the Chain Rule. Here $y$ is just another variable: $$\frac{\partial}{\partial x}\big( xy^2 \big)=y^2+x\cdot \frac{\partial}{\partial x}(y^2)=y^2+x\cdot 0=y^2.$$ After all, these variables are unrelated; they are independent variables! Next we consider optimization, which is simply a way to find the maximum and minimum values of functions. We already know from the Extreme Value Theorem that this problem always has a solution provided the function is continuous on a closed bounded subset. Furthermore, just as in Chapter 9, we first narrow down our search. Recall that a function $y=f(x)$ has a local minimum point at $x = a$ if $f(a) \leq f(x)$ for all $x$ within some open interval centered at $a$. We can imagine as if we build a rectangle on top of each of these intervals and use it to cut a piece from our graph. We can't use intervals in dimension $2$; what's their analog? It is an open disk on the plane centered at $(a,b)$. We can imagine as if we build a cylinder on top of each of these disks and use it to cut a patch from the surface of our graph. In fact, both intervals and disks (and 3d balls, etc.) can be conveniently described in terms of the distance from the point: $d(X,A)\le \varepsilon$. So, we restrict our attention to these (possibly small) disks. Definition. A function $z=f(X)$ has a local minimum point at $X = A$ if $f(A) \leq f(X)$ for all $X$ within some positive distance from $A$, i.e., $d(X,A)\le \varepsilon$ for some $\varepsilon>0$. Furthermore, a function $f$ has a local maximum point at $X = A$ if $f(A) \geq f(X)$ for all $X$ within some positive distance from $A$. We call these local extreme points, or extrema. Warning: The definition implies that $f$ is defined for all of these values of $X$. Exercise. What could be a possible meaning of an analog of the one-sided derivative for functions of two variables? In other words, there is an open disk $D$ around $A$ such that $A$ is the global maximum (or minimum) point when $f$ is restricted to $D$. Now, local extreme points are candidates for global extreme points. To compare to the familiar, one-dimensional case, we used to look at the terrain from the side and now we look from above: How do we find them? We reduce the two-dimensional problem to the one-dimensional case. Indeed, if $(a,b)$ is a local maximum of $z=f(x,y)$ then $x=a$ is a local maximum of the numerical function $g(x)=f(x,b)$, and $y=b$ is a local maximum of the numerical function $h(y)=f(a,y)$. 
In other words, the summit will be the highest point of our trip whether we come from the south or from the west. Warning: with this approach we ignore the possibility of even higher locations found in the diagonal direction. However, missing such a location is a danger only when the function isn't differentiable. Just as in Chapter 9, we will use the derivatives in order to facilitate our search. Recall what Fermat's Theorem states: if $x=a$ is a local extreme point of $z = f(x)$ and $z=f(x)$ is differentiable at $x=a$, then $a$ is a critical point, i.e., $$f'(a)=0,$$ or undefined. The last equation typically produces just a finite number of candidates for extrema. As we apply the theorem to either variable, we find ourselves in a similar situation but with two equations. For a function of two variables we have the following two-dimensional analog of Fermat's Theorem. Theorem (Fermat's Theorem). If $X=A$ is a local extreme point of a function of several variables $z = f(X)$ that is differentiable at $A$, then $A$ is a critical point, i.e., $$\nabla f \, (A)=0.$$ Example. Let's find the critical point of $$f(x,y)=x^2+3y^2+xy+7.$$ We differentiate: $$\begin{array}{lll} \frac{\partial f}{\partial x}(x,y)&=2x+y,\\ \frac{\partial f}{\partial y}(x,y)&=6y+x. \end{array}$$ We now set these derivatives equal to zero: $$\begin{array}{lll} 2x+y=0,\\ 6y+x=0. \end{array}$$ We have two equations to be solved. Geometrically, these are two lines and the solution is the intersection. By substitution: $$6y+x=0\ \Longrightarrow\ x=-6y\ \Longrightarrow\ 2(-6y)+y=0 \ \Longrightarrow\ y=0\ \Longrightarrow\ x=0.$$ There is only one critical point $(0,0)$. It is just a candidate for an extreme point, for now. Plotting reveals that this is a minimum. This is what the graph looks like: This is an elliptic paraboloid (its level curves are ellipses). $\square$ Instead of one equation - one variable as in the case of a numerical function, we will have two variables - two equations (and then three variables - three equations, etc.). Typically the result is finitely many points. Exercise. Give an example of a differentiable function of two variables with infinitely many critical points. Of course, the condition of the theorem -- partial derivatives are zero -- can be re-stated in a vector form -- the gradient is zero: $$f_x(a,b)=0 \text{ and } f_y(a,b)=0\ \Longleftrightarrow\ \nabla f(a,b)=0.$$ This means that the tangent plane is horizontal. So, at the top of the mountain, if it's smooth enough, it resembles a plane and this plane cannot be tilted because if it slopes down in any direction then so does the surface. What we have seen is local optimization and it is only a stepping stone for global optimization. Just as in the one-dimensional case, the boundary points have to be dealt with separately. This time, however, there will be infinitely many of them instead of just two end-points: Example. Let's find the extreme points of $$f(x,y)=x^2+y^2,$$ subject to the restriction (of the domain): $$|x|\le 1,\ |y|\le 1.$$ The restriction is the restriction of the domain of the function to this square. We differentiate, set the derivatives equal to zero, and solve the system of equations: $$\begin{array}{lllll} f_x(x,y)&=2x&=0&\Longrightarrow &x=0\\ f_y(x,y)&=2y&=0&\Longrightarrow &y=0 \end{array}\ \Longrightarrow\ (a,b)=(0,0).$$ This is the only critical point. We then note that $$f(0,0)=0$$ before proceeding. 
The boundary points of our square domain remain to be investigated: $$|x|= 1,\ |y|= 1.$$ We have to test all of them by comparing the value of the function to each other and to the critical point. In other words, we are solving several one-dimensional optimization problems! These are the results: $$\begin{array}{lll} |x|=1&\Longrightarrow &f(x,y)=1+y^2&\Longrightarrow&\max_{|y|\le 1} \{1+y^2\}=2 \text{ for }y=\pm 1,&\min_{|y|\le 1} \{1+y^2\}=1 \text{ for }y=0,\\ |y|=1&\Longrightarrow &f(x,y)=x^2+1&\Longrightarrow&\max_{|x|\le 1} \{x^2+1\}=2 \text{ for }x=\pm 1,&\min_{|x|\le 1} \{x^2+1\}=1 \text{ for }x=0. \end{array}$$ Comparing these outputs we conclude that the maximal value of $2$ is attained at the four points $(1,1),(-1,1),(1,-1),(-1,-1)$, and the minimal value of $0$ is attained at $(0,0)$. To confirm our conclusions we just look at the graph of this familiar paraboloid of revolution: Under the domain restrictions, the graph doesn't look like a cup anymore... We can see the vertical parabolas we just maximized. $\square$ For functions of three variables, the domains are typically solids. The boundaries of these domains are then surfaces! Then, a part of such a 3d optimization problem will be a 2d optimization problem... We can think of the pair of partial derivatives as one, the derivative, of a function of two variables. However, in sharp contrast to the one-variable case, the resulting function has a very different nature from the original: same input but the output isn't a number anymore but a vector! It's a vector field discussed in Chapter 19. The second difference quotient with respect to a repeated variable We start with numerical differentiation. Example. We consider the function: $$f(x,y)=-x^2+y^2+xy.$$ When plotted, it is recognized as a familiar hyperbolic paraboloid. Below we outline the process of second partial differentiation. The variable functions -- with respect to $x$ and $y$ -- are shown and so are their derivatives. These are the functions we will differentiate. The two functions adjacent to the original are the familiar first partial derivatives. The top right and the bottom left are computed for the first time as described. The function in the middle is discussed in the next section. The results are all horizontal lines equally spaced and in both cases they form horizontal planes. These planes are the graphs of the two new functions of two variables, the second partial derivatives, the tables of which have been constructed. Some things don't change: just as in the one-dimensional case, the second derivatives of quadratic functions are constant. Here is a more complex example: $$f(x,y)=-x^2+y^2+xy+\sin(x+y+3).$$ First, we notice that, just as with numerical functions in Chapter 8 and just as with the parametric curves in Chapter 17, the difference of the difference is simply the difference that skips a node. That's why the second difference doesn't provide us -- with respect to the same variable -- with any meaningful information. We will limit our attention to the difference quotients. We will carry out the same second difference quotient construction for $f$ with respect to $x$ with $y$ fixed and then vice versa. 
The construction is summarized in this diagram: $$\begin{array}{ccc} -&f(x_1)&---&f(x_2)&---&f(x_3)&-&\\ -&-\bullet-&\frac{\Delta f}{\Delta x_2}&-\bullet-&\frac{\Delta f}{\Delta x_3}&-\bullet-&-\\ -&-\bullet-&---&\frac{\frac{\Delta f}{\Delta x_3} -\frac{\Delta f}{\Delta x_2}}{s_3-s_2}&---&-\bullet-&-&\\ &x_1&s_2&x_2&s_3&x_3&\\ \end{array}$$ The two computations are illustrated by continuing the following familiar diagram for the (first) partial difference quotients: $$\begin{array}{ccccccc} f(x,y+\Delta y)&---&f(x+\Delta x,y+\Delta y)&&-\bullet-&\frac{\Delta f}{\Delta x} f(s,y+\Delta y)&-\bullet-\\\\ |&&|&&|&&|\\ |&&|&\leadsto&\frac{\Delta f}{\Delta y}(x,t)&&\frac{\Delta f}{\Delta y}(x+\Delta x,t)\\ |&&|&&|&&|\\ f(x,y)&---&f(x+\Delta x,y)&&-\bullet-&\frac{\Delta f}{\Delta x}(s,y)&-\bullet- \end{array}\leadsto$$ There are two further diagrams. We, respectively, subtract the numbers assigned to the horizontal edges horizontally (shown below) and we subtract the numbers assigned to the vertical edges vertically before placing the results at the nodes (not shown): $$\begin{array}{ccc} -\bullet-&\frac{\Delta f}{\Delta x}(s,y)&-\bullet-&\frac{\Delta f}{\Delta x}(s+\Delta s,y)&-\bullet- \end{array}\quad\leadsto\quad\begin{array}{ccc} -\bullet-&--&\frac{\Delta \frac{\Delta f}{\Delta x}}{\Delta x}(x,y)&--&-\bullet- \end{array}$$ Recall from earlier in this chapter that we have a partition $P$ of a rectangle $R=[a,b]\times [c,d]$ in the $xy$-plane built as a combination of partitions of the intervals $[a,b]$ and $[c,d]$: $$a=x_{0}\le x_{1}\le x_{2}\le ... \le x_{n-1}\le x_{n}=b,$$ and $$c=y_{0}\le y_{1}\le y_{2}\le ...\le y_{m-1}\le y_{m}=d.$$ Its primary nodes are: $$X_{ij}=(x_i,y_{j}),\ i=1,2,...,n,\ j=1,2,...,m;$$ and its secondary nodes are: a point $S_{ij}$ in the segment $[x_{i-1},x_{i}]\times \{y_{j}\}$, and a point $T_{ij}$ in the segment $\{x_{i}\}\times [y_{j-1},y_{j}]$. If $z=f(x,y)$ is defined at the nodes $X_{ij},\ i,j=0,1,2,...,n$, of the partition, the partial difference quotients of $f$ with respect to $x$ and $y$ are respectively: $$\frac{\Delta f}{\Delta x}(S_{ij})=\frac{f(X_{ij})-f(X_{i-1,j})}{x_{i}-x_{i-1}}\text{ and }\frac{\Delta f}{\Delta y}(T_{ij})=\frac{f(X_{ij})-f(X_{i,j-1})}{y_{j}- y_{j-1}}.$$ It is now especially important that we have utilized the secondary nodes as the inputs of the new functions. Indeed, we can now carry out a similar construction with these functions and find their difference quotients, the four of them. The construction is based on the one we carried out for functions of one variable in Chapter 7: We will start with these two new quantities. First, let's consider the rate of change of $\frac{\Delta f}{\Delta x}$ -- the rate of change of $f$ with respect to $x$ -- with respect to $x$, again. For each fixed $j$, we consider the partition of the horizontal interval $[a,b]\times \{y_{j}\}$ with the primary nodes: $$S_{ij}=(s_{ij},y_{j}),\ i=1,2,...,n,$$ and the secondary nodes: $$X_{ij},\ i=1,2,...,n-1.$$ We now carry out the same difference quotient construction, still within the horizontal edges. Similarly, to define the rate of change of $\frac{\Delta f}{\Delta y}$ with respect to $y$. For each fixed $i$, we consider the partition of the vertical interval $\{x_{i}\}\times [c,d]$ with the primary nodes: $$T_{ij}=(x_{i},t_{ij}),\ j=1,2,...,m,$$ and the secondary nodes: $$X_{ij},\ j=1,2,...,m-1.$$ We now carry out the same difference quotient construction, still within the vertical edges. Definition. 
The (repeated) second difference quotient of $f$ with respect to $x$ is defined at the primary nodes of the original partition $P$ by ($i=1,2,...,n-1,\ j=0,1,...,m$): $$\frac{\Delta^2 f}{\Delta x^2}(X_{ij})=\frac{\frac{\Delta f}{\Delta x}(S_{ij})-\frac{\Delta f}{\Delta x}(S_{i,j-1})}{s_{ij}-s_{i-1,j}} ;$$ and the second difference quotient of $f$ with respect to $y$ is defined at the primary nodes of the original partition $P$ by ($i=0,1,...,n,\ j=1,2,...,m-1$): $$\frac{\Delta^2 f}{\Delta y^2}(X_{ij})=\frac{\frac{\Delta f}{\Delta y}(T_{ij})-\frac{\Delta f}{\Delta y}(T_{i,j-1})}{t_{ij}-t_{i,j-1}}.$$ These are discrete $0$-forms. In the above example, we can see how the sign of these expressions reveal the concavity of the curves. For a simplified notation, we will often omit the indices: $$\frac{\Delta^2 f}{\Delta x^2}(x,y)=\frac{\frac{\Delta f}{\Delta x}(s+\Delta s,y)-\frac{\Delta f}{\Delta x}(s,y)}{\Delta s} ;$$ $$\frac{\Delta^2 f}{\Delta y^2}(x,y)=\frac{\frac{\Delta f}{\Delta y}(x,t+\Delta t)-\frac{\Delta f}{\Delta y}(x,t)}{\Delta t}.$$ The second difference and the difference quotient with respect to mixed variables Furthermore, we can compute the rate of change of with respect to the other variable. This is how the concavity with respect to $x$ is increasing with increasing $y$: We are not in the $1$-dimensional setting anymore! That is why, in contrast to the one-dimensional case in Chapter 7, the case of parametric curves in Chapter 17, and the repeated variable case above, the difference of the difference is not simply the difference that skips a node. The second difference with respect to mixed variables does provide us with meaningful information. The partial differences: $$\Delta_x f\, (S_{ij}) \text{ and } \Delta_y f\, (T_{ij}),$$ and the partial difference quotients: $$\frac{\Delta f}{\Delta x}(S_{ij}) \text{ and } \frac{\Delta f}{\Delta y}(T_{ij}).$$ are defined at the secondary nodes located on the edges of the partition of the rectangle $[a,b]\times [c,d]$, horizontal and vertical: There are four more quantities: the change of $\Delta_x f$ with respect to $y$, the change of $\Delta_y f$ with respect to $x$, the rate of change of $\frac{\Delta f}{\Delta x}$ with respect to $y$, and the rate of change of $\frac{\Delta f}{\Delta y}$ with respect to $x$. They will be assigned to the nodes located at the faces of the original partition. These are tertiary nodes. The new computations are illustrated by continuing the following familiar diagram for the (first) differences: $$\begin{array}{ccccccc} f(x,y+\Delta y)&---&f(x+\Delta x,y+\Delta y)&&-\bullet-&\Delta_x f\, (s,y+\Delta y)&-\bullet-\\ |&&|&&|&&|\\ |&&|&\leadsto&\Delta_y f\, (x,t)&&\Delta_y f\, (x+\Delta x,t)\\ |&&|&&|&&|\\ f(x,y)&---&f(x+\Delta x,y)&&-\bullet-&\Delta_x f\, (s,y)&-\bullet- \end{array}\leadsto$$ There are two further diagrams. We, respectively, subtract the numbers assigned to the horizontal edges vertically and we subtract the numbers assigned to the vertical edges horizontally before placing the results in the middle of the square: $$\leadsto\begin{array}{ccc} -\bullet-&---&-\bullet-\\ |&\Delta_y\Delta_x f\, (s,t)&|\\ -\bullet-&---&-\bullet- \end{array}\qquad\qquad \qquad\leadsto \begin{array}{ccc} -\bullet-&---&-\bullet-\\ |&\Delta_x\Delta_y f\, (s,t)&|\\ -\bullet-&---&-\bullet- \end{array}$$ Definition. 
The (mixed) second difference of $f$ with respect to $yx$ is defined at the tertiary nodes of the original partition $P$ by ($i=1,2,...,n-1,\ j=1,2,...,m-1$): $$\Delta^2_{yx} f\, (U_{ij})=\Delta_x f\, (S_{ij})-\Delta_x f\, (S_{i,j-1});$$ and the (mixed) second difference of $f$ with respect to $xy$ is defined at the tertiary nodes of the original partition $P$ by ($i=1,2,...,n-1,\ j=1,2,...,m-1$): $$\Delta^2_{xy} f\, (U_{ij})=\Delta_y f\, (T_{ij})-\Delta_y f\, (T_{i-1,j}).$$ Definition. The (mixed) second difference quotient of $f$ with respect to $yx$ is defined at the tertiary nodes of the original partition $P$ by ($i=1,2,...,n-1,\ j=1,2,...,m-1$): $$\frac{\Delta^2 f}{\Delta y \Delta x}(U_{ij})=\frac{\frac{\Delta f}{\Delta x}(S_{ij})-\frac{\Delta f}{\Delta x}(S_{i,j-1})}{y_{j}-y_{j-1}};$$ and the (mixed) second difference quotient of $f$ with respect to $xy$ is defined at the tertiary nodes of the original partition $P$ by ($i=1,2,...,n-1,\ j=1,2,...,m-1$): $$\frac{\Delta^2 f}{\Delta x \Delta y}(U_{ij})=\frac{\frac{\Delta f}{\Delta y}(T_{i,j})-\frac{\Delta f}{\Delta y}(T_{i-1,j})}{x_{i}-x_{i-1}}.$$ When the tertiary nodes are unspecified, these are functions of the faces themselves, i.e., discrete $2$-forms. For a simplified notation, we will often omit the indices: $$\frac{\Delta^2 f}{\Delta y \Delta x}(s,t)=\frac{\frac{\Delta f}{\Delta x}(s,y+\Delta y)-\frac{\Delta f}{\Delta x}(s,y)}{\Delta y};$$ $$\frac{\Delta^2 f}{\Delta x \Delta y}(s,t)=\frac{\frac{\Delta f}{\Delta y}(x+\Delta x,t)-\frac{\Delta f}{\Delta y}(x,t)}{\Delta x}.$$ Now, the two mixed differences are "made of" the same four quantities; one can guess that they are equal. Let's confirm that with algebra. We substitute and simplify: $$\begin{array}{lll} \Delta_{yx}^2 f\, (s,t)&=\Delta_x f\, (s,y+\Delta y)-\Delta_x f\, (s,y)\\ &=\big( f(x+\Delta x,y+\Delta y)-f(x,y+\Delta y)\big)-\big(f(x+\Delta x,y)-f(x,y)\big)\\ &=f(x+\Delta x,y+\Delta y)-f(x,y+\Delta y)-f(x+\Delta x,y)+f(x,y)\\ &=\big(f(x+\Delta x,y+\Delta y)-f(x+\Delta x,y)\big)-\big( f(x,y+\Delta y)-f(x,y)\big)\\ &=\Delta_y f\, (x+\Delta x,t)-\Delta_y f\, (x,t)\\ &=\Delta_{xy}^2 f\, (s,t). \end{array}$$ Theorem (Discrete Clairaut's Theorem). Over a partition in ${\bf R}^n$, first, the mixed second differences with respect to any two variables are equal to each other: $$\Delta_{yx}^2 f=\Delta_{xy}^2 f;$$ and, second, the mixed second difference quotients are equal to each other: $$\frac{\Delta^2 f}{\Delta y \Delta x}=\frac{\Delta^2 f}{\Delta x \Delta y}.$$ This theorem will have important consequences presented in Chapter 19. The second partial derivatives The second derivative is known to help to classify extreme points. At least, we can dismiss the possibility that a point is a maximum when the function is concave up, i.e., the second derivative is positive. Example. In the above example of the function $$f(x,y)=x^2+3y^2+xy+7,$$ we differentiate one more time each partial derivative -- with respect to the same variable: $$\begin{array}{llllll} \frac{\partial f}{\partial x}&=2x+y&\Longrightarrow&\frac{\partial }{\partial x}\left( \frac{\partial f}{\partial x}\right)&=\frac{\partial }{\partial x}(2x+y)&=2,\\ \frac{\partial f}{\partial y}&=6y+x&\Longrightarrow&\frac{\partial }{\partial y}\left( \frac{\partial f}{\partial y}\right)&=\frac{\partial }{\partial y}(6y+x)&=6. \end{array}$$ Both numbers are positive, therefore, both curves are concave up!
$\square$ Repeated differentiation produces a sequence of functions in the one variable case: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}\begin{array}{cccccccccccc} f &\ra{\frac{d}{dx}} &f' &\ra{\frac{d}{dx}} & f' ' &\ra{\frac{d}{dx}} &...&\ra{\frac{d}{dx}} & f^{(n)} &\ra{\frac{d}{dx}} & ... \end{array}$$ In the two-variable case, it makes the functions multiply: $$\begin{array}{cccccccccccc} &&&& &f\\ &&&&\swarrow_x&&_y\searrow\\ &&&f_x& && &f_y\\ &&\swarrow_x&&_y\searrow&&\swarrow_x&&_y\searrow\\ &f_{xx}&&&&f_{yx}\quad f_{xy}&&&&f_{yy}\\ \swarrow_x&&_y\searrow&&\swarrow_x&&_y\searrow&&\swarrow_x&&_y\searrow\\ ...&&...&&...&&...&&...&&... \end{array}$$ The number of derivatives might be reduced when the mixed derivatives at the bottom are equal. It is often possible thanks to the following result that we accept without proof. Theorem (Clairaut's theorem). Suppose a function $z=f(x,y)$ has continuous second partial derivatives at a given point $(x_0,y_0)$ in ${\bf R}^2$. Then we have: $$\frac{\partial^2 f}{\partial x\, \partial y}(x_0,y_0) = \frac{\partial^2 f}{\partial y\, \partial x}(x_0,y_0).\,$$ Under the conditions of the theorem, this part of the above diagram becomes commutative: $$\begin{array}{cccccccccccc} &&&& &f\\ &&&&\swarrow_x&&_y\searrow\\ &&&f_x& && &f_y\\ &&\swarrow_x&&_y\searrow&&\swarrow_x&&_y\searrow\\ &f_{xx}&&&&f_{yx}= f_{xy}&&&&f_{yy}\\ \end{array}$$ In fact, the two operations of partial differentiation commute: $$\frac{\partial}{\partial x}\frac{\partial}{\partial y}=\frac{\partial}{\partial y}\frac{\partial}{\partial x}.$$ This can also be written as another commutative diagram: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llr} f & \ra{\frac{\partial}{\partial x}} & f_x \\ \da{\frac{\partial}{\partial y}} & \searrow & \da{\frac{\partial}{\partial y}} \\ f_y & \ra{\frac{\partial}{\partial x}} & f_{xy}=f_{yx} \end{array}$$ For the case of three variables, there are even more derivatives: $$\begin{array}{cccccccccccc} &&&& &&&f\\ &&&&\swarrow_x&&&\downarrow_y&&&_z\searrow\\ &&&f_x& && &f_y& && &f_z\\ &&\swarrow_x&\downarrow_y&_z\searrow&&\swarrow_x&\downarrow_y&_z\searrow&&\swarrow_x&\downarrow_y&_z\searrow\\ &f_{xx}&&f_{yx}&&f_{zx}\quad f_{xy}&&f_{yy}&&f_{zy}\quad f_{xz}&&f_{xz}&&f_{yz}\ f_{zz}\\ \end{array}$$ Under the conditions of the theorem, the three operations of partial differentiation commute (pairwise): $$\frac{\partial}{\partial x}\frac{\partial}{\partial y}=\frac{\partial}{\partial y}\frac{\partial}{\partial x},\ \frac{\partial}{\partial y}\frac{\partial}{\partial z}=\frac{\partial}{\partial z}\frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\frac{\partial}{\partial x}=\frac{\partial}{\partial x}\frac{\partial}{\partial z}.$$ The above diagram also becomes commutative: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} % \begin{array}{cccccccccccc} &&&& &f_{zz}\\ &&&& &\uparrow_z\\ &&&\la{}& \la{} &f_z& \ra{} & \ra{}\\ && \swarrow^x& &&\uparrow_z&&&^y\searrow\\ &&f_{xz}=&f_{zx}& & f && f_{zy}&=f_{yz}\\ 
&&&\uparrow_z&\swarrow^x&&^y\searrow&\uparrow_z\\ &&&f_x& && &f_y\\ &&\swarrow^x&&^y\searrow&&\swarrow^x&&^y\searrow\\ &f_{xx}&&&&f_{yx}= f_{xy}&&&&f_{yy}\\ \end{array}$$
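As a quick check of Clairaut's theorem on the earlier example $f(x,y)=-x^2+y^2+xy+\sin(x+y+3)$, the following short symbolic computation (a sketch using the SymPy library; any computer algebra system would do) confirms that the two mixed partial derivatives coincide.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = -x**2 + y**2 + x*y + sp.sin(x + y + 3)

# Mixed second partial derivatives taken in both orders.
f_xy = sp.diff(f, x, y)   # d/dy of (df/dx)
f_yx = sp.diff(f, y, x)   # d/dx of (df/dy)

print(f_xy)                       # 1 - sin(x + y + 3)
print(sp.simplify(f_xy - f_yx))   # 0, as Clairaut's theorem predicts
```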
TMF, 2019, Volume 201, Number 2, Pages 291–309.
Quantum entanglement in the nonrelativistic collision between two identical fermions with spin $1/2$. K. A. Kouzakov (Faculty of Physics, Lomonosov Moscow State University, Moscow, Russia).
Abstract: In the framework of nonstationary scattering theory, we study the formation of an entangled state of two identical nonrelativistic spin-$1/2$ particles as a result of their elastic scattering. The measure of particle entanglement in the final channel is described using pair concurrence. For the indicated quantitative criterion, we obtain general expressions in terms of the direct and exchange scattering amplitudes in the cases of pure and mixed spin states of the pair in the initial channel. We consider the violation of Bell's inequality in the final channel. We show that as a result of a collision between unpolarized particles, a Werner spin state of the pair forms, which is entangled if the singlet component of the angular differential scattering cross section in the center-of-mass reference frame exceeds the triplet component. We use the process of free electron–electron scattering as an example to illustrate the developed formalism.
Keywords: quantum entanglement, pair concurrence, Bell's inequality, nonrelativistic collision, identical fermions, electron–electron scattering. DOI: https://doi.org/10.4213/tmf9706. English translation: Theoretical and Mathematical Physics, 2019, 201:2, 1664–1679. PACS: 03.65.Ud; 03.67.Bg. MSC: 81U05. Received: 14.02.2019; revised: 26.05.2019.
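To illustrate the criterion quoted in this abstract, the following sketch (not taken from the paper; it only uses the standard Wootters formula for two-qubit concurrence) computes the concurrence of a state with singlet weight $w$ and evenly mixed triplet weight $1-w$. Such a state is entangled precisely when the singlet weight exceeds the triplet weight, $w > 1/2$, mirroring the criterion stated above for the cross-section components.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    rho_tilde = YY @ rho.conj() @ YY
    # Square roots of eigenvalues of rho * rho_tilde, sorted in decreasing order.
    lam = np.sqrt(np.sort(np.linalg.eigvals(rho @ rho_tilde).real)[::-1].clip(min=0))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Bell singlet |psi-> = (|01> - |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}.
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
P_singlet = np.outer(psi_minus, psi_minus.conj())
P_triplet = np.eye(4) - P_singlet          # projector onto the triplet subspace

for w in (0.25, 0.5, 0.75, 1.0):
    rho = w * P_singlet + (1 - w) * P_triplet / 3   # Werner-type spin state
    print(f"singlet weight {w:.2f}: concurrence = {concurrence(rho):.3f}")
# Expected output: 0.000, 0.000, 0.500, 1.000 -- entangled only for w > 1/2.
```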
Stability in representation theory of the symmetric groups
Dr. Inna Entova-Aizenbud (Ben-Gurion University), 03/05/2017, 11:00-12:00, third floor seminar room, Department of Mathematics, Bar-Ilan University.
In the finite-dimensional representation theory of the symmetric groups $$S_n$$ over the base field $$\mathbb{C}$$, there is an interesting phenomenon of "stabilization" as $$n \to \infty$$: some representations of $$S_n$$ appear in sequences $$(V_n)_{n \geq 0}$$, where each $$V_n$$ is a finite-dimensional representation of $$S_n$$, and the $$V_n$$ become "the same" in a certain sense for $$n >> 0$$. One manifestation of this phenomenon is the existence of sequences $$(V_n)_{n \geq 0}$$ such that the characters of $$S_n$$ on $$V_n$$ are "polynomial in $$n$$". More precisely, these sequences satisfy the condition: for $$n>>0$$, the trace (character) of the automorphism $$\sigma \in S_n$$ of $$V_n$$ is given by a polynomial in the variables $$x_i$$, where $$x_i(\sigma)$$ is the number of cycles of length $$i$$ in the permutation $$\sigma$$. In particular, such sequences $$(V_n)_{n \geq 0}$$ satisfy the agreeable property that $$\dim(V_n)$$ is polynomial in $$n$$. Such "polynomial sequences" are encountered in many contexts: cohomologies of configuration spaces of $$n$$ distinct ordered points on a connected oriented manifold, spaces of polynomials on rank varieties of $$n \times n$$ matrices, and more. These sequences are called $$FI$$-modules, and have been studied extensively by Church, Ellenberg, Farb and others, yielding many interesting results on polynomiality in $$n$$ of dimensions of these spaces. A stronger version of the stability phenomenon is described by the following two settings:
- The algebraic representations of the infinite symmetric group $$S_{\infty} = \bigcup_{n} S_n,$$ where each representation of $$S_{\infty}$$ corresponds to a "polynomial sequence" $$(V_n)_{n \geq 0}$$.
- The "polynomial" family of Deligne categories $$Rep(S_t), ~t \in \mathbb{C}$$, where the objects of the category $$Rep(S_t)$$ can be thought of as "continuations of sequences $$(V_n)_{n \geq 0}$$" to complex values of $$t=n$$.
I will describe both settings, show that they are connected, and explain some applications in the representation theory of the symmetric groups.
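A toy illustration of the "characters polynomial in the cycle counts $$x_i$$" condition (this example is not from the abstract): for the permutation representation of $$S_n$$ on $$\mathbb{C}^n$$, the character of $$\sigma$$ equals its number of fixed points, i.e. the polynomial $$x_1$$, uniformly in $$n$$. The following sketch checks this numerically for a few values of $$n$$.

```python
import numpy as np

rng = np.random.default_rng(0)

def cycle_counts(perm):
    """Return a dict {i: x_i} with x_i = number of cycles of length i."""
    n, seen, counts = len(perm), [False] * len(perm), {}
    for start in range(n):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            counts[length] = counts.get(length, 0) + 1
    return counts

for n in (4, 7, 10, 15):
    sigma = rng.permutation(n)            # a random element of S_n, as the map i -> sigma[i]
    P = np.eye(n)[sigma]                  # its permutation matrix acting on C^n
    chi = int(np.trace(P))                # character of the permutation representation at sigma
    x1 = cycle_counts(sigma).get(1, 0)    # number of fixed points of sigma
    print(n, chi, x1)                     # chi equals x_1 for every n
```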
Department of Mathematics, Aarhus University. The Mathematics Group: Publications (sorted by year, type and first author).
Benguria, R. D., Fournais, S., Stockmeyer, E. & Van Den Bosch, H. (2017). Spectral Gaps of Dirac Operators Describing Graphene Quantum Dots. Mathematical Physics, Analysis and Geometry, 20(2), [11]. https://doi.org/10.1007/s11040-017-9242-4 Bökstedt, M. & Ottosen, I. (2017). A Hochschild-Kostant-Rosenberg theorem for cyclic homology. Journal of Pure and Applied Algebra, 221(6), 1458–1493. https://doi.org/10.1016/j.jpaa.2016.10.005 Fournais, S., Raymond, N. T., Le Treust, L. & van Schaftingen, J. (2017). Semiclassical Sobolev constants for the electro-magnetic Robin Laplacian. Journal of the Mathematical Society of Japan, 69(4), 1667-1714. https://doi.org/10.2969/jmsj/06941667 Guneysu, B., Matte, O. & Møller, J. S. (2017). Stochastic differential equations for models of non-relativistic matter interacting with quantized radiation fields. Probability Theory and Related Fields, 167(3-4), 817-915. https://doi.org/10.1007/s00440-016-0694-4 Herbst, I. & Skibsted, E. (2017). Decay of eigenfunctions of elliptic PDE's, II. Advances in Mathematics, 306, 177-199. https://doi.org/10.1016/j.aim.2016.10.018 Hirokawa, M., Moller, J. S. & Sasaki, I. (2017). A mathematical analysis of dressed photon in ground state of generalized quantum Rabi model using pair theory. Journal of Physics A: Mathematical and Theoretical, 50(18), [184003]. https://doi.org/10.1088/1751-8121/aa677c Jantzen, J. C. (2017). Maximal weight composition factors for Weyl modules. Canadian Mathematical Bulletin, 60, 762 - 773. https://doi.org/10.4153/CMB-2016-055-4 Jantzen, J. C. (2017). Restrictions from gl_n to sl_n. Journal of Lie Theory, 27, 969 - 981. http://www.heldermann.de/JLT/JLT27/JLT274/jlt27047.htm Jensen, A. N., Kahle, T. & Katthän, L. (2017). Finding binomials in polynomial ideals. Research In the Mathematical Sciences, 4(16). https://doi.org/10.1186/s40687-017-0106-0 Kock, A. (2017). Metric spaces and SDG. Theory and Applications of Categories, 32(24), 803-822. Lauritzen, N. & Thomsen, J. F. (2017). Two properties of endomorphisms of Weyl algebras. Journal of Algebra, 479. https://doi.org/10.1016/j.jalgebra.2016.12.036 Möllers, J. & Ørsted, B. (2017). Estimates for the restriction of automorphic forms on hyperbolic manifolds to compact geodesic cycles. International Mathematics Research Notices, 11(1), 3209-3236. https://doi.org/10.1093/imrn/rnw119 Spotti, C. & Sun, S. (2017). Explicit Gromov–Hausdorff compactifications of moduli spaces of Kähler–Einstein Fano manifolds. Pure and Applied Mathematics Quarterly, 13(3), 477-515. https://doi.org/10.4310/PAMQ.2017.v13.n3.a5 Stetkær, H. (2017). A note on Wilson's functional equation. Aequationes Mathematicae, 91(5), 945-947. https://doi.org/10.1007/s00010-017-0481-z Stetkær, H. (2017). Kannappan's functional equation on semigroups with involution. Semigroup Forum, 94(1), 17-30. https://doi.org/10.1007/s00233-015-9756-7 Stetkær, H. (2017). The kernel of the second order Cauchy difference on semigroups. Aequationes Mathematicae, 91(2), 279–288.
https://doi.org/10.1007/s00010-016-0453-8 Thomsen, K. (2017). KMS weights on graph C*-algebras. Advances in Mathematics, 309, 334-391. https://doi.org/10.1016/j.aim.2017.01.024 Thomsen, K. (2017). Phase Transition in O2. Communications in Mathematical Physics, 349(2), 481-492. https://doi.org/10.1007/s00220-016-2742-4 Skibsted, E. & Herbst, I. (2017). Decay of eigenfunctions of elliptic PDE's. RIMS Kokyuroku, RIMS Kokyuroku (2023), 35-39. Contribution to book anthology Spotti, C. (2017). Kähler-Einstein metrics on ℚ -smoothable Fano varieties, their moduli and some applications. In Springer INdAM Series (Vol. 21, pp. 211-229). Springer. Springer INdAM Series Vol. 21 https://doi.org/10.1007/978-3-319-62914-8_16 Swann, A. F. & Dancer, A. (2017). Hypertoric Manifolds and HyperKähler Moment Maps. In S. Chiossi, A. Fino, E. Musso, F. Podestà & L. Vezzoni (Eds.), Special Metrics and Group Actions in Geometry (pp. 107-127). Springer. Springer INdAM Series Vol. 23 https://doi.org/10.1007/978-3-319-67519-0_5 Jensen, A. N., Verschelde, J. & Sommars, J. (2017). Computing Tropical Prevarieties in Parallel. In Proceedings of the International Workshop on Parallel Symbolic Computation Association for Computing Machinery. PASCO 2017 https://doi.org/10.1145/3115936.3115945 Spotti, C. (2017). Kahler-Einstein metrics on Q-smoothable Fano varieties, their moduli and some applications. In Complex and Symplectic Geometry Springer. Springer INdAM Series Spotti, C. & Sun, S. (2017). Explicit Gromov-Hausdorff compactifications of moduli spaces of Kähler-Einstein Fano manifolds. ArXiv. https://arxiv.org/pdf/1705.00377.pdf Ebanks, B. & Stetkær, H. (2018). Extensions of the Sine Addition Formula on Monoids. Results in Mathematics, 73(3), [119]. https://doi.org/10.1007/s00025-018-0880-z Engelmann, M., Møller, J. S. & Rasmussen, M. G. (2018). Local Spectral Deformation. Annales de l'Institut Fourier, 68(2), 767-804. https://doi.org/10.5802/aif.3177 Fournais, S., Kachmar, A. & Pan, X. B. (2018). Existence of surface smectic states of liquid crystals. Journal of Functional Analysis, 274(3), 900-958. https://doi.org/10.1016/j.jfa.2017.10.001 Fournais, S., Lewin, M. & Solovej, J. P. (2018). The semi-classical limit of large fermionic systems. Calculus of Variations and Partial Differential Equations, 57(4), [105]. https://doi.org/10.1007/s00526-018-1374-2 Frahm, J. & Su, F. (2018). Upper bounds for geodesic periods over rank one locally symmetric spaces. Forum Mathematicum, 30(5), 1065-1077. https://arxiv.org/pdf/1709.00935.pdf Garde, H. (2018). Comparison of linear and non-linear monotonicity-based shape reconstruction using exact matrix characterizations. Inverse Problems in Science and Engineering, 26(1), 33-50. Harrap, S., Hussain, M. & Kristensen, S. (2018). A problem in non-linear Diophantine approximation. Nonlinearity, 31(5), 1734-1756. https://doi.org/10.1088/1361-6544/aaa498 Nishiyama, K. & Ørsted, B. (2018). Real double flag varieties for the symplectic group. Journal of Functional Analysis, 274(2), 573-604. https://doi.org/10.1016/j.jfa.2017.07.003 Thomsen, K. & Christensen, J. (2018). Equilibrium and ground states from Cayley graphs. Journal of Functional Analysis. https://doi.org/10.1016/j.jfa.2017.06.019 du Plessis, A. & Wall, C. T. C. (2018). The moduli space of binary quintics. European Journal of Mathematics, 4(1), 423-436. https://doi.org/10.1007/s40879-017-0187-8 Matte, O. & Møller, J. S. (2018). Feynman-Kac formulas for the ultra-violet renormalized Nelson model. Société Mathématique de France. 
Asterisque https://smf.emath.fr/publications/formules-de-feynmann-kac-pour-le-modele-de-nelson-ultra-violet-renormalisee Andersen, J. E., Jantzen, J. C. & Pei, D. (2018). Identities for Poincaré polynomials via Kostant cascades. arXiv.org. https://arxiv.org/pdf/1810.05615.pdf Arkhipov, S. & Kanstrup, T. (2018). Colored DG-operads and homotopy adjunction for DG-categories. arXiv Arkhipov, S. & Ørsted, S. (2018). Homotopy (co)limits via homotopy (co)ends in general combinatorial model categories. arXiv Arkhipov, S. & Ørsted, S. (2018). Homotopy limits in the category of dg-categories in terms of $\mathrm{A}_{\infty}$-comodules. arXiv Dam, T. N. & Møller, J. S. (2018). Spin-Boson type models analysed through symmetries. https://arxiv.org/pdf/1803.05812.pdf Gallardo, P., Martinez Garcia, J. & Spotti, C. (2018). Applications of the moduli continuity method to log K-stable pairs. ArXiv. https://arxiv.org/pdf/1811.00088.pdf Madsen, T. B. & Swann, A. F. (2018). Toric geometry of G2-manifolds. arXiv.org. https://arxiv.org/abs/1803.06646 Madsen, T. B. & Swann, A. F. (2018). Toric geometry of Spin(7)-manifolds. arXiv.org. https://arxiv.org/abs/1810.12962 Andersen, S. B. & Kristensen, S. (2019). Arithmetic properties of series of reciprocals of algebraic integers. Monatshefte fur Mathematik, 190(4), 641-656. https://doi.org/10.1007/s00605-019-01326-1 Andersen, S. B. & Kristensen, S. (2019). Irrationality and transcendence of continued fractions with algebraic integers. Publicationes mathematicae-Debrecen, 95(3-4), 469-476. https://doi.org/10.5486/PMD.2019.8575 Bley, G. A. & Fournais, S. (2019). Hardy–Lieb–Thirring Inequalities for Fractional Pauli Operators. Communications in Mathematical Physics, 365(2), 651-683. https://doi.org/10.1007/s00220-018-3204-y Bérczi, G. (2019). Towards the Green–Griffiths–Lang conjecture via equivariant localisation. Proceedings of the London Mathematical Society, 118(5), 1057-1083. https://doi.org/10.1112/plms.12197 Cornean, H., Garde, H., Støttrup, B. & Sørensen, K. S. (2019). Magnetic pseudodifferential operators represented as generalized Hofstadter-like matrices. Journal of Pseudo-Differential Operators and Applications, 10(2), 307-336. https://doi.org/10.1007/s11868-018-0271-y Duff, T., Hill, C., Jensen, A., Lee, K., Leykin, A. & Sommars, J. (2019). Solving polynomial systems via homotopy continuation and monodromy. IMA Journal of Numerical Analysis, 39(3), 1421-1446. https://doi.org/10.1093/imanum/dry017 Fischmann, M., Ørsted, B. & Somberg, P. (2019). Bernstein-Sato identities and conformal symmetry breaking operators. Journal of Functional Analysis, 277(11), [108219]. https://doi.org/10.1016/j.jfa.2019.04.002
How is the energy/eigenvalue gap plot drawn for adiabatic quantum computation? I was going through arXiv:quant-ph/0001106v1, the first paper by Farhi on adiabatic quantum computation. Equation 2.24 says, $$\tilde{H}(s) = (1-s)H_B + sH_P$$ which means the adiabatic evolution starts from the ground state of $H_B$ and slowly evolves until it arrives at the ground state of $H_P$. In section 3.1, the one qubit example has the adiabatic Hamiltonian as $$\tilde{H}(s) = \begin{pmatrix} s & \epsilon (1-s) \\ \epsilon (1-s) & 1-s \end{pmatrix}$$ I don't see how the plot of Figure 1 is drawn. In figure 1, Farhi plotted eigenvalues of the Hamiltonian for s while the range for s was 0 to 1. The Hamiltonian is supposed to evolve according to the Schrodinger equation (eq 2.1), $$i \frac{d}{dt} |\Psi (t) \rangle = H(t) |\Psi (t) \rangle$$ Was this evolution solved to draw the plot? Or did Farhi derive the formula for eigenvalues in terms of s using just matrix math and plotted accordingly? (Asked by Omar Shehab; tags: quantum-mechanics, research-level, quantum-information, adiabatic.)
Answer (Joshua Barr): If the parameter $s$ is varied adiabatically there should be no need to solve the time-dependent Schrodinger equation. The adiabatic theorem implies the instantaneous eigenvalues are always those of the parameterized Hamiltonian, provided its spectrum is separated. Thus your final statement and hadsed's conclusion are correct: the figure just shows the eigenvalues of the matrix as a function of $s$.
Comment (Omar Shehab, Mar 26 '13): Yes, I was able to draw the plot.
Answer (hadsed): The latter. The reason is that it doesn't make a lot of sense to do the eigendecomposition of an approximated Hamiltonian, which is what you're doing when solving for the dynamics (your operator probably won't be Hermitian so your energies won't even be real). Doing the evolution gives you the final state probabilities, but it's not very good for analyzing the energy gap precisely. So you just plug in the right $s$ values and find the eigenvalues of that $H(s)$, and plotting that gives you the eigenspectrum in Fig. 1.
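Following the answers above, a minimal script along these lines (an illustration, not from the thread; $\epsilon = 0.1$ is an arbitrary choice) reproduces the kind of plot in Farhi's Figure 1 by diagonalizing $\tilde{H}(s)$ pointwise in $s$, with no time-dependent Schrödinger evolution involved:

```python
import numpy as np
import matplotlib.pyplot as plt

eps = 0.1                        # off-diagonal coupling, chosen arbitrarily
s = np.linspace(0.0, 1.0, 201)

# Closed form for a 2x2 symmetric matrix [[s, b], [b, 1-s]] with b = eps*(1-s):
# eigenvalues are 1/2 +/- sqrt((s - 1/2)**2 + b**2).
b = eps * (1.0 - s)
half_gap = np.sqrt((s - 0.5) ** 2 + b ** 2)
E_minus, E_plus = 0.5 - half_gap, 0.5 + half_gap

plt.plot(s, E_minus, label="ground state")
plt.plot(s, E_plus, label="excited state")
plt.xlabel("s")
plt.ylabel("eigenvalues of H(s)")
plt.legend()
plt.show()

print("minimum gap:", (E_plus - E_minus).min())   # smallest near s = 0.5 for small eps
```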
AV-Net: deep learning for fully automated artery-vein classification in optical coherence tomography angiography
Minhaj Alam,1,3 David Le,1,3 Taeyoon Son,1 Jennifer I. Lim,1,2 and Xincheng Yao1,2,*
1Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA 2Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA 3These authors contributed equally to this work *Corresponding author: [email protected]
https://doi.org/10.1364/BOE.399514
Minhaj Alam, David Le, Taeyoon Son, Jennifer I. Lim, and Xincheng Yao, "AV-Net: deep learning for fully automated artery-vein classification in optical coherence tomography angiography," Biomed. Opt. Express 11, 5249-5257 (2020)
Original Manuscript: June 4, 2020; Revised Manuscript: August 18, 2020; Manuscript Accepted: August 18, 2020
This study is to demonstrate deep learning for automated artery-vein (AV) classification in optical coherence tomography angiography (OCTA). The AV-Net, a fully convolutional network (FCN) based on modified U-shaped CNN architecture, incorporates enface OCT and OCTA to differentiate arteries and veins. For the multi-modal training process, the enface OCT works as a near infrared fundus image to provide vessel intensity profiles, and the OCTA contains blood flow strength and vessel geometry features. A transfer learning process is also integrated to compensate for the limitation of available dataset size of OCTA, which is a relatively new imaging modality. By providing an average accuracy of 86.75%, the AV-Net promises a fully automated platform to foster clinical deployment of differential AV analysis in OCTA. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Early disease diagnosis and effective treatment assessment are essential to prevent vision loss. Differential artery-vein (AV) analysis can provide valuable information for disease detection and classification. It has been demonstrated to be valuable for evaluating diabetes, hypertension, stroke and cardiovascular diseases [1–3] along with common retinopathies [4,5]. Several clinical studies have evaluated AV abnormalities in different diseases. However, clinical deployment of the AV analysis for routine management of eye diseases is challenging. Most of the clinical studies relied on manual or semi-automated approaches to identify arteries and veins, which is ineffective in a clinical setting. Therefore, a fully automated platform for AV classification is important.
To date, automated AV classification has been primarily used in color fundus images acquired with traditional fundus photography [6–15], which provide limited resolution and sensitivity to reveal microvascular abnormalities associated with eye conditions [16]. Microvascular anomalies that occur at early stages of eye diseases, cannot be reliably identified in traditional fundus photography [17–19]. An alternative to traditional color fundus imaging is optical coherence tomography (OCT) and OCT angiography (OCTA). OCT and OCTA can provide depth-resolved visualization of individual retinal layers with capillary level resolution. Especially, OCTA is sensitive to identify subtle microvascular changes, and thus has been extensively explored for quantitative analysis and objective classification of retinal diseases [20–24]. Using quantitative feature analysis, we have recently demonstrated the potential of differentiating arteries and veins in OCTA [4,5,25,26]. Differential AV analysis showed improved OCTA performance to identify abnormal changes in diabetic retinopathy (DR) and sickle cell retinopathy (SCR) eyes [4,5,26]. However, clinical deployment of the AV analysis in OCTA requires an automated, simple, but robust method. A potential solution is to employ deep machine learning i.e., convolutional neural networks (CNNs) for AV classification automatically. A fully convolutional network (FCN) can be trained with a ground truth dataset for a specific task and can be implemented on validation or testing dataset. A fully automated method is a key factor for clinical deployment of artificial intelligence (AI) based screening, diagnosis, and treatment evaluation. In this study, we develop and validate AV-Net, an FCN based on a modified U-shaped CNN architecture, for deep learning AV classification in OCTA. A multi-modal training process involves both enface OCT and OCTA, which provide intensity and geometric profiles, respectively, for AV classification. Transfer learning is employed to compensate for the limitation of available dataset size of OCTA which is a relatively new imaging modality. By incorporating transfer learning and multi-modal training approaches, fully automated AV classification is demonstrated. The AV-Net performance is validated with manual AV ground truth maps using accuracy and intersection over union (IOU) metrics. This study is in adherence to the ethical standards present in the Declaration of Helsinki and was approved by the institutional review board of the University of Illinois at Chicago (UIC). 2.1 Data acquisition Spectral domain (SD) -enface OCT and OCTA data were acquired using an Angiovue SD-OCT device (Optovue, Fremont, CA, USA). The OCT device had a 70,000 Hz A-scan rate, ∼5 µm axial and ∼15 µm lateral resolutions. All enface OCT/OCTA images used for this study were 6 mm × 6 mm scans; only superficial OCTA images were used. The enface OCT was generated as a maximum intensity 3D projection of the retinal slabs from internal limiting membrane to outer plexiform layer. After image reconstruction, both enface OCT and OCTA were exported from Revue software interface (Optovue) for further processing. 2.2 Model implementation In this paper, we present for the first time 'AV-Net', an FCN based on a modified U-Net architecture. Recent studies using UNet have demonstrated AV classification using fundus photographs. To the best of our knowledge, our study is the first to demonstrate AV classification in OCTA. 
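As a side note on the data preparation in Section 2.1, the en face OCT is a maximum intensity projection of a retinal slab of the OCT volume. A minimal sketch of that operation is shown below; the array shape, the slab indices and the normalization are hypothetical placeholders, not values from the Angiovue export.

```python
import numpy as np

# Hypothetical OCT volume: (depth, height, width) in arbitrary intensity units.
oct_volume = np.random.rand(640, 304, 304).astype(np.float32)

# Hypothetical slab boundaries (indices of the segmented ILM and OPL surfaces,
# simplified here to flat planes).
ilm_index, opl_index = 120, 280

# En face OCT as a maximum intensity projection of the retinal slab.
enface_oct = oct_volume[ilm_index:opl_index].max(axis=0)

# Scale to 8-bit for visualization or export.
enface_8bit = (255 * (enface_oct - enface_oct.min())
               / (np.ptp(enface_oct) + 1e-8)).astype(np.uint8)
print(enface_8bit.shape)   # (304, 304)
```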
The input of the AV-Net is a 2-channel system to combine grayscale enface OCT and OCTA. Enface OCT is a near infrared (NIR) image, which is equivalent to a fundus image, to provide vessel intensity profiles. On the other hand, OCTA contains the information of blood flow strength and vessel geometry features. The output of AV-Net is an RGB (red-green-blue) image. The R and B channels correspond to artery and vein systems, respectively, and the G channel represents the background. The overall design of the AV-Net follows an encoder-decoder architecture (Fig. 1(a)). The encoder of the AV-Net is a combination of dense, convolutional and transition blocks (Fig. 1), making the network deeper compared to UNet, which incorporated a shallower 'VGG16' architecture. The encoder, also known as the contracting path, extracts the context of the image. The decoder, also termed the expanding path, identifies image features. The addition of bridging between the encoder and decoder enables precise localization and mapping of feature maps to produce the output image [27]. The convolution blocks are similar to the identity block in ResNet, except for the use of concatenation instead of summation operations [28,29]. The dense block is composed of convolution blocks, with each subsequent block connected to the previous blocks by skip-connections. Fig. 1. Network architecture for AV-Net, (a) overview of the blocks in AV-Net architecture, (b) the individual blocks that comprise AV-Net. In this figure, Conv stands for convolution operations, AP stands for Average Pooling operation. Each transition block has two outputs: Output A is the output of the AP operation, and Output B is the output of the Conv operation. The skip-connections from each transition block are Output B. In the decoder block, Input A is the output of the preceding layer, whereas Output B is the output of the appropriately sized transition block. Skip-connections are used to alleviate the vanishing-gradient problem in deep learning [30]. Following each dense block, a transition block is used to reduce the dimensions of the output feature maps. In the decoder, we employ upsampling operations and use decoder blocks. The decoder block concatenates the outputs of the upsampling operation and the output of the convolution from the appropriate transition block. The feature maps are then convolved to enable precise localization of image features. In the AV-Net, all convolution operations are followed by batch normalization and a ReLU activation function, whereas the final convolutional layer is followed by a softmax activation function. As a relatively new modality, available OCTA dataset size is limited. For deep learning applications, a small dataset size may lead to overfitting. To overcome this limitation, we employ transfer learning using the ImageNet dataset. While the ImageNet dataset (normal everyday images) and OCTA images are different, one of the advantages of CNNs is that they learn features in a bottom-up hierarchical structure. The earlier layers of the CNN learn simple features such as lines, edges, and color information, and complex features are learned in deeper layers of the network. By employing transfer learning in the training procedure, the network can transfer these simple features to learn complex features associated with arteries and veins, such as tortuosity, branching, and intensity-based information. Table 1. Comparative classification performance of AV-Net.
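To make the block structure concrete, the following is a deliberately miniature sketch in Keras (the framework the authors report using). It is not the published AV-Net: the layer counts, filter numbers and the input size are placeholders, and the ImageNet-pretrained encoder is omitted. It only illustrates the ingredients described above, i.e. a 2-channel OCT/OCTA input, dense-style blocks built from concatenations, transition blocks that downsample, and a decoder that concatenates skip connections before a 3-class softmax output.

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Conv -> BatchNorm -> ReLU, with the input concatenated back (dense-style)."""
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    return layers.Concatenate()([x, y])

def build_toy_av_net(size=320):
    inp = layers.Input((size, size, 2))      # channel 1: en face OCT, channel 2: OCTA

    # Encoder: dense-style blocks, each followed by a transition (1x1 conv + pooling).
    e1 = conv_block(conv_block(inp, 16), 16)
    t1 = layers.AveragePooling2D()(layers.Conv2D(32, 1, padding="same")(e1))
    e2 = conv_block(conv_block(t1, 32), 32)
    t2 = layers.AveragePooling2D()(layers.Conv2D(64, 1, padding="same")(e2))

    # Bottleneck.
    b = conv_block(t2, 64)

    # Decoder: upsample and concatenate skip connections from the encoder.
    d2 = layers.UpSampling2D()(b)
    d2 = conv_block(layers.Concatenate()([d2, layers.Conv2D(32, 1, padding="same")(e2)]), 32)
    d1 = layers.UpSampling2D()(d2)
    d1 = conv_block(layers.Concatenate()([d1, layers.Conv2D(16, 1, padding="same")(e1)]), 16)

    # Three-class softmax map: artery (R), background (G), vein (B).
    out = layers.Conv2D(3, 1, activation="softmax")(d1)
    return Model(inp, out)

model = build_toy_av_net()
model.summary()
```

In the published design, the encoder would instead be a much deeper, ImageNet-pretrained network whose intermediate outputs feed the decoder, as described in the next paragraph.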
In this study, the encoder weights were pre-trained on the ImageNet dataset. Our pre-trained encoder network contains a fully connected layer with 1000 neurons followed by a softmax activation function. The pre-training of the encoder concluded when the network achieved a ∼75% classification accuracy on the ImageNet validation dataset. To employ the pre-trained encoder in an FCN, the fully connected layer was removed. The intermediate outputs of the encoder network are subsequently connected to the decoder network. This procedure is initialized with random weights using the Glorot uniform distribution, with the intermediate encoder outputs serving as corresponding inputs to the appropriate layers (Fig. 1). Our newly constructed FCN, AV-Net, is then trained on OCTA images for the task of image segmentation using transfer learning. This method of employing transfer learning for image segmentation is repeated in the comparative study with the state-of-the-art network, i.e., UNet. The FCN training procedure utilized the Adam optimizer with a learning rate of 0.0001, a dice loss function, and a minibatch size of 8. Moreover, regularization procedures including data augmentation and cross-validation were used to prevent overfitting. Training was performed on a Windows 10 computer using an NVIDIA Quadro RTX 5000 Graphics Processing Unit (GPU). The FCN was trained and evaluated on Python (v3.7.1) using Keras (2.2.4) with a Tensorflow (v1.31.1) backend. In this study, the OCTA dataset comprised 50 images. To evaluate our network, a 5-fold cross-validation method, with each fold following an 80/20 train/test split procedure, was employed. Due to a limited dataset, data augmentation, i.e., random flips, rotation, zooming, and image shifting, was implemented during the training process. Therefore, in each fold the network was trained with 3,000 images, and testing evaluation was performed on the 8 original images of each fold. Average accuracy, intersection-over-union (IOU) and F1-score were used as evaluation metrics for AV classification, by comparing with manually labelled ground truths for each cross-validation fold. For evaluation of the average accuracy of artery identification, we conducted one vs all pixel-wise classification (artery pixels vs vein + background pixels), and measured the average accuracy from prediction performance of both labels. Similarly, for evaluating vein accuracy, the one vs all classification labels were vein vs artery + background pixels. The average performance accuracy is the mean of artery and vein accuracies (Table 1). Both IOU and F1 score are standard metrics for segmentation and pixel-wise classification tasks. IOU measures the similarity between a predicted region (AV) and the ground truth region in an image and can be defined as the size of intersection, divided by the union of two regions [31]. IOU was measured separately for artery and vein, by comparing predicted pixels for each category to the pixels from ground truth. The average was calculated by taking the mean of artery and vein IOU. F1 score is also a robust metric for pixel-wise classification and can be defined as a harmonic mean of precision and recall [32]. 2.3 Loss functions In this study, the AV-Net was trained using a compound loss function derived from dice loss [33] and focal loss [34], defined as Eq. (1): (1)$$L = {L_{dice}} + {L_{focal}}$$ where ${L_{dice}}$ is the dice loss (Eq. (2)) and ${L_{focal}}$ is the focal loss (Eq. (3)).
Recent studies have found the combination of multiple losses improves image segmentation tasks with class imbalances [35,36]. Dice score measures the degree of overlap between the prediction and ground truth and is therefore suited for image segmentation (pixel-wise classification) tasks. The dice loss can be written as (2)$${L_{dice}} = 1 - \frac{{2\mathop \sum \nolimits_{x \in \mathrm{\Omega}} {p_l}(x ){g_l}(x )}}{{\mathop \sum \nolimits_{x \in \mathrm{\Omega}} p_l^2(x )+ \mathop \sum \nolimits_{x \in \mathrm{\Omega}} g_l^2(x )}}$$ The focal loss function is used to help mitigate the imbalance between foreground and background classes during training. The focal loss is derived from the cross entropy (CE) loss and introduces a focusing parameter $\gamma $ that helps increase the importance of correcting misclassified examples [34]. ${L_{focal}}$ can be written as (3)$${L_{focal}} ={-} \mathop \sum \nolimits_{x \in \mathrm{\Omega}} ({\alpha {{({1 - {p_l}(x )} )}^\gamma }{g_l}(x )\log {p_l}(x )+ ({1 - \alpha } )p_l^\gamma (x )({1 - {g_l}(x )} )\log ({1 - {p_l}(x )} )} )$$ Where the weighting factor $\alpha \in [{0,1} ]$, focusing parameter $\gamma \ge 0$, ${g_l}(x )$ and ${p_l}(x )$ are label and estimated probability vectors, respectively. In our experimental designs, $\alpha = 0.25$ and $\gamma = 2$ works best in practice [34]. 3.1 Patient demographics Our dataset comprised of images from 50 patients (20 control eyes and 30 DR eyes). Subjects and diabetic patients with and without DR were recruited from the UIC retina clinic. The patients present in this study are representative of a university population of diabetic patients who require clinical diagnosis and management of DR. Two board-certified retina specialists classified the patients based on the severity of DR according to the Early Treatment Diabetic Retinopathy Study (ETDRS) staging system. All patients underwent complete anterior and dilated posterior segment examination. All control OCTA images were obtained from healthy volunteers that provided informed consent for OCT/OCTA imaging. All subjects underwent OCT and OCTA imaging of both eyes (OD and OS). The images used in this study did not include eyes with other ocular diseases or any other pathological features in their retina such as epiretinal membranes and macular edema. Additional exclusion criteria included eyes with prior history of intravitreal injections, vitreoretinal surgery or significant (greater than a typical blot hemorrhage) macular hemorrhages. Validation dataset comprised of healthy volunteers that provided informed consent for OCT/OCTA imaging. 3.2 Classification evaluation The AV-Net achieved an average accuracy of 86.75% (86.71% and 86.80% respectively for artery and vein) on the test data and a mean IOU was 70.72%, and F1-score of 82.81%. The accuracy metric considers segmentation of artery, vein and background pixels, and takes an average of the three parameters for final accuracy value. We observed that, the classifier is extremely robust for background prediction, i.e., it is very good for segmenting the blood vessels (average accuracy ∼97%). To demonstrate a more robust measure for AV classification performance, we utilize IOU which compares the AV-Net generated AV map pixel to pixel with the ground truth. A comparative analysis of AV-Net performance is summarized in Table 1. The optimal AV-Net was trained with pre-trained 'imagenet' weights and a training loss function that integrated both Dice and Focal loss. 
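A sketch of such a combined loss in Keras/TensorFlow is given below (illustrative only: it follows Eqs. (1)–(3) with $\alpha = 0.25$ and $\gamma = 2$, but the reduction over pixels and the small smoothing constants are assumptions, since those details are not specified in the text).

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-7):
    """1 - Dice coefficient, Eq. (2), averaged over classes and batch."""
    axes = (1, 2)  # sum over spatial dimensions; tensors are (batch, H, W, classes)
    num = 2.0 * tf.reduce_sum(y_true * y_pred, axis=axes)
    den = tf.reduce_sum(y_true ** 2 + y_pred ** 2, axis=axes) + eps
    return 1.0 - tf.reduce_mean(num / den)

def focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Focal loss, Eq. (3), for one-hot labels and softmax probabilities."""
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    pos = alpha * y_true * tf.pow(1.0 - y_pred, gamma) * tf.math.log(y_pred)
    neg = (1.0 - alpha) * (1.0 - y_true) * tf.pow(y_pred, gamma) * tf.math.log(1.0 - y_pred)
    return -tf.reduce_mean(pos + neg)

def dice_plus_focal(y_true, y_pred):
    """Compound loss of Eq. (1)."""
    return dice_loss(y_true, y_pred) + focal_loss(y_true, y_pred)

# Usage sketch:
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=dice_plus_focal)
```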
In Table 1, we demonstrated AV classification performance using UNet ('Imagenet' weights, Dice + Focal loss); AV-Net with Dice loss; AV-Net with Focal loss; AV-Net with both Dice and Focal loss but without transfer learning 'imagenet' weights. For comparative analysis, the results of AV-Net implementation with and without transfer learning are reported. It is observed that transfer learning improves the performance of AV-net compared to random weight initialization. Despite the high dissimilarity between ImageNet and OCTA dataset, there may be certain features that are transferable, such as simple features, morphology or intensity-based features, in early layers of the encoder. Additionally, the combination of multiple losses has been demonstrated to improve the performance for image segmentation tasks. The most used loss function for segmentation tasks is the dice loss function. On the other hand, recent studies have revealed that the focal loss function mitigates class imbalances between the foreground and background. Therefore, our hypothesis is that the combination of both dice and focal loss can improve the performance of AV-Net. To test our hypothesis, we performed a comparative study by training AV-Net with dice and focal loss separately. The results from training with the dice and focal loss function separately, revealed that individually each loss function had adequate performance, with the focal loss function having the worst performance. However, when combining both the dice and focal losses improved the performance of AV-Net. We further compared the performance of AV-Net with the state of the art UNet model. For comparative analysis, both architectures were trained using transfer learning and the combined dice and focal loss functions. As shown in Fig. 2, the AV-Net demonstrates improved performance compared to UNet. Interestingly, it is observed that the UNet showed slightly better accuracy values compared to AV-Net (88.25% vs 86.71%). Since UNet is a comparatively shallower network, it is less prone to overfitting, providing a better semantic segmentation performance. Since accuracy metric considers prediction of all pixels (i.e., artery, vein, and background), the overall accuracy is increased. However, this does not necessarily mean better performance of identifying artery and vein pixels. That is why similarity metrics, i.e., F1 and IOU scores of UNet are comparatively low. As a more complex network, AV-Net is much better for identifying artery and vein pixels, as represented by the F1 and IOU scores which reflect the comparison of artery and vein pixels with the ground truth. While AV-Net was inspired by the UNet architecture, the incorporation of short and long skip connections and the increased depth of the network improved the performance. Fig. 2. Examples of control and DR (top and bottom, respectively) (a) input OCTA, (b) enface OCT, (c) the ground truth, (d) UNet predicted AV-maps, and (e) AV-Net predicted AV-maps. To check the AV classification performance on diseased eyes, we further tested the AV-Net performance on only the OCT/OCTA data from DR cohort. The accuracies of predicting arteries and veins were 85.94% and 85.85%, respectively. The mean IOU scores for artery and vein classification were 68.28% and 68.65%, respectively. The mean F1 scores for artery and vein classification were 81.12% and 81.4%, respectively. In summary, we have demonstrated the AV-Net for fully automated AV classification in OCTA. 
The AV-Net achieved an average accuracy of 86.75% (86.71% and 86.80% respectively for artery and vein) on the test data and a mean IOU was 70.72%, and F1-score of 82.81%. Differential AV analysis is known to be valuable for quantifying subtle microvascular changes and distortions due to retinopathies. Incorporating AV classification capability into the clinical imaging devices would enhance the diagnostic ability and quantitative power of OCTA. Previous studies exploring the use of deep learning for AV classification have been primarily focused on traditional fundus photography. Xu et al. adapted a UNet for AV classification using publicly available fundus datasets, such as DRIVE and INSPIRE, and achieved high accuracy [37]. Similarly, Meyer et al. employed deep learning using a patch-wise prediction strategy and included regularization techniques such as dropout and batch normalization [38]. To our knowledge, this is the first study to employ deep learning for AV classification in OCTA. In this study, we employed an FCN, based on the UNet architectures. In a previous study, Ronneberger et. al. [27] have shown the use of long skip connections, that can help the network localize high resolution features, thereby a more precise output. In AV-Net, we employ dense blocks that utilize short skip connections. These short skip connections encourage the network to reuse features, making the model more compact. In comparison to other networks such as VGG16, AV-Net is a 5 times deeper network (having more convolutional layers) but the number of parameters is significantly smaller (approximately 17 times less). Having deeper network enables more learning capability, whereas smaller number of parameters means less computational burden. By leveraging both long and short skip connections, we are able to train our AV-Net for robust AV classification. From comparative analysis shown in Table 1, AV-Net performance is improved compared to AV classification performance using a standard UNet architecture. Furthermore, a comparison of AV-Net trained with 'Dice' loss, 'Focal' loss and 'Dice + Focal' loss showed that incorporating both Dice and Focal loss improved segmentation since Dice loss compares similarity between AV map and ground truth, and Focal loss compensates for class imbalance between AV pixels and background pixels. The input of the AV-Net consists of both enface OCT and OCTA. While OCTA does provide highly detailed vasculature maps, the arteries and veins are indistinguishable from each other by OCTA information itself. On the other hand, OCT retains reflectance information to differentiate artery and vein [25]. By combining both images, the FCN can learn the intensity information from the OCT and the highly detailed vasculature from the OCTA. Employing both OCT and OCTA is also convenient since they are from same OCT data volume and OCTA is reconstructed based on OCT processing. Therefore, using enface OCT and OCTA as 2-channel input of the AV-Net requires no pre-processing and image registration. The results of the cross-validation study revealed an adequate IOU and F1 score. Qualitatively AV-Net has good vessel segmentation and AV classification performance. However, the predicted AV maps do appear more dilated compared to the ground truths. There are notable areas of misclassification, i.e., at vessel cross points. Future improvements to AV-Net could include developing a dataset with ground truth for vessel crossings. 
Additional validation with enlarged datasets from different OCTA devices will be required to pursue clinical deployments of the AV-Net for differential AV analysis. The AV-Net has been demonstrated for fully automated AV classification in OCTA. The AV-Net is based on one FCN with modified U-shaped CNN architecture. A multi-modal training process was involved to include both enface OCT and OCTA for robust AV classification, and a transfer learning procedure was integrated to compensate for the limited size of OCTA dataset. By incorporating transfer learning and multi-modal training, the AV-Net achieved an accuracy of 86.75% for robust AV classification. National Eye Institute (P30 EY001792, R01 EY023522, R01 EY030101, R01EY029673, R01EY030842); Research to Prevent Blindness; Richard and Loan Hill Foundation (Endowment); Illinois society to prevent blindness (ISPB_Minhaj Alam). No competing interest exists for any author. 1. Y. Hatanaka, T. Nakagawa, A. Aoyama, X. Zhou, T. Hara, H. Fujita, M. Kakogawa, Y. Hayashi, Y. Mizukusa, and A. Fujita, "Automated detection algorithm for arteriolar narrowing on fundus images," in 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, (IEEE, 2006), 286–289. 2. M. K. Ikram, J. A. Janssen, A. M. Roos, I. Rietveld, J. C. Witteman, M. M. Breteler, A. Hofman, C. M. Van Duijn, and P. T. de Jong, "Retinal vessel diameters and risk of impaired fasting glucose or diabetes: the Rotterdam study," Diabetes 55(2), 506–510 (2006). [CrossRef] 3. M. Alam, T. Son, D. Toslak, J. Lim, X. Yao, and T. V. S. Tech, "Combining optical density ratio and blood vessel tracking for automated artery-vein classification and quantitative analysis in color fundus images," Trans. Vis. Sci. Tech. 7(2), 23 (2018). [CrossRef] 4. M. Alam, J. I. Lim, D. Toslak, and X. Yao, "Differential Artery–Vein Analysis Improves the Performance of OCTA Staging of Sickle Cell Retinopathy," Trans. Vis. Sci. Tech. 8(2), 3 (2019). [CrossRef] 5. M. Alam, D. Toslak, J. I. Lim, and X. Yao, "Color fundus image guided artery-vein differentiation in optical coherence tomography angiography," Invest. Ophthalmol. Visual Sci. 59(12), 4953–4962 (2018). [CrossRef] 6. W. Aguilar, M. E. Martinez-Perez, Y. Frauel, F. Escolano, M. A. Lozano, and A. Espinosa-Romero, "Graph-based methods for retinal mosaicing and vascular characterization," Lecture Notes in Computer Science 4538, 25–36 (2007). [CrossRef] 7. R. Chrástek, M. Wolf, K. Donath, H. Niemann, G. Michelson, and M. V. Appl, "Automated Calculation of Retinal Arteriovenous Ratio for Detection and Monitoring of Cerebrovascular Disease Based on Assessment of Morphological Changes of Retinal Vascular System," in MVA, 2002), 240–243. 8. E. Grisan and A. Ruggeri, "A divide et impera strategy for automatic classification of retinal vessels into arteries and veins," in Engineering in medicine and biology society, 2003. Proceedings of the 25th annual international conference of the IEEE, (IEEE, 2003), 890–893. 9. H. Jelinek, C. Depardieu, C. Lucas, D. Cornforth, W. Huang, and M. Cree, "Towards vessel characterization in the vicinity of the optic disc in digital retinal images," in Image Vis Comput Conf, 2005), 2–7. 10. H. Li, W. Hsu, M.-L. Lee, and H. Wang, "A piecewise Gaussian model for profiling and differentiating retinal vessels," in Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on, (IEEE, 2003), I–1069. 11. M. Niemeijer, B. van Ginneken, and M. D. 
Abràmoff, "Automatic classification of retinal vessels into arteries and veins," Med. Imaging 7260, 72601F (2009). [CrossRef] 12. K. Rothaus, X. Jiang, and P. Rhiem, "Separation of the retinal vascular graph in arteries and veins based upon structural knowledge," Image Vis. Comput. 27(7), 864–875 (2009). [CrossRef] 13. A. Simó and E. de Ves, "Segmentation of macular fluorescein angiographies. A statistical approach," Pattern Recognit. 34(4), 795–809 (2001). [CrossRef] 14. S. Vázquez, N. Barreira, M. Penedo, M. Penas, and A. Pose-Reino, "Automatic classification of retinal vessels into arteries and veins," in 7th international conference biomedical engineering (BioMED 2010), 2010), 230–236. 15. S. Vázquez, B. Cancela, N. Barreira, M. G. Penedo, M. Rodríguez-Blanco, M. P. Seijo, G. C. de Tuero, M. A. Barceló, and M. Saez, "Improving retinal artery and vein classification by means of a minimal path approach," Mach. Vis. Appl. 24(5), 919–930 (2013). [CrossRef] 16. S. Zahid, R. Dolz-Marco, K. B. Freund, C. Balaratnasingam, K. Dansingani, F. Gilani, N. Mehta, E. Young, M. R. Klifto, B. Chae, L. A. Yannuzzi, and J. A. Young, "Fractal Dimensional Analysis of Optical Coherence Tomography Angiography in Eyes With Diabetic Retinopathy," Invest. Ophthalmol. Visual Sci. 57(11), 4940–4947 (2016). [CrossRef] 17. B. I. Gramatikov, "Modern technologies for retinal scanning and imaging: an introduction for the biomedical engineer," BioMed. Eng. OnLine 13(1), 52 (2014). [CrossRef] 18. K. R. Mendis, C. Balaratnasingam, P. Yu, C. J. Barry, I. L. McAllister, S. J. Cringle, and D.-Y. Yu, "Correlation of histologic and clinical images to determine the diagnostic value of fluorescein angiography for studying retinal capillary detail," Invest. Ophthalmol. Visual Sci. 51(11), 5864–5869 (2010). [CrossRef] 19. S.-C. Cheng and Y.-M. Huang, "A novel approach to diagnose diabetes based on the fractal characteristics of retinal images," IEEE Trans. Inform. Technol. Biomed. 7(3), 163–170 (2003). [CrossRef] 20. A. Y. Kim, Z. Chu, A. Shahidzadeh, R. K. Wang, C. A. Puliafito, and A. H. Kashani, "Quantifying Microvascular Density and Morphology in Diabetic Retinopathy Using Spectral-Domain Optical Coherence Tomography Angiography," Invest. Ophthalmol. Visual Sci. 57(9), OCT362 (2016). [CrossRef] 21. N. V. Palejwala, Y. Jia, S. S. Gao, L. Liu, C. J. Flaxel, T. S. Hwang, A. K. Lauer, D. J. Wilson, D. Huang, and S. T. Bailey, "Detection of non-exudative choroidal neovascularization in age-related macular degeneration with optical coherence tomography angiography," Retina 35(11), 2204–2211 (2015). [CrossRef] 22. G. Holló, "Vessel density calculated from OCT angiography in 3 peripapillary sectors in normal, ocular hypertensive, and glaucoma eyes," Eur. J. Ophthalmol. 26(3), e42–e45 (2016). [CrossRef] 23. M. Alam, D. Thapa, J. I. Lim, D. Cao, and X. Yao, "Quantitative characteristics of sickle cell retinopathy in optical coherence tomography angiography," Biomed. Opt. Express 8(3), 1741–1753 (2017). [CrossRef] 24. M. Alam, D. Thapa, J. I. Lim, D. Cao, and X. Yao, "Computer-aided classification of sickle cell retinopathy using quantitative features in optical coherence tomography angiography," Biomed. Opt. Express 8(9), 4206–4216 (2017). [CrossRef] 25. M. Alam, D. Toslak, J. I. Lim, and X. Yao, "OCT feature analysis guided artery-vein differentiation in OCTA," Biomed. Opt. Express 10(4), 2055–2066 (2019). [CrossRef] 26. T. Son, M. Alam, T.-H. Kim, C. Liu, D. Toslak, and X. 
Yao, "Near infrared oximetry-guided artery–vein classification in optical coherence tomography angiography," Exp. Biol. Med. 244(10), 813–818 (2019). [CrossRef] 27. O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), 234–241. 28. Q. Ji, J. Huang, W. He, and Y. Sun, "Optimized Deep Convolutional Neural Networks for Identification of Macular Diseases from Optical Coherence Tomography Images," Algorithms 12(3), 51 (2019). [CrossRef] 29. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017), 4700–4708. 30. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016), 770–778. 31. M. A. Rahman and Y. Wang, "Optimizing intersection-over-union in deep neural networks for image segmentation," in International Symposium on Visual Computing, (Springer, 2016), 234–244. 32. Y. Sasaki, "The truth of the f-measure. 2007," (2007). 33. F. Milletari, N. Navab, and S.-A. Ahmadi, "V-net: Fully convolutional neural networks for volumetric medical image segmentation," in 2016 Fourth International Conference on 3D Vision (3DV), (IEEE, 2016), 565–571. 34. T.-Y. L. P. G. Ross and G. K. H. P. Dollár, "Focal loss for dense object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017). 35. W. Zhu, Y. Huang, L. Zeng, X. Chen, Y. Liu, Z. Qian, N. Du, W. Fan, and X. Xie, "AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy," Med. Phys. 46(2), 576–589 (2019). [CrossRef] 36. M. Chen, L. Fang, and H. Liu, "FR-NET: Focal loss constrained deep residual networks for segmentation of cardiac MRI," in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), (IEEE, 2019), 764–767. 37. X. Xu, R. Wang, P. Lv, B. Gao, C. Li, Z. Tian, T. Tan, and F. Xu, "Simultaneous arteriole and venule segmentation with domain-specific loss function on a new public database," Biomed. Opt. Express 9(7), 3153–3166 (2018). [CrossRef] 38. M. I. Meyer, A. Galdran, P. Costa, A. M. Mendonça, and A. Campilho, "Deep convolutional artery/vein classification of retinal vessels," in International Conference Image Analysis and Recognition, (Springer, 2018), 622–630. Article Order Y. Hatanaka, T. Nakagawa, A. Aoyama, X. Zhou, T. Hara, H. Fujita, M. Kakogawa, Y. Hayashi, Y. Mizukusa, and A. Fujita, "Automated detection algorithm for arteriolar narrowing on fundus images," in 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, (IEEE, 2006), 286–289. M. K. Ikram, J. A. Janssen, A. M. Roos, I. Rietveld, J. C. Witteman, M. M. Breteler, A. Hofman, C. M. Van Duijn, and P. T. de Jong, "Retinal vessel diameters and risk of impaired fasting glucose or diabetes: the Rotterdam study," Diabetes 55(2), 506–510 (2006). [Crossref] M. Alam, T. Son, D. Toslak, J. Lim, X. Yao, and T. V. S. Tech, "Combining optical density ratio and blood vessel tracking for automated artery-vein classification and quantitative analysis in color fundus images," Trans. Vis. Sci. Tech. 7(2), 23 (2018). M. Alam, J. I. Lim, D. Toslak, and X. Yao, "Differential Artery–Vein Analysis Improves the Performance of OCTA Staging of Sickle Cell Retinopathy," Trans. Vis. 
Kashani, "Quantifying Microvascular Density and Morphology in Diabetic Retinopathy Using Spectral-Domain Optical Coherence Tomography Angiography," Invest. Ophthalmol. Visual Sci. 57(9), OCT362 (2016). N. V. Palejwala, Y. Jia, S. S. Gao, L. Liu, C. J. Flaxel, T. S. Hwang, A. K. Lauer, D. J. Wilson, D. Huang, and S. T. Bailey, "Detection of non-exudative choroidal neovascularization in age-related macular degeneration with optical coherence tomography angiography," Retina 35(11), 2204–2211 (2015). G. Holló, "Vessel density calculated from OCT angiography in 3 peripapillary sectors in normal, ocular hypertensive, and glaucoma eyes," Eur. J. Ophthalmol. 26(3), e42–e45 (2016). M. Alam, D. Thapa, J. I. Lim, D. Cao, and X. Yao, "Quantitative characteristics of sickle cell retinopathy in optical coherence tomography angiography," Biomed. Opt. Express 8(3), 1741–1753 (2017). M. Alam, D. Thapa, J. I. Lim, D. Cao, and X. Yao, "Computer-aided classification of sickle cell retinopathy using quantitative features in optical coherence tomography angiography," Biomed. Opt. Express 8(9), 4206–4216 (2017). M. Alam, D. Toslak, J. I. Lim, and X. Yao, "OCT feature analysis guided artery-vein differentiation in OCTA," Biomed. Opt. Express 10(4), 2055–2066 (2019). T. Son, M. Alam, T.-H. Kim, C. Liu, D. Toslak, and X. Yao, "Near infrared oximetry-guided artery–vein classification in optical coherence tomography angiography," Exp. Biol. Med. 244(10), 813–818 (2019). O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), 234–241. Q. Ji, J. Huang, W. He, and Y. Sun, "Optimized Deep Convolutional Neural Networks for Identification of Macular Diseases from Optical Coherence Tomography Images," Algorithms 12(3), 51 (2019). G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017), 4700–4708. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016), 770–778. M. A. Rahman and Y. Wang, "Optimizing intersection-over-union in deep neural networks for image segmentation," in International Symposium on Visual Computing, (Springer, 2016), 234–244. Y. Sasaki, "The truth of the f-measure. 2007," (2007). F. Milletari, N. Navab, and S.-A. Ahmadi, "V-net: Fully convolutional neural networks for volumetric medical image segmentation," in 2016 Fourth International Conference on 3D Vision (3DV), (IEEE, 2016), 565–571. T.-Y. L. P. G. Ross and G. K. H. P. Dollár, "Focal loss for dense object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017). W. Zhu, Y. Huang, L. Zeng, X. Chen, Y. Liu, Z. Qian, N. Du, W. Fan, and X. Xie, "AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy," Med. Phys. 46(2), 576–589 (2019). M. Chen, L. Fang, and H. Liu, "FR-NET: Focal loss constrained deep residual networks for segmentation of cardiac MRI," in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), (IEEE, 2019), 764–767. X. Xu, R. Wang, P. Lv, B. Gao, C. Li, Z. Tian, T. Tan, and F. Xu, "Simultaneous arteriole and venule segmentation with domain-specific loss function on a new public database," Biomed. Opt. 
The training loss combines the Dice and Focal terms defined in Eqs. (1)–(3):

\[ (1)\quad L = L_{\mathrm{dice}} + L_{\mathrm{focal}} \]

\[ (2)\quad L_{\mathrm{dice}} = 1 - \frac{2\sum_{x \in \Omega} p_l(x)\, g_l(x)}{\sum_{x \in \Omega} p_l^2(x) + \sum_{x \in \Omega} g_l^2(x)} \]

\[ (3)\quad L_{\mathrm{focal}} = -\sum_{x \in \Omega} \Big( \alpha \big(1 - p_l(x)\big)^{\gamma}\, g_l(x) \log p_l(x) + (1 - \alpha)\, p_l^{\gamma}(x)\, \big(1 - g_l(x)\big) \log\big(1 - p_l(x)\big) \Big) \]

Table 1. Comparative classification performance of AV-Net.
Cross-validation results (%); columns give accuracy, F1-score, and IOU.

Model, loss, initialisation              | Vessel  | Accuracy        | F1-score        | IOU
UNet, Dice + Focal, pre-trained weights  | Artery  | 88.054 ± 0.343  | 77.743 ± 0.673  | 63.711 ± 0.879
UNet, Dice + Focal, pre-trained weights  | Vein    | 88.653 ± 0.704  | 78.978 ± 0.722  | 65.354 ± 0.953
UNet, Dice + Focal, pre-trained weights  | Average | 88.353 ± 0.500  | 78.360 ± 0.675  | 64.533 ± 0.884
AV-Net, Dice, pre-trained weights        | Artery  | 83.570 ± 0.734  | 61.769 ± 1.480  | 44.871 ± 1.495
AV-Net, Focal, pre-trained weights       | Artery  | 81.007 ± 0.404  | 46.031 ± 1.236  | 29.980 ± 1.045
AV-Net, Dice + Focal, random weights     | Artery  | 85.957 ± 0.842  | 73.243 ± 1.377  | 58.018 ± 1.656
AV-Net, Dice + Focal, pre-trained weights| Artery  | 86.705 ± 1.087  | 82.761 ± 1.677  | 70.658 ± 2.404
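For readers who want to experiment with the combined loss in Eqs. (1)–(3) above, here is a minimal numpy sketch (my own illustration, not the authors' implementation); the focal-loss constants alpha and gamma below are conventional defaults and are assumptions, not values reported in the paper.

```python
import numpy as np

def dice_focal_loss(p, g, alpha=0.25, gamma=2.0, eps=1e-8):
    """Combined Dice + Focal loss for one binary label map, following Eqs. (1)-(3).

    p : predicted probabilities for the label, values in (0, 1)
    g : ground-truth mask for the label, values in {0, 1}
    alpha, gamma : focal-loss constants (conventional defaults, assumed here)
    """
    p = p.ravel().astype(np.float64)
    g = g.ravel().astype(np.float64)
    # Eq. (2): soft Dice loss over all pixels x in the image domain Omega.
    dice = 1.0 - 2.0 * np.sum(p * g) / (np.sum(p * p) + np.sum(g * g) + eps)
    # Eq. (3): focal loss, down-weighting easy pixels to offset class imbalance.
    focal = -np.sum(alpha * (1 - p) ** gamma * g * np.log(p + eps)
                    + (1 - alpha) * p ** gamma * (1 - g) * np.log(1 - p + eps))
    # Eq. (1): total loss.
    return dice + focal

# Tiny usage example with a random prediction against a random mask.
rng = np.random.default_rng(0)
pred = rng.uniform(0.01, 0.99, size=(64, 64))
mask = (rng.uniform(size=(64, 64)) > 0.8).astype(float)
print(dice_focal_loss(pred, mask))
```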
Thursday, July 30, 2020 ... / / Terrible Q2 GDP drops could have been worse I am confident that the second quarter (Q2) of 2020 was the peak of the Covid hysteria and the economy is already vastly more open and prosperous (almost everywhere) in Q3 – and we will see a substantial GDP rise between Q2 and Q3. I also think that there has been some learning and nations won't return to as draconian national quarantines as those that they suffered in Q2 2020. A financial song from a rare country that didn't lose its marbles because of a dumb coronavirus. So the Q2 2020 figures should be showing the worst results that the human stupidity – totally unfairly blamed on a nearly harmless virus – could have caused. Many such numbers were published today. In Q2, Germany, Belgium, Austria, and the U.S. are estimated (these are first calculations based on the real data, not some guesswork!) to have dropped by 10.1%, 12.2%, 10.7%, and 9.5% quarter-on-quarter. The first three countries' GDP dropped 11.7%, 14.5%, and 12.8% year-on-year (because there has been a 2% or so drop in Q1). Here, I need to discuss the American figure, 9.5%. The published figure was 32.9%, slightly better than estimates, but Americans publish annualized figures which means figures extrapolated to one year by an exponential function. I calculated 9.5% by the following formula:\[ 1-(1-0.329)^{1/4} \approx 0.095 \] The American annualization convention is particularly stupid now – because it basically assumes that the GDP growth rate is a continuous function of time (if it is not, then it is very unwise to extrapolate it!). Too bad, even the GDP itself is far from being a continuous function in countries where it's possible to shut down whole industries overnight. » Don't Stop Reading » Vystavil Luboš Motl v 5:26 PM | slow feedback (0) | Odkazy na tento příspěvek | Other texts on similar topics: biology, Europe, markets Wednesday, July 29, 2020 ... / / Covid numbers have huge error margins It helps to show that the virus is really inconsequential. Because lots of people have been obsessively tested for almost half a year, lots of numbers were produced in the world. The Worldometers statistical page contains a large amount of numbers that describe the "cases" (positively tested people), serious cases, deaths, number of tests in the world's countries (and their states) and their historical evolution. The two most important official statistical pages in Czechia are Illness-currently and Koronavirus-Mzcr. Let me discuss what was happening to the numbers on the first page. Vystavil Luboš Motl v 6:43 AM | slow feedback (0) | Odkazy na tento příspěvek | Other texts on similar topics: biology, Czechoslovakia Tuesday, July 28, 2020 ... / / A near-MSSM from the 10D \(E_8\) superfield and nothing else Can the research of fundamental physics be abolished globally? Not really. Pernicious, scientifically incompetent far left activists may only cripple it locally but the human race may still do it and as long as some truly intelligent humans are alive, some of them will do string theory research. Is it possible that the best theoretical high-energy research will move to an unexpected place such as Mexico in a decade? Yes, it is possible although I still believe that some "everyone is already in Mexico" brain drift would probably be needed. 
Alfredo Aranda and Francisco de Anda posted an amusing 5-page-long hep-ph preprint Complete \(E_8\) Unification in 10 Dimensions where they showed how one can get a damn realistic spectrum for a theory of everything while starting with the 10-dimensional \(E_8\) gauge superfield only, the superfield that lives e.g. on the domain walls of the 11-dimensional M-theory. The presence of an orbifold, \(T^6 / (\ZZ_6\times \ZZ_2)\) in this case, a typical stringy feature, is capable of circumventing the usual conclusion that you can't get a chiral spectrum with groups like \(E_8\). Other texts on similar topics: string vacua and phenomenology Monday, July 27, 2020 ... / / Anti-SJW "allies" who aren't real anti-SJW allies While many Western (and especially Anglo-Saxon) countries are decaying and events from the U.S. and U.K. look increasingly surreal to everyone in countries like mine where people haven't gone nuts yet, lots of people tell me that they're allies. Sadly, they only tell it to me – and they mostly use anonymous nicknames, anyway. On top of that, I am often told things like the following: I am against the SJWs. But: Shut up. The whole future belongs to the SJWs. The future under their leadership is unavoidable unless some miracle happens, or unless we introduce a full-blown fascist society. George Soros may be shorting the stock market and paying the fake news journalists for bad news in order to bring the U.S. economy to the knees (and to hurt Donald Trump and everyone aligned with his fate along the way). Incidentally, I just shorted Dow Jones for the third time because it will surely go to 18,000. Isn't it exciting? I will earn XY dollars when it's at 18,000. Vystavil Luboš Motl v 10:11 AM | slow feedback (0) | Odkazy na tento příspěvek | Other texts on similar topics: freedom vs PC, politics Saturday, July 25, 2020 ... / / Randonautica, an example of pseudoscientific superstitions in 2020 There are lots of things happening. I will avoid all the small topics described e.g. in my tweets. But an hour ago, a guy who thinks he is smart sent me an advertisement for Randonautica. Well, it is an app for Android or iOS. It's a new app for "orienteering", something like Geocaching or Pokémon Go, but the amount of hype surrounding it has been far greater while the substance is far smaller. Check e.g. Google News and YouTube. Other texts on similar topics: computers, games, science and society Portand, Chicago: did this America teach democracy to others? During a yesterday's bike trip, my friend told me that a major international news agency has declared its desire to capitalize the word "black" but not "white" in the racial sense. I didn't want to believe because it still sounds like a story from The Onion. But it is true. Days ago, The Associated Press decided that its employees are obliged to violate the basic rules of English in this self-evidently racist yet childish way. And indeed, some articles about "black men" were edited even retroactively. Even if the unhinged anti-white racism that cripples America these days spread in Czechia, I still think that the Institute of the Czech Language would show its authority and it would send its grammar cops, an occupation we copied from the Romans, who would punish the rogue inkspillers. Incidentally, the Life of Brian (by Monty Python) is really excellent and you should watch the full film if you have never done so. 
It's generally respected that the language is something we inherited from 1,000+ years of our ancestors and we have the moral duty to protect it against putrefaction. Other texts on similar topics: politics Unnecessarily many Covid drugs, vaccines work The insanely huge competition shows how much this problem was overblown, too The stupid part of the public was brainwashed by the fake news media that have claimed – screamed – approximately since mid February that the Wuhan coronavirus is a catastrophe of global proportions that would either end the life on Earth or at least totally change the life of humans. None of these things has happened and none of them could happen (at least in mid February, it was already clear that it was not a serious problem); on the other hand, the implications of the fake news, hysteria, and the mad policies building on it have been deep. Some $10 trillion of economic damages are just a part of the problem. Average people's imagination was encouraged to contribute to this hysteria. People were inventing their own variations of fake news what terrible ET-style symptoms and threats every infected person gets; what bad events in their life may be blamed on the coronavirus (after a motorbike accident death was counted as "Covid", a health official seriously claimed that motorbike accidents are caused by Covid); and people also volunteered to spread the idea that this disease will never be curable or cured because it's at least as might as God. Other texts on similar topics: biology, politics, science and society Absurdity of the EU unity Some folks could have thought that the departure of the U.K. would make the European Union visibly more unified. Well, the ongoing EU summit in Brussels makes it very clear that this fantasy was always detached from reality. The continental part of the European Union has more than enough sources to completely mock the idea that Europe is "close to being one country". The prime ministers have been talking about the post-Covid recovery fund that is supposed to be a stunning €0.75 trillion. Thirty 100-kilometer colliders. After the two standard days reserved for the proceedings, they haven't agreed about anything. So they added a third day, Sunday. It didn't help. They will meet at 4 pm again. Other texts on similar topics: Europe, politics Sunday, July 19, 2020 ... / / Research mathematics will only disappear if the civilization is over An artificial "average Russian woman" was only allowed to work as a secretary in the Urals so far. A key point is that she wasn't allowed to run for Putin's job yet. Various people have promoted their ideas about the "end of history", "end of science" but also seemingly less ambitious ends such as "end of string theory" and "end of experimental particle physics" and more far-reaching ones such as the "end of fossil fuels". None of those has taken place yet – despite hundreds of thousands of activists who are trying to destroy one or many items in the list. Tim Gowers, a 1998 Fields Medal winner, responded to a question about his most controversial opinion Research mathematics as a discipline will come to an end within the lifetimes of people just starting out in mathematical research. — Timothy Gowers (@wtgowers) July 16, 2020 Some details and justifications were written in this thread. OK, we are told that mathematicians will cease to exist when the current youngest postdocs start to die en masse. Why? 
Gowers believes that the expansion of the Artificial Intelligence into mathematics will make sure it will be the case. I think that his view is a corollary of a mistaken understanding of the relationship between the people and the machines. Other texts on similar topics: mathematics, science and society Friday, July 17, 2020 ... / / An individual cannot be delineated objectively yet precisely In the Quanta Magazine, Jordana Cepelewicz wrote a provoking article What Is an Individual? Biology Seeks Clues in Information Theory which is full of inspiring ideas, correct answers, and especially obsessive promotion of a thoroughly unscientific reasoning. OK, we found some fossils of the multicellular life that existed in the ocean half a billion years ago. It's hard to draw boundaries in between individuals in those life forms. But it's so important to draw these boundaries, all of biology and Darwin's theory depend on it, we are told. Oh, really? Do they? They don't. The purpose of science is to explain and predict the observations, not to provide us with details of pictures that people arbitrarily made up, such as the assumption that the whole life is clearly separated to individuals by some sharp contours. Other texts on similar topics: biology, science and society Gödel and Wolchover: fine technology running inside the liar's paradox A few days ago, Natalie Wolchover wrote an article about two famous Gödel's theorems. What is cool is that her text isn't just "some popular article about mathematics". It is really a basically full proof of the theorems translated into plain English – without sacrificing the substance. A big part of her presentation is the translation (Gödel's numbering) of sequences of characters (strings: propositions, proofs...) into unique integer codes. The number of possible strings (or computer files) is countable which means that they may be identified with positive integers in a one-to-one fashion. You could fine-tune a code that translates posssible 16 characters into 0-F (hexadecimal digits) and then add "1" in front of the sequence, to get a hexadecimal number. Well, all of them would start with "1" (which means that not all possible integer codes are used, the map isn't quite one-to-one) but the problem could be fixed by changing the rules a little bit. Instead, for historical reasons, she picks a perhaps even more elegant translation of final strings into integers: she understands the codes 0-15 as exponents above (increasingly large) primes. Again, it's not quite one-to-one but the problem may be fixed. Other texts on similar topics: mathematics Miloš Jakeš (1922-2020): evolving perspectives On July 10th, Milouš Jakeš (which was his official, somewhat more funny and informally sounding than the commonly publicized, name) died, at age almost 98. He had a secretive funeral and we only learned about the death five days later. He was the last leader of the Communist Party of Czechoslovakia in the communist era. It is interesting to see how my perspectives have evolved – and it is relevant for our comparisons of the communism of the 1980s and the neo-Marxism-driven present. After the 1968 Soviet-led occupation of Czechoslovakia that ended the promising Prague Spring, General Ludvík Svoboda stayed as the president up to 1975. 
Then both functions of the president and the communist "general secretary" (the latter was really more important, a characteristic sign of totalitarianism) were merged in Gustáv Husák, a Slovak guy who was once given a life in prison by fellow communists for his alleged support of the nationalist capitalism. Other texts on similar topics: Czechoslovakia, politics When a new proposed habit is really voluntary, it will be ignored by most That's why policies desirable by the majority of voters must be made transparent and mandatory In the previous blog post, among other things, I outlined my proposals to make face masks mandatory in the public interior spaces (and perhaps under other conditions) during the winter flu season, at the places with massive flu outbreaks, in analogy with policies that almost certainly suppressed the spreading of Covid-19 in my country and others. The number of weeks that we spend in the bed with fever may very well drop to a tiny fraction of the current value. Fer137 protested and argued that instead, such duties (to wear the mask) should be voluntary. The problem is that the bulk of the crazy characteristic behavior of the humans in communism was "voluntary-compulsive" and when it comes to the number of these things, the contemporary America is arguably more Soviet than the Soviet Union ever was. Well, let's start in the Soviet Union, the main communist country during the Cold War era, and look e.g. at the definition of a subbotnik (the picture at the top is relevant): Subbotnik... was a day of volunteer unpaid work [during the weekends]. Initially they were indeed voluntary but gradually de facto obligatory from announcement, as people quipped, "in a voluntary-compulsive way" (в добровольно-принудительном порядке). It was really a big part of the totalitarian power, and a huge sign that the civilized rule of law didn't exist, that the powerful could have made things obligatory in a voluntary-compulsive way. People were obliged to say it was voluntary but they were also obliged to do it! The powerful had so much power that they could force the whole population to do something even without writing anything (any law). When they said "something would be nice", it was done! Well, only the time was spent by the enslaved nation; the economic results of these policies were usually poorer than the rulers dreamed. Face masks in America, treating flu as Covid Czechs, the first white nation that made the face masks mandatory on March 18th, have thrown them away because we have determined that nothing like epidemics exists on our territory. They're mandatory in the Prague subway, currently 4 highly localized outbreaks areas (some 4x 0.5% of the Czech population), plus a minority in situations in hospitals. Meanwhile, Trump finally sports a face mask, too. He wisely says that it is only helpful in some contexts but calls it a "great thing to wear". Ms Ivanko, be a proper Czech who saves lives once. Tell your father to emulate Czechia and start to recommend - and then order - Americans to wear home-made masks. #Masks4All. He could take a cool mask like this with the U.S. flag first https://t.co/D43D0qAFaJ — Luboš Motl (@lumidek) March 31, 2020 In late March, I wrote Ivanka to persuade her dad to make the face masks as normal in the U.S. as they became normal in Czechia – and it really took 3 days from "local fads" to the nationwide Czech mask duty. (Numerous other Czechs wrote similar messages to American and other VIPs.) Too bad, the U.S. 
has recorded over 100,000 "deaths with Covid" but 3 months later (instead of the 3 days in Czechia), Ivanka's dad finally sports a mask, too (too bad, without any U.S. flags or MAGA colors and symbols, so disppointing). Other texts on similar topics: biology, Czechoslovakia, science and society Ethan Siegel, not Stephen Hawking, lies about the black hole radiation Years ago, I thought Ethan Siegel was a decent science writer. His texts seemed to be full of data that sort of overlapped with some basic knowledge about many physics questions. However, in the recent year or two, I stopped following him after I saw a vast excess of totally wrong, unreasonable, and sometimes toxic "punch lines" in his articles such as the claims that: WIMP is dead (premature) only childish pictures are known about the electron (BS) consensus is the core of science which is defined as mindless obedience to a scientifically illiterate, psychologically ill Scandinavian teenager (!!!) it's shocking we still can't precisely count planets with ETs (not shocking at all, even the extraterrestrial life is basically ill-defined, and heavily uncertain, too) the so-called minorities' situation should never be fairly compared to ours, their supremacy cannot be questioned (he really wrote that almost precisely!) and many others. I've probably missed many similar cesspools of lies, misconceptions, and far left insanities because I haven't followed him for more than a year. But because of John Preskill's Twitter account, I learned about another beauty: Yes, Stephen Hawking Lied To Us All About How Black Holes Decay Aside from some other arrogant manifestations of Siegel's complete misunderstanding of the Hawking radiation (and I will discuss what is arguably the most important example), he primarily repeated the sentence that "Hawking and his followers have lied to us for 32 years" when they said that the Hawking radiation may be explained by a virtual particle-antiparticle pair which becomes real, one member of the pair falls into the black hole, and the other one escapes. Other texts on similar topics: science and society, stringy quantum gravity Melbourne 2nd lockdown: fool me once... First, basic facts. Australia has some 25 million inhabitants. Sydney and Melbourne are the largest cities, in the New South Wales and Victoria, respectively. The basic statistics page shows that Australia has had 9k cases and 100 deaths or so, only 4 of those were after May 23rd. The number of deaths per capita has been basically zero. Note that the seasons are reversed relatively to most of ours but the (currently ongoing) winter is really mild, too. OK, because of "roughly a hundred positive tests per day", the Victorian authorities have re-established a harsh lockdown on the city. The 5 million people who are trapped inside cannot really leave the area for six weeks (a siege). They can't visit relatives. Restaurants, barbers, and most of other would-be non-essential things are shut down and so on. Harvard's purely online learning is a fancy way to turn education into a complete farce Harvard University is one of the most hardcore leftist places that want to cancel the in-person instruction entirely; all teaching and learning should be done online from the Fall. Donald Trump has rightfully pointed out that this decision is ridiculous, an easy way out for which they should be ashamed. Many other universities are "more moderate" and they prepare a hybrid system which is partly in-person, partly online. 
More than 304,000 likes in 9 hours - among the strongest tweets for @realDonaldTrump in some time. Reopening schools is a powerful issue that crosses party lines. pic.twitter.com/JQY8MbEq5Q — Alex Berenson (@AlexBerenson) July 7, 2020 Trump previously tweeted that the schools must open in the autumn. The tweet got over 300k "likes" which is unusually high even according to his standards. The desire to stop the hysteria and open the schools is almost certainly a "bipartisan cause". All pro-civilization people, including former voters of the Democrats, realize that the education is important, young people are pretty much completely resilient to Covid-19, and the benefits of the education vastly exceed the risks for the public health. And I strongly believe that the plan to remove the far left indoctrination from schools would also have a rather strong bipartisan support and Trump should add it as a visible part of his program. Other texts on similar topics: education, everyday life, science and society Steven Pinker, the latest target of "cancel culture" Steven Pinker is Harvard University's most prominent evolutionary psychologist, a man who knows how to explain why the people (and animals) evolved to think in the ways that we observe. Aside from hundreds of technical papers that have collected almost 100 thousand citations, he is also well-known for many popular books dedicated (not only) to the intelligent laymen. "The Blank Slate" is one of them; his latest bestseller, "Enlightenment Now...", is an optimistic description of the world we inhabit. During the witch hunts on Larry Summers around 2005, I was fortunate to communicate with him as one of the beautiful minds with a spine. I've also attended some ingenious lectures he gave. "Time will tell" has become a favorite replacement for arguments from those who spread lies The seemingly innocent phrase is responsible for a big part of the decay of the West Don't get me wrong. The sentence "time will tell" has been an innocent, and nearly tautologically true, sentence that many people have used at many moments. The sentence means that in the future, we will know more. It's true. To say the least, someone will know more about the events that lie in the "future" for us but they are the "present" for him, especially the events (and it's almost all events) that will be affected by Nature's quantum mechanical random generator. On this website, you may probably find dozens of copies of this sentence written by me, too. However, in recent weeks, I saw that "time will tell" has turned into something much less innocent: into a religious slogan that is particularly favored by those who are wrong about all the important propositions they are making – and who are demonstrably wrong because they simply deny facts and proofs that already exist now. Other texts on similar topics: philosophy of science, science and society Cumrun Vafa vs sloppiness of "lessons from JT gravity" A decade ago, the main topic that clumped fundamental physicists to two camps was almost certainly the anthropic reasoning – or its flawed character. It gradually faded away because people agreed that there was no argument that would settle the question – and in the case of the anthropic approach, there wasn't a consistent theory that was ready to be used. Aside from some correct propositions that were worthless because they were tautological, the anthropic school was just an extreme ideology and it remained an extreme ideology. 
Coleman's \(\alpha\)-parameters from baby universes were a prank or funny gesticulation, not a serious lecture about the actual mechanisms of quantum gravity. But numerous other topics were fundamental enough and answered by highly controversial views. For example, late Joe Polchinski and his co-authors who are alive and kicking were promoting the idea that there was a firewall on the boundary of every black hole. It's obviously wrong and Joe was sort of accepting that it was wrong – but some people still enjoyed giving evidence to the answer that was wrong, the arguments were clever, and as long as you didn't throw away your common sense, you could have learned some things. Other texts on similar topics: string vacua and phenomenology, stringy quantum gravity
Author Archives: Peter Nelson Lots of Ingleton Matroids In 1971, Aubrey Ingleton [1] showed that the rank function of representable matroids satisfies additional linear inequalities that do not follow from the usual rank axioms. The inequality he observed now bears his name. It states that, for all sets $A,B,C,D$ in a representable matroid $M$,
\[ r(A\cup B) + r(A \cup C) + r(A \cup D) + r(B \cup C) + r(B \cup D) \ge r(A) + r(B) + r(A \cup B \cup C) + r(A \cup B \cup D) + r(C \cup D). \]
As difficult to typeset as it may be, this is an intriguing fact. The problem of characterizing which matroids are representable is difficult; various authors [2][3] have considered whether it is possible to extend one of the usual axiom systems in a natural way to capture precisely the representable matroids. Ingleton's inequality suggests the kind of extra axiom that might need to be added to the rank axioms if thinking along these lines. Perhaps more importantly, the Ingleton inequality gives a concise way to certify that a matroid is not representable; if $(A,B,C,D)$ is a $4$-tuple in a matroid $M$ violating the inequality, then $M$ is non-representable. This has been useful to many authors who wish to construct non-representable matroids, in particular in a result of Mayhew, Newman, Welsh and Whittle [4] that constructs a very rich family of excluded minors for real-representability, all violating Ingleton's inequality. Finally, the Ingleton inequality is closely related to everyone's favourite non-representable matroid, the Vámos matroid $V_8$. This rank-$4$ matroid has eight elements and ground set $E$ that is the disjoint union of four two-element sets $P_1,P_2,P_3,P_4$, in which the dependent four-element sets are precisely the pairs $P_i \cup P_j$ with $i < j$ and $(i,j) \ne (3,4)$. The tuple $(A,B,C,D) = (P_1,P_2,P_3,P_4)$ violates the inequality in this case: the left-hand and right-hand sides are $15$ and $16$ respectively. On the other hand, satisfying Ingleton's inequality for all choices of $A,B,C,D$ is not enough to imply representability; for example, the non-Desargues matroid satisfies Ingleton's inequality but is still non-representable, as is the direct sum of the Fano and non-Fano matroids. This post is about a recent result [7] I've obtained with Jorn van der Pol that answers what we consider to be a natural question: how 'close' is the class of matroids that satisfy Ingleton's inequality to the class of representable matroids? For convenience, we will call a matroid Ingleton if Ingleton's inequality is satisfied for all choices of $(A,B,C,D)$. It might not be immediately clear what the answer should be. Some (weak) positive evidence is the fact that the Ingleton matroids are closed under minors and duality (see [5], Lemmas 3.9 and 4.5) and that $V_8$ is non-Ingleton. However, I think the natural educated guess goes in the other direction; Ingleton's inequality seems too coarse to capture, even approximately, a notion as intricate as linear representability. In fact, Mayhew, Newman and Whittle [3] have shown that it is impossible to define the class of representable matroids by adding any finite list of rank inequalities to the usual rank axioms, let alone a single inequality. Our result confirms this suspicion. Theorem 1: For all sufficiently large $n$, the number of Ingleton matroids with ground set $\{1,\dotsc,n\}$ is at least $2^{\frac{1.94 \log n}{n^2}\binom{n}{n/2}}$.
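Returning for a moment to the Vámos example above: here is a small self-contained check (my own illustration, not code from the post or the paper) that the tuple $(P_1,P_2,P_3,P_4)$ violates Ingleton's inequality, with the rank function computed directly from the list of dependent $4$-sets; the ground-set labelling is my own.

```python
# Ground set of the Vamos matroid V_8, written as the four pairs P_1,...,P_4.
P1, P2, P3, P4 = (0, 1), (2, 3), (4, 5), (6, 7)
RANK = 4
# The five dependent 4-element sets: all unions P_i u P_j except P_3 u P_4.
NONBASES = [frozenset(Pi + Pj) for Pi, Pj in
            [(P1, P2), (P1, P3), (P1, P4), (P2, P3), (P2, P4)]]

def rank(X):
    """Rank in a rank-4 sparse paving matroid whose dependent 4-sets are NONBASES."""
    X = frozenset(X)
    if len(X) < RANK:
        return len(X)                       # every set of size < 4 is independent
    if len(X) == RANK:
        return RANK - 1 if X in NONBASES else RANK
    return RANK                             # every larger set contains a basis

def ingleton_deficit(A, B, C, D):
    """Left-hand side minus right-hand side of Ingleton's inequality."""
    A, B, C, D = map(set, (A, B, C, D))
    lhs = (rank(A | B) + rank(A | C) + rank(A | D)
           + rank(B | C) + rank(B | D))
    rhs = (rank(A) + rank(B) + rank(A | B | C)
           + rank(A | B | D) + rank(C | D))
    return lhs - rhs

print(ingleton_deficit(P1, P2, P3, P4))   # -1: the two sides are 15 and 16, as claimed
```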
To view this result in context, the lower bound should be compared to the counts of representable matroids and of all matroids. On a fixed ground set $[n] = \{1,\dotsc, n\}$, the number of representable matroids is at most $2^{n^3/4}$ for all $n \ge 12$ [6], and the number of matroids is at least $2^{\tfrac{1}{n}\binom{n}{n/2}}$. (Both these upper and lower bounds are in fact correct counts up to a constant factor in the exponent.) The first expression is singly exponential in $n$, having the form $2^{\mathrm{poly}(n)}$, while the second is doubly exponential, having the form $2^{2^n/\mathrm{poly}(n)}$. The lower bound in our theorem is of the second type, showing that the Ingleton matroids asymptotically dwarf the representable matroids in number. In other words, knowing that a matroid is Ingleton tells you essentially nothing about whether it is representable. (In fact, our techniques show that the number of rank-$4$ Ingleton matroids on $[n]$ is asymptotically larger than the number of all representable matroids on $[n]$, which seems surprising.) The ideas in the proof of the above theorem are simple and we obtain a nice excluded minor result on the way; I briefly sketch them below. For the full proof, see https://arxiv.org/abs/1710.01924. Ingleton Sparse Paving Matroids Our proof actually constructs a large number of Ingleton matroids of a specific sort: sparse paving. These matroids, which have come up in previous matroidunion posts, play a very special role in the landscape of all matroids; they are matroids that, while having a somewhat trivial structure, are conjectured to comprise almost all matroids. For the definition, it is easier to talk about the nonbases of a matroid than its bases. Let $\binom{[n]}{r}$ denote the collection of $r$-element subsets of $[n]$. Given a rank-$r$ matroid $M$ on $[n]$, call a dependent set in $\binom{[n]}{r}$ a nonbasis of $M$. A rank-$r$ matroid is sparse paving if for any two nonbases $U_1,U_2$ of $M$, we have $|U_1 \cap U_2| < r-1$. Equivalently, no two nonbases of $M$ differ by a single exchange, or every dependent set of $M$ is a circuit-hyperplane of $M$. In fact, this condition itself implies that the matroid axioms are satisfied; given any collection $\mathcal{K}$ of $r$-element subsets of $[n]$, if no two sets in $\mathcal{K}$ intersect in exactly $r-1$ elements, then $\mathcal{K}$ is the set of nonbases of a sparse paving matroid on $[n]$. Thus, an easy way to guarantee that a set $\mathcal{K}$ is actually the set of nonbases of a matroid is to prove that no two of its members intersect in $r-1$ elements. Our key lemma gives a simpler way to understand Ingleton's inequality for sparse paving matroids. In general, it is very hard to mentally juggle the $A$'s, $B$'s, $C$'s and $D$'s while working with the inequality, but for sparse paving matroids, things are much simpler. Lemma 1: If $M$ is a rank-$r$ sparse paving matroid, then $M$ is Ingleton if and only if there do not exist pairwise disjoint sets $P_1,P_2,P_3,P_4,K$ where $|P_1| = |P_2| = |P_3| = |P_4| = 2$ and $|K|=r-4$, such that the five $r$-element sets $K \cup P_i \cup P_j: i < j, (i,j) \ne (3,4)$ are nonbases, while the set $K \cup P_3 \cup P_4$ is a basis.
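(As a quick aside before unpacking the lemma: here is a small sketch, my own and not from the post, of the "no two $r$-sets meeting in exactly $r-1$ elements" test that, as noted above, makes a family of $r$-sets the set of nonbases of a sparse paving matroid.)

```python
from itertools import combinations

def is_sparse_paving_nonbases(family, r):
    """Return True if `family` (a collection of r-element subsets of the ground
    set) can serve as the nonbases of a rank-r sparse paving matroid, i.e. no
    two of its members intersect in exactly r - 1 elements."""
    sets = [frozenset(X) for X in family]
    if any(len(X) != r for X in sets):
        raise ValueError("all members must have exactly r elements")
    return all(len(X & Y) < r - 1 for X, Y in combinations(sets, 2))

# The five nonbases of the Vamos matroid pairwise meet in at most 2 elements:
vamos_nonbases = [{0, 1, 2, 3}, {0, 1, 4, 5}, {0, 1, 6, 7}, {2, 3, 4, 5}, {2, 3, 6, 7}]
print(is_sparse_paving_nonbases(vamos_nonbases, 4))                     # True
# Adding {0, 1, 2, 4} breaks the condition: it meets {0, 1, 2, 3} in 3 elements.
print(is_sparse_paving_nonbases(vamos_nonbases + [{0, 1, 2, 4}], 4))    # False
```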
If $P_1,P_2,P_3,P_4,K$ are sets as above, then the minor $N = (M / K)|(P_1\cup P_2 \cup P_3 \cup P_4)$ is an eight-element sparse paving matroid having a partition $(P_1,P_2,P_3,P_4)$ into two-element sets, where precisely five of the six sets $P_i \cup P_j$ are nonbases, and the last is a basis. This is a structure very similar to that of the Vámos matroid. Call an eight-element, rank-$4$ matroid $N$ having such a property Vámos-like. (Such an $N$ need not be precisely the Vámos matroid, as there may be four-element sets of other forms that are also nonbases of $N$). In any Vámos-like matroid, $(A,B,C,D) = (P_1,P_2,P_3,P_4)$ will violate Ingleton's inequality. We can restate Lemma 1 as follows. Lemma 1 (simplified): If $M$ is a sparse paving matroid, then $M$ is Ingleton if and only if $M$ has no Vámos-like minor. There are evidently only finitely many Vámos-like matroids, since they have eight elements; in fact, thanks to Dillon Mayhew and Gordon Royle's excellent computational work [8], we know all $39$ of them; as well as the Vámos matroid itself, they include the matroid AG$(3,2)^-$ obtained by relaxing a circuit-hyperplane of the rank-$4$ binary affine geometry. It is easy to show that the sparse paving matroids are themselves a minor-closed class with excluded minors $U_{1,1} \oplus U_{0,2}$ and $U_{0,1} \oplus U_{2,2}$. Combined with Lemma 1, this gives us a nice excluded minor theorem: Theorem 2: There are precisely $41$ excluded minors for the class of Ingleton sparse paving matroids: the $39$ Vámos-like matroids, as well as $U_{1,1} \oplus U_{0,2}$ and $U_{0,1} \oplus U_{2,2}$. Armed with Lemma 1, we can now take a crack at proving our main theorem. For simplicity, we will prove a slightly weaker result, with a worse constant of $0.2$ and no logarithmic factor in the exponent. The stronger result is obtained by doing some counting tricks a bit more carefully. The good news is that the proof of the weaker theorem is short enough to fit completely in this post. Theorem 3: For all sufficiently large $n$, there are at least $2^{0.2n^{-2}\binom{n}{n/2}}$ Ingleton sparse paving matroids with ground set $[n]$ Let $n$ be large and let $r = \left\lfloor n/2 \right\rfloor$ and $N = \binom{n}{r}$. Let $c = 0.4$. We will take a uniformly random subset $\mathcal{X}$ of $\left\lfloor c n^{-2}\binom{n}{r}\right\rfloor$ of the $\binom{n}{r}$ sets in $\binom{[n]}{r}$. We then hope that $\mathcal{X}$ is the set of nonbases of an Ingleton sparse paving matroid. If it is not, we remove some sets in $\mathcal{X}$ so that it is. We consider two possibilities that, together, encompass both ways $\mathcal{X}$ can fail to be the set of nonbases of a sparse paving matroid. They are $\mathcal{X}$ is not the set of nonbases of a sparse paving matroid. (That is, there are sets $U_1,U_2 \in \mathcal{X}$ whose intersection has size $r-1$.) $\mathcal{X}$ is the set of nonbases of a sparse paving matroid, but this matroid fails to be Ingleton. (That is, there are pairwise disjoint subsets $P_1,P_2,P_3,P_4,K$ of $[n]$ where $|P_1| = |P_2| = |P_3| = |P_4| = 2$ and $|K|=r-4$, such that at least five of the six $r$-element sets $K \cup P_i \cup P_j: i < j, (i,j)$ are nonbases.) Let $a(\mathcal{X}),b(\mathcal{X})$ denote the number of times each of these types of failure occurs. The condition in (2) is slightly coarser than required, for reasons we will see in a minute. 
So $a(X)$ is the number of pairs $(U_1,U_2)$ with $|U_1 \cap U_2| = r-1$ and $U_1,U_2 \in X$, and $b(X)$ is the number of $5$-tuples $(P_1,P_2,P_3,P_4,K)$ satisfying the condition in (2). Claim: If $n$ is large, then $\mathbf{E}(a(\mathcal{X}) + 2b(\mathcal{X})) < \tfrac{1}{2}|\mathcal{X}|$. Proof of claim: The number of pairs $U_1,U_2$ intersecting in $r-1$ elements is $\binom{n}{r}r(n-r) < n^2\binom{n}{r}$. For each such pair, the probability that $U_1,U_2 \in X$ is at most $(|\mathcal{X}|/\binom{n}{r})^2 \le c^2n^{-4}$. Therefore \[\mathbf{E}(a(\mathcal{X})) \le c^2n^{-2}\binom{n}{r} = (1+o(1))c|\mathcal{X}|.\] The number of $5$-tuples $(K,P_1,P_2,P_3,P_4)$ pf disjoint sets where $|K| = r-4$ and $|P_i| = 2$ is at most $\binom{n}{2}^4\binom{n-8}{r-4} < \tfrac{n^8}{16}\binom{n}{r}$. The probability that, for some such tuple, at least five of the sets $K \cup P_i \cup P_j$ are in $X$ is at most $6(|\mathcal{X}|/\binom{n}{r})^5 \le 6c^5n^{-10}$. Therefore \[\mathbf{E}(b(\mathcal{X})) \le \frac{n^8}{16}\binom{n}{r}\cdot 6c^5 n^{-10} = (1+o(1))\tfrac{3}{8}c^4|\mathcal{X}|.\] By linearity of expectation, the claim now holds since $c = 0.4$ gives $c + \tfrac{3}{4}c^4 < \tfrac{1}{2}$. Now, let $\mathcal{X}_0$ be a set of size $\left\lfloor c n^{-2}\binom{n}{r}\right\rfloor$ for which $a(\mathcal{X}) + 2b(\mathcal{X}) < \tfrac{1}{2}|\mathcal{X}_0|$. We now remove from $\mathcal{X}_0$ one of the two sets $U_1,U_2$ for each pair contributing to $a(\mathcal{X}_0)$, and two of the sets $K \cup P_i \cup P_j$ for each tuple $(K,P_1,P_2,P_3,P_4)$ contributing to $b(\mathcal{X}_0)$. This leaves a subset $\mathcal{X}_0'$ of $\mathcal{X}_0$ of size $\tfrac{1}{2}|\mathcal{X}_0| \approx \tfrac{1}{2}cn^{-2}\binom{n}{r} = 0.2n^{-2}\binom{n}{r}$. By construction and Lemma 1, $\mathcal{X}_0'$ is the set of nonbases of a rank-$r$ Ingleton sparse paving matroid on $[n]$. However (and this is where condition (2) needed to be strengthened slightly), so are all subsets of $\mathcal{X}_0'$. This puts the size of $\mathcal{X}_0'$ in the exponent; there are therefore at least $2^{0.2n^{-2}\binom{n}{n/2}}$ Ingleton sparse paving matroids on $[n]$, as required. A.W. Ingleton. Representation of matroids. In D.J.A Welsh, editor, Combinatorial mathematics and its applications (Proceedings of a conference held at the Mathematical Institute, Oxford, from 7-10 July, 1969). Academic Press, 1971. P. Vámos. The missing axiom of matroid theory is lost forever. J. London Math. Soc. 18 (1978), 403-408 D. Mayhew, M. Newman and G. Whittle. Yes, the "missing axiom" of matroid theory is lost forever, arXiv:1412.8399 D. Mayhew, M. Newman and G. Whittle. On excluded minors for real-representability. J. Combin. Theory Ser. B 66 (2009), 685-689. A. Cameron. Kinser inequalities and related matroids. Master's Thesis, Victoria University of Wellington. Also available at arXiv:1401.0500 P. Nelson. Almost all matroids are non-representable. arXiv:1605.04288 P. Nelson and J. van der Pol. Doubly exponentially many Ingleton matroids. arXiv:1710.01924 D. Mayhew and G.F. Royle. Matroids with nine elements. J. Combin. Theory Ser. B 98 (2008), 882-890. Posted on October 4, 2017 by Peter Nelson | 2 Replies The densest matroids without a given projective geometry minor This post is about some work done in collaboration with my graduate student Zachary Walsh. The problem is a simple one, and is the binary matroid analogue of a question studied by Thomason [1] about the edge-density of graphs with no $K_t$-minor. 
He shows that a simple graph on $n$ vertices without a $K_t$-minor has at most $\alpha(t)n$ edges, where $\alpha(t)$ is a best-possible constant depending only on $t$ that has the order of $t \sqrt{\log t}$. Determining this order exactly is no mean feat, but a disappointing reality is that the extremal examples are random graphs – in other words, there is no nice explicit construction for graphs that are as dense as possible with no $K_t$-minor. Because of this, tightening the upper bound of $\alpha(t)n$ to a specific function of $n$ and $t$ seems near-impossible. The analogous question for matroids is much more pleasant. We'll stick to binary matroids here, but in [2], we prove versions of the theorem I discuss for all prime fields. For binary matroids, projective geometries play the same role that cliques do in graphs. Our main theorem is the following: Theorem 1: Let $t$ be a nonnegative integer. If $n$ is sufficiently large, and $M$ is a simple rank-$n$ binary matroid with no $\mathrm{PG}(t+2,2)$-minor, then $|M| \le 2^t\binom{n+1-t}{2} + 2^t-1$. Furthermore, there is a unique simple rank-$n$ binary matroid for which equality holds. Unlike for graphs, we can write down a nice function. Note that for $t = 0$, the function above is just $\binom{n+1}{2}$; in fact, in this case, the cycle matroid of a clique on $n+1$ vertices is the one example for which equality holds, and in fact the result specialises to an old theorem of Heller [3] about the density of matroids without an $F_7$-minor. This was the only previously known case. The case for $t = 1$ was conjectured by Irene in her post from 2014. Irene also conjectured the extremal examples; they are all even cycle matroids. These can be defined as the matroids having a representation of the form $\binom{w}{A}$, where $A$ is a matrix having at most two nonzero entries per column, and $w$ is any binary row vector. The largest simple rank-$n$ even cycle matroids can be shown to have no $\mathrm{PG}(3,2)$-minor and have $2\binom{n}{2}-1$ elements; this agrees with the expression in our theorem for $t = 1$. These first two examples suggest a pattern allowing us to construct the extremal matroids more generally; we want a matrix with $n$ rows and as many columns as possible, having distinct nonzero columns, that is obtained from a matrix with at most two nonzero entries per column by appending $t$ rows. For a given column, there are $2^t$ choices for the first $t$ entries, and $\binom{n-t}{0} + \binom{n-t}{1} + \binom{n-t}{2}$ choices for the last $n-t$ (as we can choose zero, one or two positions where the column is nonzero). Since we can't choose the zero vector both times, the total number of possible columns is $2^t(\binom{n-t}{0} + \binom{n-t}{1} + \binom{n-t}{2})-1 = 2^t\binom{n-t+1}{2} + 2^t-1$, the bound in our theorem. Let's call this maximal matroid $G^t(n)$. Note that $G^0(n)$ is just the cycle matroid $M(K_{n+1})$. We can prove by induction that $G^t(n)$ has no $\mathrm{PG}(t+2,2)$-minor; the $t = 0$ case is obvious since $\mathrm{PG}(2,2)$ is nongraphic. Then, one can argue that appending a row to a binary representation of a matroid with no $\mathrm{PG}(k,2)$-minor gives a matroid with no $\mathrm{PG}(k+1,2)$-minor; since (for $t \ge 1$) $G^{t}(n)$ is obtained from $G^{t-1}(n-1)$ by taking parallel extensions of columns and then appending a row, inductively it has no $\mathrm{PG}(t+2,2)$-minor as required. All I have argued here is that equality holds for the claimed examples.
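To sanity-check the column count behind $G^t(n)$ (an illustration of the argument above, not code from the post), one can enumerate the admissible columns directly and compare with the closed form:

```python
from itertools import combinations, product
from math import comb

def count_columns(n, t):
    """Count the distinct nonzero binary columns of length n whose last n - t
    entries contain at most two ones -- the columns of the extremal example G^t(n)."""
    total = 0
    for head in product((0, 1), repeat=t):
        # choose where the (at most two) ones go among the last n - t entries
        supports = [()] + [(i,) for i in range(n - t)] + list(combinations(range(n - t), 2))
        for support in supports:
            col = head + tuple(1 if i in support else 0 for i in range(n - t))
            if any(col):            # exclude the all-zero column
                total += 1
    return total

n, t = 12, 3
print(count_columns(n, t), 2**t * comb(n + 1 - t, 2) + 2**t - 1)   # both print 367
```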
The proof in the other direction makes essential use of the structure theory for minor-closed classes of matroids due to Geelen, Gerards and Whittle [4]; essentially we reduce Theorem 1 to the case where $M$ is very highly connected, then use the results in [4] about matroids in minor-closed classes that have very high connectivity to argue the bound. I discussed a statement that uses these structure theorems in similar ways back in this post. We can actually say things about excluding matroids other than just projective geometries. The machinery in [2] also gives a result about excluding affine geometries: Theorem 2: Let $t \ge 0$ be an integer and $n$ be sufficiently large. If $M$ is a simple rank-$n$ binary matroid with no $\mathrm{AG}(t+3,2)$-minor, then $|M| \le 2^t\binom{n+1-t}{2} + 2^t-1$. Furthermore, if equality holds, then $M$ is isomorphic to $G^t(n)$. This was proved for $t = 0$ in [5] but was unknown for larger $t$. Again, the examples where equality holds are these nice matroids $G^t(n)$. Our more general result characterizes precisely which minors we can exclude and get similar behaviour. To state it, we need one more definition. Let $A$ be the binary representation of $G^t(n+1)$ discussed earlier (where each column has at most two nonzero entries in the last $n+1-t$ positions) and let $A'$ be obtained from $A$ by appending a single column, labelled $e$, whose nonzero entries are in the last three positions. Let $G^t(n)'$ be the simplification of $M(A') / e$; so $G^t(n)'$ is a rank-$n$ matroid obtained by applying a single `projection' to $G^t(n+1)$. I will conclude by stating the most general version of our theorem for binary matroids; with a little work, it implies both the previous results. Theorem 3: Let $t \ge 0$ be an integer and $N$ be a simple rank-$k$ binary matroid. The following are equivalent: For all sufficiently large $n$, if $M$ is a simple rank-$n$ binary matroid with no $N$-minor, then $|M| \le 2^t\binom{n+1-t}{2} + 2^t-1$, and $M \cong G^t(n)$ if equality holds. $N$ is a restriction of $G^t(k)'$ but not of $G^t(k)$. [1] A. Thomason: The extremal function for complete minors, J. Combinatorial Theory Ser. B 81 (2001), 318–338. [2] P. Nelson, Z. Walsh, The extremal function for geometry minors of matroids over prime fields, arXiv:1703.03755 [math.CO] [3] I. Heller, On linear systems with integral valued solutions, Pacific. J. Math. 7 (1957) 1351–1364. [4] J. Kung, D. Mayhew, I. Pivotto, and G. Royle, Maximum size binary matroids with no AG(3,2)-minor are graphic, SIAM J. Discrete Math. 28 (2014), 1559–1577. [5] J. Geelen, B. Gerards and G. Whittle, The highly connected matroids in minor-closed classes, Ann. Comb. 19 (2015), 107–123. The number of representable matroids Posted on November 14, 2016 by Peter Nelson Hello everyone – MatroidUnion is back! Rudi and Jorn wrote a nice post earlier this year about questions in asymptotic matroid theory, and beautiful new results they've obtained in this area. While reading one of their papers on this topic, I saw that they restated the conjecture that almost all matroids on $n$ elements are non-representable. This was first explicitly written down by Mayhew, Newman, Welsh and Whittle [1] but earlier alluded to by Brylawski and Kelly [2] (in fact, the latter authors claim that the problem is an 'exercise in random matroids' but give no clue how to complete it). 
Indeed I would argue that most of us would independently come up with the same conjecture after thinking about these questions for a few minutes; surely representable matroids are vanishingly rare among all matroids! In any case, reading this conjecture reminded me that, like many 'obvious' statements in asymptotic matroid theory it was still open, and seemed somewhat hard to approach with existing techniques. I'm happy to say that this is no longer the case; as it happened, I discovered a short proof that I will now give in this post. The proof is also on the arXiv [3]; there, it is written with the bounds as tight as possible. Here, I will relax the calculations a little here to make the proof more accessible, as well as using more elementary bounds so the entire argument is self-contained. The theorem we prove is the following: Theorem: For $n \ge 10$, the number of representable matroids with ground set $[n] = \{1,\dotsc,n\}$ is at most $2^{2n^3}$. The number of matroids on $n$ elements is well-known to be doubly exponential in $n$, so the above gives the 'almost all' statement we need, in fact confirming the intuition that representable matroids are extremely rare among matroids in general. The bound in the proof can in fact be improved to something of the form $2^{n^3(1/4 + o(1))}$, and I believe the true count has this form; see [3] for more of a discussion. Our path to the proof is indirect; we proceed by considering a more general question on `zero-patterns' of polynomials, in the vein of [4]. Let $f_1, \dotsc, f_N$ be integer polynomials in variables $x_1, \dotsc, x_m$. Write $\|f\|$ for the absolute value of the largest coefficient of a polynomial $f$, which we call its height; it is fairly easy to prove that $\|f + g\| \le \|f\| + \|g\|$ and that $\|fg\| \le \binom{\deg(f) + \deg(g)}{\deg(f)} \|f\|\ \|g\|$ for all $f$ and $g$. We will map these polynomials to various fields; for a field $\mathbb{K}$ and a polynomial $f$, write $f^{\mathbb{K}}$ for the polynomial in $\mathbb{K}[x_1, \dotsc, x_m]$ obtained by mapping each coefficient of $f$ to an element of $\mathbb{K}$ using the natural homomorphism $\phi\colon \mathbb{Z} \to \mathbb{K}$. Given a field $\mathbb{K}$ and some $u_1, \dotsc, u_m \in \mathbb{K}$, the polynomials $f_i^{\mathbb{K}}(u_1, \dotsc, u_m)$ all take values in $\mathbb{K}$, and in general some will be zero and some nonzero in $\mathbb{K}$. We are interested in the number of different ways this can happen, where we allow both the field $\mathbb{K}$ and the $u_j$ to be chosen arbitrarily; to this end, we say a set $S \subseteq [N]$ is \realisable with respect to the polynomials $f_1, \dotsc, f_N$ if there is a field $\mathbb{K}$ and there are values $u_1, \dotsc, u_m \in \mathbb{K}$ such that \[S = \{i \in [N]: f^{\mathbb{K}}(u_1, \dotsc, u_m) \ne 0_{\mathbb{K}}\}.\] In other words, $S$ is realisable if and only if, after mapping to some field and substituting some values into the arguments, $S$ is the support of the list $f_1, \dotsc, f_N$. We will get to the matroid application in a minute; for now, we prove a lemma that bounds the number of different realisable sets: Lemma: Let $c,d$ be integers and let $f_1, \dotsc, f_N$ be integer polynomials in $x_1, \dotsc, x_m$ with $\deg(f_i) \le d$ and $\|f_i\| \le c$ for all $i$. If an integer $k$ satisfies \[ 2^k > (2kc(dN)^d)^{N\binom{Nd+m}{m}}, \] then there are at most $k$ realisable sets for $(f_1, \dotsc, f_N)$. Proof: If not, then there are distinct realisable sets $S_1, \dotsc, S_k \subseteq [N]$. 
For each $i \in [k]$, define a polynomial $g_i$ by $g_i(x_1, \dotsc, x_m) = \prod_{j \in S_i}f_j(x)$. Clearly $\deg(g_i) \le Nd$, and since each $g_i$ is the product of at most $N$ different $f_j$, we use our upper bound on the product of heights to get \[ \|g_i\| \le c^N \binom{dN}{d}^N \le (c(dN)^d)^N =: c'.\] Now, for each set $I \subseteq [k]$, define $g_I = \sum_{i \in I} g_i$. Each $g_I$ is an integer polynomial of degree at most $Nd$ in $m$ variables, so it is determined by its coefficients on the at most $\binom{Nd+m}{m}$ monomials of degree at most $Nd$, and each of these coefficients is an integer of absolute value at most $kc'$. The number of distinct polynomials of the form $g_I$ is therefore at most \[(2kc'+1)^{\binom{Nd+m}{m}} \le (2kc(dN)^d)^{N\binom{Nd+m}{m}} < 2^k,\] the last inequality being the hypothesis of the lemma. Since there are $2^k$ sets $I \subseteq [k]$, we have a collision – there exist distinct sets $I,I' \subseteq [k]$ such that $g_I = g_{I'}$. By removing common elements we can assume that $I$ and $I'$ are disjoint. Let $\ell \in I \cup I'$ be chosen so that $|S_{\ell}|$ is as small as possible. We can assume that $\ell \in I$. Since the set $S_{\ell}$ is realisable, there is a field $\mathbb{K}$ and there are values $u_1, \dotsc, u_m \in \mathbb{K}$ such that $S_{\ell} = \{i \in [N]: f_{i}^{\mathbb{K}}(u_1, \dotsc, u_m) \ne 0_{\mathbb{K}}\}$. So $g^{\mathbb{K}}_{\ell}(u_1, \dotsc, u_m)$, by its definition, is a product of nonzero elements of $\mathbb{K}$, so is nonzero. For each $t \in (I \cup I') \setminus \{\ell\}$, on the other hand, since $|S_t| \ge |S_\ell|$ and $S_t \ne S_\ell$ there is some $j \in S_t \setminus S_\ell$, which implies that the zero term $f^{\mathbb{K}}_j(u_1, \dotsc, u_m)$ shows up in the product $g^{\mathbb{K}}_t(u_1, \dotsc, u_m)$. It follows from these two observations that \[ 0_{\mathbb{K}} \ne g^{\mathbb{K}}_I(u_1, \dotsc, u_m) = g^{\mathbb{K}}_{I'}(u_1, \dotsc, u_m) = 0_{\mathbb{K}}, \] which is a contradiction. Why is this relevant to representable matroids? Because representing a rank-$r$ matroid $M$ with ground set $[n]$ is equivalent to finding an $r \times n$ matrix $[x_{i,j}]$ over some field, for which the $r \times r$ determinants corresponding to bases of $M$ are nonzero and the other determinants are zero. In other words, a matroid $M$ is representable if and only if the set $\mathcal{B}$ of bases of $M$ is realisable with respect to the polynomials $(f_A \colon A \in \binom{[n]}{r})$, where $f_A$ is the integer polynomial in the $rn$ variables $[x_{ij} \colon i \in [r],j \in [n]]$ that is the determinant of the $r \times r$ submatrix of $[x_{ij}]$ with column set $A$. Thus, the number of rank-$r$ representable matroids on $n$ elements is at most the number of realisable sets with respect to these $f_A$. To bound this quantity, we apply the Lemma, for which we need to understand the parameters $N,m,c,d$. Now $m = rn \le n^2$ is just the number of variables. $N = \binom{n}{r} \le 2^n$ is the number of $r \times r$ submatrices. (We can in fact assume that $N = 2^n$ and $m = n^2$ by introducing dummy polynomials and variables). Finally, since the $f_A$ are determinants, we have $\deg(f_A) = r \le n$ and $\|f_A\| = 1$ for all $A$, so $(N,m,c,d) = (2^n,n^2,1,n)$ will do. To apply the lemma, it suffices to find a $k$ for which \[ 2^k > (2kc(dN)^d)^{N\binom{Nd+m}{m}}, \] or in other words, \[k > N\binom{Nd+m}{m}\log_2(2kc(dN)^d).\] If you are happy to believe that $k = 2^{2n^3}/(2n)$ satisfies this, then you can skip the next two estimates, but for the sticklers among us, here they are: Using $(N,m,d,c) = (2^n,n^2,n,1)$ and $n \ge 10$ we have \[N\binom{Nd+m}{m} \le 2^n(2^{n+1}n)^{n^2} = 2^{n^2(n+1 + \log_2(n)) + n} \le 2^{2n^3}/6n^4.\] (Here we need that $n^3 > n^2(1+\log_2 n) + n + \log_2(6n^4)$, which holds for $n \ge 10$.)
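As a quick sanity check of that parenthetical claim (my own arithmetic, not part of the original argument): at $n = 10$ the right-hand side is \[ n^2(1+\log_2 n) + n + \log_2(6n^4) \approx 100(1 + 3.32) + 10 + 15.9 \approx 458, \] comfortably below $n^3 = 1000$; and since the left-hand side grows like $n^3$ while the right-hand side grows like $n^2\log_2 n$, the inequality only improves as $n$ increases.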
Similarly, for $k = 2^{2n^3}/(2n)$ we have $2kc < 2^{2n^3}$, so \[\log_2(2kc(dN)^d)) < \log_2(2^{2n^3}(n2^n)^n) < 2n^3 + n^2 + n \log_2 n < 3n^3.\] Combining these estimates, we see that $k = 2^{2n^3}/2n$ satisfies the hypotheses of the lemma, so this $k$ is an upper bound on the number of rank-$r$ representable matroids on $n$ elements. This is only a valid bound for each particular $r$, but that is what our extra factor of $2n$ was for; the rank $r$ can take at most $n+1$ values, so the number of representable matroids on $[n]$ in total is at most $(n+1)k < 2nk = 2^{2n^3}$. This completes the proof of the main theorem. [1] D. Mayhew, M. Newman, D. Welsh, and G. Whittle, On the asymptotic proportion of connected matroids, European J. Combin. 32 (2011), 882-890. [2] T. Brylawski and D. Kelly, Matroids and combinatorial geometries, University of North Carolina Department of Mathematics, Chapel Hill, N.C. (1980). Carolina Lecture Series [3] P. Nelson, Almost all matroids are non-representable, arXiv:1605.04288 [math.CO] [4] L. Rónyai, L. Babai and M. K. Ganapathy, On the number of zero-patterns of a sequence of polynomials, J. Amer. Math. Soc. 14 (2001), 717–735. The limits of structure theory? Posted on December 7, 2015 by Peter Nelson This post is about some conjectures on matroid minor structure theory in the most general setting possible: excluding a uniform matroid. Jim Geelen previously discussed questions in this area in [1], and most of what I write here comes from discussions with him. Whether they be for graphs or matroids, qualitative structure theorems for minor-closed classes usually fit a particular template. They make a statement that the members of a minor-closed class are 'close' to having a particular basic structure, where the structure should be something that differs the graphs/matroids in the class from arbitrary/random ones. In the case of proper minor-closed classes of graphs, this structure takes the form of embeddability on a fixed topological surface, as was famously shown by Robertson and Seymour. For matroids representable over a fixed finite field, the structure arises from classes of group-labelled graphs that embed on a fixed topological surface, classes of group-labelled graphs in general, and (when the order of the field is non-prime) classes of matroids representable over a fixed subfield. These results for finite fields have been shown in recent years by Geelen, Gerards and Whittle, and have already given us powerful tools for understanding the matroids in minor-closed classes, as demonstrated most notably in their proof of Rota's conjecture. Extending graph minors structure theory to minor-closed classes of matroids over finite fields was no mean feat, but there is real reason to believe that we can generalise it further. It seems unlikely that one could obtain very strong structural results for all minor-closed classes; for example, the class of sparse paving matroids (that is, the matroids whose girth is at least the rank and whose cogirth at least the corank) is minor-closed, and is conjectured to contain almost all matroids – intuitively, a structure theorem that allows us to 'understand' the class of sparse paving matroids seems unlikely. The sticking point here seems to be that this class contains all uniform matroids. At the moment, we think that one should be able to obtain sensible qualititative structure theorems precisely for the minor-closed classes that don't contain all the uniform matroids. 
In this post, I'll state a conjectures that makes this kind of statement. For this, we need two ingredients. We first need an idea which basic structures we expect to see as outcomes. These turn out to be pretty close to what goes on in the finite field representable case. Let $\mathcal{M}$ be a minor-closed class not containing all uniform matroids. Let $U$ be a uniform matroid not contained in $\mathcal{M}$. Without loss of generality, we may assume that $U = U_{s,2s}$ where $s \ge 4$, as every uniform matroid is a minor of a matroid of this form. Here are some examples of interesting classes that $\mathcal{M}$ could be: The class of $\mathbb{F}$-representable matroids, where $\mathbb{F}$ is some finite field over which $U$ is not representable. The class of graphic matroids. The class of bicircular matroids. (These are the 'graph-like' matroids in which each edge element is placed freely on the line between two vertex elements). The class of frame matroids (i.e. the matroids of the form $M \backslash V$, where $V$ is a basis of a matroid $M$ for which every fundamental circuit has size at most two). This class contains both the previous ones, as well as various other graph-like matroids such as those arising from group-labelled graphs. $U_{4,8}$ is not a frame matroid and therefore neither is $U$. The class of duals of frame matroids. (Or graphic/bicircular matroids) A class of matroids arising (as graphic, cographic, bicircular, group-labelled-graphic, etc…) from the class of graphs that embed on some surface. Although they are not at all trivial from a graph-theoretic perspective, classes of the last type are less rich as they all have small vertical separations. Classes of the the other types, however, contain matroids of arbitrarily high vertical connectivity. Structural statements simplify a lot when they are made about highly connected matroids, and the main conjecture I'll be stating will apply only to very highly vertically connected matroids in a given minor-closed class; the last type of class will therefore not show up. The divide between the 'highly connected' minor-closed classes and other classes is made concrete by the following conjecture. Conjecture 1. Let $\mathcal{M}$ be a minor-closed class of matroids not containing all uniform matroids. Either $\mathcal{M}$ contains the graphic matroids or their duals, $\mathcal{M}$ contains the bicircular matroids or their duals, or there is an integer $t$ so that $\mathcal{M}$ does not contain any vertically $t$-connected matroid on at least $2t$ elements. We now need to say what, according to our structural conjecture, exactly is meant by two matroids being 'close'. For minor-closed classes of graphs, 'closeness' is measured in terms of vortices, apex vertices, and clique-sums. For matroids over finite fields, the structure theory considers two matroids on the same ground set to be 'close' if they have representations that differ by a bounded-rank matrix. None of these ideas quite work for matroids in general, as we don't have representations, let alone vertices. However, there is something that will. A single-element projection of a matroid $M$ is a matroid $N$ for which there exists a matroid $L$ such that $L \backslash e = M$ and $L / e = N$. If $N$ is a single-element projection of $M$ then $N$ is a single-element lift of $M$. 
Single-element lifts and projections are dual notions that 'perturb' a matroid by a small amount to another matroid on the same ground set; for example, they change the rank and corank of any given set by at most $1$. We write $\mathrm{dist}(M,N)$ to denote the minimum number of single-element lifts and/or projections to transform $M$ into $N$. If this 'distance' is small, then $M$ and $N$ are 'close'. This will serve as the notion of closeness in our structure conjecture. (In a more general structure theory, vortices will also arise). This distance only makes sense if $M$ and $N$ have the same ground set, if this is not the case, then we say $\mathrm{dist}(M,N) = \infty$. Incidentally (and I'm looking at you, grad students), I would like an answer to the following question, which would simplify the notion of perturbation distance – I don't have a good intuition for whether it should be true or false, and either way it could be easy. Problem. Let $M$ and $N$ be matroids for which there exists a matroid $L$ that is a single-element lift of both $M$ and $N$. Does there exist a matroid $P$ that is a single-element projection of both $M$ and $N$? The Conjecture Here goes! If $\mathcal{M}$ is a minor-closed class not containing all uniform matroids, then the highly connected members of $\mathcal{M}$ should be close to being frame, co-frame, or representable. Conjecture 2: Let $\mathcal{M}$ be a minor-closed class of matroids not containing a uniform matroid $U$. There is an integer $t \ge 0$ so that, if $M$ is a vertically $t$-connected matroid in $\mathcal{M}$, then there is a matroid $N$ such that $\mathrm{dist}(M,N) \le t$ and either $N$ is a frame matroid, $N^*$ is a frame matroid, or $N$ is representable over a field $\mathbb{F}$ for which $U$ is not $\mathbb{F}$-representable. If true, this conjecture would tell us that minor-closed classes of matroids inhabit a very beautiful universe. The hypotheses are very minimal, applying to a massive variety of different $\mathcal{M}$. The conclusions, on the other hand, imply that all 'nondegenerate' members of $\mathcal{M}$ are at a small distance from a matroid $N$ that is very far from being generic. Somehow, as happens again and again in matroid theory, the 'special' classes of representable and graph-like matroids are in fact fundamental to understanding the minor order. Conjecture 2 is one of the many goals of a more general matroid structure theory; in my next post I will discuss some recent progress we have made towards it. [1] Some open problems on excluding a uniform matroid, J. Geelen, Adv. Appl. Math. 41 (2008), 628-637.
Tautomerization, acidity, basicity, and stability of cyanoform: a computational study
Shaaban A. Elroby1,2
Chemistry Central Journal volume 10, Article number: 20 (2016)
Cyanoform has long been known as one of the strongest acids. Cyanoform is only stable below −40 °C. The stability and tautomeric equilibria of cyanoform (CF) are investigated at the DFT and MP2 levels of theory. The present work presents a detailed study of structural tautomer interconversion in three different media, namely, in the gas phase, in a solvent continuum, and in a microhydrated environment where the first solvation layer is described explicitly by one or two water molecules. In all cases, the transition state has been localized and identified. Proton affinities, deprotonation energies and the Raman spectra are reported, analyzed and discussed. The 1 tautomer of cyanoform is shown to be more stable than the 2 form by only 1.8 and 14.1 kcal/mol in the gas phase at the B3LYP/6-311 ++G** and MP2/6-311 ++G** levels of theory, respectively. This energy difference is reduced to 0.7 and 13.4 kcal/mol in water as a solvent using the CPCM model at the B3LYP/6-311 ++G** and MP2/6-311 ++G** levels of theory, respectively. The potential energy barrier for this proton transfer process in the gas phase is 77.5 kcal/mol at the MP2/6-311 ++G** level of theory. NBO analysis, analysis of the electrostatic potential (ESP) of the charge distribution, donor–acceptor interactions and charge transfer interactions in 1 and 2 are performed and discussed. Gross solvent continuum effects have but negligible effect on this barrier. Inclusion of one and two water molecules to describe explicitly the first solvation layer, within the supermolecule model, lowers the barrier considerably (to 29.0 and 7.6 kcal/mol, respectively). Natural bond orbital (NBO) analysis indicated that the stability of cyanoform arises from charge delocalization. A very good agreement between experimental and theoretical data has been found at MP2/6-311 ++G** for the energies. On the other hand, the B3LYP/6-311 ++G** level of theory shows good agreement with the experimental spectra of the CF compound.
Tricyanomethane, or cyanoform, has long been known as one of the strongest acids, with pKa = −5.1 in water and 5.1 in acetonitrile [1]; however, its relative stability has been and still is a controversial subject. The molecule has previously only been identified by microwave spectroscopy in the gas phase at very low pressures [2–4]. Since the first attempt at its synthesis and isolation in 1896, numerous attempts to isolate cyanoform have been reported, but none of them were successful. Dunitz et al. reviewed these attempts and reinvestigated most of them [5]. The tautomer dicyanoketenimine (2) of tricyanomethane (1) (Scheme 1) was suggested to play a role in the stability and high acidity of 1. Structure 1 is only stable below −40 °C [6]. Its extremely high acidity was interpreted on the basis that its structure has three cyano groups attached to the CH group. The hydrogen on the central carbon is removed very easily, making 1 a strong acid and demonstrating a fundamental rule of carbon acids. The rule describes how electron-withdrawing groups attached to a central hydrogen-bearing carbon pull on that carbon's electrons.
Tautomeric forms of cyanoform, 1 and 2
The stability and structure of 1 in the gas phase were investigated by quantum chemical calculations [7–13]. Results of these computational studies revealed that 1 is more stable than 2 by about 7–10 kcal/mol in the gas phase.
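For a rough sense of what a gas-phase gap of this size means for the tautomer populations, the short sketch below (my own illustration, not part of the paper's workflow) assumes a simple two-state Boltzmann model and uses the energy difference as a stand-in for the free energy difference at 298 K.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def two_state_ratio(delta_kcal_per_mol, temperature_kelvin=298.15):
    """Boltzmann population ratio [2]/[1] for an energy gap favouring tautomer 1."""
    return math.exp(-delta_kcal_per_mol / (R * temperature_kelvin))

# 7-10 kcal/mol is the range quoted above; 14.1 kcal/mol is the MP2 gas-phase value.
for gap in (7.0, 10.0, 14.1):
    print(f"gap = {gap:4.1f} kcal/mol  ->  [2]/[1] ~ {two_state_ratio(gap):.1e}")
```

Even at the low end of the quoted range the minor tautomer is predicted to be present only at the parts-per-million level, which is consistent with the very small equilibrium constant reported later in the paper.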
In the present work, the issue of the stability and tautomeric equilibria of 1 are revisited. Computations at high level of theory and in the gas as well as in solution are performed. Water-assisted proton transfer is investigated for the first time where transition states, a barrier energies and thermodynamic parameters are computed. The ground state geometries, proton affinities, deprotonation energies and the Raman spectra are reported. NBO analysis of the charge distribution, donor–acceptor interactions and charge transfer interactions in 1 and 2 are performed and discussed. Computational methods All quantum chemical calculations are carried out using the Gaussian 09 [14] suite of programs. Full geometry optimizations for each and every species studied have been carried out using two DFT functionals namely, the B3LYP [15–17], and MP2 [18–20] methods using the 6-311 ++G** basis set. The frequency calculations carried out confirm that all the optimized structures correspond to true minima as no negative vibration frequency was observed. Number of imaginary frequencies are zero for minima and one for transition states. Zero point energy (ZPE) was enclosed in all energetic data. Among all DFT methods, B3LYP often gives geometries and vibration frequencies, which are closest to those obtained from the MP2 method. Natural bond orbital (NBO) population analysis on optimized structures is accomplished at the B3LYP/6-311 ++G** level [21]. NBO calculations were performed using NBO 5.0 program as implemented in the gaussian 09 W package. The effect of solvent (water) is taken in consider using the self-consistent reaction field polarisable continuum model (SCRF/PCM) and SMD models [22–24]. Results were visualized using chemcraft program [25]. Figure 1 displays the fully optimized structure of 1, TS, and 2. These structures represent the global minima on the respective potential energy surfaces computed at two different levels of theory, namely, B3LYP and MP2/6-311 ++G**. The two theoretical models gave very comparable geometries. 1 is highly symmetric tetrahedral structure with all C–C–C 110.9o and the C–C-H angle 108.0°. That is the central carbon atom assumes a typical sp3 hybridization scheme. Tautomer 2, on the other hand, is planar having the central carbon atom assuming an sp2 hybridization scheme with C–C–C angles of 120o. The hydrogen atom in 2 form is tilted out of the molecular plane by an angle of 53o. The two tautomers (1 and 2) show also some minor structure variations reflected in the shortening of the C–C and slight elongation of the C-N bond lengths upon going from 1 to 2. Figure 1 displays also the net charges on each atom of 1 and 2. It can be easily noticed that the C-N–H moiety is highly polarized with a considerable charge (0.538, −0.516 and 0.408e, on the C, N and H, respectively) separation. This charge separation is much greater than that observed for the 1 tautomer (0.289 and −0.480 on the C and N, respectively). Optimized structures of CF-CH, TS and CF-NH structures obtained at the B3LYP/6-311 ++G** level. Bond length is in Angstrom, charge distribution is natural charge Due to the 1 → 2 intramolecular-proton transfer, a number of structural parameters of the 1 form have changed. Going from the 1 to the 2 tautomer, the C–C bonds length decreases from 1.475 to 1.430 and 1.342 Å, whereas the C–N bond length enlarges from 1.175 to 1.178 Å. In the optimized geometry of the TS, breaking of the C–H1 bond together with the formation of N8–H1 bond is clear. 
In 1 tautomer, The C1–H1 and C–C distances vary from 1.098 and 1.474 Å for the 1 tautomer to 1.862 and 1.426 Å for the TS, respectively. The N1–H1 is 1.539Å in TS. This distance is 1.019 Å for the 2 tautomer. The analysis of the normal modes of TS imaginary frequencies (−1588.00) revealed the displacements of N6–H2 and C1–H2 bond lengths of 1. Tautomerization 1⇄2 Proton transfer reactions are very important in chemistry and biology as it underlie several technological and biological processes. Some investigations [6] have suggested that the tautomeric form 2 may exist and underlies the strong acidity of cyanoform. In the present section, the possibility of 1, 3 proton transfer in 1 will be explored. Table 1 compares the relative energies of the two tautomers 1 and 2 computed at two different level of theory. The two methods indicated that the 1 form is more stable than 2 form by 14.1 and, 1.8 kcal/mol, at the MP2/6-311 ++G** and B3LYP/6-311 ++G** levels of theory in the gas phase, respectively. It seems that B3LYP is not able to account for some stabilizing interactions in 1 in particular electron correlations which is well accounted by MP2 calculations. Table 1 Total and relative energies for the studied species using two methods (B3LYP and MP2) at 6-311 ++G** basis set in the gas phase and in the solution Table 1 compiles also relative energies in water as a solvent computed using the solvent continuum model CPCM, where the 1 tautomer is found to be the more stable. Solvent dielectric constant seems to have marked effect on the stability of 1. This is in agreement with a previous experimental study [6]. The lower relative stability of the 2 tautomer may be due to the close proximity of the lone pairs of electrons on the N8 atom and the adjacent triple bond in 2 forms, in 2 form H–N–C angle is bent. On the other hand, the lone pairs of electrons on all N atoms in 1 tautomer are projected in opposite directions collinear with triple bonds. This will minimize the repulsive force in the 1 tautomer as compared to that in the 2. The 1, 3 proton transfer process takes place via the transfer of the H atom from the central carbon atom to N8. We have been able to localize and identify the transition state (TS) for this process, which is displayed in Fig. 1. Some selected structural parameters of the TS are collected together with the corresponding values for 1 and 2 tautomers for comparison (Additional file 1: Tables 1S and 2S and Figure 1S. The barrier energy computed for this tautomerization reaction is 68.7 and 74.4 kcal/mol at B3LYP/6-311 ++G** and MP2/6-311 ++G** level of theory in the gas phase, respectively. In the present work, results generated by DFT and MP2 methods at 6-311 ++G** basis set, barrier energy (Ea) of the 1 and 2 tautomerism in aqueous solution is 68.4 and 77.5 kcal/mol, respectively. This high energy barrier seems to indicate that this reaction is not feasible at room temperature. Solvent dielectric continuum seems to have but little effect on this barrier; in fact, it reduced it by less than 1 % (see Fig. 2). The barriers energy for the proton-transfer process of 1 assisted by one and two water molecule, with and without PCM–Water. 
Energies are in kcal/mol at the MP2 method at basis set 6-311 ++G** Considering the equilibrium between the 1 and 2 tautomers, the value of the tautomeric equilibrium constant (K) is calculated by using $$\text{K}={{\text{e}}^{-\Delta \text{G}/\text{RT}}}$$ where ΔG, R and T are the Gibbs free energy difference between the two tautomers, the gas constant and temperature, respectively. The Gibbs free energy difference between the tautomers is in favor of the 1 tautomer by 13.0 kcal/mol using MP2/6-311 ++G** level of theory. By using the Eq. (1), K equal about 3.14 × 10−10. To calculate the relative free energies of two tautomers, 1 and 2, in water solution, (ΔG 1−2 )sol we use a simple energy cycle of scheme 2: An energy cycle used to calculate relative free energies of tautomers in water solution $$\left( \Delta \text{G}\mathbf{1}-\mathbf{2} \right)\text{sol}=-\Delta \text{Gsol1}+\left( \Delta \text{G}\mathbf{1}-\mathbf{2} \right)\text{gas}+\Delta \text{Gsol}\mathbf{2}$$ where (ΔG 1−2 )gas is the free energy difference between 1 and 2 in the gas phase and ΔGsol 1 and ΔGsol 2 are the free energies of solvation of 1 and 2, respectively. The calculated relative energy and relative free energy of two tautomers in the water solution are presented in Table 2. The 1 form is the most stable tautomer than 2 by relative energy and free energy. The relative free energy between 1 and 2 tautomers are 26.8 and 26.4 kcal/mol using the SMD and CPCM models, respectively. The 2 tautomer is less stable than 1 by 14.6 and 14.1 kcal/mol using the SMD and CPCM solvation models, respectively. Table 2 The relative energies and relative free energies for the two tautomer's using SMD and CPCM models at MP2/6-311 ++G** level of theory in water solution Water-assisted proton transfer The structure computed in the gas-phase for TS (Fig. 3) reveals the formation of a triangular 4-membered ring. The high energy and relative instability of this TS is associated with the large strain in this triangular ring. In solution, however, one way to relief this strain is to incorporate one or more water molecules in the formation of the transition state. We have examined the possibility of water-assisted proton transfer for the studied tautomerization reaction using MP2/6-311 ++G** level of theory. We have incorporate one and two water molecules. The TS's so obtained are displayed in Fig. 3 and the corresponding energy quantities are compiled in Table 1. The presence of one water molecule in the structure of the transition state considerably relief the ring strain and stabilize it considerably to lie at only 29.6 kcal/mol above the 1 form as shown in Fig. 2. The incorporation of two water molecules, stabilize TS reflecting the stability associated with 8-membered ring formed. The barrier energy with two water molecules is about 7.6 kcal/mol. The energy profile presented in Fig. 2 shows that the most important difference between the prototropic tautomerism of dihydrated species and the isolated compound is associated with the activation barriers, which become almost ten times or even less than ten times of those obtained for the isolated compound; this is a well-known phenomenon [26–32]. Thermodynamics of tautomerization of 1, Table 3 compiles the computed thermodynamic parameters at room temperature and at −40 °C.; at this temperature 1 is known to be stable [6]. 
Entropies, and enthalpies increase on going from 260 to 300 K, this may be attributed to the fact that intensities of molecular vibration increase with increasing temperature. The enthalpy change (∆H) and the entropy change (∆S) for the reaction are also obtained and listed in Table 3. For the tautomerization of cyanoform 1 to 2, ∆S is negative while the ∆H is positive at both 260 and 300 K. That is, the proton transfer in cyanoform is an endothermic process. The change in Gibbs free energy (∆G) at two different temperatures was also obtained, and is shown in Table 3. ∆G at 260 K is positive, which demonstrates that the formation process of the CF- NH is not spontaneous. Optimized structures, of two (left) and one (right) water-assisted transition states for the tautomerization of cyanoform computed at MP2/6-311 ++G** level of theory Table 3 Thermal energy parameters for the studied species using B3LYP/6-311 ++G** level of theory in solution at 260 and 300 K Protonation and deprotonation The proton affinity (PA) values help in understanding fragmentation patterns in mass spectroscopy influenced by protonation and other proton transfer reactions, the basicity of molecules and susceptibility toward electrophilic substitution. Knowledge of preferred site of protonation is also of significance for structure elucidation of polyfunctional molecules [33]. For each protonation and deprotonation site, the structure with the lowest energy was identified as the most stable and with respect to this, the relative energies are calculated. The variation in geometrical parameters on CH-deprotonation and N-protonation at the B3LYP/6-311 ++G** level theory are displayed in Fig. 4. The analysis of variation in geometrical parameters as a result of protonation of the N in 1, indicates elongation for adjacent C–C bond to protonated N atom along with compression of C–N bond. The protonation energy, ΔEprot, was calculated as follows: ΔEprot = E +AH −EA (where E AH+ is the energy of cationic acid (protonated form) and E A is the energy of the neutral form). By the same equation, the deprotonation energy, DP, was calculated using ΔEDP = E −A —EA (where E −A is the energy of anion (deprotonated form) and E A is the energy of the neutral form. The proton affinities for 1 sites at B3LYP/6-311 ++G** in the gas phase are higher than the values evaluated in solution using PCM method while vice versa is observed for the deprotonation (DP) of the C-H bond. Table 1 compiles the deprotonation and protonation energies of the studied species, obtained at the B3LYP/6-311 ++G** and MP2/6-311 ++G** level of theory. The deprotonation energies of the CH bond in the gas phase and in the solution are 303.7 and 272.0 kcal/mol at MP2 method, respectively, i.e. the CH bond is characterized by a strong acidity (1156 kJ/mol) which is sensibly higher than that of NH bonds in formamide (1500 kJ/mol), N-methylformamide (1510 kJ/mol) or N-methylacetamide (1514 kJ/mol) [34]. The reason for this high acidity is probably a strong delocalization of the negative charge over three cyano groups around CH bond. Optimized structures of deprontaed and protonation species of 1 obtained at the B3LYP/6-311 ++G** level of theory. Bond length is in Angstrom Vibration Raman spectrum analysis The experimental [6] and theoretically predicted FT-Raman spectra (intensities) for 1 are represented in Fig. 5 and detailed band information is summarized Table 4. 
FT-Raman spectrum were calculated by the two methods, DFT B3LYP and MP2 using two basis sets, namely 6-311 ++G** and aug-cc-pVQZ, and the frequency was scaled by 0.96 [35]. Calculated Raman frequencies (cm−1) (a) 1 and (b) 2 calculated at B3LYP/6-311 ++G** level of theory in the gas phase. Values were scaled by an empirical of 0.96 Table 4 Observed [6] and calculated Raman frequencies (cm−1) (scaled by an empirical factor of 0.96) for 1 using B3LYP and MP2 methods at two basis sets 6-311 ++G** and aug-cc-pVQZ The Raman spectrum of cyanoform was reported recently by Theresa Soltner et al. [6]. Comparison of the of the theoretically computed frequencies and those observed experimentally shows a very good agreement especially with B3LYP/aug-cc-pVQZ level of theory. Most intensive band in Raman spectra, obtained experimentally was observed at 2287 cm−1 occurred in calculated spectra at 2288, 2292 and 2316 cm−1 in B3LYP/6-311 ++G**, B3LYP/aug-cc-pVQZ and PBE1PBE/6-311G(3df, 3dp) [6] level of theory, respectively. MP2 simulated spectra were found have less vibrational band deviation and missing one band from the observed spectrum for the studied molecule, as shown in Fig. 6 and Table 4. It is interesting to note that, the C–H asymmetric stretching vibrations is observed experimentally at 2259 cm−1 and predicted theoretically at 2098 and 2093 cm−1 using the MP2/6-311 ++G** and MP2/aug-cc-pVQZ level of theory, respectively, in weak agreement. DFT functionals show a good prediction spectra of nitriles and their anions [36–40]. The HOMO and LUMO frontier orbitals of the 1 and 2 tautomers. (The Isovalue = 0.05) using B3LYP/6-311 ++G** level of theory It should be noted that the B3LYP at the two basis sets gave good band position evaluation, e.g. band appeared at 2285 cm−1 (obs), 2895 cm−1 (6-311 ++G**) and 2894 cm−1 (aug-cc-pVQZ). As it can be seen from Table 4, the theoretically calculated values at 2897 and 1228 cm−1 showed excellent agreement with the experimental values. The C–H stretching vibrations is observed experimentally at 2885 cm−1 and predicted theoretically at 2895 and 2894 cm−1 using the 6-311 ++G** and aug-cc-pVQZ basis sets, respectively, in excellent agreement. The γ(C-N) stretching is predicted theoretically at 2288 cm−1 using 6-311 ++G** basis set in a very good agreement with the experimental observed Raman line at 2287 cm−1. No bands for C=C or C=N stretching vibrations are observed in FT-Raman of 1. The absence of any band in the 1500–1900 range confirms that the stable form for the studied molecule is 1 tautomer. Full assignment of Raman spectrum of 1 tautomer is given in Table 4. NBO analysis NBO analysis has been performed on the molecule at the MP2 and B3LYP/6-311 ++G** level of theory in order to elucidate the intra molecular, hybridization and delocalization of electron density within the studied molecule, which are presented in Table 5. Table 5 Second order perturbation energy (E(2)) in NBO basis for 1 using B3LYP and MP2 methods at 6-311 ++G** basis set Natural bond orbital (NBO) [41, 42] analysis gives information about interactions in both filled and virtual orbital spaces that could help to have a detailed analysis of intra and intermolecular interactions. The second order Fock matrix was carried out to evaluate the donor–acceptor interactions in the NBO analysis [43]. 
For each donor NBO (i) and acceptor NBO (j), the stabilization energy associated with i–j delocalization can be estimated as, $${{\text{E}}^{(2)}}=\text{ }\Delta {{\text{E}}_{\text{ij}}}=\text{qi}=\text{F}{{\left( \text{i},\text{j} \right)}^{2}}/{{\varepsilon }_{\text{i}}}{{\varepsilon }_{\text{j}}}$$ where qi is the donor orbital occupancy, ɛi, ɛj are diagonal elements (orbital energies) and F(i,j) is the off-diagonal NBO Fock matrix clement. The stabilization of a molecular system arises due to overlapping of orbital between bonding and anti-bonding which sequels in an intramolecular charge transfer (ICT). In Table 5 the perturbation energies of significant donor–acceptor interactions are comparatively presented for 1 and 2 forms. The larger the E(2) value, the intense is the interaction between electron donors and electron acceptors. The NBO results show that the specific lone pairs of N atoms with σ∗ of the C–C bonds interactions are the most important interactions in 1 and CF_NH, respectively. In 1, the interactions initiated by the donor NBOs like σC1–C2, σC3–C4, πN–C and NBOs due to lone pairs of N atoms are giving substantial stabilization to the structures in the both MP2 and B3LYP methods. Above all, the interaction between lone pairs namely, N6, N7 and N8 is giving the most possible stabilization to 1 since it has the most E(2) value around 12.81 and 11.5 kcal/mole in 2. The other interaction energy in the 1 and 2 is π electron donating from π (C3–N6)−π*(C1–C3), π(C3–N6)−π*(C1–H2), π(C4–N7)−π*(C1–C4), and π (C5–N8)−π*(C1–C5) resulting stabilization energy of about 5.62, 2.76, 5.69 and 5.89 kcal/mol, respectively. The present study at the two methods (MP2 and B3LYP), shows clearly that the electron density of conjugated triple bond of cyano groups exhibits strong delocalization. The NBO analysis has revealed that the lone pairs of N atoms and C–C, C–H and C–N bonds interactions give the strongest stabilization to both of the 1 and 2 with an average value of 12.5 kcal/mole. The 3D-distribution map for the highest-occupied-molecular orbital (HOMO) and the lowest-unoccupied-molecular orbital (LUMO) of the 1 and 2 tautomers are shown in Fig. 6. As seen, the HOMO is mainly localized on the cyano groups; while, the LUMO is mainly localized on the CC bonds. The energy difference between the HOMO and LUMO frontier orbitals is one of the most important characteristics of molecules, which has a determining role in such cases as electric properties, electronic spectra, and photochemical reactions. The gap energy (HOMO–LUMO) is equal to 9.00 and 5.40 eV for the 1 and 2 tautomers, respectively. The large energy gap for 1 tautomer implies that structure of the cyanoform is more stable. A comparative study of two different theoretical methods was performed on the cyanoform to obtain the highest accuracy possible and more reliable structures. Despite the B3LYP and MP2 methods affording good results which provide a better picture of the geometry and spectra and energetics, respectively, both in the gas phase and in a water solution (PCM–water). At all levels of theory used, the 1 form is predicted to be more stable than its 2 form, both in the gas phase and in solution. The potential energy barrier for this proton transfer process in the gas phase is 77.5 kcal/mol using MP2/6-311 ++G** level of theory. Gross solvent continuum effects have negligible effect on this barrier. 
Inclusion of one and two water molecules to describe explicitly the first solvation layer, within the supermolecule model, lowers the barrier considerably (29.1 and 7.6 kcal/mol). There is good correspondence between the DFT-predicted and experimentally reported Raman frequencies, confirming suitability of optimized geometry for the 1 as the most stable conformer of the cyanoform. This conformation is characterized also by larger HOMO–LUMO gap of 9.00 eV further confirming its marked stability. The NBO analysis has revealed that the lone pairs of N atoms and C–C, C–H and C–N bonds interactions give the strongest stabilization to both of the 1 and 2 with an average value of 12.5 kcal/mol. Raamat E, Kaupmees K, Ovsjannikov G, Trummal A, Ktt A, Saame J, Koppel I, Kaljurand I, Lipping L, Rodima T, Pihl V, Koppel A, Leito I (2013) Acidities of strong neutral Brønsted acids in different media. J Phys Org Chem 26:162–170 Boyd RH (1963) Cyanocarbon chemistry. XXIII. The ionization behavior of cyanocarbon acids. J Phys Chem 67(4):737–774 Bak B, Scanholt H (1977) The existence of gaseous cyanoform as observed by microwave spectra. J Mol Struct 37:153–156 Schmidtmann H (1896) Ueber einige Derivate des Malonitrils. Ber Dtsch Chem Ges 29:1168–1175 Sisak D, McCusker LB, Buckl A, Wuitschik G, Wu YL, Schweizer W, Dunitz JD (2010) The search for tricyanomethane (cyanoform). Chem Eur J 16:7224–7230 Soltner T, Jonas H, Andreas JK (2015) The existence of tricyanomethane. Angew Chem Int Ed 54:1–3 Clark T, Chandrasekhar J, Spitznagel GW, Schleyer PVR (1983) Efficient diffuse function-augmented basis sets for anion calculations. III. The 3-21 + G basis set for first-row elements, Li–F. J Comput Chem 4:294–301 Krishnan R, Binkley JS, Seeger R, Pople JA (1980) Selfconsistent molecular orbital methods. XX. A basis set for correlated wave functions. J Chem Phys 72:650–654 McLean D, Chandler GS (1980) Contracted gaussian basis sets for molecular calculations. I. Second row atoms, Z = 11–18. J Chem Phys 72:5639–5648 Perdew JP, Burke K, Ernzerhof M (1996) Generalized gradient approximation made simple. Phys Rev Lett 77:3865–3868 Csszr P, Pulay P (1984) Geometry optimization by direct inversion in the iterative subspace. J Mol Struct 114:31–34 Brand H, Liebman JF, Schulz A (2008) Cyano-, nitro- and nitrosomethane derivatives: structures and gas-phase acidities. Eur J Org Chem 2008:4665–4675 Trofimenko S, Little EL (1963) Dicyanoketenimine (cyanoform). J Org Chem 28:217–218 Frisch MJ, Trucks GW, Schlegel HB, et al (2009) Gaussian Inc. Revision A.7. Pittsburgh Becke AD (1996) Density-functional thermochemistry. IV. A new dynamical correlation functional and implications for exact-exchange mixing. J Chem Phys 104:1040–1046 Becke AD (1997) Density-functional thermochemistry. V. Systematic optimization of exchange-correlation functionals. J Chem Phys 107:8554–8560 Saebo S, Almlof J (1989) Avoiding the integral storage bottleneck in LCAO calculations of electron correlation. Chem Phys Lett 154:83–89 Chong DP (1997) Recent advances in density functional methods. World Scientific, Singapore (Parts I and II) Barone V, Bencini A (1999) Recent advances in density functional methods. World Scientific, Singapore (Parts III) Ess DH, Houk KN (2005) Activation energies of pericyclic reactions: performance of DFT, MP2, and CBS-QB3 methods for the prediction of activation barriers and reaction energetics of 1,3-dipolar cycloadditions, and revised activation enthalpies for a standard set of hydrocarbon pericyclic reactions. 
J Phys Chem A 109:9542–9553 Glendening ED, Reed AE, Weinhold F, NBO Version 3.1, Carpenter JE Miertos S, Scrocco E, Tomasi J (1981) Electrostatic interaction of a solute with a continuum. A direct utilization of ab initio molecular potentials for the prevision of solvent effects. Chem Phys 55:117–229 Miertos S, Tomasi J (1982) Approximate evaluations of the electrostatic free energy and internal energy changes in solution processes. Chem Phys 65:239–245 Marenich AV, Cramer CJ, Truhlar DG (2009) Universal solvation model based on solute electron density and a continuum model of the solvent defined by the bulk dielectric constant and atomic surface tensions. J Phys Chem B 113:6378–6396 Barone V, Adamo C (1995) Density functional study of intrinsic and environmental effects in the tautomeric equilibrium of 2-pyridone. J Phys Chem 99:15062–15068 Gorb L, Leszczynski J (1998) Intramolecular proton transfer in mono- and dihydrated tautomers of guanine: an ab initio post Hartree–Fock Study. J Am Chem Soc 120:5024–5032 Alkorta I, Elguero J (1998) 1,2-Proton shifts in pyrazole and related systems: a computational study of [1,5]-sigmatropic migrations of hydrogen and related phenomena. J Chem Soc Perkin Trans 2:2497–2504 Alkorta I, Rozas I, Elguero J (1998) A computational approach to intermolecular proton transfer in the solid state: assistance by proton acceptor molecules. J Chem Soc Perkin Trans 2:2671–2676 Balta B, Aviyente V (2004) Solvent effects on glycine II. Water-assisted tautomerization. J Comput Chem 25:690–703 Enchev V, Markova M, Angelova S (2007) Prototropic tautomerism in aqueous solution: combined and discrete/SCRF models. Chem Phys Res J 1:1–36 Markova N, Pejov L, Enchev V (2015) A hybrid statistical mechanics—quantum chemical model for proton transfer in 5-azauracil and 6-azauracil in water solution. Int J Quantum Chem 115:477–485 Damanjit K, Rupinder PK, Ruchi K (2009) Correlation between proton affinity and conjugation effects in carbamic acid and its higher chalcogenide analogs. J Mol Struct Theochem 9139:90–96 Mautner M (1988) Models for strong interactions in proteins and enzymes. 1. Enhanced acidities of principal biological hydrogen donors. J Am Chem Soc 110:3071 Tsenov J, Stoyanov SS, Binev I (2008) IR spectral and structural changes, caused by the conversion of 4-cyanobenzamide into azanion: a combined experimental/computational approach. Bulg Chem Comm 40:520–525 Alecu IM, Zheng J, Zhao Y, Truhlar DG (2010) Computational thermochemistry: scale factor databases and scale factors for vibrational frequencies obtained from electronic model chemistries. J Chem Theory Comput 6:2872–2887 Stoyanov SS, Popova A, Tsenov J (2008) IR spectra and structure of 3,5,5-trimethyl(cyclohex-2-enylidene) malononitrile and its potassium cyanide and sodium methoxide carbanionic adducts: experimental and b3lyp studies. Bulg Chem Comm 40:538–545 Stoyanov SS, Tsenov JA, Yancheva DY (2012) IR spectra and structure of 2-{5,5-dimethyl-3-[(2-phenyl)vinyl]cyclohex-2-enylidene}-malononitrile and its potassium cyanide and sodium methoxide carbanionic adducts: experimental and B3LYP theoretical studies. J Mol Struct 1009:42–48 Stoyanov SS (2010) Scaling of computed cyano-stretching frequencies and IR intensities of nitriles, their anions, and radicals. J Phys Chem A 114:5149–5161 Tsenov J, Stoyanov SS, Binev I (2005) Experimental and computational studies on the IR spectra and structures of the free tricyanomethanide carbanion and its potassium ion-pair. 
Bulg Chem Comm 37:361 Weinhold F, Landis CR (2005) Valency and bonding: a natural bond orbital donor-acceptor perspective. Cambridge University Press, Cambridge Weinhold F (1998) Natural bond orbital methods. In: Schleyer PVR, Allinger NL, Clark T, Gasteiger J, Kollman PA, Schaefe HF III, Schreiner PR (eds) Encyclopedia of computational chemistry, vol 3. Wiley, Chichester, UK, pp 1792–1811 Markova N, Pejov L, Enchev V (2015) A hybrid statistical mechanics—quantum chemical model for proton transfer in 5-azauracil and 6-azauracil in water solution. Int J Quant Chem 115:477–485 Zhurko GA, Zhurko DA (2009) Chemcraft program, Academic version 1.8. http://www.chemcraftprog.com The author would like to thank Prof Rifaat H. Hilal for the valuable discussions. The author declares that he has no competing interests. Chemistry Department, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah, 21589, Saudi Arabia Shaaban A. Elroby Chemistry Department, Faculty of Science, Beni-Suef University, Beni-Suef, 62511, Egypt Correspondence to Shaaban A. Elroby. Additional file 1. Selected structural parameters. Elroby, S.A. Tautomerization, acidity, basicity, and stability of cyanoform: a computational study. Chemistry Central Journal 10, 20 (2016). https://doi.org/10.1186/s13065-016-0166-z Cyanoform Tautomerization B3LYP Raman spectra
Kan extensions and the calculus of modules for $\infty$–categories
Emily Riehl, Dominic Verity
Algebr. Geom. Topol. 17(1): 189–271 (2017). DOI: 10.2140/agt.2017.17.189
Various models of (∞,1)–categories, including quasi-categories, complete Segal spaces, Segal categories, and naturally marked simplicial sets, can be considered as the objects of an ∞–cosmos. In a generic ∞–cosmos, whose objects we call ∞–categories, we introduce modules (also called profunctors or correspondences) between ∞–categories, incarnated as spans of suitably defined fibrations with groupoidal fibers. As the name suggests, a module from A to B is an ∞–category equipped with a left action of A and a right action of B, in a suitable sense. Applying the fibrational form of the Yoneda lemma, we develop a general calculus of modules, proving that they naturally assemble into a multicategory-like structure called a virtual equipment, which is known to be a robust setting in which to develop formal category theory. Using the calculus of modules, it is straightforward to define and study pointwise Kan extensions, which we relate, in the case of cartesian closed ∞–cosmoi, to limits and colimits of diagrams valued in an ∞–category, as introduced in previous work.
Received: 25 October 2015; Revised: 15 May 2016; Accepted: 22 May 2016; Published: 2017
Keywords: $\infty$–categories, modules, pointwise Kan extension, profunctors, virtual equipment
Rights: Copyright © 2017 Mathematical Sciences Publishers
Applications of infinite Ramsey's Theorem (on N)?

Finite Ramsey's theorem is a very important combinatorial tool that is often used in mathematics. The infinite version of Ramsey's theorem (Ramsey's theorem for colorings of tuples of natural numbers) also seems to be a very basic and powerful tool, but it is apparently not as widely used. I searched the literature for applications of infinite Ramsey's theorem and only found straightforward generalizations of statements that follow from finite Ramsey's theorem (example: Erdos-Szekeres ~> every infinite sequence of reals contains a monotonic subsequence) and some other basic combinatorial applications, Ramsey factorization for $\omega$-words, and the original applications of Ramsey to Logic. Where else is infinite Ramsey's theorem used? Especially, are there applications to analysis?

co.combinatorics ramsey-theory applications
alexod

Is there a real difference between the infinite Ramsey theorem and the general finite Ramsey theorem for an arbitrarily large number of colors? That the infinite version implies the finite one is easy. I am under the impression that we can get the former from the latter by some compactness argument. – abcdxyz Mar 28 '10 at 7:52

@Tran The compactness argument only proves the finite version of Ramsey's theorem from the infinite one. The infinite Ramsey's theorem is proof-theoretically stronger than the finite version. For instance finite Ramsey is provable in PA, whereas infinite Ramsey implies the Paris-Harrington variant, which is not provable there. – alexod May 8 '10 at 8:50

The following fact has been called "Ramsey's Theorem for Analysts" by H. P. Rosenthal.

Theorem. Let $(a_{i,j})_{i,j=0}^\infty$ be an infinite matrix of real numbers such that $a_i = {\displaystyle\lim_{j\to\infty} a_{i,j}}$ exists for each $i$ and $a = {\displaystyle\lim_{i\to\infty} a_i}$ exists too. Then there is an infinite sequence $k(0) < k(1) < k(2) < \cdots$ such that $a = {\displaystyle\lim_{i<j} a_{k(i),k(j)}}$.

The last limit means that for every $\varepsilon > 0$ there is an $n$ such that $n < i < j$ implies $|a-a_{k(i),k(j)}| < \varepsilon$. When the matrix is symmetric and ${\displaystyle\lim_{i\to\infty} a_{k(i),k(i)}} = a$ too, this is just an ordinary double limit. The proof is a straightforward application of the two-dimensional Ramsey's Theorem. The obvious higher dimensional generalizations are also true and they can be established in the same way using the corresponding higher dimensional Ramsey's Theorem. These are used to construct "spreading models" in Banach Space Theory.
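To make the combinatorial engine behind applications like this concrete, here is a small Python sketch (my own illustration, not part of the thread) of the standard greedy argument for Ramsey's theorem for pairs: repeatedly pick a pivot, keep the colour class it sees most often, and finish with a pigeonhole over the recorded colours. On a finite universe of size M it produces a monochromatic set of size roughly (1/2)·log2(M); on an infinite universe the same idea, keeping an infinite colour class at each step, gives the infinite monochromatic set.

```python
def monochromatic_subset(colour, universe):
    """Greedy pre-homogeneous construction behind Ramsey's theorem for pairs.

    colour(i, j) must return 0 or 1 for i < j.  Returns a set H all of whose
    pairs receive the same colour.
    """
    pivots = []                      # (pivot, colour it sees on the kept tail)
    rest = list(universe)
    while rest:
        p, tail = rest[0], rest[1:]
        same = [x for x in tail if colour(p, x) == 0]
        diff = [x for x in tail if colour(p, x) == 1]
        if len(same) >= len(diff):
            pivots.append((p, 0))
            rest = same
        else:
            pivots.append((p, 1))
            rest = diff
    for c in (0, 1):                 # pigeonhole on the recorded colours
        H = [p for p, tag in pivots if tag == c]
        if 2 * len(H) >= len(pivots):
            return set(H)

# Example: colour a pair {i, j} (i < j) by whether x_i < x_j; a monochromatic
# set is then the index set of a monotone subsequence (Erdős–Szekeres flavour).
x = [5.0, 2.0, 7.0, 1.0, 8.0, 3.0, 9.0, 4.0, 6.0, 0.0]
H = sorted(monochromatic_subset(lambda i, j: 0 if x[i] < x[j] else 1, range(len(x))))
print(H, [x[i] for i in H])
```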
Beyond the infinite Ramsey's theorem on N, there is, of course, a kind of super-infinite extension of it to the concept of Ramsey cardinals, one of many large cardinal concepts. Most of the large cardinal concepts, including Ramsey cardinals, generalize various mathematical properties of the countably infinite cardinal ω to uncountable cardinals. For example, an uncountable cardinal κ is a Ramsey cardinal if every coloring of finite subsets of κ into 2 colors (or indeed, less than κ many colors) admits a homogeneous set of size κ. Such cardinals are necessarily inaccessible, Mahlo, and much more.

The somewhat weaker property, that every coloring of pairs (or for any fixed finite size) from κ to 2 colors has a homogeneous set, is equivalent to κ being weakly compact, a provably weaker notion, since every Ramsey cardinal is a limit of weakly compact cardinals. Similarly, the concept of measurable cardinals generalizes the existence of ultrafilters on ω, for an uncountable cardinal κ is said to be a measurable cardinal if there is a nonprincipal κ-complete ultrafilter on κ.

Ramsey cardinals figure in many arguments in set theory. For example, if there is a Ramsey cardinal, then V is not L, and Ramsey cardinals are regarded as a natural large cardinal notion just exceeding the V=L boundary. Another prominent result is the fact that every measurable cardinal is Ramsey (which is not obvious from first notions). Further, if there is a Ramsey cardinal, then 0# exists. Indeed, this latter argument proceeds as a pure Ramsey style argument, using a coloring. Namely, if κ is Ramsey, then we may color every finite increasing sequence of ordinals with the type that they realize in L. By the Ramsey property, there must be a set of size κ, all of whose increasing finite subsequences realize the same type. That is, there is a large class of order indiscernibles for L. By results of Silver, this is equivalent to the assertion that 0# exists.

The fact that Ramsey cardinals are strictly stronger than weakly compact cardinals suggests to my mind that there is something fundamentally more powerful about finding homogeneous sets for colorings of all finite subsets than just for pairs or for subsets of some fixed size. This difference is not revealed at ω, for which both are true by the infinite Ramsey theorem. But perhaps it suggests that we will get more power from Ramsey by using the more powerful colorings, since this is provably the case for higher cardinals.

Another point investigated by set theorists is that finding homogeneous sets in the case of infinite exponents---that is, coloring infinite subsets---is known to be inconsistent with the axiom of choice. However, in models of set theory where the Axiom of Choice fails, these infinitary Ramsey cardinals are fruitfully investigated. For example, under the Axiom of Determinacy, there are a great number of cardinals realizing an infinite exponent partition relation.

– Joel David Hamkins

$\begingroup$ You write amazingly nice answers systematically. Thanks! $\endgroup$ – Mariano Suárez-Álvarez Jan 19 '10 at 2:22

$\begingroup$ Thanks very much for your kind remarks. I'm also very impressed by the range of your expertise, based on your numerous comments on diverse topics. $\endgroup$ – Joel David Hamkins Jan 19 '10 at 2:40

Fred Galvin found the following corollary to Hindman's theorem. There are infinitely many natural numbers, so that any finite sum of them has an odd number of prime factors. Indeed, decompose the natural numbers into two classes according to the parity of the number of prime factors; then the quoted theorem states that there are infinitely many numbers so that any finite sum of them is in the same class, i.e., they have the same parity of the number of prime factors. If this parity is "even", then multiply all of them by 2.

– Péter Komjáth

Matousek showed that for every $K\gneq 1$ every infinite metric space $X$ has an infinite subspace that either embeds into the real line by a $K$-bi-Lipschitz function or in which the distances of any two distinct points are the same up to a factor of $K$.
The proof uses an iterated application of the infinite Ramsey theorem.

– Stefan Geschke

I think it gives the most beautiful proof of the Bolzano–Weierstrass theorem. It's a very easy but beautiful application of Ramsey's theorem. Given a sequence $x=(x_n)$ of real numbers, colour the pairs of naturals $i < j$ by whether $x_i < x_j$ or $x_i \geq x_j$. Ramsey's theorem guarantees an infinite monochromatic set. This corresponds to a monotonic subsequence of $x$; if $x$ is bounded, then this subsequence converges.

– T. Karageorgos (edited by Ben Barber)

$\begingroup$ +1. Coincidentally, I was recently working the other way, taking the "slick" proof of BW and seeing what kind of combinatorial argument made it work; and I think that one doesn't need the full force of Ramsey's theorem. We don't need an infinite monochromatic subset, so we have one less level of induction needed. $\endgroup$ – Yemon Choi Oct 17 '12 at 19:37

$\begingroup$ Although, on rereading the original post, it already mentions the combinatorial result that underlies this proof of BW. In the original post this is attributed to Erdos and Szekeres $\endgroup$ – Yemon Choi Oct 17 '12 at 19:40

$\begingroup$ Would either one of you care to describe the argument? $\endgroup$ – François G. Dorais♦ Oct 18 '12 at 1:29

Ramsey's theorem (and other generalizations such as the Erdos-Rado theorem) are used in many standard model theoretic arguments which are involved in finding (models with) indiscernibles. The most basic example is perhaps the Ehrenfeucht-Mostowski theorem.

One example which I find quite cute, although I'm not enough of a specialist/connoisseur to know how important it is: MR1045291 (91b:46013) The Banach space $B(l^2)$ is primary. G. Blower, Bull. London Math. Soc. 22 (1990), no. 2, 176--182. To quote the Math Review: The author proves that if $A$ is an infinite-dimensional injective operator system on $l^2$ and $P$ is a completely bounded projection on $A$, then either $PA$ or $(I-P)A$ is completely boundedly isomorphic to $A$. The author also proves that if $B(l^2)$ is linearly isomorphic to a direct sum of two Banach spaces, then it is linearly isomorphic to one of these spaces. An interesting component of his proof is the use of Ramsey theory.

– Yemon Choi

$\begingroup$ This is a nice application of Ramsey's theorem, but as far as I can see they could have used finite Ramsey's theorem if they had calculated the size of the sets $\sigma_i$. $\endgroup$ – alexod Jan 19 '10 at 10:42

There are, in fact, very deep uses of Ramsey-theoretic methods in analysis. As I recall, Gowers won a Fields medal for using this connection to answer most of the open conjectures about Banach space geometry. For example, he used these methods to show that there exists a Banach space with no unconditional Schauder basis. For a nice (though somewhat advanced) survey, there is a book "Ramsey Methods in Analysis" by Argyros and Todorcevic.

The strengthened finite Ramsey theorem: For any positive integers n, k, m we can find N with the following property: if we color each of the n element subsets of S = {1, 2, 3,..., N} with one of k colors, then we can find a subset Y of S with at least m elements, such that all n element subsets of Y have the same color, and the number of elements of Y is at least the smallest element of Y. The Paris–Harrington theorem states that the strengthened finite Ramsey theorem is not provable in Peano arithmetic. See the Wikipedia article on the Paris-Harrington theorem.

– Kristal Cantwell (edited by Alon Amit)
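A minimal brute-force check of the "relatively large" condition in the strengthened finite Ramsey theorem quoted above. The parameters (colouring singletons, n = 1, with k = 3 colours and m = 3) are chosen purely so that the exhaustive search over k^N colourings stays tiny; they say nothing about the growth rate that makes the Paris–Harrington statement unprovable in PA.

```python
from itertools import combinations, product

def relatively_large_homogeneous_exists(colouring, N, n, m):
    """Is there Y in {1..N} with |Y| >= m, |Y| >= min(Y), and all n-subsets of Y
    the same colour under `colouring` (a dict mapping n-tuples to colours)?"""
    for size in range(m, N + 1):
        for Y in combinations(range(1, N + 1), size):
            if size < Y[0]:          # relatively large means |Y| >= min(Y)
                continue
            colours = {colouring[s] for s in combinations(Y, n)}
            if len(colours) == 1:
                return True
    return False

def paris_harrington_threshold(n, k, m, cap):
    """Smallest N <= cap such that EVERY k-colouring of n-subsets of {1..N} admits a
    relatively large homogeneous set of size >= m. Exhaustive, so only tiny cases run."""
    for N in range(m, cap + 1):
        tuples = list(combinations(range(1, N + 1), n))
        ok = True
        for colours in product(range(k), repeat=len(tuples)):
            colouring = dict(zip(tuples, colours))
            if not relatively_large_homogeneous_exists(colouring, N, n, m):
                ok = False   # found a colouring with no relatively large homogeneous set
                break
        if ok:
            return N
    return None

# n = 1 keeps the search space k**N manageable (a few seconds at most).
print(paris_harrington_threshold(n=1, k=3, m=3, cap=12))
```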
That's probably too obvious, but still - applications to linear diophantine equations like (the simplest of all examples) "for every colouring of N in a finite number of colours the equation x+y=z has a monochrome solution".

– Vladimir Dotsenko

$\begingroup$ Do we need the infinite Ramsey theorem to prove this? I am under the impression that if we fix the number of color the finite version will do. $\endgroup$ – abcdxyz Mar 28 '10 at 7:48

$\begingroup$ You are right about that. I am not sure about the more general case of an equation though - and do not have Graham's book on Ramsey theory at hand to check it. $\endgroup$ – Vladimir Dotsenko Mar 28 '10 at 12:32
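The monochromatic x + y = z statement in the last answer has a small finite core (Schur's theorem) that can be verified exhaustively for two colours; the cap below is an arbitrary small bound for the search, and more colours would blow up the number of colourings quickly.

```python
from itertools import product

def has_monochrome_schur_triple(colouring):
    """colouring[i] is the colour of i+1; look for x + y = z all the same colour."""
    n = len(colouring)
    for x in range(1, n + 1):
        for y in range(x, n + 1):
            z = x + y
            if z <= n and colouring[x - 1] == colouring[y - 1] == colouring[z - 1]:
                return True
    return False

def schur_bound(k, cap=12):
    """Smallest N such that every k-colouring of {1..N} contains a monochromatic
    solution of x + y = z (brute force over all k**N colourings)."""
    for N in range(1, cap + 1):
        if all(has_monochrome_schur_triple(c) for c in product(range(k), repeat=N)):
            return N
    return None

print(schur_bound(2))   # the N at which a monochromatic x + y = z becomes unavoidable
```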
npj 2D Materials and Applications

Self-organized quantum dots in marginally twisted MoSe2/WSe2 and MoS2/WS2 bilayers

V. V. Enaldiev, F. Ferreira, J. G. McHugh & Vladimir I. Fal'ko

npj 2D Materials and Applications volume 6, Article number: 74 (2022)

Subjects: Nanophotonics and plasmonics; Two-dimensional materials

Moiré superlattices in twistronic heterostructures are a powerful tool for materials engineering. In marginally twisted (small misalignment angle, θ) bilayers of nearly lattice-matched two-dimensional (2D) crystals moiré patterns take the form of domains of commensurate stacking, separated by a network of domain walls (NoDW) with strain hot spots at the NoDW nodes. Here, we show that, for type-II transition metal dichalcogenide bilayers MoX2/WX2 (X=S, Se), the hydrostatic strain component in these hot spots creates quantum dots for electrons and holes. We investigate the electron/hole states bound by such objects, discussing their manifestations via the intralayer intraband infrared transitions. The electron/hole confinement, which is strongest for θ < 0.5°, leads to a red-shift of their recombination line producing single-photon emitters (SPE) broadly tuneable around 1 eV by misalignment angle. These self-organized dots can form in bilayers with both aligned and inverted MoX2 and WX2 unit cells, emitting photons with different polarizations. We also find that the hot spots of strain reduce the intralayer MoX2 A-exciton energy, enabling selective population of the quantum dot states.

The formation of minibands is a common moiré superlattice (mSL) effect1,2,3,4,5,6,7,8,9,10,11,12,13,14, often related to a rigid rotation of one 2D crystal against the other. In general, the approximation of a rigid interlayer twist is valid for lattice-mismatched crystals or larger twist angles, where a short mSL period prohibits the formation of energetically preferential stacking domains of the two crystals. In contrast, for marginally (small-angle) twisted bilayers of crystals with very close lattice constants, the long period of the mSL offers sufficient space for creating preferential stacking areas. That is, the energy gain due to better adhesion can surmount the cost of intralayer strain in each of the constituent crystals.
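The relevant length scale throughout is the moiré period. A minimal sketch, using the small-angle relation ℓ ≈ a/√(θ² + δ²) quoted later in the text and nominal monolayer lattice constants (assumed values, not taken from this paper), gives a feel for the domain sizes involved:

```python
import numpy as np

# Moire superlattice period from the small-angle relation l ~ a / sqrt(theta^2 + delta^2).
# Lattice constants are nominal monolayer values (assumptions, not from this paper);
# the lattice mismatches are the ones quoted in the text.
a_nm  = {"MoS2/WS2": 0.318, "MoSe2/WSe2": 0.329}
delta = {"MoS2/WS2": 0.002, "MoSe2/WSe2": 0.004}

for theta_deg in (0.0, 0.2, 0.4, 1.0, 2.0):
    theta = np.deg2rad(theta_deg)
    for pair in a_nm:
        period = a_nm[pair] / np.sqrt(theta ** 2 + delta[pair] ** 2)
        print(f"theta = {theta_deg:3.1f} deg, {pair:11s}: moire period ~ {period:6.1f} nm")
```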
The reconstruction of small-angle twisted bilayers into a network of domains15 [2H for antiparallel (AP) and 3R for parallel-oriented (P) bilayers] has been observed in various bilayers of transition metal dichalcogenides (TMDs)16,17,18,19,20. The observed16,17,18,19,20 and theoretically modelled15,21,22 structures feature hexagonal (for AP) and triangular (for P) NoDW with nodes hosting few nanometer areas of "chalcogen-on-chalcogen" stacking (XtXb), which are hot spots of the intralayer strain. Below we study the effects produced by these hot spots of strain in marginally twisted same-chalcogen heterobilayers MoX2/WX2. While domains/NoDW form in both homo- and heterobilayers, the in-plane intralayer deformations, u(r), in those two systems are qualitatively different. In homobilayers the formation of preferential stacking domains is brought about by twisting locally the crystals toward each other. As a result, the deformations at the domain walls are predominantly shear in character, that is, with \({{{\rm{div}}}}\,{{{\boldsymbol{u}}}}\equiv {u}_{ii}\to 0\), where \({u}_{ij}\equiv \frac{1}{2}({\partial }_{i}{u}_{j}+{\partial }_{j}{u}_{i})\) is a 2D strain tensor, and uii is its trace. In perfectly aligned (θ = 0°) heterobilayers, lattice mismatch (δ ≈ 0.2% for MoS2/WS2 and δ ≈ 0.4% for MoSe2/WSe2) requires an adjustment of the MoX2 (\({u}_{ii}^{{{{\rm{Mo}}}}}\approx -\delta\) compression) and WX2 (\({u}_{ii}^{{{{\rm{W}}}}}\approx \delta\) expansion) lattices toward each other inside the large area domains. This inflicts a few percent of hydrostatic compression of WX2 and expansion of MoX2 in XtXb areas (NoDW nodes), quantified in Figs. 1 and 2, which, as we demonstrate below, create deep confinement potentials for charge carriers and interlayer excitons (iXs). Theoretical modelling23,24,25,26,27 of localized iXs in WX2/MoX2 bilayer has been attempted earlier, however without taking into account strong lattice relaxation effects (which is applicable to structures with larger misalignment angles, θ > 2°). This led to the underestimation of the depth of the size of the band edge variation for electrons and holes, as compared to what we find in this Letter, and with different positioning of band edge extrema across moiré supercell. In this work we focus on small-angle twisted WX2/MoX2 bilayers where lattice relaxation plays the critical role on trapping charge carriers, in particular due to a substantial hydrostatic strain component at NoDW nodes. Up to now, no optical studies have been reported on such small-angle (θ ≤ 1°) bilayers, despite that the structural features of NoDW have been demonstrated using transmission electron microscopy16,17 and several spectroscopic studies28,29,30,31 have been performed on bilayers with larger misalignment angles, where lattice reconstruction does not play such a dominant role as discussed below. Fig. 1: Variation of K-valley conduction/valence, Ec/v, band edges in MoX2 (blue) and WX2 (red) monolayers under hydrostatic strain with alignment of the bands corresponding to lattice-matched WX2/MoX2 heterobilayers. The vertical black solid line shows the value of hydrostatic strain of individual layers inside domains. These values are opposite in sign with respect to those in XtXb nodes, leading to formation of localized electron and hole states in self-organized QDs, which are suitable for SPE (wavy line). Inset shows twist angle dependence of hydrostatic components of strain in MoX2 and WX2 layers in XtXb nodes of NoDW. Fig. 
2: Lattice reconstruction and local conduction and valence band edge modulations at K-valleys in twisted MoX2/WX2 bilayers. a and c Show distribution of the hydrostatic strain, \({u}_{ii}^{{{{\rm{Mo}}}}}\), in MoSe2 layer of P- and AP-MoSe2/WSe2 bilayers with θ = 0° and θ = 0.4°, respectively. Squares in (a) show areas used to plot band edge profile in Fig. 3. Arrows on the scalebar indicate the values of \({u}_{ii}^{{{{\rm{Mo}}}}}\) inside domains and at NoDW nodes. b Modulations of conduction (c) and valence (v) band edges as a function of twist angle for high symmetry stackings of moiré supercell, indicated by different symbols, for P/AP- MoSe2/WSe2 and MoS2/WS2 bilayers. Matching MoX2 and WX2 lattices inside the large area domains determines homogeneous strain which is accounted for through offsetting of MoX2 and WX2 strain axes in Fig. 1. Modulation of band edges by strain and charge transfer Here, we single out the hydrostatic strain component because of the critical role it plays in determining the K-valley energies in MoX2/WX2 crystals. Several earlier experimental and density functional theory (DFT) studies32,33,34 have agreed that conduction and valence band edges in TMD monolayers are strongly shifted by hydrostatic strain, but without much sensitivity to shear deformations. This trend is illustrated in Fig. 1. The corresponding shifts of conduction/valence band edges in MoX2/WX2 determine the energy of the interlayer exciton (iX). Inside domains formed by lattice reconstruction, hydrostatic strains compensate lattice mismatch between the layers: this slightly increases the band gap as compared to rigidly twisted bilayers without strain (Fig. 1). Compensating small deformations inside large domain areas, \({u}_{ii}^{{{{\rm{Mo,W}}}}}\) in XtXb nodes have the opposite signs and much larger magnitudes as compare to \({u}_{ii}^{{{{\rm{Mo,W}}}}}\) inside domains. This strongly decreases layer-indirect band gap and determines deep confinement potentials for both electrons and holes, leading to the appearance of SPEs. The intralayer strain (\({u}_{ii}^{{{{\rm{Mo/W}}}}}\)) maps in Fig. 2 were computed using a multiscale modelling approach15, tested in the detailed comparison with the STEM microscopy data16. This approach starts with the computation of stacking-dependent MoX2/WX2 adhesion energy, \({{{\mathcal{W}}}}\), followed by the parametrization of interpolation formulae15 for its dependence on the interlayer lateral offset, r0. For both P and AP bilayers, energetically favorable stackings are those with the largest lateral separation between chalcogens. These stackings are MotXb and XtWb for P-orientation and 2H for AP-orientation. Note that XtXb stacking is unfavorable energetically, and its interlayer distance swells15 by up to ≈0.5 Å. By combining interpolation formulae for \({{{\mathcal{W}}}}({{{{\boldsymbol{r}}}}}_{0})\)15 where we use local lateral offset, $${{{{\boldsymbol{r}}}}}_{0}({{{\boldsymbol{r}}}})=\delta \cdot {{{\boldsymbol{r}}}}+\theta \hat{z}\times {{{\boldsymbol{r}}}}+{{{{\boldsymbol{u}}}}}^{{{{\rm{Mo}}}}}-{{{{\boldsymbol{u}}}}}^{{{{\rm{W}}}}},$$ with elasticity theory and minimizing total energy of the bilayer across its mSL (which period, \(\ell \approx a/\sqrt{{\theta }^{2}+{\delta }^{2}}\), is fixed by the twist angle and lattice mismatch between the crystals), we compute the deformation fields. In Fig. 
2a, c, we present \({u}_{ii}^{{{{\rm{Mo}}}}}\) maps for MoSe2/WSe2 bilayers with θ = 0° and θ = 0.4°, where the preferential stacking domains (triangular/hexagonal for P/AP-bilayer) are separated by networks of dislocation-like domain walls with XtXb stacking at the NoDW nodes15. Note that for small twist angles the hydrostatic component of strain persists, now combined with shear deformations (similar to those in twisted homobilayers). To incorporate strain into the shifts of conduction/valence band edges, δεc/v, we performed DFT modelling of the TMD band structures using Quantum ESPRESSO35 (see Methods). The computed variations of all band edges can be described as ≈ Vv/cuii, with \({V}_{c}^{{{{\rm{Mo{S}}}_{2}}}}=-12.45\) eV, \({V}_{v}^{{{{\rm{W{S}}}_{2}}}}=-5.94\) eV, \({V}_{c}^{{{{\rm{MoS{e}}}_{2}}}}=-11.57\) eV, \({V}_{v}^{{{{\rm{WS{e}}}_{2}}}}=-5.76\) eV, which are quoted for the relevant bilayer bands. Using these values, we compute, $$\delta {\varepsilon }_{v/c}({{{\boldsymbol{r}}}})={V}_{v/c}{u}_{ii}^{{{{\rm{W/Mo}}}}}({{{\boldsymbol{r}}}})-e{\phi }_{{{{\rm{piezo}}}}}^{{{{\rm{W/Mo}}}}}({{{\boldsymbol{r}}}})\pm \frac{1}{2}\Delta ({{{\boldsymbol{r}}}}),$$ taking into account strain-dependent piezoelectric potential15, \(-e{\phi }_{{{{\rm{piezo}}}}}^{{{{\rm{W/Mo}}}}}({{{\boldsymbol{r}}}})\), and offset-dependent potential drop, Δ(r), due to interlayer charge transfer (for details see Supplementary Section 1). The first term in Eq. (2) represents the effect of hydrostatic component of the intralayer strain which was missed in the previous analysis of the same systems36. The twist-angle dependences of the computed band edge energies δεc/v for three selected stacking areas (MotXb, XtWb, XtXb for P and 2H, MotWb, XtXb for AP) are plotted in Fig. 2(b). These figures suggest that XtXb regions are potential wells for electrons and holes and these wells are the deepest for θ ≈ δ. Based on that we describe the NoDW nodes as trigonally warped quantum dots (QDs), with band edge profiles exemplified in Fig. 3. These QDs are sufficiently deep to accommodate at least two size-quantized states for electrons/holes which retain their distinct s (Lz = 0) and p (Lz = ± 1) characteristics due to the \({\hat{C}}_{3}\)-symmetry of the dots. Fig. 3: Self-organized quantum dots and spectral features of SPE and iX. (Top) Conduction (c) and valence (v) band edge profiles in vicinity of XtXb nodes of NoDW in reconstructed P- and AP-MoX2/WX2 bilayers with θ = 0°. Colors of wavy lines encode polarizations of emitted light in ±K-valleys: red for circular and green for z-polarization. Upper/lower subscript of circular polarization (σ± or σ∓) indicates helicity of light emitted in +K/−K-valleys. Left and right bottom panels show sketches of predicted optical spectra in marginally twisted P- and AP-MoX2/WX2 bilayers, respectively. Note that QD formation in marginally twisted structures qualitatively differs from band energy profiles in stronger misaligned P-bilayers with θ ≥ 2°, where the band edges at K-valley shift into MotXb stacking areas, see Fig. 4. This crossover agrees with the findings of Refs. 23,37. In addition, for MoS2/WS2 with θ ≈ 1.8° and MoSe2/WSe2 with θ ≈ 2.4° the energy profile for interlayer interband exciton resembles an antidot superlattice more than an array of QDs. This contrasts with the persistence up to θ ~ 3.5° of shallow QD arrays for both electrons and holes based at XtXb areas in AP-bilayers. Fig. 
4: (Top) Twist-angle-dependences of interlayer band gap at K-valleys at high symmetry stacking areas of moiré supercells of P- and AP-MoX2/WX2 bilayers with 1 ° ≤ θ ≤ 3.5°. For P-bilayers with θ ≳ 2°, we find a crossover from an array of QDs located in XtXb nodes to an antidot array with shallow minima at XtWb areas and high peaks at MotXb areas, while for AP-bilayers there is no such transition. (Bottom) Real space maps showing crossover (toward larger misalignment) of the interlayer band gap for MoS2/WS2 bilayers with θ = 2.5°: an antidot array for P-bilayers and shallow QD array for AP-bilayers. Spectral characteristics of self-organized QDs Spectral features of the interlayer interband emissions of self-organized QDs of marginally twisted bilayers are sketched on the bottom insets in Fig. 3. Energy separation, δE, between the QD transition and the iX inside the domains was computed as, $$\begin{array}{rcl}\delta E&=&{\varepsilon }_{e}^{(s)}-{\varepsilon }_{h}^{(s)}-{E}_{{{{\rm{iX}}}}}\\ &&-\int \int {d}^{2}{{{\boldsymbol{r}}}}{d}^{2}{{{\boldsymbol{r}}}}^{\prime} {\left|{\psi }_{e}^{(s)}({{{\boldsymbol{r}}}})\right|}^{2}{\left|{\psi }_{h}^{(s)}({{{\boldsymbol{r}}}}^{\prime} )\right|}^{2}{V}_{eh}({{{\boldsymbol{r}}}}-{{{\boldsymbol{r}}}}^{\prime} ).\end{array}$$ Here, EiX is the iX energy and \({\varepsilon }_{e/h}^{(s)}\) are the energies of the electron/hole s-states \({\psi }_{e/h}^{(s)}\) inside quantum well. We also take into account the interlayer e-h attraction, Veh, screened by the in-plane susceptibility of TMDs and hBN environment38 (see details in Supplementary Sections 2 and 3). The computed dependences of δE(θ) for MoX2/WX2 bilayers (X = Se,S) are shown in Fig. 5. We find that the QD line can be tuned across a 0.8–1.2 eV spectral interval (telecom range) for 0.3° ≤ θ ≤ 1° in MoSe2/WSe2 and for 0° ≤ θ ≤ 0.5° in MoS2/WS2. In Fig. 5 the computed data for electrons in AP-MoS2/WS2 are terminated at θ = 0.6°, because for a larger misalignment the K-valley conduction band profile starts resembling an antidot lattice with maxima at the 2H domains (see Fig. 2(b)). Fig. 5: Energies of SPEs in P/AP-WX2/MoX2 bilayers shown as a shift from the lower iX energy inside the domains (left-side axis common for all bilayers) and as an absolute value (right-side axis specified separately for selenides and sulfides). The latter were determined as E = δE + 1.4 eV and E = δE + 1.6 eV for selenide (Se) and sulfide (S), respectively. These estimates are based on calculated binding energies of iXs ≈ 65 meV (see details in Supplementary Section 2) and measured positions of photoluminescence peaks of iXs in MoSe2/WSe230 and MoS2/WS247. Polarization and spin selection rules Additional information, displayed in Fig. 3 and gathered in Table 1, concerns the polarizations of SPEs and iXs inside domains and fine structure related to a ΔSO-splitting39,40,41,42,43 between the spin-flip and spin-conserving interband transitions inside the QDs. We note that, in each of the two ± K-valleys, the hole spin at the band edge is determined by the spin-valley locking in WX2, whereas conduction band in MoX2 is characterized by spin-orbit splitting ΔSO. Also, the iX emission from the inner part of domains, shown in Fig. 3, differs for P and AP bilayers. For AP-bilayers we expect a single line of circularly polarized iX emission. 
For P-bilayers iX energies and polarizations are different for MotXb and XtWb domains, with the energy splitting determined by the interlayer charge transfer36,44,45 and circular (in MotXb) vs linear (in XtWb) polarization, established in Ref. 46. In Table 1 we underline the most prominent SPE transition which happens to be related to the spin-conserving recombination in self-organized QDs in P-bilayers with approximately one hundred times weaker intensity than that of intralayer A-exciton (AX) in MoX2 layer determined by the ratio of corresponding interband matrix elements. To mention, DFT modelling suggests that lattice matching inside domains promotes direct-to-indirect band gap crossover for electrons toward Q-valley and most importantly for holes toward Γ-valley47 (see also Supplementary Section 6). Table 1 Polarization of emitted photons by SPE, iX and AX in MoX2, and strength of the emission characterized by interband velocity matrix element, \(| {v}_{\pm ,z}^{cv}|\) (in 106 × cm/s, v± = vx ± ivy) computed with Quantum ESPRESSO (for details see Supplementary Sections 4 and 5). Intralayer transitions in self-organized QDs In addition, intraband s−p optical transitions for electrons/holes trapped in QDs give rise to infrared (IR) features with energies shown in Fig. 6(a) for twist angles 0° ≤ θ ≤ 1°. In AP-bilayers, XtXb NoDW nodes also feature spikes of pseudomagnetic field B*15 characteristic of multivalley semiconductors with a strongly inhomogeneous strain48,49,50. These pseudomagnetic fields have opposite signs for electrons in ±K-valleys, splitting (by ℏΔν = 5μBB*) the QD s−p transitions into circularly polarized doublets, as sketched in Fig. 6(a). Such an IR transition can be used to manipulate the state of the SPE, by exciting either electron or hole into their respective QD p-states. Fig. 6: QD characteristics for electrons and holes. Intralayer infrared features of QDs and intralayer (MoX2) band gap variation with marking of materials and orientations of the bilayers shown on (b). a Infrared line, ν*, of a QD in P- and AP-MoX2/WX2 bilayers, with a line splitting due to pseudomagnetic field, B*, in AP-structures. Twist angle dependence of QD s − p-transition frequency was analyzed based on Eq. (2) and B* was estimated for conduction band electrons in MoS2 using parameters from Ref. 50. Pseudomagnetic field maps around XtXb nodes of NoDW for θ = 0° and θ = 0.6° are shown in the insets. Pseudomagnetic fields of similar magnitude are expected for holes in MoS2/WS2 bilayers and both electrons and holes for MoSe2/WSe2 bilayers. b Depth of a potential well, \({U}_{{{{\rm{AX}}}}}^{* }\), confining intralayer A-excitons in MoX2, created by hydrostatic strain around XtXb nodes of NoDW. Inset shows anisotropy of the well profile for P- and AP-MoSe2/WSe2 bilayers with θ = 0°. The intralayer band gap variation, due to the hydrostatic strain at XtXb nodes reduces/increases the energy of the intralayer AX in MoX2/WX2. In Fig. 6(b) we show that this results in a~100 meV potential well for AX in MoX2 exactly over the self-organized QD position. The red-shift of the MoX2 AX confined in such a well can be used for selective population of the QD states, upon the relaxation of photoexcited hole in MoX2 layer into its bound state in the QD in WX2. 
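As a rough numerical cross-check of the energy scales above, the sketch below evaluates only the hydrostatic-strain term of Eq. (2), δε ≈ V·u_ii, with the deformation potentials quoted in the text, together with the pseudomagnetic s-p line splitting ℏΔν = 5μBB*. The ±1% hot-spot strain and the B* values are illustrative assumptions; the piezoelectric and charge-transfer terms of Eq. (2) are deliberately omitted.

```python
# Hydrostatic-strain term of Eq. (2), delta_eps ~ V * u_ii, using the deformation
# potentials quoted in the text; piezoelectric and charge-transfer contributions are
# omitted, and the +/-1% hot-spot strain is an illustrative magnitude only.
deformation_potential_eV = {
    ("MoS2",  "conduction"): -12.45,
    ("WS2",   "valence"):     -5.94,
    ("MoSe2", "conduction"): -11.57,
    ("WSe2",  "valence"):     -5.76,
}

for (layer, band), V in deformation_potential_eV.items():
    # MoX2 expands and WX2 compresses at the XtXb nodes (signs as described above)
    u_ii = +0.01 if layer.startswith("Mo") else -0.01
    print(f"{layer:5s} {band:10s} edge shift for u_ii = {u_ii:+.0%}: {1e3 * V * u_ii:+6.0f} meV")

# Pseudomagnetic splitting of the QD s-p infrared line, hbar*dnu = 5 * mu_B * B*
mu_B_meV_per_T = 0.05788                 # Bohr magneton in meV/T
for B_star in (10, 50, 100):             # assumed pseudomagnetic field strengths (T)
    print(f"B* = {B_star:3d} T -> s-p splitting ~ {5 * mu_B_meV_per_T * B_star:5.1f} meV")
```

The ~100 meV band-edge shifts obtained this way are consistent with the depth of the confinement potentials discussed above, which is why the XtXb nodes can bind both s and p states.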
Overall, hot spots of hydrostatic strain at the nodes of domain wall network, generated by the lattice reconstruction in marginally twisted MoX2/WX2 bilayers, form a nanoscale array of QDs for electrons and holes, which may be operated as single-photon emitters. Based on the presented analysis, we propose that the SPE spectrum can be tuned by the choice of the twist angle over a broad range (including telecom for MoSe2/WSe2 bilayers), and the electron/hole state in these QDs can be manipulated via intra-band s−p transitions using THz radiation. The data on the optical oscillator strength of the interlayer interband transitions in such QDs, Table 1, suggest that the brightest would be SPEs in marginally twisted biayers with parallel orientation of MoX2 and WX2 unit cells. Ratio of intra- and inter-layer interband velocity matrix elements also suggest that the recombination rate of QD-localized excitons is about ~1% of the recombination rate of the intralayer A-exciton in MoX2; as the latter was found in Refs. 51,52,53,54 to be ~1/300 − 1/200 fs−1, this would set a 100 MHz possible repetition rate for SPEs in these self-organized QDs. Note that areal density of these SPEs is ~1011 cm−2, which is 100 times higher than the density of quantum emitters in patterned TMD monolayers55,56 and that a red-shift of the A-exciton in MoX2, due to the same hot spots of strain, would enable selective population of the QD states for the optical pumping of the self-organized SPEs. Finally, we note that in real samples there is usually inhomogeneity of domain structure caused by smooth strain introduced during sample transfer process. Such inhomogeneity spread a spectral range of SPEs available on a single wafer, which can be used to produce a wealth of SPE devices operating in complementary spectral intervals provided by different parts of a single large-area WX2/MoX2 bilayer. Computation of hydrostatic strain effect To quantify effect of hydrostatic strain on the band edges of TMD monolayes we considered biaxial strain in the range of ±2%, fully relaxing atomic positions in the monolayer with Vanderbilt PBE GBRV ultrasoft pseudopotentials57, a wavefunction cut-off of Ecut = 50 Ry, and a 20 × 20 × 1 k-point grid, sampled according to the Monkhorst-Pack algorithm58. Spin-orbit coupling was included by a norm-conserving fully-relativistic pslibrary PBE PAW pseudopotentials(We used the relevant relativistic pseudopotentials from http://www.quantum-espresso.org.) with Ecut = 80 Ry. The data that support the plots in the paper are available from the corresponding authors upon reasonable request. Cao, Y. et al. Correlated insulator behaviour at half-filling in magic-angle graphene superlattices. Nature 556, 80 (2018). Cao, Y. et al. Unconventional superconductivity in magic-angle graphene superlattices. Nature 556, 43 (2018). Yankowitz, M. et al. Tuning superconductivity in twisted bilayer graphene. Science 363, 1059–1064 (2019). Lu, X. et al. Superconductors, orbital magnets and correlated states in magic-angle bilayer graphene. Nature 574, 653–657 (2019). Xu, S. et al. Tunable van hove singularities and correlated states in twisted monolayer–bilayer graphene. Nat. Phys. 17, 619–626 (2021). Sharpe, A. L. et al. Emergent ferromagnetism near three-quarters filling in twisted bilayer graphene. Science 365, 605–608 (2019). Polshyn, H. et al. Electrical switching of magnetic order in an orbital chern insulator. Nature 588, 66–70 (2020). Chen, G. et al. 
Evidence of a gate-tunable mott insulator in a trilayer graphene moiré superlattice. Nat. Phys. 15, 237–241 (2019). Chen, S. et al. Electrically tunable correlated and topological states in twisted monolayer–bilayer graphene. Nat. Phys. 17, 374–380 (2021). Shen, C. et al. Correlated states in twisted double bilayer graphene. Nat. Phys. 16, 520–525 (2020). Park, J. M., Cao, Y., Watanabe, K., Taniguchi, T. & Jarillo-Herrero, P. Tunable strongly coupled superconductivity in magic-angle twisted trilayer graphene. Nature 590, 249–255 (2021). Cao, Y. et al. Nematicity and competing orders in superconducting magic-angle graphene. Science 372, 264–271 (2021). Liu, X. et al. Tunable spin-polarized correlated states in twisted double bilayer graphene. Nature 583, 221–225 (2020). Gadelha, A. C. et al. Localization of lattice dynamics in low-angle twisted bilayer graphene. Nature 590, 405–409 (2021). Enaldiev, V. V., Zólyomi, V., Yelgel, C., Magorrian, S. J. & Fal'ko, V. I. Stacking domains and dislocation networks in marginally twisted bilayers of transition metal dichalcogenides. Phys. Rev. Lett. 124, 206101 (2020). Weston, A. et al. Atomic reconstruction in twisted bilayers of transition metal dichalcogenides. Nat. Nanotechnol. 15, 592–597 (2020). Rosenberger, M. R. et al. Twist angle-dependent atomic reconstruction and moiré patterns in transition metal dichalcogenide heterostructures. ACS Nano 14, 4550–4558 (2020). Sung, J. et al. Broken mirror symmetry in excitonic response of reconstructed domains in twisted MoSe2/MoSe2 bilayers. Nat. Nanotechnol. 15, 750–754 (2020). McGilly, L. J. et al. Visualization of moiré superlattices. Nat. Nanotechnol. 15, 580–584 (2020). Shabani, S. et al. Deep moiré potentials in twisted transition metal dichalcogenide bilayers. Nat. Phys. 17, 720–725 (2021). Naik, M. H. & Jain, M. Ultraflatbands and shear solitons in moiré patterns of twisted bilayer transition metal dichalcogenides. Phys. Rev. Lett. 121, 266401 (2018). Carr, S. et al. Relaxation and domain formation in incommensurate two-dimensional heterostructures. Phys. Rev. B 98, 224102 (2018). Yu, H., Liu, G.-B., Tang, J., Xu, X. & Yao, W. Moiré excitons: from programmable quantum emitter arrays to spin-orbit-coupled artificial lattices. Sci. Adv. 3, e1701696 (2017). Wu, F., Lovorn, T. & MacDonald, A. H. Theory of optical absorption by interlayer excitons in transition metal dichalcogenide heterobilayers. Phys. Rev. B 97, 035306 (2018). Lu, X., Li, X. & Yang, L. Modulated interlayer exciton properties in a two-dimensional moiré crystal. Phys. Rev. B 100, 155416 (2019). Brem, S., Linderälv, C., Erhart, P. & Malic, E. Tunable phases of moiré excitons in van der waals heterostructures. Nano Lett. 20, 8534–8540 (2020). Guo, H., Zhang, X. & Lu, G. Shedding light on moiré excitons: a first-principles perspective. Sci. Adv. 6, eabc5638 (2020). Tran, K. et al. Evidence for moiré excitons in van der waals heterostructures. Nature 567, 71–75 (2019). Seyler, K. L. et al. Signatures of moiré-trapped valley excitons in MoSe2/WSe2 heterobilayers. Nature 567, 66–70 (2019). Baek, H. et al. Highly energy-tunable quantum light from moiré-trapped excitons. Sci. Adv. 6, eaba8526 (2020). Brotons-Gisbert, M. et al. Spin–layer locking of interlayer excitons trapped in moiré potentials. Nat. Mater. 19, 630–636 (2020). Conley, H. J. et al. Bandgap engineering of strained monolayer and bilayer MoS2. Nano Lett. 13, 3626–3630 (2013). Dhakal, K. P. et al. 
Local strain induced band gap modulation and photoluminescence enhancement of multilayer transition metal dichalcogenides. Chem. Mater. 29, 5124–5133 (2017). Zollner, K., Junior, P. E. F. & Fabian, J. Strain-tunable orbital, spin-orbit, and optical properties of monolayer transition-metal dichalcogenides. Phys. Rev. B 100, 195126 (2019). Giannozzi, P. et al. Quantum espresso: a modular and open-source software project for quantum simulations of materials. J. Phys: Condensed Matter 21, 395502 (19pp) (2009). Enaldiev, V. V., Ferreira, F., Magorrian, S. J. & Fal'ko, V. I. Piezoelectric networks and ferroelectric domains in twistronic superlattices in WS2/MoS2 and WSe2/MoSe2 bilayers. 2D Mater. 8, 025030 (2021). Danovich, M. et al. Localized interlayer complexes in heterobilayer transition metal dichalcogenides. Phys. Rev. B 97, 195452 (2018). Mak, K. F., He, K., Shan, J. & Heinz, T. F. Control of valley polarization in monolayer MoS2 by optical helicity. Nat. Nanotechnol. 7, 494–498 (2012). Echeverry, J. P., Urbaszek, B., Amand, T., Marie, X. & Gerber, I. C. Splitting between bright and dark excitons in transition metal dichalcogenide monolayers. Phys. Rev. B 93, 121107 (2016). Wang, G. et al. In-plane propagation of light in transition metal dichalcogenide monolayers: Optical selection rules. Phys. Rev. Lett. 119, 047401 (2017). Liu, G.-B., Shan, W.-Y., Yao, Y., Yao, W. & Xiao, D. Three-band tight-binding model for monolayers of group-VIB transition metal dichalcogenides. Phys. Rev. B 88, 085433 (2013). Kormányos, A. et al. k ⋅ ptheory for two-dimensional transition metal dichalcogenide semiconductors. 2D Mater. 2, 022001 (2015). Ferreira, F., Enaldiev, V. V., Fal'ko, V. I. & Magorrian, S. J. Weak ferroelectric charge transfer in layer-asymmetric bilayers of 2D semiconductors. Sci. Rep. 11, 13422 (2021). Weston, A. et al. Interfacial ferroelectricity in marginally twisted 2d semiconductors. Nat. Nanotechnol. 17, 390–395 (2022). Yu, H., Liu, G.-B. & Yao, W. Brightened spin-triplet interlayer excitons and optical selection rules in van der waals heterobilayers. 2D Mater. 5, 035021 (2018). Kiemle, J. et al. Control of the orbital character of indirect excitons in MoS2/WS2 heterobilayers. Phys. Rev. B 101, 121404 (2020). Iordanskii, S. V. & Koshelev, A. E. Dislocations and localization effects in multivalley conductors. JETP Lett. 41, 471 (1985). Suzuura, H. & Ando, T. Phonons and electron-phonon scattering in carbon nanotubes. Phys. Rev. B 65, 235412 (2002). Rostami, H., Roldán, R., Cappelluti, E., Asgari, R. & Guinea, F. Theory of strain in single-layer transition metal dichalcogenides. Phys. Rev. B 92, 195402 (2015). Poellmann, C. et al. Resonant internal quantum transitions and femtosecond radiative decay of excitons in monolayer WSe2. Nat. Mater. 14, 889–893 (2015). Dey, P. et al. Optical coherence in atomic-monolayer transition-metal dichalcogenides limited by electron-phonon interactions. Phys. Rev. Lett. 116, 127402 (2016). Jakubczyk, T. et al. Radiatively limited dephasing and exciton dynamics in MoSe2 monolayers revealed with four-wave mixing microscopy. Nano Lett. 16, 5333–5339 (2016). Cadiz, F. et al. Excitonic linewidth approaching the homogeneous limit in MoS2 -based van der waals heterostructures. Phys. Rev. X 7, 021026 (2017). Palacios-Berraquero, C. et al. Large-scale quantum-emitter arrays in atomically thin semiconductors. Nat. Commun. 8, 15093 (2017). Luo, Y. et al. Deterministic coupling of site-controlled quantum emitters in monolayer WSe2 to plasmonic nanocavities. Nat. 
Nanotechnol. 13, 1137–1142 (2018). Garrity, K. F., Bennett, J. W., Rabe, K. M. & Vanderbilt, D. Pseudopotentials for high-throughput DFT calculations. Comput. Mater. Sci. 81, 446–452 (2014). Monkhorst, H. J. & Pack, J. D. Special points for brillouin-zone integrations. Phys. Rev. B 13, 5188–5192 (1976). This work was supported by EC-FET European Graphene Flagship Core3 Project, EC-FET Quantum Flagship Project 2D-SIPC, EPSRC grants EP/S030719/1 and EP/V007033/1, and the Lloyd Register Foundation Nanotechnology Grant. School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester, M13 9PL, UK V. V. Enaldiev, F. Ferreira, J. G. McHugh & Vladimir I. Fal'ko National Graphene Institute, University of Manchester, Oxford Road, Manchester, M13 9PL, UK Henry Royce Institute for Advanced Materials, University of Manchester, Oxford Road, Manchester, M13 9PL, UK Vladimir I. Fal'ko V. V. Enaldiev F. Ferreira J. G. McHugh V.I.F. conceived the project. Theoretical analysis was done by V.V.E., F.F., and J.G.M. All the authors discussed the results and wrote the paper. Correspondence to V. V. Enaldiev or Vladimir I. Fal'ko. Enaldiev, V.V., Ferreira, F., McHugh, J.G. et al. Self-organized quantum dots in marginally twisted MoSe2/WSe2 and MoS2/WS2 bilayers. npj 2D Mater Appl 6, 74 (2022). https://doi.org/10.1038/s41699-022-00346-0 Accepted: 27 September 2022 About the Partner npj 2D Materials and Applications (npj 2D Mater Appl) ISSN 2397-7132 (online)
arXiv.org > astro-ph > arXiv:1408.0634 Astrophysics > High Energy Astrophysical Phenomena Title:Searches for small-scale anisotropies from neutrino point sources with three years of IceCube data Authors:IceCube Collaboration: M. G. Aartsen, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M. Ahrens, D. Altmann, T. Anderson, C. Arguelles, T. C. Arlen, J. Auffenberg, X. Bai, S. W. Barwick, V. Baum, J. J. Beatty, J. Becker Tjus, K.-H. Becker, S. BenZvi, P. Berghaus, D. Berley, E. Bernardini, A. Bernhard, D. Z. Besson, G. Binder, D. Bindig, M. Bissok, E. Blaufuss, J. Blumenthal, D. J. Boersma, C. Bohm, F. Bos, D. Bose, S. Böser, O. Botner, L. Brayeur, H.-P. Bretz, A. M. Brown, J. Casey, M. Casier, E. Cheung, D. Chirkin, A. Christov, B. Christy, K. Clark, L. Classen, F. Clevermann, S. Coenders, D. F. Cowen, A. H. Cruz Silva, M. Danninger, J. Daughhetee, J. C. Davis, M. Day, J. P. A. M. de André, C. De Clercq, S. De Ridder, P. Desiati, K. D. de Vries, M. de With, T. DeYoung, J. C. Díaz-Vélez, M. Dunkman, R. Eagan, B. Eberhardt, B. Eichmann, J. Eisch, S. Euler, P. A. Evenson, O. Fadiran, A. R. Fazely, A. Fedynitch, J. Feintzeig, J. Felde, T. Feusels, K. Filimonov, C. Finley, T. Fischer-Wasels, S. Flis, A. Franckowiak, K. Frantzen, T. Fuchs, T. K. Gaisser, R. Gaior, J. Gallagher, L. Gerhardt, D. Gier, L. Gladstone, T. Glüsenkamp, A. Goldschmidt, G. Golup, J. G. Gonzalez, J. A. Goodman, D. Góra, D. Grant, P. Gretskov, J. C. Groh, A. Groß, C. Ha, C. Haack , A. Haj Ismail, P. Hallen, A. Hallgren, F. Halzen, K. Hanson, D. Hebecker, D. Heereman, D. Heinen, K. Helbing, R. Hellauer, D. Hellwig, S. Hickford, G. C. Hill, K. D. Hoffman, R. Hoffmann, A. Homeier, K. Hoshina, F. Huang, W. Huelsnitz, P. O. Hulth, K. Hultqvist, S. Hussain, A. Ishihara, E. Jacobi, J. Jacobsen, K. Jagielski, G. S. Japaridze, K. Jero, O. Jlelati, M. Jurkovic, B. Kaminsky, A. Kappes, T. Karg, A. Karle, M. Kauer, J. L. Kelley, A. Kheirandish, J. Kiryluk, J. Kläs, S. R. Klein, J.-H. Köhne, G. Kohnen, H. Kolanoski, A. Koob, L. Köpke, C. Kopper, S. Kopper, D. J. Koskinen, M. Kowalski, A. Kriesten, K. Krings, G. Kroll, M. Kroll, J. Kunnen, N. Kurahashi, T. Kuwabara, M. Labare, D. T. Larsen, M. J. Larson, M. Lesiak-Bzdak, M. Leuermann, J. Leute, J. Lünemann, J. Madsen, G. Maggi, R. Maruyama, K. Mase, H. S. Matis, R. Maunu, F. McNally, K. Meagher, M. Medici, A. Meli, T. Meures, S. Miarecki, E. Middell, E. Middlemas, N. Milke, J. Miller, L. Mohrmann, T. Montaruli, R. Morse, R. Nahnhauer, U. Naumann, H. Niederhausen, S. C. Nowicki, D. R. Nygren, A. Obertacke, S. Odrowski, A. Olivas, A. Omairat, A. O'Murchadha, T. Palczewski, L. Paul, Ö. Penek, J. A. Pepper, C. Pérez de los Heros, C. Pfendner, D. Pieloth, E. Pinat, J. Posselt, P. B. Price, G. T. Przybylski, J. Pütz, M. Quinnan, L. Rädel, M. Rameez, K. Rawlins, P. Redl, I. Rees, R. Reimann, M. Relich, E. Resconi, W. Rhode, M. Richman, B. Riedel, S. Robertson, J. P. Rodrigues, M. Rongen, C. Rott, T. Ruhe, B. Ruzybayev, D. Ryckbosch, S. M. Saba, H.-G. Sander, J. Sandroos, M. Santander, S. Sarkar, K. Schatto, F. Scheriau, T. Schmidt, M. Schmitz, S. Schoenen, S. Schöneberg, A. Schönwald, A. Schukraft, L. Schulte, O. Schulz, D. Seckel, Y. Sestayo, S. Seunarine, R. Shanidze, M. W. E. Smith, D. Soldin, G. M. Spiczak, C. Spiering, M. Stamatikos, T. Stanev, N. A. Stanisha, A. Stasik, T. Stezelberger, R. G. Stokstad, A. Stößl, E. A. Strahler, R. Ström, N. L. Strotjohann, G. W. Sullivan, H. Taavola, I. Taboada, A. Tamburro, A. Tepe, S. Ter-Antonyan, A. Terliuk, G. Tešić, S. Tilav, P. A. Toale, M. N. 
Tobin, D. Tosi, M. Tselengidou, E. Unger, M. Usner, S. Vallecorsa, N. van Eijndhoven, J. Vandenbroucke, J. van Santen, M. Vehring, M. Voge, M. Vraeghe, C. Walck, M. Wallraff, Ch. Weaver, M. Wellons, C. Wendt, S. Westerhoff, B. J. Whelan, N. Whitehorn, C. Wichary, K. Wiebe, C. H. Wiebusch, D. R. Williams, H. Wissing, M. Wolf, T. R. Wood, K. Woschnagg, D. L. Xu, X. W. Xu, J. P. Yanez, G. Yodh, S. Yoshida, P. Zarzhitsky, J. Ziemann, S. Zierke, M. Zoll et al. (203 additional authors not shown) (Submitted on 4 Aug 2014 (v1), last revised 14 Jan 2015 (this version, v2)) Abstract: Recently, IceCube found evidence for a diffuse signal of astrophysical neutrinos in an energy range of $60\,\mathrm{TeV}$ to the $\mathrm{PeV}$-scale. The origin of those events, being a key to understanding the origin of cosmic rays, is still an unsolved question. So far, analyses have not succeeded to resolve the diffuse signal into point-like sources. Searches including a maximum-likelihood-ratio test, based on the reconstructed directions and energies of the detected down- and up-going neutrino candidates, were also performed on IceCube data leading to the exclusion of bright point sources. In this paper, we present two methods to search for faint neutrino point sources in three years of IceCube data, taken between 2008 and 2011. The first method is an autocorrelation test, applied separately to the northern and southern sky. The second method is a multipole analysis, which expands the measured data in the northern hemisphere into spherical harmonics and uses the resulting expansion coefficients to separate signal from background. With both methods, the results are consistent with the background expectation with a slightly more sparse spatial distribution, corresponding to an underfluctuation. Depending on the assumed number of sources, the resulting upper limit on the flux per source in the northern hemisphere for an $E^{-2}$ energy spectrum ranges from $1.5 \cdot 10^{-8}\,\mathrm{GeV}/(\mathrm{cm}^2 \mathrm{s})$, in the case of one assumed source, to $4 \cdot 10^{-10} \,\mathrm{GeV}/(\mathrm{cm}^2 \mathrm{s})$, in the case of $3500$ assumed sources. Comments: 35 pages, 16 figures, 1 table Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) MSC classes: 85-05 ACM classes: J.2 Journal reference: Astroparticle Physics 66 (2015) 39-52 DOI: 10.1016/j.astropartphys.2015.01.001 Cite as: arXiv:1408.0634 [astro-ph.HE] (or arXiv:1408.0634v2 [astro-ph.HE] for this version) From: Martin Leuermann [view email] [v1] Mon, 4 Aug 2014 10:26:04 UTC (1,024 KB) [v2] Wed, 14 Jan 2015 17:45:06 UTC (1,028 KB) astro-ph.HE
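The multipole method described in the abstract expands the measured arrival directions in spherical harmonics and works with the resulting expansion coefficients. The snippet below is only a generic illustration of how such coefficients can be estimated from a list of event directions, using an isotropic toy sample and an arbitrary range of multipoles; none of the IceCube analysis specifics (exposure, hemisphere cuts, signal injection) are reproduced.

```python
import numpy as np
from scipy.special import sph_harm   # note scipy's argument order: (m, l, azimuth, polar)

rng = np.random.default_rng(0)

# Toy "event sample": isotropic directions on the sphere (not IceCube data).
n_events = 5000
azimuth = rng.uniform(0.0, 2.0 * np.pi, n_events)
polar = np.arccos(rng.uniform(-1.0, 1.0, n_events))

# Estimate a_lm ~ sum_i conj(Y_lm(dir_i)) and the angular power per multipole l.
for l in range(1, 5):
    a_lm = np.array([np.sum(np.conj(sph_harm(m, l, azimuth, polar)))
                     for m in range(-l, l + 1)])
    C_l = np.sum(np.abs(a_lm) ** 2) / (2 * l + 1) / n_events
    print(f"l = {l}: estimated power C_l ~ {C_l:.4f}")
```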
Power handling of silicon microring modulators

Marc de Cea,* Amir H. Atabaki, and Rajeev J. Ram
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
*Corresponding author: [email protected]

Marc de Cea, Amir H. Atabaki, and Rajeev J. Ram, "Power handling of silicon microring modulators," Opt. Express 27, 24274-24285 (2019)
Revised Manuscript: June 17, 2019

Silicon photonic wavelength division multiplexing (WDM) transceivers promise to achieve multi-Tbps data rates for next-generation short-reach optical interconnects. In these systems, microring resonators are important because of their low power consumption and small footprint, two critical factors for large-scale WDM systems. However, their resonant nature and silicon's strong optical nonlinearity give rise to nonlinear effects that can deteriorate the system's performance with optical powers on the order of milliwatts, which can be reached on the transmitter side where a laser is directly coupled into resonant modulators. Here, a theoretical time-domain nonlinear model for the dynamics of optical power in silicon resonant modulators is derived, accounting for two-photon absorption, free-carrier absorption and thermal and dispersion effects. This model is used to study the effects of high input optical powers over modulation quality, and experimental data in good agreement with the model is presented. Two major consequences are identified: the importance of a correct initialization of the resonance wavelength with respect to the laser due to the system's bistability; and the existence of an optimal input optical power beyond which the modulation quality degrades.

Silicon photonic optical interconnects [1] promise to meet the growing demands in communication bandwidth thanks to their scalability, high bandwidth density and low loss. Key to achieving such high data rates is the ability to carry and manipulate multiple data streams carried by closely-spaced wavelengths in a single waveguide or fiber, in what is known as dense wavelength division multiplexing (DWDM) [2]. The microring resonator arises as a promising element in these systems due to its small footprint, low power consumption and narrow wavelength selectivity [3], allowing for independent manipulation of each wavelength by cascading microrings on an optical bus [4]. High-speed modulation can be obtained with ring resonators by shifting its resonance wavelength through a change in its carrier density, and therefore refractive index [5], by applying an external voltage to a pn junction embedded in the device [6,7]. Nevertheless, ring-based DWDM systems are challenging due to their narrow-band nature that makes them highly sensitive to fabrication variations and environmental fluctuations (e.g., substrate temperature).
In addition, the strong light confinement and electric field enhancement exhibited in these structures due to their high quality factors and small sizes gives rise to high optical power densities, which combined with high third-order nonlinearity in silicon [8], generate nonlinear effects that can cause unwanted behavior even for moderate input optical powers on the order of mW. Effects such as thermal [9] and carrier [10] induced optical bistabilities, as well as self-pulsation due to the competition between these processes [11], have been reported in silicon microring or microdisk resonators. As the loss of specific components in a communications link (such as grating couplers, modulators or detectors) become more difficult to improve, the push for longer link distances and improved link budgets will require an increase in input optical powers, exacerbating the issues associated to handling high powers in general, and particularly in resonant modulators. As a consequence, understanding the behavior of resonant modulators under high optical powers becomes key for the design and operation of practical DWDM systems for the data center and telecommunication spaces. Time domain models have been developed to study nonlinear effects in silicon resonators. In [11], Johnson et al. presented a model accounting for the effects of two photon absorption (TPA), free carrier absorption (FCA) and self-heating. A similar model focusing on carrier effects was also presented in [12]. These models are developed for passive structures, in which no active external modulation of the device resonance is possible. Here, a time domain model for externally modulated silicon resonators is implemented which accounts for the main nonlinearities that affect modulation performance – free carrier dispersion, FCA, TPA and thermal effects – and is used to study power handling limits in silicon microring modulators (Fig. 1(a)). We apply this model to our CMOS photonic microring modulators and demonstrate that modeling results are in good agreement with experiments, confirming the validity of the derived model. Fig. 1. (a) Top view of the modeled resonant modulator. $S_{in}(t)$ and $S_{out}(t)$ represent the input and output optical E-fields, and $U(t)$ is the energy stored in the microring. $V(t)$ is the modulator's driving signal. (b) Optical micrograph of the ring resonator used in the experiments. (c) Diagram of the physical phenomena occurring in a silicon optical device in the presence of two photon absorption (TPA). (d) Diagram of nonlinear effects in a silicon ring modulator and their inter-dependence. The modulation signal ($V(t)$) changes the resonance frequency ($\Delta W_0$) of the device, affecting the total stored energy in the resonator and setting the strength of TPA. TPA, in turn, modifies the resonance through two effects: free-carrier dispersion and self-heating. These two phenomena compete in opposite directions: free carrier dispersion shifts the resonance to shorter wavelengths, while heating pushes it to longer wavelengths. 2. Theoretical model In [11], Johnson et al. presented a time domain model to describe nonlinear effects in passive high quality factor (Q) silicon resonant structures. There, TPA, FCA and self-heating were accounted for: TPA results in photon loss and the generation of free carriers (which in turn induces more optical loss through FCA), and free-carrier dispersion that shifts the resonance to shorter wavelengths. 
All of the absorption mechanisms cause a heating of the device, inducing thermo-optic dispersion that shifts the resonance to longer wavelengths. The combination of these effects (Fig. 1(c)) results in behaviors such as bistability and self-pulsation in the resonator. In this work, Johnson's model is extended to include active modulation of the resonator with an applied external voltage. Figure 1(d) depicts the physical processes considered in this work (TPA, FCA, thermal dispersion, carrier dispersion and external modulation) and the intricate coupling between them. This model allows us to analyze the behavior of free-carrier-plasma-dispersion microring modulators in the presence of nonlinear effects. A detailed derivation of the nonlinear model based on coupled-mode theory can be found in [11]. Here, we have included three modifications to the model to extend it to actively modulated devices. First, the splitting between clockwise and counterclockwise modes of the ring is not considered, since this effect is only observable in high quality factor (Q $>100,000$) resonators that are not attractive for fast modulation due to their limited bandwidth [18] (optimal Q for high-speed modulation are in the 10,000-20,000 range). Second, the definitions of effective mode volumes for TPA ($V_{TPA}$) and FCA ($V_{FCA}$), and of the field confinement factor ($\Gamma _{ring}$) have been modified by normalizing the electric field to the Poynting vector rather than the E-field energy for better accuracy in high-index-contrast silicon photonic structures [19]. Third, the effect of external modulation voltage on the resonance frequency of the microring ($\Delta W_{0_{mod}}$) is included as an additive term next to the thermal and plasma dispersion terms: (1)$$\frac{\Delta W_0 (t)}{W_0} ={-}\frac{1}{n_{Si}}\left(\frac{dn_{Si}}{dT}\overline{\Delta T(t)} + \left(\frac{dn_{Si}}{dN_p}+\frac{dn_{Si}}{dN_n}\right)\overline{N(t)}\right) + \frac{\Delta W_{0_{mod}}(t)}{W_0}$$ We assume the resonance frequency changes linearly with the pn junction voltage through $dW_0/dV_{pn}$, a valid assumption for most high-speed modulators due to high doping concentrations that lead to very small changes in the depletion width compared to the optical mode size: (2)$$\Delta W_{0_{mod}}(t) = \frac{dW_0}{dV_{pn}}V_{pn}(t)$$ The voltage across the pn junction ($V_{pn}$) whose depletion region is modulated through $V(t)$ is modeled as a first order system with time constant $\tau$: (3)$$\frac{dV_{pn}(t)}{dt} = \frac{-V_{pn}(t)}{\tau} + \frac{V(t)}{\tau}$$ Due to the dependence of the depletion capacitance ($C(t)$) and series resistance ($R(t)$) of the device on the applied voltage, the time constant $\tau$ is time dependent. Nevertheless, this dependence is very small due to the high doping concentrations used in these devices and is considered negligible in this work. A variable order Adams-Bashforth-Moulton predictor-corrector method [20] has been used to solve the coupled nonlinear model. We solve for the time evolution of the optical signal at the output of the modulator and that of the resonator variables (resonance wavelength, stored energy, temperature, etc.) as a binary (i.e., on-off keying) electrical voltage signal is applied to the modulator, allowing for the visualization of the time-domain behavior of the modulator and giving insight into its operation in the nonlinear regime (see Figs. 4 and 5). 3. 
Optical modulator under study In this work, we will study a depletion-type pn-junction silicon ring modulator designed for a wavelength of 1550 nm. The ring has an outer radius of 10 $\mu$m, is 1.7 $\mu$m wide and roughly 100 nm thick [21] (Figs. 1(a) and 1(b)). This device was fabricated in a commercial microelectronic foundry (45nm SOI CMOS process node) using the 'zero change' CMOS approach [22]. High speed operation up to 25 Gbps of this device has been demonstrated in [16]. In the time-domain nonlinear model, different physical parameters capture linear and nonlinear effects in the resonator. These parameters are listed in Table 1 along with a brief description for their derivation for the device under study. An important parameter in our model is the free-carrier lifetime, which is estimated to be on the order of 0.1 ns in our device [17]. This magnitude is consistent with the estimated lifetime from operation of the device in the forward bias regime, and lifetime variations on the order of 0.5-5x do not cause significant differences in the qualitative behavior of the model. The thermal time constant $\gamma _{th}$ has been derived using the estimates for the thermal impedance (through self-heating resonance shift measurements) and heat capacitance of the device. Table 1. Model parameters corresponding to the silicon microring modulator studied in this work. FEM = Finite Elements Method [13,14,15]. The model described in the previous sections allows us to obtain the time domain optical waveform at the output of the modulator device $s_{out}(t)$, allowing direct comparison between modulation results under different operational conditions. Here, the optical modulation amplitude (OMA) will be used as a measure of the quality of the modulation, as it can be directly related to the bit error rate (BER) in a communication link [24]. The OMA is defined as the difference between the mean value of the '1' ($\mu _1$) and '0' ($\mu _0$) bits, and can be rewritten as $\mu _1 - \mu _0 = \frac {P_{in}}{IL}\left (1-\frac {1}{ER}\right )$, where $P_{in}$ is the input optical power, and $ER$ and $IL$ are the extinction ratio and insertion loss of the modulator, respectively. 4.1 Device initialization The presence of bistabilities under high optical powers makes it important to bring the resonator into a stable point before starting the modulation. Figure 2(a) shows the bistability curve of the device for a 0.45 mW optical power launched in the input waveguide, simulated using [23]. The red shift of the resonance ($\lambda _0$) in states A and B is observed due to the thermal effect. Fig. 2. Device initialization. (a) Bistability curve of the microring extracted using the model in [23] for a 0.45 mW input power. $\lambda _0$ is the resonance wavelength of the ring and $\lambda$ is the laser wavelength. (b) Transmission as a function of wavelength for a 0.45 mW input optical power obtained with the model presented in this work. Red curve corresponds to the laser being abruptly turned on at a fixed wavelength, and the blue curve corresponds to the laser being swept from the blue side of the resonance and stopped at the target wavelength. The ER achievable when the laser is swept is considerably higher. By tracking the resonance along the A+B branch through sweeping the laser ($\lambda$) from shorter wavelengths, the laser can be placed on any arbitrary point on the blue side of the Lorentzian curve of the resonator. This is shown by the blue curve in Fig. 
2(b), which shows the transmission value simulated with our time domain model as wavelength is swept from the blue side of the resonance toward longer wavelengths. The consequence of this initialization approach is that it allows us to achieve an arbitrary small value for the transmitted zero bit ($\mu _0$) and therefore high ER and high OMA modulation. However, if instead the laser is not swept but is abruptly turned on at a given wavelength, the ability to track the resonance is lost, causing a transition from branch A in the blue side of the resonance to branch C in the red side. This is shown by the red curve in Fig. 2(b), which was again obtained by solving the nonlinear model presented above. It is seen that as opposed to the swept initialization case (blue curve), when the laser is directly turned on at the target wavelength the transmission values cannot approach zero, resulting in a low OMA modulation and therefore high BER. Furthermore, when the laser is turned on abruptly self-modulation due to competing heating (moving the resonance to longer wavelengths) and free-carrier (moving the resonance to shorter wavelengths) dispersion can occur (shaded band in Fig. 2(b)), preventing successful data modulation. Note that the same initialization (tracking down the A+B branch) is used for most nonlinear experiments with high-Q resonators, as the goal in those experiments is to achieve maximum power drop in the resonator for maximizing the nonlinearity [25], which overlaps with our goal to achieve low '0' bit values (strong power drop) for modulation. 4.2 Power handling: optimal operational point Using our time-domain nonlinear solver, we explored power handing capabilities of the microring modulator as the device is driven by a pseudo-random bit sequence with the characteristics specified in Table 1. Different input optical powers and optical carrier (laser) wavelengths (swept from the blue side of the resonance as mentioned in the previous section) were considered, and the modulation performance metrics (ER, IL, OMA) were extracted by analyzing the obtained time-domain transmission waveforms. To confirm the validity of the model, the same study was performed experimentally on the device described in Section 3 under the same operational conditions as the simulation. The modulator's output optical signal for different input optical powers and laser wavelengths was recorded using a photodetector connected to a high speed oscilloscope and analyzed to extract modulation performance metrics. To reach high input optical powers an Erbium Doped Fiber Amplifier (EDFA) was used to amplify the light coming out of a tunable laser, and a second EDFA was used at the output of the modulator device to increase the strength of the signal going into the photodetector. A narrowband filter was used to reduce the Amplified Spontaneous Emission (ASE) noise at the output of the second amplifer. Nevertheless, to avoid the insertion loss hit in the input optical power ($\approx 6 \, dB$), no ASE filter was used after the first EDFA at the input of the device. Figures 3(a) and 3(b) show the evolution of ER, IL and OMA with wavelength for a fixed input optical power of 0.45 mW and a 4 $V_{pp}$ driving voltage derived with the model (Fig. 3(a)) and obtained experimentally (Fig. 3(b)). OMA results are normalized to its maximum to allow for a direct comparison between experimental OMA (measured in mV at the output of the photoreceiver) and simulated OMA (given in mW at the output of the microring). Fig. 3. 
Device optimum operational point. Theoretical (a) and experimental (b) evolution of the ER (black), IL (red) and normalized OMA (blue) as a function of laser wavelength for a 0.45 mW input power. Theoretical (c) and experimental (d) maximum attainable OMA (blue, left axis) and wavelength at which this value is reached (red, right axis) as a function of input power. The dashed line shows the expected evolution of OMA if no nonlinearities were present. Reported experimental powers are at the center laser wavelength and do not account for the extra input power due to unfiltered ASE optical power coming from the EDFA. The data rate used is 0.5 Gbps. The results given by the model are in good agreement with the experimental curves. The existence of two distinct peaks (corresponding to modulation in the blue (peak at lower wavelengths) and red (peak at higher wavelengths) sides of the ring resonance) for OMA and ER is correctly modeled, as well as the fact that the maximum IL occurs between these two peaks (due to modulation happening right at the resonance dip). The model also correctly captures the fact that better OMA is obtained when modulation is performed at the blue side of the resonance, and that the maximum OMA is obtained when the ER is maximum. Due to the theoretical model not accounting for noise, experimental ER (IL) values are lower (higher) than theoretical values. For the same reason, experimental curves show a smoother dependence with wavelength. Both experiment and theory show that there is a well-defined wavelength where the best performance is achieved for a specific input optical power. As expected, the IL is near zero when the laser is far from the 'hot' resonance of the device (the resonance under nonlinear effects), and increases and reaches a peak as it approaches the transmission dip (where the modulation swing is not enough to move the device out of the resonance line-shape). The ER increases as the laser approaches the resonance dip from the blue side, and reaches a maximum when the system is able to reach the critical coupling condition and close to zero transmission for bit '0'. As the laser moves slightly to longer wavelengths, ER drops dramatically as the modulation now happens between the two sides of the resonance lineshape, resulting in a return-to-zero (RZ) like pattern with a high '0' output power. ER increases and reaches a second peak as modulation moves completely to the red side of the resonance, but the value of ER is low due to the resonance returning quickly to its cold state (state D in Fig. 2(a)). Figures 3(c) and 3(d) show the highest achieved OMA (blue curve; normalized to its maximum to allow for comparison of experimental and theoretical results) along with the wavelength (red curve) at which it is achieved derived with the model (Fig. 3(c)) and obtained experimentally (Fig. 3(d)). Again, good qualitative agreement between experimental and theoretical results is achieved: (1) the experimental optimal operational wavelengths are very closely reproduced by our model, and (2) the saturation and eventual decrease in the maximum attainable OMA as input optical power increases, which is predicted by the theoretical model, is also observed experimentally, although not as clearly. While simulated data shows a peak in OMA followed by a saturation (see Fig. 5(a)), experimental data only shows the saturation of OMA. Notice how, in simulation, the difference between the peak OMA and its saturated value is roughly 20%, or 1dB. 
We believe that we could have missed this relatively small peak due to our coarse choice of wavelengths in the experiment. Nevertheless, both the theory and experiment point to presence of an optimum input optical power: if the maximum attainable OMA has plateaued, there is no need to increase the power as this will only increase power consumption but won't improve modulation quality. The existence of nonlinear effects results in an optimal input optical power for modulation quality: an increase in the power beyond this limit will deteriorate the performance due to the enhancement of nonlinear effects. Note that, if nonlinearities were not present in the system, OMA would linearly increase with input power (dashed black curves in Fig. 3(c) and 3(d)) as ER and IL would be independent of the optical power (resulting in OMA being only dependent on $P_{in}$). In the presence of nonlinearities, there is a power above which the negative effects of increased thermal and carrier dispersion (which cause a fluctuation in the output power values for bits '1' and '0', see Fig. 4) overcome the gain of using a high input power that would otherwise result in a higher '1' output power and an improvement in OMA. Fig. 4. Effects of nonlinearities in time domain. Simulated normalized output optical power as a function of time (blue, left axis) and resonance frequency shift due to temperature (red, right axis) and carrier density (green, right axis) fluctuations for 0.3 mW input optical power (a) and for 2 mW input optical power (c). Resonance shifts are referenced to the minimum shift for the operational condition being considered, so that the shown curves are $\Delta W_X(t)-min \left \{ \Delta W_X(t)\right \}$, where X refers to either temperature or carrier dispersion. Experimental normalized output optical power as a function of time for 0.1 mW input optical power (b) and for the experimental optimal input optical power of 1.65 mW (d). Reported experimental powers are at the center laser wavelength and do not account for extra input power due to unfiltered ASE optical power coming from the EDFA. The data rate is 0.5 Gbps. Reported temperature and carrier averages and standard deviations are calculated over a 2 $\mu s$ time series. As can be observed, the theoretical optimal input power ($\approx$ 4.2 mW) is around 2.5x higher than the experimentally measured optimum ($\approx$ 1.65 mW). This difference is due to the fact that the experimental powers measured and reported correspond to the power at the center wavelength of the laser, but the effective power entering the cavity is higher due to the presence of unfiltered ASE optical power. By studying the shift in resonance wavelength as a function of input optical power with and without an input EDFA, we observed that the total effective power driving nonlinearities was approximately 2-3 dB higher than the center wavelength input power. Additionally, uncertainties in the input grating coupler loss (which translate into uncertainties in the power launched into the ring) and the difficulty of experimentally reaching the exact optimum operational point also contribute to this difference. Due to the use of non-optimized grating couplers (with an insertion loss of around 10 dB) and a limited gain EDFA, the maximum input optical power that could be experimentally reached at the input waveguide of the modulator was around 2.6 mW. 
At high modulation speeds (a few Gbps) this amount of optical power would not result in significant nonlinear effects due to the slow thermal response of the device, which translates in lower thermal nonlinearities at the same level of input power. Therefore, in order to experimentally observe nonlinear phenomena at lower powers the data rate was reduced to 0.5 Gbps. Figure 4 shows simulated and experimental transmission waveforms for low power (Fig. 4(a) - theoretical, Fig. 4(b) - experimental) and at the optimal input power (Fig. 4(c) - theoretical, Fig. 4(d) - experimental). The theoretical waveforms also show the thermal (red curve) and free carrier dispersion (green curve) temporal evolution (first and second terms in Eq. (1)). Again, good agreement between theoretical and experimental waveforms is obtained. For low input powers (Figs. 4(a) and 4(b)) nonlinearities are weak, so resonance fluctuations are minimal and stable output power values for the '0' and '1' bits are achieved. At the optimal input power (Figs. 4(c) and 4(d)) the increase in temperature and carrier density variations generates visible fluctuations in the output power for bits '0' and '1'. Notice how our model correctly predicts the time evolution of the '0' and '1' levels observed experimentally: '1' output powers show a negative slope with time due to a temperature decrease (which moves the resonance towards shorter wavelengths and thus closer to the laser wavelength, as we concluded that optimal modulation is obtained with the laser on the blue side of the resonance), while '0' output powers show a positive slope with time due to the complementary effect (as we are closer to the resonance, there is a temperature increase which moves the resonance towards longer wavelengths and away from the operating laser wavelength). Notice how experimental '0' values do not reach close to zero transmission (as shown by simulation) due to the presence of noise. As power is increased beyond the optimal point, the enhancement in thermal and carrier dispersion and corresponding '0' and '1' output power fluctuations is so high that the modulation performance decreases. To further explore the effects of high input powers over device performance, simulations were performed for input powers well above the optimum. The results are summarized in Fig. 5. As could be inferred from Fig. 3(c), the existence of an optimum input power beyond which modulation quality decreases is confirmed (Fig. 5(a)). Fig. 5. Optimal operation point for high input optical powers. (a) Maximum attainable OMA (blue, left axis) and wavelength at which this value is reached (red, right axis) as a function of input power derived with the model. The dashed line shows the expected performance if no nonlinearities were present. (b)(c) Output optical power as a function of time (blue, left axis) and resonance frequency shift due to temperature (red, right axis) and carrier density (green, right axis) fluctuations for 5 mW of input optical power at the optimal operation wavelength of 1546.32 nm (b) and with a wavelength of 1547.52 nm, closer to the resonance (c). Resonance shifts are referenced to the minimum shift for the operational condition being considered, so that the shown curves are $\Delta W_X(t)-min \left \{ \Delta W_X(t)\right \}$, where X refers to either temperature or carrier dispersion. Reported temperature and carrier averages and standard deviations are calculated over a 2 $\mu s$ time series. 
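To make the structure of the solver concrete, the following minimal sketch integrates coupled equations of the kind described in Section 2 (a coupled-mode field equation plus carrier, temperature and junction-voltage equations, with the resonance shift of Eq. (1)) and then extracts per-bit levels, ER, IL and OMA from the resulting waveform. It is only an illustration of the bookkeeping: the lumped nonlinear coefficients are order-of-magnitude placeholders rather than the calibrated values of Table 1, a fixed-step forward-Euler stepper replaces the variable-order Adams-Bashforth-Moulton solver used in the paper, and the printed numbers are not meant to reproduce Figs. 3-5.

```python
import numpy as np

# Illustrative parameters: placeholders, NOT the calibrated values of Table 1
wl0    = 1550e-9                      # cold-cavity resonance wavelength [m]
w0     = 2*np.pi*3e8/wl0              # cold-cavity angular frequency [rad/s]
Qi, Qe = 3e4, 3e4                     # intrinsic / coupling quality factors
gi, ge = w0/Qi, w0/Qe                 # corresponding energy decay rates [1/s]
tau_fc, tau_th, tau_rc = 100e-12, 0.5e-6, 20e-12  # carrier/thermal/RC constants
n_si, dn_dT, dn_dN = 3.48, 1.86e-4, -1e-27        # dn/dN is a crude placeholder
c_tpa, c_fca = 1e22, 1e-7             # extra loss per stored energy / per carrier
g_gen, c_heat = 1e45, 1e9             # TPA carrier generation / heating strengths
dw_dV  = -2*np.pi*10e9                # resonance shift per volt [rad/s/V]

P_in, detune = 1e-3, -2*np.pi*20e9    # input power [W]; laser on the blue side
bit_rate, Vpp = 2e9, 4.0              # NRZ pseudo-random drive, 4 Vpp
dt, T_end = 0.1e-12, 10e-9

rng   = np.random.default_rng(1)
bits  = rng.integers(0, 2, int(T_end*bit_rate) + 1)
drive = lambda t: (bits[int(t*bit_rate)] - 0.5)*Vpp

# Forward-Euler integration of the coupled equations
a, N, dT, Vpn = 0j, 0.0, 0.0, 0.0     # field, carriers, temperature, junction V
s_in   = np.sqrt(P_in)
t_axis = np.arange(0.0, T_end, dt)
P_out  = np.empty_like(t_axis)
for k, t in enumerate(t_axis):
    U    = abs(a)**2                                    # stored energy
    g_nl = c_tpa*U + c_fca*N                            # TPA + FCA excess loss
    dw   = -(w0/n_si)*(dn_dT*dT + dn_dN*N) + dw_dV*Vpn  # Eq. (1)
    dadt = (1j*(detune + dw) - 0.5*(gi + ge + g_nl))*a + np.sqrt(ge)*s_in
    dNdt = -N/tau_fc + g_gen*U**2                       # TPA-generated carriers
    dTdt = -dT/tau_th + c_heat*(gi + g_nl)*U            # heating by absorbed power
    dVdt = (drive(t) - Vpn)/tau_rc                      # Eq. (3)
    a, N, dT, Vpn = a + dadt*dt, N + dNdt*dt, dT + dTdt*dt, Vpn + dVdt*dt
    P_out[k] = abs(s_in - np.sqrt(ge)*a)**2             # all-pass transmission

# Crude per-bit metric extraction using the settled half of each bit slot
spb    = int(1/(bit_rate*dt))
levels = P_out[:(P_out.size//spb)*spb].reshape(-1, spb)[:, spb//2:].mean(axis=1)
ub     = bits[:levels.size]
mu1, mu0 = levels[ub == 1].mean(), levels[ub == 0].mean()
print(f"OMA = {mu1 - mu0:.2e} W, ER = {10*np.log10(mu1/mu0):.1f} dB, "
      f"IL = {10*np.log10(P_in/mu1):.1f} dB")
```

Sweeping the assumed `detune` and `P_in` values over a grid and recording the resulting OMA gives, qualitatively, the kind of wavelength and power maps discussed around Figs. 3 and 5.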
As power is increased beyond the maximum OMA point to 4.75 mW, an interesting trend is observed: the optimal operation results in bit 0 backing off from close to zero transmission (Fig. 5(b) for a 5 mW input power), showing that at such high powers it is necessary to keep the laser away from critical coupling in order to reduce the energy stored in the resonator and thus lower optical nonlinearities. This is also seen in Fig. 5(a), where the optimal wavelength moves toward shorter wavelengths for input powers above 4.75 mW (see the distinct knee in the red curve of Fig. 5(a)) . Figure 5(c) shows the resulting waveform at 5 mW input power if the laser is tuned closer to the resonance ($\lambda ' = \lambda _{opt} + 800 \enspace pm$) and critical coupling: thermal and free carrier dispersion effects are increased to a level that leads to very strong fluctuations in the '0' and '1' bits, and degradation of the modulation quality. The agreement with experimental results at the modest data rates considered suggests that our theoretical model correctly captures the effects of both free carrier and thermal nonlinearities. Since modern data transmission devices work at speeds at least an order of magnitude higher than the 0.5 Gbps data rate considered in this work, it is pertinent to ask ourselves how does the behavior described here, and in particular the optimum input power, change as the data rate of the modulating signal is increased. As bit time decreases well below the thermal time constant of the device ($\tau _{th} \approx 0.5 \mu s$), the extent of thermal fluctuations (and thus its penalty on modulation efficiency) will decrease for the same input power due to the system's temperature not responding rapidly enough to the changes in the absorbed optical power. For the same reason, these thermal effects will become pattern dependent, that is, temperature changes will depend on how many consecutive '0' ('1') bits does the transmitting signal have, since this will set the maximum time the system has to increase (decrease) its temperature. Thus, for the same data pattern, an increase in the input optimal power with data rate is expected as it transitions from a bit time longer or comparable to the thermal time constant ($T_b >\approx \tau _{th}$) to a much shorter bit period ($T_b \ll \tau _{th}$). The increase in optimum power will stop when the decrease in bit time does not have a significant effect in the relative magnitude of $T_b$ with respect to $\tau _{th}$. At data rates in which ($\tau _{th} \gg T_b > \tau _{fc}$), only free carrier effects will limit the maximum power that the device can handle. While the bit time is smaller than the free carrier lifetime, no change in the optimum power is expected, since the free carrier response won't be limited by the time the device stays on or off resonance. But if we keep increasing the data rate to a point in which $T_b$ becomes comparable to $\tau _{fc}$, the extent of carrier dispersion fluctuations (and thus its penalty on modulation efficiency) will decrease due to the free carrier population not being able to respond to changes in the bits being transmitted. At this point, both free carrier and thermal dispersion effects will become pattern dependent, and it is expected that for the same pattern there will be an increase in the optimum power as data rate increases. The confirmation of this expected behavior through the time domain nonlinear model presented here is left for a follow-up paper. 
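Although the quantitative confirmation is deferred, the direction of the thermal part of this argument can be illustrated with a toy calculation: treat the ring temperature as a first-order low-pass filter (time constant tau_th ≈ 0.5 μs, as quoted above) driven by a bit-dependent absorbed power, and compare the residual fluctuation produced by the same pattern at different data rates. This is only a scaling illustration under that simplifying assumption, not a prediction for the device.

```python
import numpy as np

tau_th = 0.5e-6                             # thermal time constant from the text [s]
rng    = np.random.default_rng(0)
bits   = rng.integers(0, 2, 5000)           # one fixed pseudo-random pattern
p_abs  = np.repeat(bits, 20).astype(float)  # bit-dependent absorbed power (a.u.)

for rate in (0.5e9, 5e9, 50e9):             # data rates [bit/s]
    dt = 1.0/(rate*20)                      # 20 samples per bit
    x, out = 0.5, np.empty_like(p_abs)      # start the filter at the pattern mean
    alpha = dt/tau_th
    for k, p in enumerate(p_abs):           # first-order low-pass: dT/dt = (p - T)/tau
        x += alpha*(p - x)
        out[k] = x
    print(f"{rate/1e9:5.1f} Gb/s (T_b/tau_th = {1/(rate*tau_th):.1e}): "
          f"relative thermal fluctuation = {out.std():.4f}")
```

The printed fluctuation shrinks as the bit time falls further below tau_th, consistent with the expectation that, for a fixed pattern, higher data rates push the thermally limited optimum input power upward.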
We presented a time-domain model for silicon ring modulators under high input optical powers accounting for two-photon absorption, free-carrier absorption, thermal and dispersion effects to study the consequences of optical nonlinearities on modulation quality, and experimentally verified the theoretical results. We showed the existence of an optimal input optical power at which the best modulation performance (in terms of OMA) is achieved. Increasing the power beyond this point will degrade modulation by closing the eye diagram due to strong fluctuation in the resonance wavelength caused by thermal and free-carrier dispersion effects. The necessity for correct initialization of the microring modulator through either tuning of the resonator or of the laser wavelength to place the device in the correct bi-stable state was also discussed. These observations and the model derived here can be utilized for the design of practical microring modulators that will be needed for data-center optical interconnects or telecommunications systems. Defense Advanced Research Projects Agency (DARPA) (HR0011-11-C-0100); "la Caixa" Foundation (LCF-BQ-AA17-11610001). R.J.R. is developing silicon photonic technologies at Ayar Labs, Inc. 1. D. A. B. Miller, "Rationale and challenges for optical interconnects to electronic chips," Proc. IEEE 88(6), 728–749 (2000). [CrossRef] 2. C. A. Brackett, "Dense wavelength division multiplexing networks: principles and applications," IEEE J. on Sel. Areas Commun. 8(6), 948–964 (1990). [CrossRef] 3. K. Preston, N. Sherwood-Droz, J. S. Levy, and M. Lipson, "Performance guidelines for WDM interconnects based on silicon microring resonators," in CLEO: 2011 - Laser Science to Photonic Applications, pp. 1–2 (2011). 4. A. Joshi, C. Batten, Y. J. Kwon, S. Beamer, I. Shamim, K. Asanovic, and V. Stojanovic, "Silicon-photonic clos networks for global on-chip communication," in 2009 3rd ACM/IEEE International Symposium on Networks-on-Chip, pp. 124–133 (2009). 5. R. Soref and B. Bennett, "Electrooptical effects in silicon," IEEE J. Quantum Electron. 23(1), 123–129 (1987). [CrossRef] 6. Q. Xu, B. Schmidt, S. Pradhan, and M. Lipson, "Micrometre-scale silicon electro-optic modulator," Nature 435(7040), 325–327 (2005). [CrossRef] 7. E. Timurdogan, C. M. Sorace-Agaskar, J. Sun, E. S. Hosseini, A. Biberman, and M. R. Watts, "An ultralow power athermal silicon modulator," Nat. Commun. 5(1), 4008 (2014). [CrossRef] 8. M. Dinu, F. Quochi, and H. Garcia, "Third-order nonlinearities in silicon at telecom wavelengths," Appl. Phys. Lett. 82(18), 2954–2956 (2003). [CrossRef] 9. V. R. Almeida and M. Lipson, "Optical bistability on a silicon chip," Opt. Lett. 29(20), 2387–2389 (2004). [CrossRef] 10. Q. Xu and M. Lipson, "Carrier-induced optical bistability in silicon ring resonators," Opt. Lett. 31(3), 341–343 (2006). [CrossRef] 11. T. J. Johnson, M. Borselli, and O. Painter, "Self-induced optical modulation of the transmission through a high-Q silicon microdisk resonator," Opt. Express 14(2), 817–831 (2006). [CrossRef] 12. M. Soltani, S. Yegnanarayanan, Q. Li, A. A. Eftekhar, and A. Adibi, "Self-sustained gigahertz electronic oscillations in ultrahigh-$Q$ photonic microresonators," Phys. Rev. A 85(5), 053819 (2012). [CrossRef] 13. S. M. Sze and K. K. Ng, Physics of Semiconductor Devices (John Wiley & Sons Ltd., 2006). 14. G. Treyz, "Silicon Mach-Zehnder waveguide interferometers operating at 1.3 $\mu$m," Electron. Lett. 27(2), 118–120 (1991). [CrossRef] 15. M. Borselli, T. J. Johnson, and O. 
Painter, "Accurate measurement of scattering and absorption loss in microphotonic devices," Opt. Lett. 32(20), 2954–2956 (2007). [CrossRef] 16. M. de Cea, A. H. Atabaki, L. Alloatti, M. Wade, M. Popovic, and R. J. Ram, "A thin silicon photonic platform for telecommunication wavelengths," in 2017 European Conference on Optical Communication (ECOC), pp. 1–3 (2017). 17. D. Dimitropoulos, R. Jhaveri, R. Claps, J. C. S. Woo, and B. Jalali, "Lifetime of photogenerated carriers in silicon-on-insulator rib waveguides," Appl. Phys. Lett. 86(7), 071115 (2005). [CrossRef] 18. G. Li, A. V. Krishnamoorthy, I. Shubin, J. Yao, Y. Luo, H. Thacker, X. Zheng, K. Raj, and J. E. Cunningham, "Ring resonator modulators in silicon for interchip photonic links," IEEE J. Sel. Top. Quantum Electron. 19(6), 95–113 (2013). [CrossRef] 19. J. T. Robinson, K. Preston, O. Painter, and M. Lipson, "First-principle derivation of gain in high-index-contrast waveguides," Opt. Express 16(21), 16659–16669 (2008). [CrossRef] 20. L. Shampine and M. Reichelt, "The MATLAB ODE suite," SIAM J. Sci. Comput. 18(1), 1–22 (1997). [CrossRef] 21. L. Alloatti, D. Cheian, and R. J. Ram, "High-speed modulator with interleaved junctions in zero-change CMOS photonics," Appl. Phys. Lett. 108(13), 131101 (2016). [CrossRef] 22. J. S. Orcutt, B. Moss, C. Sun, J. Leu, M. Georgas, J. Shainline, E. Zgraggen, H. Li, J. Sun, M. Weaver, S. Urošević, M. Popović, R. J. Ram, and V. Stojanović, "Open foundry platform for high-performance electronic-photonic integration," Opt. Express 20(11), 12222–12232 (2012). [CrossRef] 23. C. Sun, M. Wade, M. Georgas, S. Lin, L. Alloatti, B. Moss, R. Kumar, A. H. Atabaki, F. Pavanello, J. M. Shainline, J. S. Orcutt, R. J. Ram, M. Popović, and V. Stojanović, "A 45 nm CMOS-SOI monolithic photonics platform with bit-statistics-based resonant microring thermal tuning," IEEE J. Solid-State Circuits 51(4), 893–907 (2016). [CrossRef] 24. G. P. Agrawal, Fiber-Optic Communication Systems (John Wiley & Sons, Inc., 2011). 25. T. Carmon, T. J. Kippenberg, L. Yang, H. Rokhsari, S. Spillane, and K. J. Vahala, "Feedback control of ultra-high-Q microcavities: application to micro-raman lasers and micro-parametric oscillators," Opt. Express 13(9), 3558–3566 (2005). [CrossRef] D. A. B. Miller, "Rationale and challenges for optical interconnects to electronic chips," Proc. IEEE 88(6), 728–749 (2000). C. A. Brackett, "Dense wavelength division multiplexing networks: principles and applications," IEEE J. on Sel. Areas Commun. 8(6), 948–964 (1990). K. Preston, N. Sherwood-Droz, J. S. Levy, and M. Lipson, "Performance guidelines for WDM interconnects based on silicon microring resonators," in CLEO: 2011 - Laser Science to Photonic Applications, pp. 1–2 (2011). A. Joshi, C. Batten, Y. J. Kwon, S. Beamer, I. Shamim, K. Asanovic, and V. Stojanovic, "Silicon-photonic clos networks for global on-chip communication," in 2009 3rd ACM/IEEE International Symposium on Networks-on-Chip, pp. 124–133 (2009). R. Soref and B. Bennett, "Electrooptical effects in silicon," IEEE J. Quantum Electron. 23(1), 123–129 (1987). Q. Xu, , B. Schmidt, S. Pradhan, and M. Lipson, "Micrometre-scale silicon electro-optic modulator," Nature 435(7040), 325–327 (2005). E. Timurdogan, C. M. Sorace-Agaskar, J. Sun, E. S. Hosseini, A. Biberman, and M. R. Watts, "An ultralow power athermal silicon modulator," Nat. Commun. 5(1), 4008 (2014). M. Dinu, F. Quochi, and H. Garcia, "Third-order nonlinearities in silicon at telecom wavelengths," Appl. Phys. Lett. 82(18), 2954–2956 (2003). 
Volume 12 Supplement 9. Genetic Analysis Workshop 20: envisioning the future of statistical genetics by exploring methods for epigenetic and pharmacogenomic data.

Logistic Bayesian LASSO for detecting association combining family and case-control data. Xiaofei Zhou†1, Meng Wang†2, Han Zhang1, William C. L. Stewart1,2 and Shili Lin1. †Contributed equally. BMC Proceedings 2018, 12 (Suppl 9): 54. © The Author(s). 2018

Because of the limited information from the GAW20 samples when only case-control or trio data are considered, we propose eLBL, an extension of the Logistic Bayesian LASSO (least absolute shrinkage and selection operator) methodology so that both types of data can be analyzed jointly in the hope of obtaining increased statistical power, especially for detecting association between rare haplotypes and complex diseases. The methodology is further extended to account for familial correlation among the case-control individuals and the trios. A 2-step analysis strategy was taken to first perform a genome-wide single-SNP (single-nucleotide polymorphism) search using the Monte Carlo pedigree disequilibrium test (MCPDT) to determine interesting regions for the Adult Treatment Panel (ATP) binary trait. Then eLBL was applied to haplotype blocks covering the flagged SNPs in Step 1. Several significantly associated haplotypes were identified; most are in blocks contained in protein coding genes that appear to be relevant for metabolic syndrome. The results are further substantiated with a Type I error study and by an additional analysis using the triglyceride measurements directly as a quantitative trait.

As next-generation sequencing (NGS) technology becomes more accurate and affordable, many recent studies have focused on assessing associations between common complex diseases and single-nucleotide variants (SNVs), paying particular attention to those that are rare. Various methods have been proposed, but most can only achieve the identification of candidate genes or regions. To narrow the list of potential causal variants, it would be helpful to investigate haplotype blocks formed by single-nucleotide polymorphisms (SNPs) in regions/genes where associations are suggested but may not necessarily be genome-wide significant. Apart from being able to identify biologically relevant variants, haplotype-based methods can be more powerful than SNV-based methods as multilocus genotypes contain more information than single-locus genotypes, especially when causal loci interact in cis, leading to disease etiology [1]. If there are rare causal SNVs in a haplotype block, then rare haplotypes can tag such causal variants, a conclusion based on a simulation study [2]. More importantly, rare haplotypes may be obtained from common SNPs, rendering NGS data unnecessary. The power for detecting rare haplotype associations is further enhanced in a family-based study, as rare associated variants are enriched in families afflicted with the disease compared to population samples of independent cases and controls of the same size. Currently, numerous methods exist, including a class based on Logistic Bayesian LASSO (LBL) for detecting associations of haplotypes, common or rare, using either case-control or family-based data [1, 3]. The GAW20 Real Data Package provides a good opportunity to apply LBL to identify haplotypes that are associated with metabolic syndrome. Specifically, we consider the ATP binary trait derived from the measurements taken at visit 2 (before drug intervention).
Among the 188 pedigrees in the data set, only 17 contain complete case–parent trios (ie, genotype information for both parents and the child and phenotype status for the child are all available), leading to a total of only 25 such trios. In addition, we extracted 283 cases (ATP = 1) and 475 controls (ATP = 0) with available genotype information. Because the number of trios is extremely small, it is clear that there is insufficient power to detect haplotype association using these data alone. However, their inclusion may enhance detection power compared to when only case-control data are used. Because the current LBL methodology focuses on a single study design, we propose eLBL, an extension of the LBL methodology to combine case–control and case–parent trios data for a joint analysis. Furthermore, because cases, controls, and trios are all extracted from the same set of pedigrees, there are intrinsic correlations. To account for such familial dependency, we have adopted a composite likelihood adjustment approach. Extension of the logistic Bayesian LASSO accounting for familial dependency Suppose we have n = n1 + n2 individuals, where n1 and n2 are the numbers of cases and controls, respectively. Let Y = (Y1, Y2,...,Yn), where Yi denotes the affection status of the ith individual, with case = 1 and control = 0. Let Z = (Z1, Z2,...,Zn), where Zi is the (unobserved haplotype pair) of individual i, while the observed genotype matrix is denoted by G = (G1, G2,...,Gn), where Gi is the genotype vector (over a set of SNPs) of the ith individual. We note that G contains information about Z, but the mapping is typically many (haplotype pairs) to one (vector of genotypes). The complete data (haplotype) likelihood is $$ {L}_c\left(\varnothing \right)={\prod}_{i=1}^{n_1}P\left({Z}_i|{Y}_i=1,\varnothing \right){\prod}_{j={n}_1+1}^nP\left({Z}_j|{Y}_j=0,\varnothing \right) $$ where the probabilities are specified, as elaborated below, through a logit link function relating the odds of disease to the haplotypes, and ∅ is a vector of parameters including haplotype frequencies and coefficients of the logistic regression model. Suppose we also have m trios with each ascertained through the offspring. Let Yic = 1 denote that the child is a case. The haplotype configuration of the ith trio can be written as Zi = (Zif, Zim, Zic), where the 3 components denote the haplotype pair of the father, mother, and the affected child, respectively. Under the assumption of allelic exchangeability, this is equivalent to Zi = (Zic, Ziu), where Ziu is the untransmitted haplotype pair from the parents. Then, the haplotype-based likelihood for case–parent trios is $$ {L}_f\left(\varnothing \right)={\prod}_{i=1}^mP\left({Z}_{ic}|{Y}_{ic}=1,\varnothing \right)P\left({Z}_{iu}|\varnothing \right) $$ where the probabilities and the parameter vector are specified as in the case–control data. Putting eqs. (1) and (2) together, we obtained the following composite likelihood: $$ {L}_{cf}\left(\varnothing \right)={L}_c\left(\varnothing \right)\times {L}_f\left(\varnothing \right) $$ It is apparent from the description of the GAW20 data given above that our data units are not independent, thus the composite likelihood as specified in eq. (3) is not the correct likelihood based on the observed data. However, owing to the complex relationships among the extracted cases, controls, and trios, it is difficult to formulate the correct likelihood. 
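To make the structure of Eqs. (1)-(3) concrete before turning to the adjustment, the sketch below evaluates a composite log-likelihood of this form for given haplotype frequencies and logistic-model parameters. It is a simplified illustration only (additive coding, Hardy-Weinberg haplotype-pair probabilities, haplotype 0 taken as baseline), not the authors' implementation, and all function and argument names are introduced here for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement

def hap_pair_probs(freqs):
    """P(Z) for unordered haplotype pairs under Hardy-Weinberg equilibrium."""
    pairs, probs = [], []
    for i, j in combinations_with_replacement(range(len(freqs)), 2):
        pairs.append((i, j))
        probs.append(freqs[i]**2 if i == j else 2*freqs[i]*freqs[j])
    return pairs, np.array(probs)

def retrospective_probs(freqs, alpha, beta, y):
    """P(Z | Y = y) obtained by Bayes' rule from the logit model and P(Z)."""
    pairs, pz = hap_pair_probs(freqs)
    x = np.zeros((len(pairs), len(beta)))   # additive coding, haplotype 0 baseline
    for r, (i, j) in enumerate(pairs):
        for h in (i, j):
            if h > 0:
                x[r, h - 1] += 1
    theta = np.exp(alpha + x @ beta)        # odds of disease given Z
    p_case = theta / (1 + theta)
    w = (p_case if y == 1 else 1 - p_case) * pz
    return pairs, w / w.sum()

def log_composite_likelihood(freqs, alpha, beta, case_Z, control_Z, trio_Z):
    """Eq. (3): the case-control term (Eq. 1) times the case-parent-trio term (Eq. 2)."""
    pairs, p1 = retrospective_probs(freqs, alpha, beta, 1)
    _, p0 = retrospective_probs(freqs, alpha, beta, 0)
    _, pz = hap_pair_probs(freqs)
    idx = {z: k for k, z in enumerate(pairs)}
    ll = sum(np.log(p1[idx[z]]) for z in case_Z)
    ll += sum(np.log(p0[idx[z]]) for z in control_Z)
    for z_child, z_untrans in trio_Z:       # affected child + untransmitted pair
        ll += np.log(p1[idx[z_child]]) + np.log(pz[idx[z_untrans]])
    return ll
```

In the actual eLBL sampler these quantities are embedded in an MCMC scheme with shrinkage priors; the function above only illustrates how Eqs. (1)-(3) combine the two study designs.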
Fortunately, it is possible to obtain correct inferences based on the misspecified composite likelihood Lcf(∅) through appropriate adjustment [4]. Following the "magnitude adjustment" algorithm [5], we denote H(∅) = − E[∇2ℓcf(∅)]and J(∅) = Var[∇ℓcf(∅)], where ℓcf(∅)=log[Lcf(∅)] is the log-composite likelihood, and ∇ and ∇2 are the first-order and second-order derivatives, respectively. Let λ1, λ2, ⋯, λpbe the eigenvalues of H(∅)−1J(∅); based on them we form\( k=p/{\sum}_{i=1}^p{\lambda}_i. \) Then the adjusted log-likelihood $$ {\ell}^{\ast}\left(\varnothing \right)={k\mathit{\ell}}_{cf}\left(\varnothing \right) $$ is used for inference, as elaborated in the following paragraph. To specify the probabilities and elaborate on ∅, we assume that, for any given individual in the study with haplotype pair Z, we model the odds of the disease θZ = P(Y = 1|Z)/P(Y = 0|Z) with a logistic model logθz = α + XZβ, where XZ is the design vector corresponding to haplotype pair Z, coded according to the assumed mode of inheritance (eg, additive, recessive, dominant); β = (β1, ⋯, βK) (part of the collection of parameter vector ∅) is the regression coefficient vector with βj corresponding to the effect of the jth variant on the log odds; and α is the baseline effect (related to the phenocopy rate). Note that if we assume an additive model, then the jth variant is the jth haplotype, and the total number of distinct haplotypes is K + 1. We cast the problem into a Bayesian framework, where the adjusted likelihood in eq. (4) is used for correct posterior inference [5]. The detailed Markov chain Monte Carlo (MCMC) inference procedure follows the original LBL methodology using shrinkage priors to increase power for detecting rare haplotypes [1, 3]; the adjustment factor k is updated in each MCMC iteration. Convergence of the Markov chain is assured based on commonly used diagnostic tools. The posterior odds over the prior odds, namely the Bayes factor (BF), is used to assess the significance of the βjs. We have also constructed empirical posterior credible intervals (CIs) for the odds ratios (ORs). For each haplotype, the OR is essentially the exponential of the corresponding β in the logistic model given above. It is estimated, together with the CIs, from the posterior sample of the β values. Decision on the significance of a haplotype is based on both BF (> 2) and CI (not including the null value 1). A 2-step analysis strategy Because the proposed eLBL (extension of the Logistic Bayesian LASSO [least absolute shrinkage and selection operator]) methodology is based on an MCMC procedure to sample from the posterior distribution, it is computationally intensive, and thus not suitable for whole-genome scan. Instead, we adopt the following 2-step strategy. In the first step, we use Monte Carlo pedigree disequilibrium test (MCPDT) [6], a family-based single-SNP association testing method, to scan 654,767 SNPs across the 22 autosomes. We excluded SNPs with low minor allele frequencies (< 1%). MCPDT imputes missing data and takes familial relationships into account; consequently, it is viewed as using all information to the maximum extent possible. In the second step, we formed haplotype blocks around the SNPs selected from Step 1 using haploview [7]. We then applied eLBL to identify haplotype(s) within each block that have a significant influence on the ATP binary trait. Of the 654,767 SNPs considered, we selected the 10 SNPs with the smallest MCPDT p values for further analysis with eLBL. 
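(Implementation note on the adjustment introduced above: assuming estimates of H(∅) and J(∅) are available, for example from the score contributions of the sampled data units, the magnitude-adjustment factor is essentially a one-liner. The snippet below is a sketch of that single computation, not of the full sampler, and the variable names are illustrative.)

```python
import numpy as np

def magnitude_adjustment(H, J):
    """Adjustment factor k = p / sum of the eigenvalues of H^{-1} J.

    H : p x p estimate of minus the expected Hessian of the log composite likelihood
    J : p x p estimate of the variance of its score
    """
    lam = np.linalg.eigvals(np.linalg.solve(H, J))
    return H.shape[0] / np.real(lam).sum()   # real part guards against round-off

# The adjusted log-likelihood used for inference is then ell_star = k * ell_cf.
```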
These 10 SNPs have p values close to 10−4 or smaller (Table 1), with the 3 SNPs on chromosome 1 passing the threshold of genome-wide significance at the 5% level after Bonferroni correction. To increase the statistical power for detecting rare variants and to potentially understand the causal mechanism in downstream analysis, we applied eLBL to 9 haplotype blocks that cover the SNPs displayed in Table 1. Note that the first 2 SNPs on chromosome 1 belong to the same haplotype block (block 1; Table 1). The remaining 8 SNPs were placed into 8 separate blocks labeled as blocks 2 to 9 (Table 1).

Table 1. Top 10 SNPs with the smallest p values as identified by MCPDT. Alleles are given with the minor allele listed after the slash; MAF, minor allele frequency.

The results from eLBL, presented in Table 2, show that a number of haplotypes have an estimated CI of OR not including 1 and a BF > 2, providing significant evidence for association between the identified haplotypes and the ATP trait. For instance, haplotype h11110 of block 1, which contains the minor alleles of SNPs rs10915052 and rs1406862 (occupying the first and fourth positions, in bold in Table 2), is seen to have fairly strong evidence of association, with a BF of 15. That this haplotype contains the 2 minor alleles strongly suggests that the 2 SNPs may very well interact in cis and play a regulatory role for metabolic syndrome, as the block is not located within the coding region of a gene. As another example, 2 haplotypes in block 7, located within the protein coding gene STARD13, are inferred to be associated with ATP. This haplotype block spans 10 SNPs, with SNP rs8001893 sitting at the last position. For the haplotype containing the minor allele, h1111111111, its effect is protective (OR < 1), which is the opposite of the effect of haplotype h1000000000 (OR > 1). Both haplotypes are rare, especially the risk haplotype (frequency < 0.001). Given its rarity and the large variability in the estimate (reflected in the large upper bound of the CI resulting from small frequency and only moderate sample size), care needs to be taken in the interpretation of the effect size. Nevertheless, the associated gene appears to be relevant for the study of metabolic syndrome. Gene Ontology annotations related to STARD13 include guanosine triphosphatase (GTPase) activator activity and lipid binding. Finally, another protein coding gene, ABCC1, which contains haplotype block 8, is also noteworthy, as it also appears to be pertinent to metabolic syndrome. Among its related pathways are vitamin digestion and absorption, and metabolism.
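For completeness, posterior summaries of the kind reported in Table 2 (OR point estimates, 95% CIs and Bayes factors) can be computed from MCMC output roughly as follows. The threshold `eps` and the prior probability of a non-null effect are placeholder arguments here; in LBL they follow from the shrinkage prior rather than being free choices, so this is only a generic sketch of the bookkeeping described above.

```python
import numpy as np

def posterior_summaries(beta_draws, prior_nonnull=0.5, eps=0.01):
    """Per-haplotype OR, 95% credible interval and a simple Bayes factor.

    beta_draws    : MCMC sample of regression coefficients, shape (n_iter, K)
    prior_nonnull : assumed prior probability that |beta_j| > eps
    eps           : threshold defining a 'non-null' effect
    """
    or_draws = np.exp(beta_draws)                      # OR_j = exp(beta_j)
    or_hat = or_draws.mean(axis=0)
    ci = np.percentile(or_draws, [2.5, 97.5], axis=0)  # empirical 95% CI
    post_nonnull = (np.abs(beta_draws) > eps).mean(axis=0)
    post_odds = post_nonnull / np.clip(1 - post_nonnull, 1e-12, None)
    prior_odds = prior_nonnull / (1 - prior_nonnull)
    bf = post_odds / prior_odds                        # posterior odds / prior odds
    return or_hat, ci, bf
```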
Table 2. Significant haplotypes identified by eLBL (95% CI of the OR does not include 1 and BF > 2). Column headers include Block and Hap; the genes harbouring the listed blocks are SRP72, an ncRNA, STARD13 and ABCC1. Notes: (a) the SNPs contained in the haplotype blocks are as follows: B1: rs10915052 rs2377270 rs2205841 rs1406862 rs12410878; B3: rs6849183 rs11133443 rs17086804 rs11610 rs41476944 rs17086853 rs12649799 rs10015634; B4: rs10104096 rs2048091 rs13266438 rs10088192 rs13262422; B6: rs7943255 rs12289510 rs12276567 rs10893317 rs4936976 rs3808995; B7: rs9563616 rs7985396 rs9591912 rs8001801 rs7993044 rs9315232 rs10507413 rs9569943 rs7328696 rs8001893; B8: rs35621 rs35625 rs4148350 rs4148351 rs35628 rs4148353 rs35629; (b) "1" denotes the minor allele and the SNPs in Table 1 are in bold; (c) LB (lower bound) and UB (upper bound) of the odds ratio (OR) make up the 95% credible interval (CI).

Motivated by making maximum use of information resulting from the limited sample sizes when only case–control or trio data are considered, we propose eLBL, an extension of the LBL methodology, so that both types of data can be analyzed jointly to increase statistical power. This new approach is further extended to adjust for familial correlations, leading to correct statistical inference using dependent data. Our 2-step analysis strategy was designed to increase statistical power. Indeed, by using all available information, MCPDT identified 3 genome-wide significant SNPs, which disappear when only observed data are used (results not shown). On the other hand, eLBL with shrinkage priors was able to recover haplotypes (many are rare) that are associated with the ATP binary trait. The associated genes harboring the haplotype blocks studied all appear to be related to metabolic syndrome. The increase in power is clearly seen as several of the associated haplotypes contain SNPs that do not pass genome-wide significance. To further substantiate the gain in power with the new eLBL approach, we performed an analysis with only independent cases and controls using the original LBL [1]. The results, as expected, are sensitive to the selection of the independent samples, and miss many of the haplotypes identified in Table 2. Similarly, an analysis of 17 independent trios using famLBL [2] reveals that the sample size is too small to obtain interpretable results. With an increase in power, the natural question is whether there is also an increase in Type I error, as eLBL is a new method and has not been studied thoroughly. To answer this question, we performed a limited simulation study wherein data from a null model were simulated. To mimic the family-dependent structure and the linkage disequilibrium structure of the real data, we simulated our data using the GAW20 families and the inferred haplotypes with the estimated frequencies from block 8 to preserve linkage disequilibrium. Our results indicate that there is no elevated Type I error. On the contrary, eLBL is seen to be conservative for rare variants. Nevertheless, for haplotypes with frequencies greater than 0.05, the Type I error is as expected. To further substantiate the results from eLBL, we also analyzed the triglyceride level from visit 2 directly as a quantitative trait using a variation of LBL [1], but also accounting for the familial structures in the data. Of the 4 protein coding genes, SRP72, SLC37A2, STARD13, and ABCC1, identified by eLBL, the quantitative analysis also identified associated haplotypes in blocks contained within these genes.
These results, together with the Type I error study and the annotations of the genes, affirm the results from eLBL, leading to our conjecture that the haplotypes identified are potentially either involved in the causal mechanism or playing a regulatory role in metabolic syndrome.

Xiaofei Zhou and Meng Wang contributed equally to this work. Publication of this article was supported by NIH R01 GM031575. This work was supported in part by NSF grant DMS-1208968.

The data that support the findings of this study are available from the Genetic Analysis Workshop (GAW), but restrictions apply to the availability of these data, which were used under license for the current study. Qualified researchers may request these data directly from GAW.

About this supplement: This article has been published as part of BMC Proceedings Volume 12 Supplement 9, 2018: Genetic Analysis Workshop 20: envisioning the future of statistical genetics by exploring methods for epigenetic and pharmacogenomic data. The full contents of the supplement are available online at https://bmcproc.biomedcentral.com/articles/supplements/volume-12-supplement-9.

XZ, MW, and HZ implemented the algorithms and performed the data analyses. SL conceived the study and supervised the analyses and interpretation. XZ, MW, and SL wrote the manuscript. WS reviewed the manuscript and contributed to the discussion. All authors have read and approved the final manuscript.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Department of Statistics, The Ohio State University, 1958 Neil Avenue, Columbus, OH 43210, USA. Battelle Center for Mathematical Medicine, Nationwide Children's Hospital Research Institute, 700 Childrens Drive, Columbus, OH 43205, USA.

References
1. Biswas S, Lin S. Logistic Bayesian lasso for identifying association with rare haplotypes and application to age-related macular degeneration. Biometrics. 2012;68(2):587–97.
2. Wang M, Lin S. Detecting associations of rare variants with common diseases: collapsing or haplotyping? Brief Bioinform. 2015;16(5):1–10.
3. Wang M, Lin S. FamLBL: detecting rare haplotype disease association based on common SNPs using case-parent triads. Bioinformatics. 2014;30(18):2611–8.
4. Varin C, Reid N, Firth D. An overview of composite likelihood methods. Stat Sin. 2011;21:5–42.
5. Ribatet M, Cooley D, Davison AC. Bayesian inference from composite likelihoods, with an application to spatial extremes. Stat Sin. 2012;22(2):813–45.
6. Ding J, Lin S, Liu Y. Monte Carlo pedigree disequilibrium test for markers on the X chromosome. Am J Hum Genet. 2006;79(3):567–73.
7. Barrett JC, Fry B, Maller J, Daly MJ. Haploview: analysis and visualization of LD and haplotype maps. Bioinformatics. 2005;21(2):263–5.
GeMeCoD - Geometry of convex and discrete measures

GeMeCoD open problems
- The Fourier-Entropy-Influence conjecture
- The Kesten-McKay conjecture
- Invertibility of random matrices
- The Mahler conjecture
- Yet another conjecture

This page gathers some open mathematical problems related to the GeMeCoD project.

Let $\mu$ be the uniform probability measure on the discrete cube $\{-1,1\}^n$, i.e. $\mu(A)=2^{-n}\mathrm{Card}(A)$ for every $A\subset\{-1,1\}^n$. Let us consider a Boolean function $$f:\{-1,1\}^n\to\{-1,1\}.$$ For every $1\leq i\leq n$, the influence of the $i$-th coordinate on $f$ is $$I_i(f)=\mu(A_i(f))$$ with $A_i(f)=\{x\in\{-1,1\}^n:f(x)\neq f(\tau_i(x))\}$ where $\tau_i(x)$ is obtained from $x=(x_1,\ldots,x_n)$ by flipping the $i$-th coordinate. The total influence of $f$ is $$I(f)=I_1(f)+\cdots+I_n(f)=\mu(A_1(f))+\cdots+\mu(A_n(f)).$$ One may check this definition on various simple examples such as $f(x)=x_1\cdots x_n$, $f(x)=\mathrm{sign}(x_1+\cdots+x_n)$, or $f(x)=\max(x_1,\ldots,x_n)$.

Boolean functions are used in the modeling of the status of non-linear discrete systems with multiple components, such as for the percolation phenomenon, for the connectivity of random graphs, or for the invertibility of $\pm 1$ matrices (the geometrical structure of the model is captured by the specific function $f$, which is rarely fully symmetric). The total influence of $f$ is a measure of its one-dimensional complexity. Another, more global, way to measure the complexity of $f$ is to consider the entropy of its Fourier transform (squared). Let us denote by $\hat f$ the Fourier transform of $f$ on $\{-1,1\}^n$. The Fourier-Entropy-Influence (FEI) conjecture formulated by Friedgut and Kalai in 1996 states that there exists a real constant $c>0$ such that for every Boolean function $f:\{-1,1\}^n\to\{-1,1\}$, $$\mathrm{Ent}(\hat f^2)\leq cI(f).$$ If $\partial_if=(f-f(\tau_i))/2$ stands for the discrete gradient acting on the $i$-th coordinate then $\partial_if$ takes its values in $\{-1,0,1\}$ and $$A_i(f)=\{x\in\{-1,1\}^n:(\partial_i f)(x)\neq0\}=\{x\in\{-1,1\}^n:(\partial_i f)^2(x)=1\}$$ and thus $$I(f)=\mu\left(\sum_{i=1}^n(\partial_i f)^2\right).$$ However, beware of the apparent similarity with the Gross logarithmic Sobolev inequality (which formulates hypercontractivity).

- G. Kalai guest blogpost about the FEI conjecture on the blog of T. Tao, 2007
- The FEI Conjecture for certain classes of Boolean functions, by O'Donnell, J. Wright, and Y. Zhou, 2011
- Hypercontractive measures, Talagrand's inequality, and influences, by D. Cordero-Erausquin and M. Ledoux, 2011
- Le théorème de Kahn, Kalai, et Linial (KKL), notes by P. Pansu and B. Graham, 2010
- The Entropy Influence Conjecture Revisited, by B. Das, M. Pal, and V. Visavaliya, 2011
- Open Problems in Analysis of Boolean Functions, by R. O'Donnell, 2012

Random oriented graphs are host to many open problems. For example, for integers $n \geq r \geq 3$, an oriented $r$-regular graph is a graph on $n$ vertices such that all vertices have $r$ incoming and $r$ outgoing oriented edges. Consider the adjacency matrix $A$ of a random oriented $r$-regular graph sampled from the uniform measure (there exist suitable simulation algorithms using matchings of half edges). Let $\mu_A=\frac{1}{n}\sum_{k=1}^n\delta_{\lambda_k}$ be the counting measure of the eigenvalues of $A$ (roots of the characteristic polynomial in $\mathbb{C}$).
It is conjectured that as $n \to \infty$, almost surely $\mu_A$ converges to the probability measure \[ \frac{1}{\pi} \frac{r ^2(r-1)}{(r^2 - |z|^2)^2 }\mathbb{1}_{\{|z|<\sqrt r\}}\,dxdy. \] It turns out that this probability measure is also the Brown measure of the free sum of $r$ unitary, see Haagerup and Larsen. The Hermitian (actually symmetric) version of this measure is known as the Kesten-McKay distribution for random non-oriented $r$-regular graphs, see Kesten and McKay. We recover the circular law when $r\to\infty$ up to renormalization. This paragraph was extracted from Around the Circular Law, by Bordenave and Chafaï (2011) Let $M=(M_{ij})_{1\leq i,j\leq n}$ be a $n\times n$ matrix with symmetric Bernoulli $\pm 1$ entries. A convenient way to quantify the invertibility of $M$ is to consider the smallest singular value $$ s_n(M):=\min_{\left\Vert x\right\Vert_2=1}\left\Vert Mx\right\Vert_2. $$ Indeed, if $M$ is invertible then $s_n(M)=\Vert M^{-1}\Vert_{2\to2}^{-1}$. Note also that $\det(M)=0$ if and only if $s_n(M)=0$. A famous conjecture by Spielman and Teng related to their work on the smoothed analysis of algorithms states that there exists a constant $0<c<1$ such that \[ \mathbb{P}\left(s_n(M)\leq t\right)\leq t+c^n \] for $n\gg1$ and any small enough $t\geq0$. This was almost solved by Rudelson and Vershynin and by Tao and Vu. In particular, taking $t=0$ gives \[ \mathbb{P}(M\text{ is singular})=c^n. \] This positive probability of being singular does not contradict the asymptotic invertibility since by the first Borel-Cantelli lemma, a.s. $M$ is not singular for $n\gg1$. Regarding the constant $c$ above, it has been conjectured years ago that \[ \mathbb{P}(M\text{ is singular})=\left(\frac{1}{2}+o(1)\right)^n. \] This intuition comes from the probability of equality of two rows, which implies that $\mathbb{P}(M\text{ is singular})\geq(1/2)^n$. Many authors contributed to the analysis of this difficult nonlinear discrete problem, starting from Komlos, Kahn, and Szemerédi. The best result to date is due to Bourgain, Vu, and Wood who proved that \[ \mathbb{P}(M\text{ is singular})\leq \left(1/\sqrt{2}+o(1)\right)^n. \] If $K$ is a convex body in $\mathbb{R}^n$ and $z$ is an interior point of $K$ the polar body $K^z$ of $K$ with center of polarity $z$ is defined by $K^z = \{y\in\mathbb{R}^n : \langle y-z,x-z\rangle\le 1 \mbox{ for all } x\in K\}.$ The bipolar theorem says that $(K^z)^z =K$. The volume product of $K$ is defined by $$\mathcal{P}(K) = \inf \{|K| |K^z| : z\in {\rm int}(K)\}.$$ The volume product is affinely invariant, that is $\mathcal{P}(A(K))=\mathcal{P}(K)$ for every affine isomorphism $A$. The Blaschke-Santaló inequality (proved by Blaschke in dimension 2 and 3 and Santaló in higher dimension) states that $$ \mathcal{P}(K) \le \mathcal{P}(B^n_2), $$ where $B^n_2$ is the Euclidean ball (or any ellipsoid), with equality only for ellipsoids. Simple proofs were given by Meyer-Pajor and Meyer-Reisner. The minimal value of $\mathcal{P}(K)$ is still unknown. Mahler's conjecture states that, for every convex body $K$ in $\mathbb{R}^n$, $$\mathcal{P}(K) \ge \mathcal{P}(\Delta^n)=\frac{(n+1)^{n+1}}{(n!)^2}$$ where $\Delta^n$ is an $n$-dimensional simplex with equality only if $K$ is a simplex. The symmetric case of Mahler conjecture states that for every convex symmetric body $K$: $$\mathcal{P}(K) \ge \mathcal{P}([-1,1]^n)=\frac{4^{n}}{n!}.$$ These inequalities for $n=2$ were proved by Mahler. 
Saint-Raymond established the case of unconditional bodies (bodies symmetric with respect to the coordinate hyperplanes). Meyer and Reisner, together and separately, treated other cases, e.g. bodies of revolution, $n$-dimensional polytopes with at most $n+3$ vertices, and zonoids. Barthe and Fradelizi proved stronger inequalities for convex bodies having hyperplane symmetries which fix only one common point. Nazarov, Petrov, Ryabogin and Zvavitch proved the conjecture for bodies close to the cube; Kim and Reisner for bodies close to the simplex. Even in dimension 3, the conjecture is still open. Observe that the (non-exact) reverse Santaló inequality of Bourgain and Milman is $$\mathcal{P}(K) \ge c^n\mathcal{P}(B^n_2),$$ where $c$ is a positive constant; Kuperberg gave a new proof of this result with a better constant, and Nazarov found a harmonic analysis approach to the inequality. Functional forms of the conjectures were also formulated, and proved in some particular cases (dimension $1$ and the case of unconditional functions, for example), by Fradelizi and Meyer. One of these conjectures asserts that for any even convex function $\varphi$, $$\int e^{-\varphi}\int e^{-\mathcal{L}\varphi}\ge 4^n,$$ with equality for $\varphi(x)=\|x\|_{\infty}$, where $\mathcal{L}\varphi(y):=\sup_x\langle x,y\rangle -\varphi(x)$ denotes the Legendre transform of $\varphi$. The non-symmetric variant of the conjecture asserts that $$\inf_{z\in\mathbb{R}^n}\int e^{-\varphi}\int e^{-\mathcal{L}^z\varphi}\ge e^n,$$ with equality for $\varphi(x)=\sum_{i=1}^n x_i$ on $\mathbb{R}_+^n$ (and $\varphi=+\infty$ outside), where $\mathcal{L}^z\varphi(y):=\sup_x\langle x-z,y-z\rangle -\varphi(x)$.
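To make these quantities concrete, here is a small R sketch (an editorial illustration, not part of the original page) that evaluates the closed-form volume products quoted above for the simplex and the cube, together with that of the Euclidean ball, and checks the ordering $\mathcal{P}(\Delta^n)\le\mathcal{P}([-1,1]^n)\le\mathcal{P}(B^n_2)$ in small dimensions. The only ingredients not taken from the text are the classical formula $|B^n_2|=\pi^{n/2}/\Gamma(n/2+1)$ and the fact that the Euclidean ball is its own polar.

vol_ball <- function(n) pi^(n / 2) / gamma(n / 2 + 1)
P_ball <- function(n) vol_ball(n)^2                        # the ball is self-polar, so P equals |B|^2
P_cube <- function(n) 4^n / factorial(n)                   # value quoted in the symmetric Mahler conjecture
P_simplex <- function(n) (n + 1)^(n + 1) / factorial(n)^2  # value quoted in Mahler's conjecture
for (n in 1:8) cat(n, P_simplex(n), P_cube(n), P_ball(n), "\n")

Each printed row should satisfy P_simplex <= P_cube <= P_ball, with all three equal to 4 in dimension 1.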
Starosta, Štěpán Generalized Thue-Morse words and palindromic richness. (English). Kybernetika, vol. 48 (2012), issue 3, pp. 361-370 MSC: 68R15 | MR 2975794 palindrome; palindromic richness; Thue-Morse; Theta-palindrome We prove that the generalized Thue-Morse word $\mathbf{t}_{b,m}$ defined for $b \ge 2$ and $m \ge 1$ as $\mathbf{t}_{b,m} = \left ( s_b(n) \mod m \right )_{n=0}^{+\infty}$, where $s_b(n)$ denotes the sum of digits in the base-$b$ representation of the integer $n$, has its language closed under all elements of a group $D_m$ isomorphic to the dihedral group of order $2m$ consisting of morphisms and antimorphisms. Considering antimorphisms $\Theta \in D_m$, we show that $\mathbf{t}_{b,m}$ is saturated by $\Theta$-palindromes up to the highest possible level. Using the generalisation of palindromic richness recently introduced by the author and E. Pelantová, we show that $\mathbf{t}_{b,m}$ is $D_m$-rich. We also calculate the factor complexity of $\mathbf{t}_{b,m}$. [1] Allouche, J.-P., Shallit, J.: Sums of digits, overlaps, and palindromes. Discrete Math. Theoret. Comput. Sci. 4 (2000), 1–10. MR 1755723 | Zbl 1013.11004 [2] Baláži, P., Masáková, Z., Pelantová, E.: Factor versus palindromic complexity of uniformly recurrent infinite words. Theoret. Comput. Sci. 380 (2007), 3, 266–275. DOI 10.1016/j.tcs.2007.03.019 | MR 2330997 | Zbl 1119.68137 [3] Balková, L.: Factor frequencies in generalized Thue-Morse words. Kybernetika 48 (2012), 3, 371–385. [4] Brlek, S.: Enumeration of factors in the Thue-Morse word. Discrete Appl. Math. 24 (1989), 1–3, 83–96. DOI 10.1016/0166-218X(92)90274-E | MR 1011264 | Zbl 0683.20045 [5] Brlek, S., Hamel, S., Nivat, M., Reutenauer, C.: On the palindromic complexity of infinite words. Internat. J. Found. Comput. 15 (2004), 2, 293–306. DOI 10.1142/S012905410400242X | MR 2071459 | Zbl 1067.68113 [6] Bucci, M., De Luca, A., Glen, A., Zamboni, L. Q.: A connection between palindromic and factor complexity using return words. Adv. Appl. Math. 42 (2009), no. 1, 60–74. DOI 10.1016/j.aam.2008.03.005 | MR 2475313 | Zbl 1160.68027 [7] Cassaigne, J.: Complexity and special factors. Bull. Belg. Math. Soc. Simon Stevin 4 (1997), 1, 67–88. MR 1440670 | Zbl 0921.68065 [8] de Luca, A., Varricchio, S.: Some combinatorial properties of the Thue-Morse sequence and a problem in semigroups. Theoret. Comput. Sci. 63 (1989), 3, 333–348. DOI 10.1016/0304-3975(89)90013-3 | MR 0993769 | Zbl 0671.10050 [9] Droubay, X., Justin, J., Pirillo, G.: Episturmian words and some constructions of de Luca and Rauzy. Theoret. Comput. Sci. 255 (2001), 1–2, 539–553. DOI 10.1016/S0304-3975(99)00320-5 | MR 1819089 | Zbl 0981.68126 [10] Frid, A.: Applying a uniform marked morphism to a word. Discrete Math. Theoret. Comput. Sci. 3 (1999), 125–140. MR 1734902 | Zbl 0935.68055 [11] Glen, A., Justin, J., Widmer, S., Zamboni, L. Q.: Palindromic richness. European J. Combin. 30 (2009), 2, 510–531. DOI 10.1016/j.ejc.2008.04.006 | MR 2489283 | Zbl 1169.68040 [12] Pelantová, E., Starosta, Š.: Languages invariant under more symmetries: overlapping factors versus palindromic richness. To appear in Discrete Math., preprint available at http://arxiv.org/abs/1103.4051 (2011). [13] Prouhet, E.: Mémoire sur quelques relations entre les puissances des nombres. C. R. Acad. Sci. Paris 33 (1851), 225. [14] Starosta, Š.: On theta-palindromic richness. Theoret. Comput. Sci. 412 (2011), 12–14, 1111–1121. 
DOI 10.1016/j.tcs.2010.12.011 | MR 2797753 | Zbl 1211.68302 [15] Tromp, J., Shallit, J.: Subword complexity of a generalized Thue-Morse word. Inf. Process. Lett. (1995), 313–316. DOI 10.1016/0020-0190(95)00074-M | MR 1336711 | Zbl 0875.68596
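As a quick illustration of the object studied above (an editorial addition using only the defining formula from the abstract), the word $\mathbf{t}_{b,m}$ can be generated directly from the digit-sum definition; the following R sketch prints a prefix and counts the distinct factors of a given length occurring in that prefix, a finite-prefix approximation of the factor complexity.

digit_sum <- function(n, b) { s <- 0; while (n > 0) { s <- s + n %% b; n <- n %/% b }; s }
tbm <- function(N, b, m) sapply(0:(N - 1), function(n) digit_sum(n, b) %% m)
tbm(16, 2, 2)         # classical Thue-Morse word: 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0
w <- tbm(2000, 3, 4)  # a prefix of the generalized Thue-Morse word t_{3,4}
k <- 5
length(unique(sapply(1:(length(w) - k + 1), function(i) paste(w[i:(i + k - 1)], collapse = ""))))  # distinct factors of length k seen in this prefix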
Serum suPAR and syndecan-4 levels predict severity of community-acquired pneumonia: a prospective, multi-centre study Qiongzhen Luo1, Pu Ning1, Yali Zheng1, Ying Shang1, Bing Zhou1 & Zhancheng Gao1 Critical Care volume 22, Article number: 15 (2018) Cite this article The Letter to this article has been published in Critical Care 2019 23:405 Community-acquired pneumonia (CAP) is a major cause of death worldwide and occurs with variable severity. There are few studies focused on the expression of soluble urokinase-type plasminogen activator receptor (suPAR) and syndecan-4 in patients with CAP. A prospective, multi-centre study was conducted between January 2014 and December 2016. A total of 103 patients with severe CAP (SCAP), 149 patients with non-SCAP, and 30 healthy individuals were enrolled. Clinical data were recorded for all enrolled patients. Serum suPAR and syndecan-4 levels were determined by quantitative enzyme-linked immunosorbent assay. The t test and Mann–Whitney U test were used to compare between two groups; one-way analysis of variance and the Kruskal–Wallis test were used to compare multiple groups. Correlations were assessed using Pearson and Spearman tests. Area under the curve (AUCs), optimal threshold values, sensitivity, and specificity were calculated. Survival curves were constructed and compared by log-rank test. Regression analyses assessed the effect of multiple variables on 30-day survival. suPAR levels increased in all patients with CAP, especially in severe cases. Syndecan-4 levels decreased in patients with CAP, especially in non-survivors. suPAR and syndecan-4 levels were positively and negatively correlated with severity scores, respectively. suPAR exhibited high accuracy in predicting SCAP among patients with CAP with an AUC of 0.835 (p < 0.001). In contrast, syndecan-4 exhibited poor diagnostic value for predicting SCAP (AUC 0.550, p = 0.187). The AUC for predicting mortality in patients with SCAP was 0.772 and 0.744 for suPAR and syndecan-4, respectively; the respective prediction threshold values were 10.22 ng/mL and 6.68 ng/mL. Addition of both suPAR and syndecan-4 to the Pneumonia Severity Index significantly improved their prognostic accuracy, with an AUC of 0.885. Regression analysis showed that suPAR ≥10.22 ng/mL and syndecan-4 ≤ 6.68 ng/mL were reliable independent markers for prediction of 30-day survival. suPAR exhibits high accuracy for both diagnosis and prognosis of SCAP. Syndecan-4 can reliably predict mortality in patients with SCAP. Addition of both suPAR and syndecan-4 to a clinical scoring method could improve prognostic accuracy. Trial registration ClinicalTrials.gov, NCT03093220. Registered on 28 March 2017 (retrospectively registered). Community-acquired pneumonia (CAP) is a very common type of respiratory infection. Despite the rapid development of new treatments, pneumonia continues to cause a high rate of complications and associated costs, and remains a major international cause of death [1, 2]. Because of the diversity of clinical conditions and the lag in a clear definition of the causative pathogen, the most challenging task for a physician is the risk stratification of patients with CAP and the subsequent administration of individual treatment [3]. CURB-65 and the Pneumonia Severity Index (PSI) are widely recommended and validated scoring methods: CURB-65 is a predictive assessment that is concisely and conveniently implemented in clinical settings [4]. 
PSI is a sensitive indicator in judging whether patients should be hospitalized [4]. However, both the CURB-65 and PSI scores are neither comprehensive nor exhaustive. CURB-65 assesses very few aspects of disease with low specificity, while PSI relies primarily on age and underlying diseases such that it is inaccurate in young and otherwise healthy patients. In recent years, novel biomarkers, such as soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) [5], proadrenomedullin (pro-ADM) [6], and copeptin [7] have been widely validated in clinical settings. However, their sensitivity and specificity for prediction of pneumonia severity are variable and largely insufficient; thus, there is a need for new biomarkers to provide effective risk stratification and assist in clinical judgement. Urokinase-type plasminogen activator receptor (uPAR) is a component of the plasminogen activator (PA) system. This system plays an important role in many physiological and pathological processes, including tissue remodelling [8], thrombosis [9], inflammation [10], and tumourigenesis [11]. The soluble form of uPAR (suPAR) can be detected in serum and other organic fluids [12]; its levels are increased in patients with HIV infection, malaria, tuberculosis, and sepsis, suggesting that it could serve as a useful prognostic biomarker [13]. This capability may be useful for prediction of the severity of CAP, but few studies have focused on suPAR levels in patients with CAP. The syndecan proteins, a family of transmembrane heparan sulphate proteoglycans, bind to various extracellular effectors and regulate many processes, such as tissue homeostasis, inflammation, tumour invasion, and metastasis [14,15,16]. Syndecan-4 is the most well-known member of the family. Studies have demonstrated that levels of syndecan-4 increase in response to bacterial inflammation, and that syndecan-4 possesses an anti-inflammatory function in acute pneumonia [17]. However, data on the relationship between the expression of syndecan-4 and the severity of CAP are rare. Considering the previous experimental data on suPAR and syndecan-4, we hypothesized that their serum levels might be correlated with the severity and prognosis of CAP. Thus, the aim of this study was to clarify the precise roles of suPAR and syndecan-4 in CAP, and to validate the effectiveness of these proteins as indicators of the severity of CAP and of the risk of death in severe CAP. This prospective, observational study was conducted during the period of January 2014 through December 2016 among patients hospitalized in Peking University People's Hospital, Tianjin Medical University General Hospital, Wuhan University People's Hospital, and Fujian Provincial Hospital (ClinicalTrials.gov ID, NCT03093220). All patients in this study were diagnosed with CAP. CAP was defined by the following criteria [18]: (1) a chest radiograph showing either a new patchy infiltrate, leaf or segment consolidation, ground glass opacity, or interstitial change; (2) at least one of the following signs – (a) the presence of cough, sputum production, and dyspnoea; (b) core body temperature >38.0 °C; (c) auscultatory findings of abnormal breath sounds and rales; or (d) peripheral white blood cell counts >10 × 109/L or <4 × 109/L; and (3) symptom onset that began in the community, rather than in a healthcare setting. Severe CAP (SCAP) was diagnosed by the presence of at least one major criterion, or at least three minor criteria, as follows [19]. 
Major criteria: (1) requirement for invasive mechanical ventilation and (2) occurrence of septic shock with the need for vasopressors. Minor criteria: (1) respiratory rate ≥30 breaths/min; (2) oxygenation index (PaO2/FiO2) ≤250; (3) presence of multilobar infiltrates; (4) presence of confusion; (5) serum urea nitrogen ≥20 mg/dL; (6) white blood cell count ≤4 × 10^9/L; (7) blood platelet count <100 × 10^9/L; (8) core body temperature <36.0 °C; and (9) hypotension requiring aggressive fluid resuscitation. The exclusion criteria were age <18 years, or the presence of any of the following: pregnancy, immunosuppressive condition, malignant tumour, end-stage renal or liver disease, active tuberculosis, or pulmonary cystic fibrosis.
Sample size calculation
In this study, we set the two-sided type I error (significance level) at α = 0.05 and the type II error at β = 0.10 to provide 90% power; the corresponding standard normal deviates were Zα = 1.96 and Zβ = 1.282. Assuming the mortality of non-SCAP and SCAP was P0 = 0.05 and P1 = 0.25, respectively [1, 19], the sample size was calculated as follows: $$R=\frac{P_1}{P_0};\quad A=P_1(1-P_0)+P_0(1-P_1);\quad B=(R-1)P_0(1-P_0);\quad K=(A+B)(RA-B)-R(P_1-P_0)^2$$ $$N'_{\text{non-SCAP}}=\frac{Z_\beta^2K+Z_\alpha^2(A+B)^2+2Z_\alpha Z_\beta(A+B)\sqrt{K}}{(P_1-P_0)^2(A+B)}=121$$ $$N'_{\text{SCAP}}=\frac{N'_{\text{non-SCAP}}}{R}=25$$ Allowing for a rate of ineligible inclusion of 10–30%, the target number of patients with non-SCAP was calculated as $$N_{\text{non-SCAP}}=\frac{N'_{\text{non-SCAP}}}{1-30\%}=173$$ and the target number of patients with SCAP as $$N_{\text{SCAP}}=\frac{N'_{\text{SCAP}}}{1-30\%}=36$$ We ultimately recruited 252 patients with CAP, including 103 with SCAP and 149 with non-SCAP. All patients with SCAP were admitted to the intensive care unit (ICU). The screening process is shown in Fig. 1. Thirty healthy people (>18 years old, without any exclusionary diseases) served as a control group. All subjects provided informed consent. This study was approved by the medical ethics committee of Peking University People's Hospital.
Flowchart of the study population. SCAP severe community-acquired pneumonia
Blood sample collection
Clinical data were recorded for all patients; these included whole blood leukocyte count (WBC), blood biochemical assessment, C-reactive protein (CRP), procalcitonin (PCT), blood gas analysis, and chest images. The confusion, urea, respiratory rate, blood pressure, and age ≥65 years old (CURB-65) score [20], PSI [21], and Acute Physiology and Chronic Health Evaluation (APACHE) II score [22] were calculated from clinical and laboratory data. Peripheral venous blood samples were collected within 2 days after admission, in sterile, pro-coagulation tubes and centrifuged immediately; the resulting serum samples were stored at –80 °C until analysis.
Measurement of suPAR and syndecan-4 Serum suPAR levels were measured using quantitative enzyme-linked immunosorbent assay (ELISA) kits (DUP00; R&D Systems, Minneapolis, MN, USA) in duplicate as instructed by the manufacturer. Briefly, test wells containing serum samples, and standard wells containing a gradient concentration of a standard protein, were analysed using a microplate assay. Absorbance at 450 nm was measured using the Multiskan FC (Thermo, Waltham, MA, USA). The detection sensitivity was 33 pg/mL and the intra-assay and inter-assay coefficients of variation were < 8%. Serum syndecan-4 levels were determined using ELISA (JP27188; Immuno-Biological Laboratories, Fujioka, Japan) in duplicate as instructed by the manufacturer. Absorbance at 570 nm was measured using the Multiskan FC. The detection sensitivity was 3.94 pg/mL and the intra-assay and inter-assay coefficients of variation were < 5%. Using the standard curve, the quantities of syndecan-4 and suPAR were calculated using CurvExpert Professional 2.6.3 (Hyams Development, Madison, WI, USA). Normally distributed continuous variables are expressed as means ± standard error of the mean (SEM), abnormally distributed continuous variables are expressed as median (interquartile range) and categorical variables are expressed as number (percentage). For equivalent variables with a normal distribution, the independent Student's t test was utilized to compare two groups. The Mann–Whitney U test was used to compare categorical variables and abnormal distributional variables between two groups. One-way analysis of variance and the Kruskal–Wallis test were used to compare multiple groups. Correlation between variables with normal distribution was assessed using Pearson's correlation test, while abnormal distributions were assessed using Spearman's rho test. Receiver operating characteristic (ROC) analysis was performed to differentiate patients with CAP from those with SCAP, and to separate non-survivors from the overall patients with SCAP. Areas under the curve (AUCs), optimal threshold values, sensitivity, and specificity were calculated. Kaplan–Meier methods were used to build 30-day survival curves, and survival rates were compared using the log-rank test. Cox proportional hazards regression analyses were used to analyse the effect of an array of variables on 30-day survival. A two-sided p value <0.05 was considered statistically significant; confidence intervals (CIs) were set at 95%. Statistical analyses were performed by using GraphPad Prism version 6.01 software (GraphPad Software, La Jolla, CA, USA) and MedCalc statistical software version 15.2.2 (MedCalc Software, Ostend, Belgium). Characteristics of the enrolled patients From January 2014 to December 2016, 252 patients (165 male, 87 female) were enrolled and divided into two groups (103 with SCAP and 149 with non-SCAP) according to their clinical characteristics. As shown in Table 1, there were no significant differences between the SCAP and non-SCAP groups in sex, past medical history, smoking, antibiotic pre-treatment, and whether a causative pathogen was established. Chest radiographs revealed that 84.47% and 29.13% of patients with SCAP exhibited bilateral lung infection and pleural effusion, respectively; these proportions were significant higher than those observed in the non-SCAP group (29.53% and 4.70%, respectively; p < 0.001 for both comparisons). 
Laboratory analyses showed that the SCAP group exhibited a higher WBC and higher neutrophil/lymphocyte ratio (NLR) than detected in the non-SCAP group (p < 0.001 for both comparisons). Furthermore, serum CRP and PCT levels were substantially greater in the SCAP group than in the non-SCAP group (p < 0.001 for both comparisons). The CURB-65, PSI, and APACHE II scores in the SCAP group were 1 (0–2), 94.27 ± 37.42, and 15.18 ± 5.65, respectively. These scores were significantly higher than those of patients in the non-SCAP group (0 (0–1), 57.50 (35.00–72.25), 8 (6–9), respectively; p < 0.001 for all comparisons). The mortality rate in the SCAP group was 18.45%, while no patients with non-SCAP died in hospital. Table 1 Clinical characteristics and laboratory findings of the study population Levels of suPAR and syndecan-4 in each group As shown in Fig. 2, serum suPAR level in healthy individuals was 1.71 ± 1.00 ng/mL, which was significantly lower than that in the non-SCAP group (2.76 (2.01–4.20) ng/mL, p < 0.001). The SCAP group exhibited the highest level of suPAR, 6.17 (4.37–9.72) ng/mL (compared with the non-SCAP level, p < 0.001). In patients with SCAP, the suPAR level of non-survivors was 13.19 (6.05–18.68) ng/mL, which was notably higher than the suPAR level of survivors (6.09 (4.01–8.63) ng/mL, p < 0.001). Levels of soluble urokinase-type plasminogen activator receptor (suPAR) and syndecan-4 across multiple groups. a, b Levels of suPAR and syndecan-4 in patients with severe community-acquired pneumonia SCAP, patients with non-SCAP, and healthy individuals, respectively. For suPAR, SCAP versus non-SCAP, p < 0.001; non-SCAP versus healthy individuals, p < 0.001. For syndecan-4, SCAP versus healthy individuals, p < 0.001; non-SCAP versus healthy individuals, p < 0.001. c, d Levels of suPAR and syndecan-4 in survivors and non-survivors among patients with SCAP, patients with SCAP who met at least one major criterion (major criteria), and patients with SCAP who met only minor criterion (minor criteria). For suPAR, survivors versus non-survivors, p < 0.001; major criteria versus minor criteria, p = 0.459. For Syndecan-4, survivors versus non-survivors, p = 0.002; major criteria versus minor criteria, p = 0.671. e, f Comparison of suPAR and syndecan-4 in patients with SCAP and non-SCAP for various causative pathogens; p > 0.05 for all comparisons In contrast, the expression of syndecan-4 was reduced in patients with CAP. The level of syndecan-4 in healthy individuals was 14.30 ± 5.34 ng/mL, whereas in the SCAP and non-SCAP groups the levels were 9.54 ± 5.92 and 10.15 ± 4.37 ng/mL, respectively (p < 0.001 for both comparisons). There was no difference in the levels of syndecan-4 between the SCAP and non-SCAP groups (p = 0.177, data not shown). The expression of syndecan-4 in the non-survivor SCAP group was lower than in the survivor SCAP group (5.81 ± 4.38 and 10.21 (6.20–13.56) ng/mL, respectively, p = 0.002). In order to avoid possible effects of pre-admission duration of symptoms on the levels of suPAR and syndecan-4, we divided patients in two groups according to the duration of symptoms: > 3 days or ≤ 3 days. The expression of suPAR and syndecan-4 was not significantly different between the two groups both in the SCAP and the non-SCAP groups (suPAR, p = 0.549 (SCAP), p = 0.339 (non-SCAP); syndecan-4, p = 0.078 (SCAP), p = 0.635 (non-SCAP), data not shown). 
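These two-group comparisons (t test when values were approximately normal, Mann-Whitney U test otherwise, as described in the statistical analysis section) can be reproduced along the following lines in R; this is an editorial sketch on a hypothetical data frame d with columns supar and group, not the authors' analysis code, which used GraphPad Prism and MedCalc.

t.test(supar ~ group, data = d)        # two-group comparison if suPAR were approximately normal
wilcox.test(supar ~ group, data = d)   # Mann-Whitney U test, appropriate for skewed biomarker values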
Moreover, the expression of suPAR and syndecan-4 was not significantly different between patients with SCAP who met at least one major criterion and those who met only minor criteria (suPAR, p = 0.459; syndecan-4, p = 0.671). Figure 2e and f summarizes the pathogens detected in patients with CAP who were classified as having bacterial, viral, atypical pathogen (including mycoplasma pneumonia, chlamydial pneumonia, and legionella pneumonia), mixed pathogen, and unknown pathogen infections. The causative agent was detected in 50 patients with SCAP and 65 patients with non-SCAP. There were no differences in the levels of suPAR or syndecan-4 between patients with SCAP and patients with non-SCAP who exhibited different causative pathogen infections (p > 0.05 for all comparisons). Correlation between levels of suPAR and syndecan-4 and the severity of CAP We chose the CURB-65, PSI, and APACHE II scoring systems to evaluate the severity of CAP. Using our entire sample of 252 patients with CAP, serum suPAR level was positively correlated with all three scoring systems: CURB-65, PSI, and APACHE II (r = 0.399, r = 0.433, and r = 0.496, respectively; p < 0.001 for all comparisons; Fig. 3). In addition, suPAR levels were also positively correlated with WBC (r = 0.232, p < 0.001), NLR (r = 0.351, p < 0.001), CRP (r = 0.272, p < 0.001), and PCT (r = 0.407, p < 0.001); suPAR levels were not positively correlated with age (r = 0.102, p = 0.107). Correlation of soluble urokinase-type plasminogen activator receptor (suPAR) and syndecan-4 levels with multiple scoring systems across 252 patients with community-acquired pneumonia (CAP). r is the correlation coefficient. a, c, e Levels of suPAR were significantly positively correlated with the confusion, urea, respiratory rate, blood pressure, and age ≥65 years old (CURB-65) score (r = 0.399, p < 0.001), Pneumonia Severity Index (PSI) (r = 0.433, p < 0.001), and Acute Physiology and Chronic Health Evaluation II (APACHE II) score (r = 0.496, p < 0.001), respectively. b, d, f Levels of syndecan-4 were significantly negatively correlated with CURB-65 score (r = -0.220, p = 0.001), PSI (r = -0.279, p < 0.001), and APACHE II score (r = -0.184, p = 0.003), respectively Conversely, syndecan-4 levels in patients with CAP were negatively correlated with all three scoring systems: CURB-65 (r = -0.220, p = 0.001), PSI (r = -0.279, p < 0.001), and APACHE II (r = -0.184, p = 0.003) (Fig. 3). Syndecan-4 levels were also negatively correlated with PCT (r = -0.304, p < 0.001), but not with WBC (r = 0.005, p = 0.943), NLR (r = -0.067, p = 0.326), CRP (r = -0.127, p = 0.081), or age (r = 0.023, p = 0.719). Value of suPAR and syndecan-4 in predicting SCAP in patients with CAP Figure 4 and Table 2 show that suPAR reliably predicted SCAP in patients with CAP, with an AUC of 0.835 (p < 0.001); further, suPAR prediction capability was second only to the APACHE II score (AUC 0.886, p < 0.001). Using a suPAR threshold value of 4.33 ng/mL for diagnosis of SCAP, the sensitivity and specificity for discriminating SCAP and CAP were 76.70% and 79.19%, respectively. Using an APACHE II score >10 as a threshold for diagnosis, the sensitivity and specificity for discriminating SCAP from CAP were 78.64% and 82.55%, respectively. Syndecan-4 was the least accurate in predicting SCAP, with an AUC of 0.550 (not statistically different, p = 0.187). Detailed results are shown in Fig. 4 and Table 2. 
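For readers who want to reproduce this type of analysis on their own data, the sketch below shows how an AUC and an optimal threshold of the kind reported above are typically obtained in R with the pROC package; this is an editorial illustration on a hypothetical data frame d with a 0/1 column severe and a column supar, not the authors' analysis code (which used MedCalc).

library(pROC)                                                # assumption: pROC is installed
roc_supar <- roc(response = d$severe, predictor = d$supar)   # ROC curve for suPAR as a predictor of SCAP
auc(roc_supar)                                               # area under the ROC curve
ci.auc(roc_supar)                                            # 95% confidence interval for the AUC
coords(roc_supar, x = "best", ret = c("threshold", "sensitivity", "specificity"))  # Youden-optimal cut-off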
Receiver operating characteristic curve analysis of various parameters to discriminate patients with severe community-acquired pneumonia from patients with community-acquired pneumonia. suPAR soluble urokinase-type plasminogen activator receptor, NLR neutrophil/lymphocyte ratio, WBC whole blood leukocyte count, CRP C-reactive protein, PCT procalcitonin, CURB-65 confusion, urea, respiratory rate, blood pressure, and age ≥65 years old score, PSI Pneumonia Severity Index Score, APACHE II Acute Physiology and Chronic Health Evaluation II score, AUC area under the curve Table 2 Area under the curve (AUC) and thresholds for predicting SCAP in patients with CAP Prognostic value of suPAR and syndecan-4 in patients with SCAP The ability of suPAR and syndecan-4 to predict total mortality in patients with SCAP is summarized in Table 3. Notably, the AUCs for suPAR and PSI score were 0.772 and 0.787, respectively (p < 0.001 for both comparisons). The optimal threshold to predict death was 10.22 ng/mL of suPAR, with a sensitivity of 68.23% and specificity of 89.29%. While patients with syndecan-4 concentrations <6.68 ng/mL exhibited a noticeable increase in risk of death, this threshold yielded sensitivity and specificity of 73.68% and 71.43%, respectively for prediction of total mortality. The remaining variables, WBC, NLR, CRP, and PCT, had no prognostic value for mortality prediction in patients with SCAP (all p > 0.05). Table 3 Area under the curve (AUC) and thresholds for predicting total mortality in patients with SCAP The combination of suPAR, syndecan-4, and PSI was the most accurate predictor of 30-day mortality, with an AUC of 0.885. The AUC of the combination of suPAR, syndecan-4 and CURB-65, and of suPAR, syndecan-4 and APACHE II score was 0.878 and 0.881, respectively (Fig. 5). Receiver operating characteristic (ROC) curve analysis of various parameters to predict 30-day mortality in patients with SCAP Kaplan–Meier curves were used to assess the relationship between suPAR and syndecan-4 levels in the prediction of 30-day mortality in patients with SCAP (Fig. 6). Consistent with the prediction threshold for total mortality, the optimal threshold values for 30-day mortality were also 10.22 ng/mL for suPAR (p < 0.001) and 6.68 ng/mL for syndecan-4 (p < 0.001). Kaplan–Meier analysis of 30-day mortality in patients with severe community-acquired pneumonia. Analysis was stratified by soluble urokinase-type plasminogen activator receptor (suPAR) (a) and syndecan-4 (b) levels In univariate Cox proportional hazards regression analysis to determine 30-day survival, suPAR levels ≥10.22 ng/mL and APACHE II scores >14 were associated with significantly higher risk ratios than any other variables. In multivariate Cox proportional hazards regression, only suPAR levels ≥10.22 ng/mL and syndecan-4 levels ≤6.68 ng/mL were strong, independent predictors of 30-day survival. Results are summarized in Table 4. 
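The corresponding survival analyses can be reproduced with the survival package in R; the sketch below (again an editorial illustration on a hypothetical data frame d with follow-up time in days, a 30-day death indicator, and the two biomarkers) mirrors the Kaplan-Meier, log-rank, and Cox steps described above, using the thresholds reported in the Results.

library(survival)                                               # assumption: survival package is installed
d$supar_high <- d$supar >= 10.22                                # threshold from the Results above
d$synd_low <- d$syndecan4 <= 6.68                               # threshold from the Results above
fit <- survfit(Surv(time30, death30) ~ supar_high, data = d)    # Kaplan-Meier curves by suPAR group
survdiff(Surv(time30, death30) ~ supar_high, data = d)          # log-rank test
coxph(Surv(time30, death30) ~ supar_high + synd_low, data = d)  # multivariable Cox model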
Table 4 Cox proportional hazards regression analysis of the effects of multiple variables on 30-day survival In this prospective study of 252 patients with CAP, there were five major findings: (1) serum suPAR levels increased and syndecan-4 levels decreased in patients with CAP, especially in non-survivors, but the changes were not correlated with pre-admission duration of symptoms or the identity of the causative pathogen; (2) elevated suPAR was positively correlated with severity scores (CURB-65, PSI, and APACHE II), whereas lower syndecan-4 was negatively correlated with these same scores; (3) a suPAR threshold value of 4.33 ng/mL discriminated SCAP from CAP, with 76.70% sensitivity and 79.19% specificity, but syndecan-4 levels did not accurately predict SCAP in patients with CAP; (4) the mortality rate was significantly higher in patients with suPAR and syndecan-4 levels ≥10.22 ng/mL and ≤6.68 ng/mL, respectively, and both serum proteins independently predicted 30-day mortality; and (5) the combination of suPAR and syndecan-4 levels with the clinical severity score significantly improved the accuracy of mortality prediction. Taken together, these results suggest that serum suPAR and syndecan-4 levels can predict disease severity in patients with CAP. suPAR has been widely studied in a variety of infectious diseases, including sepsis [23], tuberculosis [24], and HIV infection [25]. Savva et al. demonstrated that suPAR is a reliable predictor of severity of sepsis and can independently predict unfavourable outcomes in both ventilator-associated pneumonia and sepsis [26]. The findings of our study are identical to previous studies [26, 27] where suPAR levels were significantly elevated in patients with SCAP, especially in non-survivors. These results enhance our understanding of the expression of suPAR in severe infections. Furthermore, we found that suPAR levels are not correlated with specific causative agents in the same risk stratification of CAP. This is consistent with a previous study demonstrating that suPAR has no discriminatory value in patients with bacterial, viral, or parasitic infections [28]. Studies on the expression of syndecan-4 in infection are few. Nikaido et al. found that syndecan-4 levels were significantly increased in a study of 30 patients with acute mild pneumonia, and that this protein provides an anti-inflammatory function in acute pneumonia [29]. In contrast, our results show that syndecan-4 levels are significantly lower in patients with CAP compared with healthy individuals; this reduction was more pronounced in non-survivors. The lower syndecan-4 levels in our study may result from the larger sample size and the increase in CAP severity in our study, compared with the previous study. Syndecan-4 expression was elevated in mild acute pneumonia because of bacterial components that stimulated toll-like receptors 2 and 4 [29]. Importantly, the causative mechanism for lowered syndecan-4 levels in patients with CAP is not yet clear. Further, as in the analysis of suPAR, syndecan-4 levels were not statistically different among patients with CAP due to a variety of causative agents. We investigated the correlation between suPAR and syndecan-4 levels and a variety of clinical parameters. There was strong correlation between levels of suPAR or syndecan-4 and the clinical scoring systems, suggesting that serum suPAR and syndecan-4 might aid in clinical judgment of the degree of severity. 
Remarkably, there was broad correlation between suPAR levels and a variety of laboratory measures: WBC, NLR, CRP, and PCT. In contrast, Wittenhagen et al. found that suPAR was not correlated with CRP in pneumococcal bacteraemia [27], but these differences may result from different patient samples. The NLR has been reported to indicate mortality and prognosis in various diseases, including tumour [30], inflammation [31], and heart failure [32]. Since NLR is convenient, easily obtained, and low cost, we included NLR as a clinical reference biomarker in this study. The diagnostic value of suPAR is reportedly poor and was not shown to be superior to CRP or PCT. The AUC for suPAR to discriminate 197 septic ICU patients from 76 non-septic patients is 0.62 [33]. Kofoed et al. measured plasma suPAR levels in 57 patients with systemic inflammatory response syndrome (SIRS), and reported an AUC of 0.54 for suPAR (and 0.81 and 0.72 for CRP and PCT, respectively) in diagnosing bacterial infection [28]. Savva et al. reported an AUC of 0.758 for suPAR to discriminate severe sepsis or patients with septic shock within a group of 180 patients with sepsis; in the same study, the AUC for PCT was 0.652 [26]. Hoenighl et al. reported that in 132 patients with SIRS, the AUC for suPAR and PCT was 0.726 and 0.744, respectively, to differentiate patients with and without bacteraemia [34]. However, our results indicate that suPAR (AUC 0.835) had good diagnostic value in discriminating SCAP from CAP, which is comparable to APACHE II (AUC 0.886), the "gold standard" criterion for stratifying critically ill patients [22]. Taken together, these studies suggest that suPAR might have better diagnostic value in discriminating between severe and mild cases of infectious disease than in discriminating between infectious and non-infectious diseases. Many studies focus on the prognostic value of suPAR; the AUC for suPAR in predicting in-hospital mortality ranges from 0.67 to 0.84 [35,36,37]. In our study, the AUC of suPAR to predict mortality was 0.772. Kaplan–Meier curves showed that suPAR ≥10.22 ng/mL was associated with significantly higher mortality risk; this threshold value is identical to at reported in prior studies [35, 38]. Further, in multivariate Cox proportional hazards regression, suPAR was an independent marker to predict 30-day mortality. This indicates that suPAR may provide a promising prognostic biomarker in SCAP. The mechanism for increased suPAR levels in severely ill patients was uncertain. suPAR is expressed on the surface of various cells, including neutrophils, lymphocytes, macrophages, and endotheliocytes. During an inflammatory response, the presence of increased suPAR-expressing cells and the accelerated cleavage of suPAR might result in high blood levels of suPAR [35]. Furthermore, severely ill patients often present with dysfunctional blood coagulation. Excessive cytokine release activates the coagulation system in multiple ways, and contributes to the formation of a complex reaction that includes coagulation, inflammatory mediators, cytokines, and complement [39]. These processes might also promote expression of suPAR. There has been no report on the diagnostic and prognostic value of syndecan-4. Our results show that although syndecan-4 did not discriminate SCAP from CAP, it might be used to predict mortality in patients with SCAP, with an AUC of 0.744. Further, multivariate Cox regression analysis demonstrated that the syndecan-4 level was an independent factor related to 30-day mortality. 
Previous studies reported that syndecan-4–deficient mice exhibit significantly higher bacterial counts, more severe pulmonary inflammation, and higher mortality, compared with wild-type mice [29, 40, 41]. Notably, these studies found that syndecan-4 expression was elevated in macrophages, endothelial cells, and epithelial cells after stimulation with lipopolysaccharide in vitro [40, 41]. Although suPAR and syndecan-4 were both good in predicting 30-day mortality, none was better than PSI. Remarkably, our results show that the addition of both suPAR and syndecan-4 to a clinical severity scoring method significantly improved their prognostic accuracy. The severity score alone is often insufficient to obtain satisfactory predictive accuracy. Therefore, biomarkers are thought to be able to better stratify patients. Mid-region proadrenomedullin (MR-proADM) has been hitherto the best single predictor of short-term and long-term mortality. Notably, the AUC of a combination of MR-proADM and PSI for 30-day mortality prediction can reach 0.914 [6]. Our study has certain limitations. Only serum suPAR and syndecan-4 levels were detected at the time of admission; dynamic and follow-up changes (in response to treatment) were not investigated. Besides, suPAR is a non-specific inflammatory marker that has been shown to be increased in diabetes mellitus [42], liver disease [43], and heart failure [44]. While our study enrolled a number of patients with those comorbidities (as shown in Table 1), the pre-admission levels of suPAR should be tested. The effects of changes in suPAR and syndecan-4 expression during the pathogenesis of CAP should be further investigated. In conclusion, we demonstrated that serum suPAR is elevated, and serum syndecan-4 is reduced, in patients with SCAP. suPAR is able to accurately predict SCAP and mortality in patients with CAP. Further, we revealed that syndecan-4 has no diagnostic value but that it can serve as a prognostic biomarker in patients with SCAP. The combination of both suPAR and syndecan-4 with clinical scores significantly improved their 30-day mortality prediction. APACHE II: Acute Physiology and Chronic Health Evaluation II Community-acquired pneumonia CRP: CURB-65: Confusion, urea, respiratory rate, blood pressure, and age ≥65 years old, scoring system MR-proADM: Mid-region proadrenomedullin NLR: Neutrophil/lymphocyte ratio Plasminogen activator PCT: pro-ADM: Proadrenomedullin Pneumonia Severity Index SCAP: Severe community-acquired pneumonia suPAR: Soluble urokinase-type plasminogen activator receptor uPAR: Urokinase-type plasminogen activator receptor WBC: Whole blood leukocyte count Prina E, Ranzani OT, Torres A. Community-acquired pneumonia. Lancet. 2015;386:1097–108. Musher DM, Thorner AR. Community-acquired pneumonia. N Engl J Med. 2014;371:1619–28. Chalmers JD. Identifying severe community-acquired pneumonia: moving beyond mortality. Thorax. 2015;70:515–6. Fine MJ, Auble TE, Yealy DM, Hanusa BH, Weissfeld LA, Singer DE, Coley CM, Marrie TJ, Kapoor WN. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997;336:243–50. Gibot S, Cravoisy A, Levy B, Bene MC, Faure G, Bollaert PE. Soluble triggering receptor expressed on myeloid cells and the diagnosis of pneumonia. N Engl J Med. 2004;350:451–8. Bello S, Lasierra AB, Mincholé E, Fandos S, Ruiz MA, Vera E, de Pablo F, Ferrer M, Menendez R, Torres A. Prognostic power of proadrenomedullin in community-acquired pneumonia is independent of aetiology. Eur Respir J. 
2012;39:1144–55. Krüger S, Ewig S, Giersdorf S, Hartmann O, Suttorp N, Welte T; German Competence Network for the Study of Community Acquired Pneumonia (CAPNETZ) Study Group. Cardiovascular and inflammatory biomarkers to predict short- and long-term survival in community-acquired pneumonia: Results from the German Competence Network, CAPNETZ. Am J Respir Crit Care Med. 2010;182:1426-34. Manetti M, Rosa I, Milia AF, Guiducci S, Carmeliet P, Ibba-Manneschi L, Matucci-Cerinic M. Inactivation of urokinase-type plasminogen activator receptor (uPAR) gene induces dermal and pulmonary fibrosis and peripheral microvasculopathy in mice: a new model of experimental scleroderma? Ann Rheum Dis. 2014;73:1700–9. Kobayashi N, Ueno T, Ohashi K, Yamashita H, Takahashi Y, Sakamoto K, Manabe S, Hara S, Takashima Y, Dan T, Pastan I, Miyata T, Kurihara H, Matsusaka T, Reiser J, Nagata M. Podocyte injury-driven intracapillary plasminogen activator inhibitor type 1 accelerates podocyte loss via uPAR-mediated β1-integrin endocytosis. Am J Physiol Renal Physiol. 2015;308:F614–26. Genua M, D'Alessio S, Cibella J, Gandelli A, Sala E, Correale C, Spinelli A, Arena V, Malesci A, Rutella S, Ploplis VA, Vetrano S, Danese S. The urokinase plasminogen activator receptor (uPAR) controls macrophage phagocytosis in intestinal inflammation. Gut. 2015;64:589–600. Mazzieri R, Pietrogrande G, Gerasi L, Gandelli A, Colombo P, Moi D, Brombin C, Ambrosi A, Danese S, Mignatti P, Blasi F, D'Alessio S. Urokinase receptor promotes skin tumor formation by preventing epithelial cell activation of Notch1. Cancer Res. 2015;75:4895–909. Matzkies LM, Raggam RB, Flick H, Rabensteiner J, Feierl G, Hoenigl M, Prattes J. Prognostic and diagnostic potential of suPAR levels in pleural effusion. J Infect. 2017;75:465–7. Donadello K, Covajes C, Covajes C, Vincent JL. suPAR as a prognostic biomarker in sepsis. BMC Med. 2012;10:2. Choi S, Chung H, Hong H, Kim SY, Kim SE, Seoh JY, Moon CM, Yang EG, Oh ES. Inflammatory hypoxia induces syndecan-2 expression through IL-1β-mediated FOXO3a activation in colonic epithelia. FASEB J. 2017;31:1516–30. Brauer R, Ge L, Schlesinger SY, Birkland TP, Huang Y, Parimon T, Lee V, McKinney BL, McGuire JK, Parks WC, Chen P. Syndecan-1 attenuates lung injury during influenza infection by potentiating c-met signaling to suppress epithelial apoptosis. Am J Respir Crit Care Med. 2016;194:333–44. Cassinelli G, Zaffaroni N, Lanzi C. The heparanase/heparan sulfate proteoglycan axis: a potential new therapeutic target in sarcomas. Cancer Lett. 2016;382:245–54. Santoso A, Kikuchi T, Tode N, Hirano T, Komatsu R, Damayanti T, Motohashi H, Yamamoto M, Kojima T, Uede T, Nukiwa T, Ichinose M. Syndecan 4 mediates Nrf2-dependent expansion of bronchiolar progenitors that protect against lung inflammation. Mol Ther. 2016;24:41–52. Niederman MS, Mandell LA, Anzueto A, Bass JB, Broughton WA, Campbell GD, Dean N, File T, Fine MJ, Gross PA, Martinez F, Marrie TJ, Plouffe JF, Ramirez J, Sarosi GA, Torres A, Wilson R, Yu VL; American Thoracic Society. Guidelines for the management of adults with community-acquired pneumonia. Diagnosis, assessment of severity, antimicrobial therapy, and prevention. Am J Respir Crit Care Med. 2001;163:1730–54. Mandell LA, Wunderink RG, Anzueto A, Bartlett JG, Campbell GD, Dean NC, Dowell SF, File TM Jr, Musher DM, Niederman MS, Torres A, Whitney CG; Infectious Diseases Society of America; American Thoracic Society. 
Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community-acquired pneumonia in adults. Clin Infect Dis. 2007;44 Suppl 2:S27–72. Capelastegui A, España PP, Quintana JM, Areitio I, Gorordo I, Egurrola M, Bilbao A. Validation of a predictive rule for the management of community-acquired pneumonia. Eur Respir J. 2006;27:151–7. Spindler C, Ortqvist A. Prognostic score systems and community-acquired bacteraemic pneumococcal pneumonia. Eur Respir J. 2006;28:816–23. Giamarellos-Bourboulis EJ, Norrby-Teglund A, Mylona V, Savva A, Tsangaris I, Dimopoulou I, Mouktaroudi M, Raftogiannis M, Georgitsi M, Linnér A, Adamis G, Antonopoulou A, Apostolidou E, Chrisofos M, Katsenos C, Koutelidakis I, Kotzampassi K, Koratzanis G, Koupetori M, Kritselis I, Lymberopoulou K, Mandragos K, Marioli A, Sundén-Cullberg J, Mega A, Prekates A, Routsi C, Gogos C, Treutiger CJ, Armaganidis A, Dimopoulos G. Risk assessment in sepsis: a new prognostication rule by APACHE II score and serum soluble urokinase plasminogen activator receptor. Crit Care. 2012;16(4):R149. Rudolf F, Wagner AJ, Back FM, Gomes VF, Aaby P, Østergaard L, Eugen-Olsen J, Wejse C. Tuberculosis case finding and mortality prediction: added value of the clinical TBscore and biomarker suPAR. Int J Tuberc Lung Dis. 2017;21:67–72. Rasmussen LJ, Knudsen A, Katzenstein TL, Gerstoft J, Obel N, Jørgensen NR, Kronborg G, Benfield T, Kjaer A, Eugen-Olsen J, Lebech AM. Soluble urokinase plasminogen activator receptor (suPAR) is a novel, independent predictive marker of myocardial infarction in HIV-1-infected patients: a nested case-control study. HIV Med. 2016;17:350–7. Savva A, Raftogiannis M, Baziaka F, Routsi C, Antonopoulou A, Koutoukas P, Tsaganos T, Kotanidou A, Apostolidou E, Giamarellos-Bourboulis EJ, Dimopoulos G. Soluble urokinase plasminogen activator receptor (suPAR) for assessment of disease severity in ventilator-associated pneumonia and sepsis. J Infect. 2011;63:344–50. Wittenhagen P, Kronborg G, Weis N, Nielsen H, Obel N, Pedersen SS, Eugen-Olsen J. The plasma level of soluble urokinase receptor is elevated in patients with Streptococcus pneumoniae bacteraemia and predicts mortality. Clin Microbiol Infect. 2004;10:409–15. Kofoed K, Andersen O, Kronborg G, Tvede M, Petersen J, Eugen-Olsen J, Larsen K. Use of plasma C-reactive protein, procalcitonin, neutrophils, macrophage migration inhibitory factor, soluble urokinase-type plasminogen activator receptor, and soluble triggering receptor expressed on myeloid cells-1 in combination to diagnose infections: a prospective study. Crit Care. 2007;11:R38. Nikaido T, Tanino Y, Wang X, Sato S, Misa K, Fukuhara N, Sato Y, Fukuhara A, Uematsu M, Suzuki Y, Kojima T, Tanino M, Endo Y, Tsuchiya K, Kawamura I, Frevert CW, Munakata M. Serum syndecan-4 as a possible biomarker in patients with acute pneumonia. J Infect Dis. 2015;212:1500–8. Derman BA, Macklis JN, Azeem MS, Sayidine S, Basu S, Batus M, Esmail F, Borgia JA, Bonomi P, Fidler MJ. Relationships between longitudinal neutrophil to lymphocyte ratios, body weight changes, and overall survival in patients with non-small cell lung cancer. BMC Cancer. 2017;17:141. Curbelo J, Luquero Bueno S, Galván-Román JM, Ortega-Gómez M, Rajas O, Fernández-Jiménez G, Vega-Piris L, Rodríguez-Salvanes F, Arnalich B, Díaz A, Costa R, de la Fuente H, Lancho Á, Suárez C, Ancochea J, Aspa J. 
Inflammation biomarkers in blood as mortality predictors in community-acquired pneumonia admitted patients: Importance of comparison with neutrophil count percentage or neutrophil-lymphocyte ratio. PLoS One. 2017;12, e0173947. Benites-Zapata VA, Hernandez AV, Nagarajan V, Cauthen CA, Starling RC, Tang WH. Usefulness of neutrophil-to-lymphocyte ratio in risk stratification of patients with advanced heart failure. Am J Cardiol. 2015;115:57–61. Koch A, Voigt S, Kruschinski C, Sanson E, Dückers H, Horn A, Yagmur E, Zimmermann H, Trautwein C, Tacke F. Circulating soluble urokinase plasminogen activator receptor is stably elevated during the first week of treatment in the intensive care unit and predicts mortality in critically ill patients. Crit Care. 2011;15:R63. Hoenigl M, Raggam RB, Wagner J, Valentin T, Leitner E, Seeber K, Zollner-Schwetz I, Krammer W, Prüller F, Grisold AJ, Krause R. Diagnostic accuracy of soluble urokinase plasminogen activator receptor (suPAR) for prediction of bacteremia in patients with systemic inflammatory response syndrome. Clin Biochem. 2013;46:225–9. Suberviola B, Castellanos-Ortega A, Ruiz Ruiz A, Lopez-Hoyos M, Santibañez M. Hospital mortality prognostication in sepsis using the new biomarkers suPAR and proADM in a single determination on ICU admission. Intensive Care Med. 2013;39:1945–52. Mölkänen T, Ruotsalainen E, Thorball CW, Järvinen A. Elevated soluble urokinase plasminogen activator receptor (suPAR) predicts mortality in Staphylococcus aureus bacteremia. Eur J Clin Microbiol Infect Dis. 2011;30:1417–24. Huttunen R, Syrjänen J, Vuento R, Hurme M, Huhtala H, Laine J, Pessi T, Aittoniemi J. Plasma level of soluble urokinase-type plasminogen activator receptor as a predictor of disease severity and case fatality in patients with bacteraemia: a prospective cohort study. J Intern Med. 2011;270:32–40. Jalkanen V, Yang R, Linko R, Huhtala H, Okkonen M, Varpula T, Pettilä V, Tenhunen J, FINNALI Study Group. SuPAR and PAI-1 in critically ill, mechanically ventilated patients. Intensive Care Med. 2013;39:489–96. Murciano JC, Higazi AA, Cines DB, Muzykantov VR. Soluble urokinase receptor conjugated to carrier red blood cells binds latent pro-urokinase and alters its functional profile. J Control Release. 2009;139:190–6. Tanino Y, Chang MY, Wang X, Gill SE, Skerrett S, McGuire JK, Sato S, Nikaido T, Kojima T, Munakata M, Mongovin S, Parks WC, Martin TR, Wight TN, Frevert CW. Syndecan-4 regulates early neutrophil migration and pulmonary inflammation in response to lipopolysaccharide. Am J Respir Cell Mol Biol. 2012;47:196–202. Ishiguro K, Kadomatsu K, Kojima T, Muramatsu H, Iwase M, Yoshikai Y, Yanada M, Yamamoto K, Matsushita T, Nishimura M, Kusugami K, Saito H, Muramatsu T. Syndecan-4 deficiency leads to high mortality of lipopolysaccharide-injected mice. J Biol Chem. 2001;276:47483–8. Theilade S, Lyngbaek S, Hansen TW, Eugen-Olsen J, Fenger M, Rossing P, Jeppesen JL. Soluble urokinase plasminogen activator receptor levels are elevated and associated with complications in patients with type 1 diabetes. J Intern Med. 2015;277:362–71. Kirkegaard-Klitbo DM, Langkilde A, Mejer N, Andersen O, Eugen-Olsen J, Benfield T. Soluble urokinase plasminogen activator receptor is a predictor of incident non-AIDS comorbidity and all-cause mortality in human immunodeficiency virus type 1 infection. J Infect Dis. 2017;216:819–23. Koller L, Stojkovic S, Richter B, Sulzgruber P, Potolidis C, Liebhart F, Mörtl D, Berger R, Goliasch G, Wojta J, Hülsmann M, Niessner A. 
Soluble urokinase-type plasminogen activator receptor improves risk prediction in patients with chronic heart failure. JACC Heart Fail. 2017;5:268–77. The authors thank the following hospitals for their efforts and dedication in enrolling study participants: Peking University People's Hospital, Tianjin Medical University General Hospital, Wuhan University People's Hospital, Fujian Provincial Hospital, and The First Affiliated Hospital of Zhengzhou University. This study was funded by The National Key Research and Development Programme of China (2016YFC0903800). The datasets generated and analysed during the current study are not publicly available due to health privacy concerns, but are available from the corresponding author on reasonable request. Department of Respiratory & Critical Care Medicine, Peking University People's Hospital, Beijing, People's Republic of China Qiongzhen Luo , Pu Ning , Yali Zheng , Ying Shang , Bing Zhou & Zhancheng Gao Search for Qiongzhen Luo in: Search for Pu Ning in: Search for Yali Zheng in: Search for Ying Shang in: Search for Bing Zhou in: Search for Zhancheng Gao in: The roles of the authors in this study were as follows: ZC-G conceived this study and obtained research funding. P-N and YL-Z were in charge of sample preservation, and reinforced clinical data for the database. Y-S and B-Z provided experimental support. QZ-L conducted biomarker measurements, analysed data, and wrote and edited the manuscript. All authors read and approved the final manuscript. Correspondence to Zhancheng Gao. All subjects provided informed consent. This study was approved by the medical ethics committee of Peking University People's Hospital. Luo, Q., Ning, P., Zheng, Y. et al. Serum suPAR and syndecan-4 levels predict severity of community-acquired pneumonia: a prospective, multi-centre study. Crit Care 22, 15 (2018) doi:10.1186/s13054-018-1943-y Syndecan-4
5.1: Basics of Probability Distributions
Book: Statistics Using Technology (Kozak), Chapter 5: Discrete Probability Distributions. Contributed by Kathryn Kozak, Professor (Mathematics) at Coconino Community College.
As a reminder, a variable, or what will be called the random variable from now on, is represented by the letter x and is a quantitative (numerical) variable that is measured or observed in an experiment. Also remember that there are two types of quantitative variables: discrete and continuous. What is the difference between discrete and continuous data? Discrete data can only take on particular values in a range, while continuous data can take on any value in a range. Discrete data usually arise from counting, while continuous data usually arise from measuring. How tall is a plant given a new fertilizer? Continuous: this is something you measure. How many fleas are on prairie dogs in a colony? Discrete: this is something you count. If you have a variable, and can find a probability associated with that variable, it is called a random variable. In many cases the random variable is what you are measuring, but when it comes to discrete random variables, it is usually what you are counting. So for the example of how tall a plant is given a new fertilizer, the random variable is the height of the plant given a new fertilizer. For the example of how many fleas are on prairie dogs in a colony, the random variable is the number of fleas on a prairie dog in a colony. Now suppose you put all the values of the random variable together with the probability that each value would occur. You would then have a distribution as before, but now it is called a probability distribution, since it involves probabilities. A probability distribution is an assignment of probabilities to the values of the random variable. The abbreviation pdf is used for a probability distribution function. For probability distributions, \(0 \leq P(x) \leq 1\) and \(\sum P(x)=1\).
Example \(\PageIndex{1}\): Probability Distribution
The 2010 U.S. Census found the chance of a household being a certain size. The data are in Table \(\PageIndex{1}\) ("Households by age," 2013).
Size of household: 1, 2, 3, 4, 5, 6, 7 or more
Probability: 26.7%, 33.6%, 15.8%, 13.7%, 6.3%, 2.4%, 1.5%
Table \(\PageIndex{1}\): Household Size from US Census of 2010
In this case, the random variable is x = number of people in a household. This is a discrete random variable, since you are counting the number of people in a household. This is a probability distribution since you have the x values and the probabilities that go with them, all of the probabilities are between zero and one, and the sum of all of the probabilities is one. You can give a probability distribution in table form (as in Table \(\PageIndex{1}\)) or as a graph. The graph looks like a histogram. A probability distribution is basically a relative frequency distribution based on a very large sample.
Example \(\PageIndex{2}\): graphing a probability distribution
The 2010 U.S. Census found the chance of a household being a certain size.
The data is in the table ("Households by age," 2013). Draw a histogram of the probability distribution. State random variable: x = number of people in a household You draw a histogram, where the x values are on the horizontal axis and are the x values of the classes (for the 7 or more category, just call it 7). The probabilities are on the vertical axis. Figure \(\PageIndex{1}\): Histogram of Household Size from US Census of 2010 Notice this graph is skewed right. Just as with any data set, you can calculate the mean and standard deviation. In problems involving a probability distribution function (pdf), you consider the probability distribution the population even though the pdf in most cases come from repeating an experiment many times. This is because you are using the data from repeated experiments to estimate the true probability. Since a pdf is basically a population, the mean and standard deviation that are calculated are actually the population parameters and not the sample statistics. The notation used is the same as the notation for population mean and population standard deviation that was used in chapter 3. The mean can be thought of as the expected value. It is the value you expect to get if the trials were repeated infinite number of times. The mean or expected value does not need to be a whole number, even if the possible values of x are whole numbers. For a discrete probability distribution function, The mean or expected value is \(\mu=\sum x P(x)\) The variance is \(\sigma^{2}=\sum(x-\mu)^{2} P(x)\) The standard deviation is \(\sigma=\sqrt{\sum(x-\mu)^{2} P(x)}\) where x = the value of the random variable and P(x) = the probability corresponding to a particular x value. Example \(\PageIndex{3}\): Calculating mean, variance, and standard deviation for a discrete probability distribution The 2010 U.S. Census found the chance of a household being a certain size. The data is in the table ("Households by age," 2013). Find the mean Find the variance Find the standard deviation Use a TI-83/84 to calculate the mean and standard deviation Using R to calculate the mean x= number of people in a household a. To find the mean it is easier to just use a table as shown below. Consider the category 7 or more to just be 7. The formula for the mean says to multiply the x value by the P(x) value, so add a row into the table for this calculation. Also convert all P(x) to decimal form. x 1 2 3 4 5 6 7 P(x) 0.267 0.336 0.158 0.137 0.063 0.024 0.015 xP(x) 0.267 0.672 0.474 0.548 0.315 0.144 0.098 Table \(\PageIndex{4}\): Calculating the Mean for a Discrete PDF Now add up the new row and you get the answer 2.525. This is the mean or the expected value, \(\mu\) = 2.525 people. This means that you expect a household in the U.S. to have 2.525 people in it. Now of course you can't have half a person, but what this tells you is that you expect a household to have either 2 or 3 people, with a little more 3-person households than 2-person households. b. To find the variance, again it is easier to use a table version than try to just the formula in a line. Looking at the formula, you will notice that the first operation that you should do is to subtract the mean from each x value. Then you square each of these values. Then you multiply each of these answers by the probability of each x value. Finally you add up all of these values. 
\(x-\mu\): -1.525, -0.525, 0.475, 1.475, 2.475, 3.475, 4.475
\((x-\mu)^{2}\): 2.3256, 0.2756, 0.2256, 2.1756, 6.1256, 12.0756, 20.0256
\((x-\mu)^{2} P(x)\): 0.6209, 0.0926, 0.0356, 0.2981, 0.3859, 0.2898, 0.3004
Table \(\PageIndex{5}\): Calculating the Variance for a Discrete PDF
Now add up the last row to find the variance, \(\sigma^{2}=2.023375 \text { people }^{2}\). (Note: try not to round your numbers too much so you aren't creating rounding error in your answer. The numbers in the table above were rounded off because of space limitations, but the answer was calculated using many decimal places.) c. To find the standard deviation, just take the square root of the variance, \(\sigma=\sqrt{2.023375} \approx 1.422454\) people. This means that you can expect a U.S. household to have 2.525 people in it, with a standard deviation of 1.42 people. d. Go into the STAT menu, then the Edit menu. Type the x values into L1 and the P(x) values into L2. Then go into the STAT menu, then the CALC menu. Choose 1:1-Var Stats. This will put 1-Var Stats on the home screen. Now type in L1,L2 (there is a comma between L1 and L2) and then press ENTER. If you have the newer operating system on the TI-84, then your input will be slightly different. You will see the output in Figure \(\PageIndex{1}\). Figure \(\PageIndex{1}\): TI-83/84 Output The mean is 2.525 people and the standard deviation is 1.422 people. e. The command would be weighted.mean(x, p). So for this example, the process would look like:
x <- c(1, 2, 3, 4, 5, 6, 7)
p <- c(0.267, 0.336, 0.158, 0.137, 0.063, 0.024, 0.015)
weighted.mean(x, p)
[1] 2.525
So the mean is 2.525. To find the standard deviation, you would need to program the process into R. So it is easier to just do it using the formula. Example \(\PageIndex{4}\): Calculating the Expected Value In the Arizona lottery called Pick 3, a player pays $1 and then picks a three-digit number. If those three numbers are picked in that specific order the person wins $500. What is the expected value in this game? To find the expected value, you need to first create the probability distribution. In this case, the random variable x = winnings. If you pick the right numbers in the right order, then you win $500, but you paid $1 to play, so you actually win $499. If you didn't pick the right numbers, you lose the $1, so the x value is -$1. You also need the probability of winning and losing. Since you are picking a three-digit number, and for each digit there are 10 numbers you can pick with each independent of the others, you can use the multiplication rule. To win, you have to pick the right numbers in the right order. For the first digit, you pick 1 number out of 10, for the second digit you pick 1 number out of 10, and for the third digit you pick 1 number out of 10. The probability of picking the right number in the right order is \(\dfrac{1}{10} * \dfrac{1}{10} * \dfrac{1}{10}=\dfrac{1}{1000}=0.001\). The probability of losing (not winning) would be \(1-\dfrac{1}{1000}=\dfrac{999}{1000}=0.999\). Putting this information into a table will help to calculate the expected value.
Win or lose, x, P(x), xP(x)
Win, $499, 0.001, $0.499
Lose, -$1, 0.999, -$0.999
Table \(\PageIndex{6}\): Finding Expected Value
Now add the two values together and you have the expected value. It is \(\$ 0.499+(-\$ 0.999)=-\$ 0.50\). In the long run, you will expect to lose $0.50. Since the expected value is not 0, this game is not fair. Since you lose money, Arizona makes money, which is why they have the lottery.
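The solution to part e notes that finding the standard deviation would require programming the process into R. A minimal sketch of how that could be done, reusing the x and p vectors above and applying the same formulas as parts a through c, followed by the expected-value calculation for the Pick 3 game (this sketch is an illustration only, not part of the original text):
# Mean, variance and standard deviation of the household-size distribution
x <- c(1, 2, 3, 4, 5, 6, 7)
p <- c(0.267, 0.336, 0.158, 0.137, 0.063, 0.024, 0.015)
mu     <- sum(x * p)              # mean (expected value): 2.525 people
sigma2 <- sum((x - mu)^2 * p)     # variance: about 2.023 people^2
sigma  <- sqrt(sigma2)            # standard deviation: about 1.422 people
# Expected value of the Pick 3 game: win $499 with probability 0.001,
# lose $1 with probability 0.999
winnings <- c(499, -1)
prob     <- c(0.001, 0.999)
sum(winnings * prob)              # -0.5, i.e. expect to lose $0.50 per play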
The reason probability is studied in statistics is to help in making decisions in inferential statistics. To understand how that is done the concept of a rare event is needed. Definition \(\PageIndex{1}\): Rare Event Rule for Inferential Statistics If, under a given assumption, the probability of a particular observed event is extremely small, then you can conclude that the assumption is probably not correct. An example of this is suppose you roll an assumed fair die 1000 times and get a six 600 times, when you should have only rolled a six around 160 times, then you should believe that your assumption about it being a fair die is untrue. If you are looking at a value of x for a discrete variable, and the P(the variable has a value of x or more) < 0.05, then you can consider the x an unusually high value. Another way to think of this is if the probability of getting such a high value is less than 0.05, then the event of getting the value x is unusual. Similarly, if the P(the variable has a value of x or less) < 0.05, then you can consider this an unusually low value. Another way to think of this is if the probability of getting a value as small as x is less than 0.05, then the event x is considered unusual. Why is it "x or more" or "x or less" instead of just "x" when you are determining if an event is unusual? Consider this example: you and your friend go out to lunch every day. Instead of Going Dutch (each paying for their own lunch), you decide to flip a coin, and the loser pays for both. Your friend seems to be winning more often than you'd expect, so you want to determine if this is unusual before you decide to change how you pay for lunch (or accuse your friend of cheating). The process for how to calculate these probabilities will be presented in the next section on the binomial distribution. If your friend won 6 out of 10 lunches, the probability of that happening turns out to be about 20.5%, not unusual. The probability of winning 6 or more is about 37.7%. But what happens if your friend won 501 out of 1,000 lunches? That doesn't seem so unlikely! The probability of winning 501 or more lunches is about 47.8%, and that is consistent with your hunch that this isn't so unusual. But the probability of winning exactly 501 lunches is much less, only about 2.5%. That is why the probability of getting exactly that value is not the right question to ask: you should ask the probability of getting that value or more (or that value or less on the other side). The value 0.05 will be explained later, and it is not the only value you can use. Example \(\PageIndex{5}\) is the event unusual Is it unusual for a household to have six people in the family? If you did come upon many families that had six people in the family, what would you think? Is it unusual for a household to have four people in the family? If you did come upon a family that has four people in it, what would you think? a. To determine this, you need to look at probabilities. However, you cannot just look at the probability of six people. You need to look at the probability of x being six or more people or the probability of x being six or less people. The \(\begin{aligned} P(x \leq 6) &=P(x=1)+P(x=2)+P(x=3)+P(x=4)+P(x=5)+P(x=6) \\ &=26.7 \%+33.6 \%+15.8 \%+13.7 \%+6.3 \%+2.4 \% \\ &=98.5 \% \end{aligned}\) Since this probability is more than 5%, then six is not an unusually low value. 
The \(\begin{aligned} P(x \geq 6) &=P(x=6)+P(x \geq 7) \\ &=2.4 \%+1.5 \% \\ &=3.9 \% \end{aligned}\) Since this probability is less than 5%, six is an unusually high value. It is unusual for a household to have six people in the family. b. Since it is unusual for a family to have six people in it, you may think that either the size of families is increasing from what it was or that you are in a location where families are larger than in other locations. c. To determine this, you need to look at probabilities. Again, look at the probability of x being four or more or the probability of x being four or less. The \(\begin{aligned} P(x \geq 4) &=P(x=4)+P(x=5)+P(x=6)+P(x=7) \\ &=13.7 \%+6.3 \%+2.4 \%+1.5 \% \\ &=23.9 \% \end{aligned}\) Since this probability is more than 5%, four is not an unusually high value. The \(\begin{aligned} P(x \leq 4) &=P(x=1)+P(x=2)+P(x=3)+P(x=4) \\ &=26.7 \%+33.6 \%+15.8 \%+13.7 \% \\ &=89.8 \% \end{aligned}\) Since this probability is more than 5%, four is not an unusually low value. Thus, four is not an unusual size of a family. d. Since it is not unusual for a family to have four members, you would not think anything is amiss. Eyeglassomatic manufactures eyeglasses for different retailers. The number of days it takes to fix defects in an eyeglass and the probability that it will take that number of days are in the table. Number of days Probabilities 3 9.1% Table \(\PageIndex{8}\): Number of Days to Fix Defects a. State the random variable. b. Draw a histogram of the number of days to fix defects. c. Find the mean number of days to fix defects. d. Find the variance for the number of days to fix defects. e. Find the standard deviation for the number of days to fix defects. f. Find the probability that a lens will take at least 16 days to fix the defect. g. Is it unusual for a lens to take 16 days to fix a defect? h. If it does take 16 days for eyeglasses to be repaired, what would you think? Suppose you have an experiment where you flip a coin three times. You then count the number of heads. State the random variable. Write the probability distribution for the number of heads. Draw a histogram for the number of heads. Find the mean number of heads. Find the variance for the number of heads. Find the standard deviation for the number of heads. Find the probability of getting two or more heads. Is it unusual to flip two heads? The Ohio lottery has a game called Pick 4 where a player pays $1 and picks a four-digit number. If the four numbers come up in the order you picked, then you win $2,500. What is your expected value? An LG Dishwasher, which costs $800, has a 20% chance of needing to be replaced in the first 2 years of purchase. A two-year extended warranty costs $112.10 on a dishwasher. What is the expected value of the extended warranty assuming it is replaced in the first 2 years? 1. a. See solutions, b. See solutions, c. 4.175 days, d. 8.414375 \(\text { days }^{2}\), e. 2.901 days, f. 0.004, g. See solutions, h. See solutions 3. -$0.75
Collective Dynamics of Model Pili-Based Twitcher-Mode Bacilliforms Andrew M. Nagel1, Michael Greenberg1, Tyler N. Shendruk2,3 & Hendrick W. de Haan1 Scientific Reports volume 10, Article number: 10747 (2020) Computational biophysics
Pseudomonas aeruginosa, like many bacilliforms, are not limited only to swimming motility but rather possess many motility strategies. In particular, twitching-mode motility employs hair-like pili to traverse moist surfaces with a jittery, irregular crawl. Twitching motility plays a critical role in redistributing cells on surfaces prior to and during colony formation. We combine molecular dynamics and rule-based simulations to study twitching-mode motility of model bacilliforms and show that there is a critical surface coverage fraction at which collective effects arise. Our simulations demonstrate dynamic clustering of twitcher-type bacteria with polydomains of local alignment that exhibit spontaneous correlated motions, similar to rafts in many bacterial communities. Active matter possesses the potential to bridge between physics and biology. Like living systems, manufactured active systems maintain far-from-equilibrium states by autonomously drawing energy from the surroundings to fuel non-thermal processes. Furthermore, active systems exhibit many of the characteristic traits of biological materials, such as spontaneous motion, self-organization and complex spatio-temporal dynamics. Communities of model bacteria, such as Pseudomonas aeruginosa, are excellent biological examples of out-of-equilibrium systems. These relatively simple living systems serve as a biophysical study of active matter in which collectivity arising from bio-mechanical action can perform essential biological roles. Theories and simulations have approached such bacterial systems by simplifying or omitting all but the most essential, lowest-order physical traits of these microbes, as well as biological complexities. From the very first considerations of active matter, self-propulsion and local alignment were identified as the fundamental components necessary for collective dynamics to emerge from active particles1,2. Simulations of self-propelled rods and their continuum limit of active nematics have been particularly important to the field3,4,5,6,7,8,9,10,11,12,13,14, as recently reviewed in ref. 15. However, the universality of behaviors exhibited by active systems is still a matter of debate16,17 and it cannot simply be taken for granted that the collective dynamics of Vicsek boids1, active Brownian particles18,19,20 or self-propelled rods are directly inherited by microbial motility strategies. Indeed it is known that what might appear to be higher-order details can qualitatively alter the large-scale dynamics. For example, while self-propelled rods and other active colloids commonly exhibit pronounced clustering21, which can be explained by motility-induced phase separation or other theoretical approaches22,23,24, swimming microbes can behave as homogeneous fluids on the scales of mesoscale active turbulence25, with simulations suggesting that the details of hydrodynamic interactions are essential for differentiating these large-scale swimmer properties20,26,27,28. Various modes of swimming motility, including but not limited to pushing, pulling, squirming and undulating, as well as their microscopic details, have been extensively considered29,30,31. However, swimming is only one of many motility mechanisms employed by P. aeruginosa and other motile microbes32.
Other motility modes employed by P. aeruginosa alone include swarming33, hyperswarming34, sliding35, walking36, slingshot37, and twitching38, not to mention the migration modes of many eukaryotic cells39. While these motility strategies have received less attention than swimming modes, each has the potential to introduce seemingly microscopic details from which emerge distinctive collectivity. Twitching motility plays a particularly critical role in redistributing cells on surfaces prior to colony and subsequent biofilm formation38,40,41,42, as well as impacting final biofilm morphology43,44 and compositional structure45,46,47. Twitching motility is a flagella-independent form of translocation over moist surfaces, commonly studied using motility plate assays of 1% agar48. Twitching motility relies on type-IV pili49, which are filamentous appendages common to many gram-negative, and some gram-positive bacteria47,50. Through an active cycle of pili extension, anchoring and retraction51,52, P. aeruginosa and other bacilliforms can jerkily crawl over surfaces. This twitching activity enables rapid dissemination and invasion, while it is simultaneously capable of bringing cells together into locally crowded configurations. As simulations of swimming-mode motility have demonstrated that the details of swimming produce essential consequences not seen in simple self-propelled rods53, so too it is constructive to simulate and quantify the collective dynamics of model twitcher-mode bacteria and to quantify any distinctions between dynamics in the low and high density regimes. We present the results of a coarse-grained model that accounts for biologically relevant twitching motility of rod-like bacilliforms fixed to a planar surface. Motivated by twitcher-mode bacterial dynamics, this model goes beyond traditional self-propelled rods, which typically consist of a persistent force aligned along the body of each rod subject to continuous noise distributions15. As shown schematically in Fig. 1, the mechanics of twitching are modelled dissimilarly from traditional self-propelled rods. In this study, each bacilliform twitcher stochastically obeys a twitching cycle of rest, pili extension and active retraction. Thus, at any instant, our simulations contain a mixture of active and passive bacteria, which allows us to observe the effect of the passive bacteria on the emergence of collective motion and also how passive substances can be swept along with active neighbors. Further differentiating our model from studies of traditional self-propelled rods, model twitchers employ a dummy pilus (Fig. 1a), which pulls the bacteria body towards a fixed adhesion point on the substrate. This dummy-pilus scheme means that the propulsive bearing, direction of motion and orientation can each be markedly different. Thus, to study the collective motion of twitching mode bacteria, we have developed a distinct model. Nonetheless, our model neglects further biological complications, such as multiple motility modes40,42,48, reproduction54, biosurfactants55, bacteria-secreted polymeric trails56 and nutrient competition. Incorporation of these effects is left to future work. Schematics describing the twitcher model. (a) Single twitcher discretized into four Langevin spheres. A dummy pilus extends from the head particle and is affixed to the surface stochastically within a cone \([-\pi \mathrm{/4,}\,\pi \mathrm{/4}]\), while it applies a constant retraction force on the head. 
(b) The motion of a single twitcher described by its pilus force \(\overrightarrow{F}\), the center of mass displacement \(\Delta \overrightarrow{r}\), the direction of motion \(\hat{v}\), polar orientation \(\hat{p}\), and nematic alignment \(\hat{n}\). (c) The motility cycle of a single twitcher. The twitcher is non-motile in the rest (1) and extension phases (2) but pulls itself forward during the retraction phase (3). A resting twitcher has a \(\mathrm{90 \% }\) probability per time step \(\tau \) to continue resting. The extension of the pilus to an adhesion point a distance \({L}_{0}\) from the head takes \(10\tau \). The retraction phase continues until: (i) the head arrives at the adhesion point, (ii) the head is pushed too far from the dummy pilus point causing the pilus to snap, (iii) the adhesion is exhausted after a maximum adhesion time \({t}_{{\rm{M}}}\). To the best of our knowledge, this report is the first numerical study of the collective effects that can arise from twitching mode motility and our simulations explicitly demonstrate that collective motion can arise from purely physical mechanisms. That is, with a sufficiently high coverage fraction, rod-like twitchers nematically align through excluded-volume interactions and form dynamic clusters that exhibit correlated motion. However, we also make clear that the emergent collectivity is not immediately apparent through a transition to flocking or swarming, nor through a qualitative change in the mean squared displacements. Rather, we quantify the dynamics through changes to the non-Gaussian parameter, relative diffusivity and decorrelation lengths, which together constitute a suite of statistical tools readily available to experimentalists studying the collective dynamics of twitching bacteria, such as P. aeruginosa. Our coarse-grained simulations of bacilliform microbes treat each individual twitcher as a stiff chain of four spheres with dynamics obeying Langevin equations of motion57,58, with a non-integrated dummy particle representing the action of bacterial pili (Fig. 1a). Excluded-volume, finite-extension connectivity and rigidity are each accounted for via potentials as described in detail in the Methods Section. All quantities are expressed in terms of simulation units with length in terms of twitcher sphere size \(\sigma \), mass in sphere mass \(m\), energy in Lennard-Jones well-depth \(\varepsilon \) and unit time \(\tau =\sqrt{m{\sigma }^{2}/\varepsilon }\). Twitching motility is modeled via the pilus particle, which actively pulls each individual twitcher forward (Fig. 1b) and obeys a stochastic rule-based cycle composed of three phases (Fig. 1c): The first phase is the rest phase. Resting twitchers do not undergo self-induced movement; they only passively respond to external forces and have a 10% chance per \(\tau \) of stochastically transitioning to the next phase (Fig. 1c-1). The next phase is the extension phase, in which the dummy pilus extends over a set period of \(10\tau \) then adheres to the surface a distance \({L}_{0}=2.4\) away from the head particle with a random angle between \(-\pi \mathrm{/4}\) and \(\pi \mathrm{/4}\) (Fig. 1c-2). The retraction phase is the period in which the twitcher is actively motile (Fig. 1c-3). The twitcher's head is pulled towards its fixed pilus adhesion point with a force \(\overrightarrow{F}\) of constant magnitude to model the average force exerted by multiple pili59.
The retraction phase ends when one of three conditions are met: The twitcher arrives at its pilus adhesion point, which is achieved if the distance between the head and the pili adhesion point \({r}_{\gamma }\) is less than the cutoff \({L}_{{\rm{R}}}=0.2\) (Fig. 1c-3.i). The pilus adhesion snaps because the head is pushed further from the adhesion point than the cutoff \({L}_{{\rm{S}}}=3\) (Fig. 1c-3.ii). The adhesion is exhausted if the retraction phase persists for more than \({t}_{{\rm{M}}}=70\tau \) (Fig. 1c-3.iii). Once any of these occur, the twitcher returns to the rest phase and the cycle repeats. The instantaneous state of the \({\gamma }^{{\rm{th}}}\) twitcher is quantified by its center of mass position \({\overrightarrow{x}}_{\gamma }(t)\), velocity \({\overrightarrow{v}}_{\gamma }(t)\) and orientation. The velocity is defined as the displacement vector \(\Delta \overrightarrow{r}\) per time step (Fig. 1b), along with associated speed \({v}_{\gamma }(t)=|{\overrightarrow{v}}_{\gamma }|\) and direction of motion \({\hat{v}}_{\gamma }(t)={\overrightarrow{v}}_{\gamma }/{v}_{\gamma }\). The direction of motion does not necessarily align with the retraction force \(\overrightarrow{F}\), nor the orientation (Fig. 1b). We consider both the polar orientation \({\hat{p}}_{\gamma }(t)\), the unit vector pointing from tail to head, and the rod-like nematic alignment, for which \({\hat{n}}_{\gamma }(t)\) and \(-{\hat{n}}_{\gamma }(t)\) are equivalent. Twitchers interact with one another through excluded-volume repulsion and we define the coverage fraction to be the area of \(N\) twitchers normalized by the 2D simulation box size. We simulate a wide variety of coverage fractions, from a solitary twitcher (\(N=1\) and \(\phi =4\times {10}^{-4}\)) to \(N=2000\) (\(\phi =0.76\)). Supplemental Movies 1–6 illustrate the simulation results for surface coverages \(\phi =\{4\times {10}^{-4},0.04,0.19,0.3,0.38,0.57\}\) respectively, snapshots from which are shown in Fig. 2. Further details are available in the Methods Section. Simulation snapshots. (a) Surface coverage \(\phi =0.19\) (Supplemental Movie 3). (b) Near the critical surface coverage \(\phi =0.3\approx {\phi }^{\ast }\) (Supplemental Movie 4). (c) High surface coverage \(\phi =0.57\), exhibiting coexistence of a locally dilute phase and a dense phase with non-homogeneous polydomains of orientational ordering (Supplemental Movie 5). Solitary Twitcher In the absence of interactions with other twitchers, the dynamics of a solitary twitcher are controlled entirely by the motility cycle (Section Motility Cycle). Example trajectories appear diffusive on long times (Fig. 3a; Supplemental Movie 1), though closer inspection of shorter periods demonstrates the rest/extension and active retraction phases, as well as correlated motion across multiple resting phases (Fig. 3a; inset). The consequences of these phases can be characterized by calculating the mean square displacement (MSD) $$\Delta {r}^{2}(t)\equiv \langle {|{\overrightarrow{x}}_{\gamma }\mathrm{(0)}-{\overrightarrow{x}}_{\gamma }(t)|}^{2}\rangle $$ as a function of lag time \(t\) from any initial time (Fig. 3b). MSD is a natural measurement for situations involving randomness, in which case the average of the displacement \(\Delta r(t)\equiv \langle {\overrightarrow{x}}_{\gamma }\mathrm{(0)}-{\overrightarrow{x}}_{\gamma }(t)\rangle \) is often zero. As a measure of the width of the distribution of step sizes for each lag time, MSD measures the extent of the random motion. 
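As an illustration only (not part of the original study), the MSD of Eq. 1 can be estimated directly from recorded positions. The R sketch below assumes a single center-of-mass trajectory stored as an n x 2 matrix of positions sampled at uniform time steps; the synthetic random-walk data are placeholders:
# Mean squared displacement as a function of lag time for a 2D trajectory
msd <- function(traj, max_lag = nrow(traj) - 1) {
  sapply(seq_len(max_lag), function(lag) {
    d <- traj[(1 + lag):nrow(traj), , drop = FALSE] -
         traj[1:(nrow(traj) - lag), , drop = FALSE]
    mean(rowSums(d^2))           # average squared displacement at this lag
  })
}
# Placeholder data: a synthetic 2D random walk
set.seed(1)
traj <- apply(matrix(rnorm(2000 * 2), ncol = 2), 2, cumsum)
dr2  <- msd(traj, max_lag = 200) # dr2[k] estimates <|x(t+k) - x(t)|^2>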
The lag time is simply the time interval from the arbitrarily chosen starting point. The manner in which MSD increases as a function of lag time can help us to understand the nature of twitchers' motion. At different lags, we observe \(\Delta {r}^{2}\sim {t}^{\beta }\), where the scaling \(1\le \beta (t)\le 2\), with \(\beta =1\) indicating diffusive dynamics and \(\beta =2\) signaling propulsive motion. For short times \(t\lesssim 10\), \(\Delta {r}^{2}(t)\) scales as \(\beta =2\), corresponding to active self-propelled motion of a single retraction phase dominating over the noise induced by the random pilus extension angle. From t ≈ 10–30, there is a shoulder in the MSD where \(\Delta {r}^{2}(t)\) nearly saturates, illustrating the pauses in self-propelled motion during the rest and extension phases. In contrast to our model, isolated P. aeruginosa cells extend pili to variable lengths and have stochastic retraction times60, which would be expected to dampen the shoulder seen in our numerical model. After \(t\gtrsim 30\), \(\Delta {r}^{2}(t)\) again scales as \(\beta \approx 2\), indicating correlated motion across multiple twitching jumps due to the model restricting pilus adhesion to a cone in front of the twitcher (see Fig. 1a). Around \(t\gtrsim {10}^{3}\), the scaling transitions to \(\beta \approx 1\), corresponding to diffusive dynamics over long lag times and indicating a random walk as expected from the random motion exhibited in Fig. 3a. Solitary twitcher dynamics. (a) Example trajectory. (Inset) Short time showing resting/extension and retraction phases. (b) Mean squared displacement \(\Delta {r}^{2}\sim {t}^{\beta }\), with propulsive behavior (\(\beta \approx 2\)) at short/intermediate times and diffusive behavior (\(\beta \approx 1\)) at long times. (c) Non-Gaussian parameter \({\alpha }_{2}(t)\), which is zero for Gaussian statistics, \(\mathrm{ < 0}\) when there are fewer large displacements than a normal distribution with the same second moment, and \(\mathrm{ > 0}\) when there are more. For such a rich motility cycle, the MSD does not exhibit compelling qualities, only hinting at the underlying dynamics as described above. While the MSD tells us the width of the distribution of step sizes for each lag time, it cannot tell us more. Indeed there has been a growing appreciation in the soft condensed matter community that the MSD can easily be over-interpreted61,62,63,64,65,66 (as recently reviewed in ref. 67) and this issue requires even greater care in biologically complex systems, such as ensembles of twitching P. aeruginosa. To learn more, we need to consider more subtle aspects of the displacements and we turn to higher order moments of the displacement distribution. The extent to which the dynamics deviate from the Gaussian distribution, which would lead to diffusive motion, can be measured by a non-Gaussian parameter (NGP)68,69,70 $${\alpha }_{2}(t)=\frac{d}{d+2}\frac{\Delta {r}^{4}}{{|\Delta {r}^{2}|}^{2}}-\mathrm{1,}$$ where the dimension \(d=2\) since the twitchers are confined to a plane and \(\Delta {r}^{4}=\langle {|{\overrightarrow{x}}_{\gamma }\mathrm{(0)}-{\overrightarrow{x}}_{\gamma }(t)|}^{4}\rangle \). While the MSD gives the second order moment of the displacement distribution, NGP gives the fourth moment and so expresses information about the motion that is not generally encoded in the MSD (\(\Delta {r}^{4}\ne {|\Delta {r}^{2}|}^{2}\) in general).
However, in the particular case of a Gaussian distribution all higher order even moments are functions of the MSD; particularly, the fourth moment of a normal distribution is \(\Delta {r}^{4}=(d+\mathrm{2)}{|\Delta {r}^{2}|}^{2}/d\), which would give \({\alpha }_{2}(t)=0\). Thus, NGP communicates the extent to which the displacement distribution differs from normal. When \({\alpha }_{2}(t) < 0\) the displacement distribution is said to be platykurtic, meaning there are fewer large step sizes than would be produced by a normal distribution with the same second moment. When \({\alpha }_{2}(t) > 0\) the distribution is leptokurtic, indicating that the tails of the distribution are longer than normal. The NGP much more clearly indicates the three regions that could be discerned from the MSD (Fig. 3c). Moreover, it reveals the dynamics at each of these time scales to be leptokurtic, platykurtic and normal, respectively. Additionally, to demonstrate these different regimes explicitly, the distribution of twitcher displacements \(G(\Delta x,t)\) is calculated and compared to Gaussian distributions with the same standard deviation. These distributions, which are sometimes referred to as van Hove self-correlation functions65, are shown in Fig. 4 for several times. Solitary twitcher step size distributions. (a–c) Distribution \(G(\Delta x,t)\) for various lag times \(t\) and step sizes \(\Delta x\) along either Cartesian axis. Grey curves denote reference Gaussian distributions with equivalent standard deviations to the respective step size distributions. (d) Step size distributions normalized to collapse diffusive curves. Firstly, \({\alpha }_{2}(t)\) in Fig. 3c approaches zero at long times, indicating Gaussian dynamics just as the MSD indicated diffusive behavior. This is verified in Fig. 4c where \(G(\Delta x,t={10}^{5})\) closely matches the equivalent Gaussian curve. Next, at the shortest lag times in Fig. 3c, \({\alpha }_{2}(t)\) approaches a positive constant of \(\sim 0.55\) because the twitchers are likely to be found in the motile retraction phase with large propulsive displacements. This leptokurtic behaviour is shown explicitly in Fig. 3a where the tails of the \(G(\Delta x,t=\mathrm{5)}\) distribution are much longer than those of the Gaussian. The sharp peak at zero displacement reflects the non-motile rest phases. Finally, between these limits, \({\alpha }_{2}(t)\) is platykurtic and approaches the lower bound of \(-\mathrm{2/(}d+\mathrm{2)}\)71, which reflects the sequential resting phases that shorten the tails of the displacement distribution in comparison to a random walk. Figure 4b displays the distribution of step sizes at \(t={10}^{3}\) and the platykurtic nature is evident from the sharply truncated tails of \(G(\Delta x,t={10}^{3})\) compared to the Gaussian. This, coupled with the sharp shoulders, indicate the greater likelihood of traveling in a correlated manner but then abruptly pausing to rest with only a vanishingly small probability of traversing any further. Recall that \(\beta \approx 2\) for both the leptokurtic and platykurtic regimes in the MSD, and so the qualitative difference in dynamics could only be quantified by considering the NGP. Figure 4d displays the step size distributions at 5 different times. The \(\Delta x\) values are scaled by \({t}^{-\mathrm{1/2}}\) and the distributions are normalized by \({t}^{\mathrm{1/2}}\) such that curves corresponding to pure diffusion would collapse. 
This allows examination of the evolution of \(G(\Delta x,t)\) across disparate time scales. The decay of the sharp peak at \(\Delta x=0\) at short times, the emergence of sharp shoulders and cut tails at intermediate times, and the convergence towards a universal curve indicating diffusion at long times are all evident. Collective Twitcher Dynamics Individual dynamics of constituent twitchers To assess pre-colony collective dynamics as a function of surface coverage, we simulate ensembles of twitchers. At low coverage (\(\phi =0.19\) curve in Fig. 5a; Supplemental Movies 2–3), the mean squared displacement retains the qualities observed in the solitary twitcher case: the short-time active self-propulsion with scaling β = 2; shoulder near t ≈ 10–30 due to the non-motile rest phases; correlated motion across multiple twitching jumps (intermediate times) with β ≈ 2; and random-walk dynamics at long times with β = 1 (Fig. 5a). In fact, as the coverage fraction further increases (\(\phi =0.57,0.76\) curves), the MSD curves remain qualitatively similar. The shoulder in \(\Delta {r}^{2}(t;\phi )\) in the vicinity of t ≈ 10–30 becomes less pronounced; however, the scaling \(\beta \) for short, intermediate and long times is essentially unaffected. However, increasing ϕ does cause two limiting changes to the twitcher MSD: At short times, the MSD curves shift down as ϕ increases (Fig. 5a). In this short-time regime, \(\Delta {r}^{2}(t;\phi )\sim {t}^{2}\). Thus, we define an effective short-time mean squared velocity (MSV) by \({V}^{2}(\phi )=\Delta {r}^{2}(\tau )/{\tau }^{2}\) (Fig. 5c). Starting from low coverage fractions, the MSV decreases relatively weakly with increasing ϕ because the twitchers are well separated and seldom collide. The MSV decreases because collisions become more likely, generally slowing active twitcher motility. At long times, the MSD curves are diffusive and the \(d=2\) dimensional diffusion coefficient can be extracted by fitting \(\mathop{lim}\limits_{t\to {\rm{\infty }}}\Delta {r}^{2}(t;\phi )=2d{\mathscr{D}}t\). However, the reduction of the short-time \({V}^{2}(\phi )\) has already slowed the dynamics, effectively acting as an increased viscosity at long times causing the effective diffusion coefficient \({\mathscr{D}}\) to decrease with increasing ϕ. To normalize, we consider the dimensionless relative diffusivity \(D(\phi )={\mathscr{D}}(\phi )/\tau {V}^{2}(\phi )\) (Fig. 5d). The relative diffusivity is non-monotonic with its minimum corresponding to the same surface coverage as the inflection point in \({V}^{2}\). Collective dynamics of twitcher systems of different coverage fractions ϕ. (a) Mean squared displacement \(\Delta {r}^{2}(t;\phi )\). (b) Non-Gaussian parameter \({\alpha }_{2}(t;\phi )\). (c) Short-time mean squared velocity \({V}^{2}(\phi )=\Delta {r}^{2}(\tau ;\phi )/{\tau }^{2}\). The vertical dashed line marks the critical coverage \({\phi }^{\ast }\). (d) Long-time relative diffusion \(D(\phi )={\mathscr{D}}(\phi )/\tau {V}^{2}(\phi )\), where \({\mathscr{D}}(\phi )\) is the diffusion coefficients as measured from the MSD for \(t > {10}^{4}\). (Inset) High coverage regime. By considering the limiting character of the MSDs we are able to extract some subtle differences in the collective behavior that is not immediately apparent. However, while the MSD curves remain qualitatively similar at all surface coverages, the non-Gaussian parameters reveal a qualitatively distinct change to the collective dynamics (Fig. 5b). 
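For readers who wish to apply the same analysis to their own data, a minimal sketch of how the non-Gaussian parameter of Eq. 2 might be computed from a uniformly sampled 2D trajectory is given below; the trajectory matrix is an assumed input and this is not the study's actual analysis code:
# Non-Gaussian parameter alpha_2(t) for a 2D trajectory (n x 2 matrix)
ngp <- function(traj, max_lag = nrow(traj) - 1, d = 2) {
  sapply(seq_len(max_lag), function(lag) {
    disp <- traj[(1 + lag):nrow(traj), , drop = FALSE] -
            traj[1:(nrow(traj) - lag), , drop = FALSE]
    r2 <- rowSums(disp^2)        # squared displacements at this lag
    # alpha_2 = d/(d+2) * <r^4>/<r^2>^2 - 1; zero for Gaussian displacements
    (d / (d + 2)) * mean(r2^2) / mean(r2)^2 - 1
  })
}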
At low coverage, the NGP manifests the same three regimes as the solitary case (Fig. 3c) but comparing the \(\phi =0.19\) (Supplemental Movie 3) and \(\phi =0.38\) (Supplemental Movie 5) curves in Fig. 5b, the transition to \({\alpha }_{2}(t;\phi )\approx 0\) diffusion from the negative platykurtic plateau occurs at earlier times as ϕ increases, illustrating the loss of the distribution's large displacement tails. This shift is due to collisions between twitchers randomizing the correlated motion between retraction phases earlier than in the solitary limit. As the surface coverage increases, \({\alpha }_{2}(t;\phi )\) loses the negative plateau altogether, becoming leptokurtic at all but the longest lag times, i.e. revealing the distribution has longer tails than expected for a Gaussian despite the rest phase. This is accompanied by a change in the short-time limit of \({\alpha }_{2}(t;\phi )\): sparse surface coverage (\(\phi =0.19\) curve) exhibits a constant \(\mathop{lim}\limits_{t\to 0}{\alpha }_{2}(t;\phi )\approx 0.5\); however, it rises substantially. This implies that larger displacements than expected by a normal distribution become far more common at both short and intermediate time scales. This is a strong indication of collective and coherent motion at intermediate time scales suggesting that even rest-phase twitchers are typically moving due to interactions with retraction-phase neighboring twitchers, with more frequent large step sizes than expected for diffusive motion. This indicates a qualitative change in twitcher behavior, which can be understood as the transition from distinct collision events at low coverage (\(\phi < {\phi }^{\ast }\)) to continuous interactions at high coverage (\(\phi > {\phi }^{\ast }\)). This roughly suggests $${\phi }^{\ast }=\frac{{A}_{{\rm{twitch}}}}{\pi {({L}_{{\rm{body}}}\mathrm{/2)}}^{2}}\approx 0.3$$ to be the point at which the mean area per twitcher equals the characteristic rotational area occupied by each twitcher and above which \({\alpha }_{2}(t;\phi )\ge 0\) at all lag times. The importance of ϕ* on the dynamics is also discernible from the MSV (Fig. 5c) and relative diffusivity (Fig. 5d). While the decrease in MSV is monotonic, there is an inflection point at \(\phi \approx {\phi }^{\ast }\). Similarly, systems with low coverage fractions have the largest \(D(\phi )\), as twitchers seldom obstruct each other's diffusive motion, which remains the case until \({\phi }^{\ast }\) (Supplemental Movie 4), at which point the relative diffusivity \(D(\phi )\) is a minimum (Fig. 5d). Beyond \({\phi }^{\ast }\) (Supplemental Movies 5–6), \(D(\phi )\) increases, further demonstrating the collective motion that emerges at high coverage. To further understand this collectivity, we consider the average speed \({v}_{{\rm{m}}}(\phi )\) (Fig. 6; green dashed). We show the separated contributions due to twitchers in their actively self-motile retraction phase \({v}_{{\rm{a}}}(\phi )\) (Fig. 6; purple) and their resting/extending non-motile phases \({v}_{{\rm{r}}}(\phi )\) (Fig. 6; blue). The mean \({v}_{{\rm{m}}}(\phi )\) is constant for low coverages and only decreases substantially once \(\phi > {\phi }^{\ast }\). On the other hand, \({v}_{{\rm{a}}}(\phi )\) decreases in both regimes. In the intermediate \(\phi \approx {\phi }^{\ast }\) regime, we see that slight increases in \(\phi \) result in large decreases in \({v}_{{\rm{a}}}(\phi )\). 
At this coverage, neighboring twitchers are hindering each others' motion but are not recompensing significant speed through collective effects, as will occur at higher coverages. Average speed of twitchers. The total weighted average speed \({v}_{{\rm{m}}}(\phi )\) is separated into the contributions from twitchers in their resting/extending state \({v}_{{\rm{r}}}(\phi )\) and their active retraction state \({v}_{{\rm{a}}}(\phi )\). Critical coverage ϕ* denoted with dotted vertical line. Figure 6 demonstrates that even twitchers in the resting phase of the motility cycle are collectively advected as ϕ approaches the critical coverage. In fact, ϕ* clearly marks the saturation of the increase in the speed of resting twitchers, a sharp decrease in the speed of active twitchers, and the beginning of the decrease in the mean speed. It is interesting to compare this to typical rod models where the rods experience steadfast self-propulsion and uniform orientational noise and thus are never inactive15. The critical coverage as calculated from Eq. 3 is a purely geometric argument; it does not consider what fraction of the matter covering the surface is active. This estimate of ϕ* works well for both continuous self-propelled rods and the mix of active/passive twitchers studied here thus indicating that the emergence of collective motion is primarily dictated by excluded volume effects rather than energetic considerations. While \({v}_{{\rm{a}}}(\phi )\) decreases, \({v}_{{\rm{r}}}(\phi )\) rises with the frequency of collisions between twitchers. In fact, by the highest coverage fractions, there is sufficient collective motion for the rest/extension phase twitchers to be advected at the same average speed as the retracting twitchers (Fig. 6). These dynamics are explained by the step size distribution \(G(\Delta x,t;\phi )\) (Fig. 7). Focusing on the \(\phi =0.19\) subplot (\(\phi < {\phi }^{\ast }\)), \(G(\varDelta x,t;\varphi )\) is similar to the solitary twitcher limit shown in Fig. 3d. However, as the coverage surpasses ϕ* in the remaining three subplots, the intermediate-time peaked shoulders become suppressed (Fig. 7b–d). This is because collisions make it both unlikely to remain in place during rests and unlikely to travel without obstruction for long periods. At intermediate ϕ, moderate lag times (\(t={10}^{3}\) in Fig. 7(b,c)) begin to collapse on to the long-time diffusive distributions, which itself narrows with increasing ϕ. At the highest ϕ (Fig. 7d), the intermediate lag time \(G(\Delta x,t;\phi )\) of \(t={10}^{3}\) again transitions — now behaving like the short-time distributions. Step size distributions. Van Hove functions \(G(\Delta x,t;\phi )\) with axes normalized to collapse diffusive dynamics. Panels a-d show step size distributions for various coverage fractions (\(\phi =0.19,0.38,0.57,0.76\)). The notion that resting twitchers do not impede the emergence of collective motion and can actually exhibit speeds comparable to that of active twitchers at large ϕ is in agreement with previous studies. It has been shown experimentally that not all twitching cells must be motile in order to exhibit collective effects38 and that polystyrene microspheres can be moved across surfaces by colonies of twitching P. aeruginosa72. Further, physical studies of active granular matter have demonstrated that collectivity can arise in systems consisting of few active agents surrounded by many passive particles73. The results shown in Figs. 
6 and 7 demonstrate that this is true not only for non-motile tracers, species, or mutants, but rather is continually occurring for resting cells. Returning to the step-size distributions at large ϕ results depicted in Fig. 7, the short-time center peak and long tails indicate that the majority of individual twitchers are caged by their neighbors but are able to collectively advect and so move larger distances than expected if they were behaving diffusively. These caging effects are particularly evident in the correlated motion of individual twitchers within the ensemble. To explore how persistent the direction of motion of individual twitchers is, we consider the spatial individual auto-correlation (IAC) function of the direction of motion \({\hat{v}}_{\gamma }\) of the \({\gamma }^{{\rm{th}}}\) twitcher along its own trajectory. The IAC is given by $${\rho }_{\hat{v}}(\Delta r;\phi )=\langle {\hat{v}}_{\gamma }(0)\cdot {\hat{v}}_{\gamma }(\Delta r)\rangle ,$$ where \(\Delta r\) is the distance travelled relative to an arbitrary starting point. When \(\Delta r\) is small, no twitcher will have moved far nor changed direction and \({\hat{v}}_{\gamma }(0)\) and \({\hat{v}}_{\gamma }(\Delta r)\) will be very similar, such that \(\langle {\hat{v}}_{\gamma }(0)\cdot {\hat{v}}_{\gamma }(\Delta r)\rangle \approx 1\). As each twitcher moves across the surface, \(\Delta r\) increases and the correspondence between the direction of motion at the starting point and at \(\Delta r\) is lost. In the limit of completely uncorrelated directions of motion, \(\langle {\hat{v}}_{\gamma }(0)\cdot {\hat{v}}_{\gamma }(\Delta r)\rangle \) approaches zero. The IAC defined in Eq. 4 thus decays from \(\approx 1\) to small values with increasing \(\Delta r\) thus indicating how the direction of motion is randomized with increasing displacement. Note that the IAC is averaged over both initial times and the ensemble of twitchers. Similar auto-correlation functions have previously proven useful in assessing collective motion of swimming Bacillus subtilis74,75,76,77. The IAC curves calculated for different surface coverage values are shown in Fig. 8a. For the case of solitary twitchers corresponding to \(\phi =0.004\), the principle contribution to \({\rho }_{\hat{v}}(\Delta r)\) is exponential decay, with a small dip and peak at small distances representing the stochastic angle chosen in the extension phase and the directed active motion of the retraction phase. In the low coverage regime (\(\phi =0.19\)), as ϕ increases the IAC curve shifts downward and also the decay becomes steeper. The shift reflects the same collisional dynamics as the short-time MSV decrease of \({V}^{2}(\phi )\) (Fig. 5c), while the increased decay reiterates the long-time MSD of \(D(\phi )\) (Fig. 5d). If the coverage fraction is greater than \({\phi }^{\ast }\) (\(\phi =0.57,0.76\)), the slopes start to flatten out and the IAC \({\rho }_{\hat{v}}(\Delta r;\phi )\) shifts up in magnitude, implying that high coverages cage twitchers' direction of motion as they travel large distances. Individual auto-correlation (IAC) within an ensemble. (a) IAC function \({\rho }_{\hat{v}}(\Delta r;\phi )\) of the direction of motion of an individual twitcher. (b) \({\rho }_{\hat{v}}(\Delta r;\phi )\) for two values of distance traveled \(\Delta r(t)=\{10,50\}\) as a function surface coverage ϕ. 
Markers denote \(\Delta r=10\) (+) and Δr = 50 (◆) (c) Decorrelation length \({\lambda }_{{\rho }_{\hat{v}}}(\phi )\) from exponential fits to the large \(\Delta r\) decay of the IAC functions. If one focuses on a subset of chosen travel distances \(\Delta r=\{50,10\}\), the importance of \({\phi }^{\ast }\) is highlighted (Fig. 8b). For \(\Delta r=50\), \({\rho }_{\hat{v}}(\Delta r;\phi )\) is non-monotonic, decreasing rapidly with ϕ to a minimum at \({\phi }^{\ast }\). For this large-distance limit, we characterize \({\rho }_{\hat{v}}(\Delta r;\phi )\) by fitting exponential correlation lengths \({\lambda }_{{\rho }_{\hat{v}}}(\phi )\) (Fig. 8c) to the tails of the curves in Fig. 8a. At low coverage fractions, \({\lambda }_{{\rho }_{\hat{v}}}(\phi )\) is largest due to unobstructed twitcher motion. The correlation length drops to a shallow minimum at \(\phi \approx {\phi }^{\ast }\) with only a minor increase for larger coverage. The short distance (\(\Delta r=10\)) correlation indicates more complicated dynamics (Fig. 8b). The correlation still drops to a local minimum at \(\phi \approx {\phi }^{\ast }\) but now the minimum is nearly 4.5 times more correlated than for \(\Delta r=50\). The rise in \({\rho }_{\hat{v}}(10;\,\phi )\) above \({\phi }^{\ast }\) begins more suddenly and climbs to a local maximum around \(\phi \approx 0.57\). At this local maximum, the IAC of a twitcher is nearly as large as for a solitary twitcher. At these coverages, twitchers form tightly packed clusters that promote alignment and cage the twitchers' direction of motion, maintaining correlation. Thus, individual auto-correlation calculations can reveal the persistence of motion of individual P. aeruginosa or other motile microbes and by comparing the curves across ϕ values, the emergence of collective motion can be indirectly observed from increases in the IAC arising from interactions with neighboring twitchers. Long-range correlated motion In order to directly examine these correlations between twitchers, we consider another correlation function: the radial pair auto-correlation (PAC) function given by $${g}_{\hat{v}}(\Delta r;\phi )=\langle {\hat{v}}_{\gamma }(t)\cdot {\hat{v}}_{\eta }(t)\rangle .$$ This measure compares the direction of motion of the \({\gamma }^{{\rm{th}}}\) twitcher relative to its \({\eta }^{{\rm{th}}}\) neighbor that is a distance \(\Delta r\) away at that instant. While the IAC given in Eq. 4 compares a twitcher to itself at different displacements and thus different times, the PAC given in Eq. 5 compares one twitcher to its neighbours at the same point in time. This is thus a direct measure of how the motion of a twitcher is correlated to that of its neighbours and allows us to explore the inference that tightly packed domains result in long-range correlated motion by caging twitchers and aligning their direction of motion. As for the IAC, values near +1 indicate high correlation while values near 0 indicate insignificant correlation. In dilute systems, the correlation of neighboring twitchers' direction of motion drops quickly to zero (Fig. 9a) — only twitchers that are in direct contact (within \(\Delta r < {L}_{{\rm{body}}}/2\)) exhibit non-negligible correlations. However, there is a sudden jump in the long-range correlation as the coverage surpasses \({\phi }^{\ast }\). 
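A minimal sketch of how the radial pair auto-correlation of Eq. 5 could be estimated at a single instant, by binning all twitcher pairs by their separation; the position and direction-of-motion arrays are assumed inputs and periodic boundaries are ignored here, so this is an illustration rather than the study's actual analysis:
# Radial pair auto-correlation of the direction of motion at one time step.
# pos:  N x 2 matrix of twitcher centre-of-mass positions
# vhat: N x 2 matrix of unit direction-of-motion vectors
pair_corr <- function(pos, vhat, dr = 0.5, r_max = 20) {
  sep  <- as.matrix(dist(pos))               # pairwise separations
  dots <- vhat %*% t(vhat)                   # pairwise dot products v_gamma . v_eta
  bins <- cut(sep[upper.tri(sep)], breaks = seq(0, r_max, by = dr))
  tapply(dots[upper.tri(dots)], bins, mean)  # mean correlation in each separation bin
}
In practice the result would be averaged over many snapshots, and the same binning approach applies to the polar and nematic correlations discussed below.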
The principle contribution to \({g}_{\hat{v}}(\Delta r;\phi )\) is exponential decay and by fitting exponential correlation lengths \({\lambda }_{{g}_{\hat{v}}}(\phi )\) to the tail of the curves, we see the rapid rise and subsequent saturation of the correlation within the system. A closer examination reveals that there is a minor peak in \({g}_{\hat{v}}(\Delta r;\phi )\) for all ϕ found at small separations. Pair correlations between twitchers. Radial pair auto -correlation (PAC) functions demonstrating local ordering for the same coverage fractions as in Fig. 3 \((\phi =\{0.19,0.38,0.57,0.76\})\). (a) PAC function of the direction of motion \({g}_{\hat{v}}(\Delta r;\phi )=\langle {\hat{v}}_{\gamma }(t)\cdot {\hat{v}}_{\eta }(t)\rangle \) for twitchers \(\gamma \) and \(\eta \) that are separated by \(\Delta r\) at time \(t\). (Inset) Exponential decorrelation length \({\lambda }_{{g}_{\hat{v}}}(\phi )\). (b) PAC function of polar orientation \({g}_{\hat{p}}(\Delta r;\phi )=\langle {\hat{p}}_{\gamma }(t)\cdot {\hat{p}}_{\eta }(t)\rangle \). (Inset) Schematic of steric alignment mechanisms for co-translating twitchers and passing twitchers. (c) PAC function of the director \({g}_{\hat{n}}(\Delta r;\phi )=\langle {\hat{n}}_{\gamma }(t)\cdot {\hat{n}}_{\eta }(t)\rangle \). (Inset) Exponential decorrelation length \({\lambda }_{{g}_{\hat{n}}}(\phi )\). In Fig. 9a, we considered the PAC for the instantaneous direction of motion \({\hat{v}}_{\gamma }\). This does not necessarily align with twitchers' polar orientation \({\hat{p}}_{\gamma }\), which describes the direction from twitcher's tail to its head, nor twitchers' nematic alignment \({\hat{n}}_{\gamma }\), which disregards differences between parallel/anti-parallel orientation (\({\hat{n}}_{\gamma }=-{\hat{n}}_{\gamma }\)), as defined in the Methods Section. An anti-correlation for \(\phi < {\phi }^{\ast }\) arises in the PAC function of polar orientation \({g}_{\hat{p}}(\Delta r;\phi )=\langle {\hat{p}}_{\gamma }(t)\cdot {\hat{p}}_{\eta }(t)\rangle \) (Fig. 9b). From \({g}_{\hat{p}}(\Delta r;\phi )\), we see that the \(\phi =0.19\) curve crosses zero at \(\Delta r=2.8\) and has a negative minimum at \(\Delta r=4.0\). These features arise from pair collision events, which produce either: Alignment, in which case the nematic interactions and polar motion cause persistent co-movement. Even if future pili adhesion events pull the heads apart, nematic interactions keep the pair aligned (Fig. 9b; left inset). Anti-alignment, in which case the nematic interactions produce ephemeral anti-parallel configurations, since twitchers are free to move in uncorrelated directions once the twitchers pass one another (Fig. 9b; right inset). The net result is that polar aligned twitchers have an effective short-range attraction and that twitchers in immediate contact tend to stay polar aligned. Since co-aligned twitchers effectively attract and anti-aligned do not, the range \(\Delta r\approx 3-15\) exhibits an anti-correlation. This anti-correlated region has a minimum centered on the mean separation distance between twitchers (\(\Delta r=4.0\) for \(\phi =0.19\) in Fig. 9b). At higher ϕ, this is no longer the case, since spontaneous symmetry breaking is expected of active systems above the critical "flocking" transition1,2. However, while \({g}_{\hat{v}}(\Delta r;\phi )\) increased for all ϕ (Fig. 
9a), the \(\phi =0.76\) curve for \({g}_{\hat{p}}(\Delta r;\phi )\) actually crosses down below the \(\phi =0.38\) and \(\phi =0.57\) curves for local \(\Delta r\) (Fig. 9b). At these high coverages, the polar alignment mechanism described by Fig. 9b (inset) no longer holds since isolated pair collisions are rare. Anti-aligned pairs can no longer episodically pass one another because the majority of twitchers are surrounded on all sides by nearby neighbors (Fig. 2b). Thus, the coverage fraction in these dense regions nematically aligns the twitchers because of the bacilliform shape, overcoming collisional polar alignment. This is revealed in the 2D pair -correlation function of nematic orientation \({g}_{\hat{n}}(\Delta r;\phi )=\langle 3({\hat{n}}_{\gamma }(t)\cdot {\hat{n}}_{\eta }(t)-2/3)\rangle \) (Fig. 9c). Unlike \({g}_{\hat{p}}(\Delta r;\phi )\), the magnitude of \({g}_{\hat{n}}(\Delta r;\phi )\) increases monotonically with ϕ at all \(\Delta r\). The nematic PAC is very high at contact (small \(\Delta r\)) for all coverages, falls rapidly, then possesses a well-defined peak at intermediate separations \(\Delta r\approx 4.0\); consistent to all three subplots. This peak corresponds to the length of a twitcher indicating that twitchers are often observed in locally smectic-ordered layers, as can be seen in Fig. 2b, for example. The locally correlated domains represent proto-rafts, regions of strong nematic ordering that are reminiscent of the "rafts" observed in dense communities P. aeruginosa78,79. Fitting exponentials to the \({g}_{\hat{n}}(\Delta r;\phi )\) tails after the nematic raft peaks, we extrapolate an effective raft size parameter \({\lambda }_{{g}_{\hat{n}}}(\phi )\) (Fig. 7c; inset). These nematic proto-rafts have a size scale (Fig. 7c; inset) that is much smaller than the size of the dense regions, which can span the entire system at high ϕ (Fig. 2b). In this way, we see clearly the distinct transition from the dilute state with no clustering to a dense state with non-homogeneous polydomains of local nematic ordering that exhibit collective motion on scales comparable but larger than raft size. While local alignment on scales comparable to \({\lambda }_{{g}_{\hat{n}}}(\phi )\) generate the collective motion of rafts, \({\lambda }_{{g}_{\hat{v}}}(\phi ) > {\lambda }_{{g}_{\hat{n}}}(\phi )\) (Fig. 9; insets) since non-aligned neighbors can be entrained by the collective advection. Non-homogeneous ensemble structure The nematic rafts represent ordered localities within larger dense regions. From Fig. 2b, it can be seen that at high total coverage fractions localized dense regions (liquid-like state with non-uniform polydomain nematic ordering) coexist with dilute regions (active gas-like state). To quantify the coexistence, we consider the distributions of local coverage fractions \(\phi {\prime} \) by partitioning the system into 100 square sub-domains to calculate the probability distribution \(P(\phi {\prime} ;\phi )\) of observing a local \(\phi {\prime} \) given a certain global surface coverage ϕ. Below \({\phi }^{\ast }\), the distribution exhibits a single peak centered around \(\phi {\prime} =\phi \), which is to say that the twitchers constitute a homogeneous gas-like active system (Figs. 10a and 2a). As the total coverage is raised, the primary peak shifts slightly to the right, as a secondary peak arises at a substantially larger coverage fraction (Fig. 10b). 
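A sketch of the sub-domain analysis described above, partitioning a single snapshot into a 10 x 10 grid of cells and converting twitcher counts into local coverage fractions (the box size L, per-twitcher area a_twitch and the position matrix are placeholder inputs, not values taken from the study):
# Local coverage fractions phi' from one snapshot of twitcher positions.
# pos: N x 2 matrix of positions assumed to lie in [0, L) x [0, L)
local_phi <- function(pos, L, a_twitch, n_side = 10) {
  cell <- L / n_side
  ix <- pmin(floor(pos[, 1] / cell), n_side - 1)   # column index of each twitcher
  iy <- pmin(floor(pos[, 2] / cell), n_side - 1)   # row index of each twitcher
  counts <- table(factor(ix + n_side * iy, levels = 0:(n_side^2 - 1)))
  as.numeric(counts) * a_twitch / cell^2           # phi' in each of the 100 cells
}
# hist(local_phi(pos, L, a_twitch)) then approximates P(phi'; phi) for that snapshot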
Above \({\phi }^{\ast }\), a dilute gas-like phase with coverage fraction \({\phi {\prime} }_{g}=0.2\) coexists with a liquid-like phase at \({\phi {\prime} }_{l}=0.85\). As the total ϕ is increased further, the fraction of twitchers that reside in the active-gas phase decreases, while the fraction in the active cluster increases (Figs. 10c and 2b). Eventually the active-gas phase all but disappears at the highest coverage fractions (Fig. 10d). Coexistence. Probability distributions \(P(\phi {\prime} ;\phi )\) of local coverage fractions \(\phi {\prime} \) for different global coverage ϕ. Vertical lines denote the coexistence densities in the active gas-like phase \({\phi {\prime} }_{g}=0.2\) (dashed line) and the liquid-like dense phase \({\phi {\prime} }_{l}=0.85\) (dotted line). (a) \(\phi =0.10\). (b) \(\phi =0.38\). (c) \(\phi =0.57\). (d) \(\phi =0.76\). Global ϕ is marked on each curve. However, even at these high coverages the impact of interspersed zones depleted of twitchers can be observed. Within the sub-domains of the system, we measure the fluctuations of the number of twitchers. That is, we measure the standard deviation \(\Delta \phi {\prime} (\phi )={\langle {[\phi {\prime} (\overrightarrow{r},t;\phi )-\phi ]}^{2}\rangle }^{1/2}\) of the local coverage fraction for different subsection sizes (Fig. 11a). In the dilute limit, one expects \(\Delta \phi {\prime} \sim {\phi }^{{\prime} \mu }\) with \(\mu =1/2\) in accordance with the central limit theorem (CLT). However, since these are intrinsically far-from-equilibrium systems, there should be no general expectation that density fluctuations of motile microbes obey the CLT. Indeed, in dense active nematic systems, giant number fluctuations (GNF) with \(\mu =1\) are predicted80 and anomalous density fluctuations have been observed in simulations of self-propelled particles81,82 and experiments on driven granular matter13,83. Nevertheless, the scaling exponent \(\mu \) may depend on microscopic details, such as shape and motility mode, or surface coverage, as we will now demonstrate. For \(\phi < {\phi }^{\ast }\), the fluctuations are thermal-like with \(\mu =1/2\), as expected from the CLT for the gas-like phase (Fig. 11b). However, \(\mu \) is much closer to unity than \(\mathrm{1/2}\) in the large ϕ limit (Fig. 11b). The transition from the CLT to GNF occurs rapidly about \({\phi }^{\ast }\). The increased fluctuations can be interpreted as a result of twitchers clustering together in dense actively flowing regions with polydomains of orientational ordering, while leaving depleted windows of low density between actively motile clusters. Together these combine to cause \(\phi {\prime} (\overrightarrow{r},t;\phi )\) to swing from large to small values. In the small-\(\phi {\prime} \)/large-\(\phi \) limit, the fluctuations are actually suppressed, rather than enhanced, because whole rafts of twitchers are caged within the liquid phase regions (Fig. 11). Similar giant number fluctuations have, for example, been reported in dense ensembles of swimming B. subtilis74. Twitcher surface coverage fluctuations. (a) Fluctuations of the local coverage \(\Delta \phi {\prime} \) as a function of the local instantaneous coverage \(\phi {\prime} \) for various global coverage fractions ϕ. Reference scalings \(\Delta \phi {\prime} \sim \phi {{\prime} }^{\mu }\) for \(\mu =1/2\) and \(1\) (dashed lines) are expected in the thermal-like and active-nematic limits respectively.
Using coarse-grained and simplified simulations of bacilliforms with a stochastic motility cycle of rest, pilus extension and pilus retraction, we explored the collective behavior of twitchers as a function of surface coverage. Although our study greatly simplifies twitcher-type bacteria by neglecting species-specific and biologically mediated complexities, we find cooperative action arising from physical mechanisms across all scales. By analyzing the displacement statistics of individual model twitchers within the ensemble, we found that the intermediate-time shoulder in the mean squared displacement corresponding to twitchers in their resting period disappears at high coverage fraction, demonstrating that non-motile twitchers in their resting period are carried by the flow of their active neighbors. The MSD also showed that the short-time dynamics are slowed, with the effective mean squared velocity decreasing monotonically with coverage. However, the long-time dynamics, as measured by relative diffusivity, are non-monotonic, exhibiting an increase above a critical coverage fraction. This coverage fraction corresponds to the mean area per twitcher equaling the characteristic rotational area occupied by each bacilliform. These conclusions are more readily reached by employing the non-Gaussian parameter (NGP), which provides additional information on the dynamics of the twitchers. The non-Gaussian parameter loses its negative plateau with higher coverage, which provides evidence of non-motile resting twitchers being displaced by the flow of active twitchers. Additionally, by separating the contributions to the average velocity due to twitchers in the retraction and rest phases, we find motile and non-motile twitchers are indistinguishable at sufficiently high coverage, with their speeds converging to the mean. We can definitively conclude that not all cells must be motile in the collective clusters38. Furthermore, the increase of the short-time NGP with coverage fraction implies larger displacements than expected from a normal distribution. This is validated using the step-size displacement distributions (van Hove self-correlation functions), which exhibit longer tails than a normal distribution, indicating collective motion at higher coverage. While this does not imply twitchers exhibit bacterial turbulence25,84,85, it does reveal the early stages of collectivity in pre-biofilm twitching communities. From the correlation functions, we see that at low coverage twitchers microscopically arrange into co-moving polar-aligned pairs. At high coverage, however, the twitchers self-assemble into oriented local domains, which form heterogeneous ordered polydomains within larger liquid-like regions, similar to bacterial rafts observed in bacterial colonies78. Biologically observed rafts generally move radially outward from the colony along the local alignment of the cells, which are in tight contact. As in our model proto-rafts, direction can vary and individuals within a raft may instantaneously move against the local flow but are advected with the group. An important distinction exists between the proto-rafts in our simulations and biological rafts in P. aeruginosa colonies: cells left behind by biological rafts stretch, and the continuity of the community breaks into small aggregates or even a network79.
On the other hand, proto-rafts are free to simply move away from a larger cluster into a depleted region, forming a separate cluster, since our model bacilliforms interact only through steric, excluded-volume forces and not through signaling or other biological mechanisms. The transition from a purely dilute state with no clustering to a dense state with collective motion and non-homogeneous polydomains of local nematic ordering exhibits coexistence between the dilute and dense states. Such coexistence of separated phases appears to be a hallmark of self-propelled particles in general, not limited to twitching bacilliforms or self-propelled rods; it has been studied theoretically in terms of motility-induced phase separation86,87, in simulations of active Brownian particles18,19,88, self-propelled ballistic particles23 and kinetic Monte Carlo models89, and experimentally in systems of active spherical Janus colloids24. Similarly, our simulations quantify the giant number fluctuations and dynamic distributions of the coverage produced by twitching motility, which are likewise expected from active nematic systems83,90. While our model is simplified compared to the biological complexity of P. aeruginosa and other bacteria that employ twitching as a motility strategy, microscopic details of swimming motility have previously been shown to result in qualitative changes to collective dynamics26,91,92,93 and to the swarming-mode motility of P. aeruginosa94. Our well-defined microscopic model of the twitching-mode motility cycle captures the essential microscopic details that differentiate biologically relevant twitching motility from a purely idealized toy model of self-propelled rods, and demonstrates that twitching motility is sufficient to exhibit physically mediated collectivity, without requiring additional long-range complications such as photosensing and quorum sensing95, secretions56, or other forms of bacterial stigmergy96. Although lacking a clear signal in the first-order statistics of the mean squared displacement, the collectivity of twitchers above a critical coverage fraction can be directly quantified by higher-order statistics, including the non-Gaussian parameter, decorrelation lengths and the scaling of the fluctuations with local coverage. Such physically mediated collective properties may bestow an advantage on pre-biofilm communities of twitchers by allowing regions of high coverage to potentially seed the formation of biofilms, while continuously preserving a subpopulation of disengaged individuals that are free to explore the surface with effectively isolated twitcher dynamics. Simulation details Our coarse-grained model of motile microbes treats individual twitchers as stiff rod-like bacilliforms discretized into four spheres, with a non-integrated dummy particle representing the action of bacterial pili (Fig. 1a). At all times \(t\), each sphere \(i\) of mass \(m\) is located at a point \({\overrightarrow{x}}_{i}(t)\) and subject to thermal noise \({\overrightarrow{\xi }}_{i}(t)\), drag \(-\zeta {\dot{\overrightarrow{x}}}_{i}(t)\), and conservative forces \(-\overrightarrow{\nabla }V({\overrightarrow{x}}_{i},{\overrightarrow{x}}_{j\ne i})\) with other spheres \(j\ne i\).
Simulations are conducted using Langevin Dynamics57,58, evolving according to $$m{\ddot{\overrightarrow{x}}}_{i}=-\zeta {\dot{\overrightarrow{x}}}_{i}-\overrightarrow{\nabla }V+{\overrightarrow{\xi }}_{i}.$$ Since bacteria are microscopic in scale and subject principally to biological sources of noise (see Section Motility Cycle), the temperature of the Gaussian noise is set to an arbitrarily low value of \(T=2\times {10}^{-7}\) with the friction coefficient \(\zeta =1\). Simulations use an integration step of \(\Delta t=0.01\), such that \(100\) integration steps constitute \(\tau =1\) unit time step. Each simulation runs for \({10}^{8}\) integration steps in a 2-dimensional simulation box of size \(100\) with periodic boundaries. Individual twitchers To account for the excluded volume of twitchers, a shifted truncated Lennard-Jones (Weeks-Chandler-Andersen) potential acts between all integrated particle pairs \(\{i,j\}\): $$V_{\rm{WCA}}(r_{ij})=\begin{cases}4\varepsilon \left[\left(\dfrac{\sigma }{r_{ij}}\right)^{12}-\left(\dfrac{\sigma }{r_{ij}}\right)^{6}\right]+\varepsilon , & r_{ij} < r_{\rm{c}},\\ 0, & r_{ij}\ge r_{\rm{c}},\end{cases}$$ where \(r_{ij}=|{\overrightarrow{x}}_{i}-{\overrightarrow{x}}_{j}|\) is the separation between two particles. The particle size \(\sigma =1\) sets the length scale and the energy \(\varepsilon =1\) sets the energy scale. All quantities are expressed in terms of \(\sigma \), \(\varepsilon \), and \(\tau \). The cutoff \(r_{\rm{c}}={2}^{1/6}\) truncates the long-range potential and the added \(\varepsilon \) shifts it to zero at the cutoff. Each twitcher body is composed of four spheres, bonded together by finitely extensible nonlinear elastic (FENE) potentials $$V_{\rm{FENE}}(r_{ij})=-\frac{1}{2}k_{\rm{F}}R_{0}^{2}\ln \left(1-\left[\frac{r_{ij}}{R_{0}}\right]^{2}\right),$$ where \(R_{0}=1.5\) is the maximum extent of the bond and \(k_{\rm{F}}=50\) is a spring constant. Harmonic bonds keep twitchers rigid with \(k_{\rm{H}}=33\) in the potential $$V_{\rm{HARM}}({\theta }_{ijk})=\frac{k_{\rm{H}}}{2}({\theta }_{ijk}-{\theta }_{0})^{2},$$ which keeps the angle \({\theta }_{ijk}\) between three sequential particles tightly centered around \({\theta }_{0}=\pi \). Each twitcher body has a size \(L_{\rm{body}}=4\). Motility cycle We model the process of twitching with a stochastic rule-based motility cycle and a single dummy pilus particle that actively pulls the twitcher forward. There are three phases in the model twitcher motility cycle (Fig. 1c): The first is a rest phase, in which each twitcher does not undergo self-induced movement (Fig. 1c-1). In this rest phase, the pilus is not adhered to the surface and the twitcher only passively responds to external forces. A twitcher in the rest phase has a 10% chance per \(\tau \) of stochastically transitioning out of this phase. The second period is defined as the pilus extension phase, in which each twitcher notionally extends and then adheres its dummy pilus to the surface (Fig. 1c-2). This extension phase occurs over a set period of \(10\tau \). As in the rest phase, the twitcher does not undergo self-induced movement during the extension process. At the end of this phase, the dummy pilus is instantly fixed to a point a distance \(L_{0}=2.4\) away from the head particle, with an angle relative to the body stochastically drawn from a uniform distribution on \([-\pi /4,\pi /4]\).
The third phase is the retraction phase, in which the twitcher is actively motile (Fig. 1c-3). During this phase, the twitcher's head is pulled towards its fixed pilus adhesion point. A linear potential $$V_{\rm{PILI}}(r_{\gamma })=-k_{\rm{P}}(r_{\gamma }-r_{0}),$$ where \(r_{\gamma }(t)=|{\overrightarrow{x}}_{\gamma ,{\rm{H}}}-{\overrightarrow{x}}_{\gamma ,{\rm{P}}}|\) is the distance between the head at \({\overrightarrow{x}}_{\gamma ,{\rm{H}}}(t)\) and the pilus adhesion point \({\overrightarrow{x}}_{\gamma ,{\rm{P}}}(t)\) of the \({\gamma }^{{\rm{th}}}\) twitcher, is used to model the average force exerted by multiple pili59. The spring constant \(k_{\rm{P}}=1\) and \(r_{0}=0.2\) are the strength of the pilus force and the cutoff distance, respectively. The retraction phase ends when one of three conditions is met (a schematic code sketch of the full cycle is given at the end of this section): The twitcher reaches its pilus adhesion point. This is achieved if \(r_{\gamma }(t) < L_{\rm{R}}=0.2\) (Fig. 1c-3.i). The head of the twitcher is pushed too far from the adhesion point. This is said to occur if \(r_{\gamma }(t) > L_{\rm{S}}=3\), causing the pilus adhesion to "snap" (Fig. 1c-3.ii). The twitcher adhesion is exhausted. Since an unobstructed twitcher takes roughly \(10\tau \) to reach its pilus, \(t_{\rm{M}}=70\tau \) is chosen as the maximum time a twitcher can try to reach its pilus adhesion point before the adhesion fails (Fig. 1c-3.iii). Twitcher ensemble Many twitchers are modeled simultaneously, explicitly interacting only through excluded-volume repulsion. We define the 2D surface coverage fraction $$\phi =A_{\rm{twitch}}N/A_{\rm{box}}=3.7854\times {10}^{-4}N,$$ where \(A_{\rm{box}}={100}^{2}\) is the area of the box, \(A_{\rm{twitch}}=3.7854\) is the area of one twitcher taken to be a rod of length 4 with circular caps, and \(N\) is the number of twitchers in the simulation. This does not include the pili, which have no excluded volume. Our simulations span from the solitary twitcher system with \(N=1\) (\(\phi =4\times {10}^{-4}\)) to \(N=2000\) (\(\phi =0.76\)). To analyze the individual and collective dynamics of the ensemble, we consider the state of each twitcher. The position \({\overrightarrow{x}}_{\gamma }(t)\) and average velocity \({\overrightarrow{v}}_{\gamma }(t)\) over 1 time unit \(\tau \) of the \({\gamma }^{{\rm{th}}}\) twitcher are defined to be the center-of-mass values, \({\overrightarrow{x}}_{\gamma }(t)=\frac{1}{4}{\sum }_{i\in \gamma }{\overrightarrow{x}}_{i}(t)\) and \({\overrightarrow{v}}_{\gamma }(t)=\frac{1}{4}{\sum }_{i\in \gamma }{\dot{\overrightarrow{x}}}_{i}(t)\), with average speed \(v_{\gamma }(t)=|{\overrightarrow{v}}_{\gamma }(t)|\). In addition to the ensemble- and time-averaged speed \(v_{\rm{m}}\equiv \langle v\rangle \) of all twitchers, we consider the separate contributions due to twitchers in their self-motile retraction phase \(v_{\rm{a}}\equiv {\langle v\rangle }_{{\rm{retr}}}\) and their non-motile resting/extending phases \(v_{\rm{r}}\equiv {\langle v\rangle }_{{\rm{rest}}+{\rm{ext}}}\). The instantaneous direction of motion \({\hat{v}}_{\gamma }(t)={\overrightarrow{v}}_{\gamma }(t)/v_{\gamma }(t)\) of each twitcher does not necessarily align with its polar head/tail orientation \({\hat{p}}_{\gamma }(t)=({\overrightarrow{x}}_{\gamma ,{\rm{H}}}-{\overrightarrow{x}}_{\gamma ,{\rm{T}}})/|{\overrightarrow{x}}_{\gamma ,{\rm{H}}}-{\overrightarrow{x}}_{\gamma ,{\rm{T}}}|\), where \({\overrightarrow{x}}_{\gamma ,{\rm{T}}}\) is the tail position (Fig. 1a).
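As referenced above, the stochastic motility cycle can be summarized as a per-twitcher state machine. The following is a minimal sketch under the stated parameters; it is our own illustrative reading of the rules, not the authors' code, and it assumes the Langevin integration, thermal noise and excluded-volume forces are handled elsewhere by the host simulation.

```python
import numpy as np

REST, EXTEND, RETRACT = 0, 1, 2
P_LEAVE_REST = 0.10       # chance per tau of leaving the rest phase
T_EXTEND = 10.0           # duration of the extension phase (tau)
T_MAX_RETRACT = 70.0      # maximum retraction time t_M (tau)
L0, L_R, L_S, K_P = 2.4, 0.2, 3.0, 1.0

def update_motility(tw, head, orientation, dt, rng):
    """Advance one twitcher's motility state by dt (in tau) and return the pilus force on its head.

    tw is a dict such as {"state": REST, "clock": 0.0, "pilus": np.zeros(2)};
    head is the head-particle position and orientation is the unit body vector.
    """
    tw["clock"] += dt
    if tw["state"] == REST:
        if rng.random() < P_LEAVE_REST * dt:                 # discretized 10% per tau
            tw["state"], tw["clock"] = EXTEND, 0.0
    elif tw["state"] == EXTEND:
        if tw["clock"] >= T_EXTEND:
            angle = rng.uniform(-np.pi / 4, np.pi / 4)       # pilus angle relative to body
            c, s = np.cos(angle), np.sin(angle)
            rot = np.array([[c, -s], [s, c]])
            tw["pilus"] = head + L0 * rot @ orientation      # adhesion point fixed ahead of head
            tw["state"], tw["clock"] = RETRACT, 0.0
    else:  # RETRACT: pull the head toward the fixed adhesion point
        sep = tw["pilus"] - head
        r = np.linalg.norm(sep)
        if r < L_R or r > L_S or tw["clock"] >= T_MAX_RETRACT:
            tw["state"], tw["clock"] = REST, 0.0             # reached, snapped, or exhausted
        else:
            return K_P * sep / r                             # constant-magnitude pulling force
    return np.zeros(2)
```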
In addition to polar ordering, we will consider the nematic alignment of the twitchers denoted by \({\hat{n}}_{\gamma }(t)\equiv -\,{\hat{n}}_{\gamma }(t)\), disregarding parallel/anti-parallel differences. Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I. & Shochet, O. Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75, 1226 (1995). ADS MathSciNet CAS PubMed Google Scholar Toner, J. & Tu, Y. Flocks, herds, and schools: A quantitative theory of flocking. Phys. Rev. E 58, 4828 (1998). ADS MathSciNet CAS Google Scholar Aranson, I. S. & Tsimring, L. S. Pattern formation of microtubules and motors: Inelastic interaction of polar rods. Phys. Rev. E 71, 050901 (2005). ADS Google Scholar Peshkov, A., Aranson, I. S., Bertin, E., Chaté, H. & Ginelli, F. Nonlinear field equations for aligning self-propelled rods. Phys. Rev. Lett. 109, 268701 (2012). ADS PubMed Google Scholar Baskaran, A. & Marchetti, M. C. Hydrodynamics of self-propelled hard rods. Phys. Rev. E 77, 011920 (2008). ADS MathSciNet Google Scholar Baskaran, A. & Marchetti, M. C. Enhanced diffusion and ordering of self-propelled rods. Phys. Rev. Lett. 101, 268101 (2008). Kudrolli, A. Concentration dependent diffusion of self-propelled rods. Phys. Rev. Lett. 104, 088001 (2010). Bertin, E., Droz, M. & Grégoire, G. Boltzmann and hydrodynamic description for self-propelled particles. Phys. Rev. E 74, 022101 (2006). Bertin, E. et al. Mesoscopic theory for fluctuating active nematics. New J. Phys. 15, 085032 (2013). Ginelli, F., Peruani, F., Bär, M. & Chaté, H. Large-scale collective properties of self-propelled rods. Phys. Rev. Lett. 104, 184502 (2010). Bertin, E., Baskaran, A., Chaté, H. & Marchetti, M. C. Comparison between Smoluchowski and Boltzmann approaches for self-propelled rods. Phys. Rev. E 92, 042141 (2015). Saintillan, D. & Shelley, M. J. Active suspensions and their nonlinear models. Comptes Rendus Physique 14, 497–517 (2013). ADS CAS Google Scholar Shi, X.-Q. & Ma, Y.-Q. Topological structure dynamics revealing collective evolution in active nematics. Nat. Commun. 4, 3013 (2013). ADS PubMed PubMed Central Google Scholar Großmann, R., Peruani, F. & Bär, M. Mesoscale pattern formation of self-propelled rods with velocity reversal. Phys. Rev. E 94, 050602 (2016). Bär, M., Großmann, R., Heidenreich, S. & Peruani, F. Self-propelled rods: Insights and perspectives for active matter. Annu. Rev. Condens. Matter Phys. 11, null (2020). Siebert, J. T. et al. Critical behavior of active Brownian particles. Phys. Rev. E 98, 030601 (2018). Doostmohammadi, A., Shendruk, T. N., Thijssen, K. & Yeomans, J. M. Onset of meso-scale turbulence in active nematics. Nat. Commun. 8, 15326 (2017). ADS CAS PubMed PubMed Central Google Scholar Cates, M. E. & Tailleur, J. When are active Brownian particles and run-and-tumble particles equivalent? Consequences for motility-induced phase separation. Europhys. Lett. 101, 20010 (2013). Fischer, A., Chatterjee, A. & Speck, T. Aggregation and sedimentation of active Brownian particles at constant affinity. The J. Chem. Phys. 150, 064910 (2019). Mehandia, V. & Nott, P. R. The collective dynamics of self-propelled particles. J. Fluid Mech. 595, 239–264 (2008). ADS MATH Google Scholar Peruani, F., Deutsch, A. & Bär, M. Nonequilibrium clustering of self-propelled rods. Phys. Rev. E 74, 030904 (2006). Cates, M. E. & Tailleur, J. Motility-induced phase separation. Annu. Rev. Condens. Matter Phys. 6, 219–244 (2015). R. Bruss, I. & C. Glotzer, S. 
Phase separation of self-propelled ballistic particles. Phys. Rev. E 97 (2017). van der Linden, M. N., Alexander, L. C., Aarts, D. G. A. L. & Dauchot, O. Interrupted motility induced phase separation in aligning active colloids. Phys. Rev. Lett. 123, 098001 (2019). Wensink, H. H. et al. Meso-scale turbulence in living fluids. Proc. Natl. Acad. Sci. 109, 14308–14313 (2012). ADS CAS PubMed MATH Google Scholar Zöttl, A. & Stark, H. Hydrodynamics determines collective motion and phase behavior of active colloids in quasi-twodimensional confinement. Phys. Rev. Lett. 112, 118101 (2014). Matas-Navarro, R., Golestanian, R., Liverpool, T. B. & Fielding, S. M. Hydrodynamic suppression of phase separation in active suspensions. Phys. Rev. E 90, 032304 (2014). Zöttl, A. & Stark, H. Emergent behavior in active colloids. J. Physics: Condens. Matter 28, 253001 (2016). Koch, D. L. & Subramanian, G. Collective hydrodynamics of swimming microorganisms: Living fluids. Annu. Rev. Fluid Mech. 43, 637–659 (2011). ADS MathSciNet MATH Google Scholar Elgeti, J., Winkler, R. G. & Gompper, G. Physics of microswimmers—single particle motion and collective behavior: A review. Reports on Prog. Phys. 78, 056601 (2015). Saintillan, D. & Shelley, M. J. Theory of Active Suspensions, 319–355 (Springer New York, New York, NY, 2015). Mattingly, A. E., Weaver, A. A., Dimkovikj, A. & Shrout, J. D. Assessing travel conditions: environmental and host influences on bacterial surface motility. J. Bacteriol. 200, JB.00014–18 (2018). Kearns, D. A field guide to bacterial swarming motility. Nat. Rev. Microbiol. 8, 634–44 (2010). CAS PubMed PubMed Central Google Scholar Deforet, M., van Ditmarsch, D., Carmona-Fontaine, C. & Xavier, J. Hyperswarming adaptations in a bacterium improve collective motility without enhancing single cell motility. Soft Matter 10, 2405–13 (2014). Murray, T. & Kazmierczak, B. Pseudomonas aeruginosa exhibits sliding motility in the absence of type IV pili and flagella. J. Bacteriol. 190, 2700–8 (2008). Gibiansky, M. et al. Bacteria use type IV pili to walk upright and detach from surfaces. Science 330, 197 (2010). ADS CAS PubMed Google Scholar Jin, F., Conrad, J. C., Gibiansky, M. L. & Wong, G. C. L. Bacteria use type-IV pili to slingshot on surfaces. Proc. Natl. Acad. Sci. 108, 12617–12622 (2011). Burrows, L. L. Pseudomonas aeruginosa twitching motility: Type IV pili in action. Annu. Rev. Microbiol. 66, 493–520 (2012). Mierke, C. T. Physical view on migration modes. Cell Adhesion & Migr. 9, 367–379 (2015). PMID: 26192136. O'Toole, G. A. & Kolter, R. Flagellar and twitching motility are necessary for Pseudomonas aeruginosa biofilm development. Mol. Microbiol. 30, 295–304 (1998). Conrad, J. Physics of bacterial near-surface motility using flagella and type IV pili: Implications for biofilm formation. Res. Microbiol. 163 (2012). Muner Otton, L., da Silva Campos, M., Lena Meneghetti, K. & Corção, G. Influence of twitching and swarming motilities on biofilm formation in Pseudomonas strains. Arch. Microbiol. 199 (2017). Chiang, P. & Burrows, L. Biofilm formation by hyperpiliated mutants of Pseudomonas aeruginosa. J. Bacteriol. 185, 2374–8 (2003). Conrad, J. et al. Flagella and pili-mediated near-surface single-cell motility mechanisms in P. aeruginosa. Biophys. J. 100, 1608–1616 (2011). Klausen, M. et al. Biofilm formation by Pseudomonas aeruginosa wild type, flagella and type IV pili mutants. Mol. Microbiol. 48, 1511–24 (2003). Klausen, M., Aaes-Jørgensen, A., Molin, S. & Tolker-Nielsen, T. 
Involvement of bacterial migration in the development of complex multicellular structures in Pseudomonas aeruginosa biofilms. Mol. Microbiol. 50, 61–68 (2003). Mazza, M. G. The physics of biofilms—An introduction. J. Phys. D: Appl. Phys. 49, 203001 (2016). Madukoma, C. et al. Single cells exhibit differing behavioral phases during early stages of Pseudomonas aeruginosa swarming. J. Bacteriol. (2019). Merz, A. J., So, M. M. & Sheetz, M. P. Pilus retraction powers bacterial twitching motility. Nature 407, 98–102 (2000). Craig, L., Pique, M. E. & Tainer, J. A. Type IV pilus structure and bacterial pathogenicity. Nat. Rev. Microbiol. 2, 363–378 (2004). Skerker, J. M. & Berg, H. C. Direct observation of extension and retraction of type IV pili. Proc. Natl. Acad. Sci. 98, 6901–6904 (2001). de Haan, H. W. Modeling and simulating the dynamics of type IV pili extension of Pseudomonas aeruginosa. Biophys. J. 111 (2016). Oyama, N., Molina, J. J. & Yamamoto, R. Purely hydrodynamic origin for swarming of swimming particles. Phys. Rev. E 93, 043114 (2016). Dell'Arciprete, D. et al. A growing bacterial colony in two dimensions as an active nematic. Nat. Commun. 9 (2018). Pamp, S. J. & Tolker-Nielsen, T. Multiple roles of biosurfactants in structural biofilm development by Pseudomonas aeruginosa. J. Bacteriol. 189, 2531–2539 (2007). Gelimson, A. et al. Multicellular self-organization of P. aeruginosa due to interactions with secreted trails. Phys. Rev. Lett. 117, 178102 (2016). Anderson, J. A., Lorenz, C. D. & Travesset, A. General purpose molecular dynamics simulations fully implemented on graphics processing units. J. Comput. Phys. 227, 5342–5359 (2008). Glaser, J. et al. Strong scaling of general-purpose molecular dynamics simulations on GPUs. Comput. Phys. Commun. 192, 97–107 (2015). Lu, S. et al. Nanoscale pulling of type IV pili reveals their flexibility and adhesion to surfaces over extended lengths of the pili. Biophys. J. 108, 2865–2875 (2015). Talà, L., Fineberg, A., Kukura, P. & Persat, A. Pseudomonas aeruginosa orchestrates twitching motility by sequential control of type IV pili movements. Nat. microbiology 4, 774–780 (2019). Wang, B., Kuo, J., Bae, S. C. & Granick, S. When Brownian diffusion is not Gaussian. Nat. materials 11, 481–485 (2012). Guan, J., Wang, B. & Granick, S. Even hard-sphere colloidal suspensions display Fickian yet non-Gaussian diffusion. ACS nano 8, 3331–3336 (2014). Chubynsky, M. V. & Slater, G. W. Diffusing diffusivity: A model for anomalous, yet Brownian, diffusion. Phys. Rev. Lett. 113, 098302 (2014). Acharya, S., Nandi, U. K. & Maitra Bhattacharyya, S. Fickian yet non-Gaussian behaviour: A dominant role of the intermittent dynamics. The J. Chem. Phys. 146, 134504 (2017). Ghannad, Z. Fickian yet non-Gaussian diffusion in two-dimensional Yukawa liquids. Phys. Rev. E 100, 033211 (2019). Cherstvy, A. G., Thapa, S., Wagner, C. E. & Metzler, R. Non-Gaussian, non-ergodic, and non-Fickian diffusion of tracers in mucin hydrogels. Soft Matter 15, 2526–2551 (2019). Metzler, R. Superstatistics and non-Gaussian diffusion. The Eur. Phys. J. Special Top. 229, 711–728 (2020). Vorselaars, B., Lyulin, A. V., Karatasos, K. & Michels, M. A. J. Non-Gaussian nature of glassy dynamics by cage to cage motion. Phys. Rev. E 75, 011504 (2007). Evers, F. et al. Particle dynamics in two-dimensional random-energy landscapes: Experiments and simulations. Phys. Rev. E 88, 022125 (2013). Ghosh, S., Cherstvy, A., Grebenkov, D. & Metzler, R. 
Anomalous, non-Gaussian tracer diffusion in crowded twodimensional environments. New J. Phys. 18, 013027 (2016). Höfling, F. & Franosch, T. Anomalous transport in the crowded world of biological cells. Reports on Prog. Phys. 76, 046602 (2013). Moscaritolo, R. et al. Quantifying the dynamics of bacterial crowd surfing. Bull. Am. Phys. Soc. 58 (2013). Kumar, N., Soni, H., Ramaswamy, S. & Sood, A. Flocking at a distance in active granular matter. Nat. Commun. 5, 1–9 (2014). Zhang, H. P., Be'er, A., Florin, E.-L. & Swinney, H. L. Collective motion and density fluctuations in bacterial colonies. Proc. Natl. Acad. Sci. 107, 13626–13630 (2010). Sokolov, A. & Aranson, I. S. Physical properties of collective motion in suspensions of bacteria. Phys. Rev. Lett. 109, 248109 (2012). Ryan, S. D., Sokolov, A., Berlyand, L. & Aranson, I. S. Correlation properties of collective motion in bacterial suspensions. New J. Phys. 15, 105021 (2013). Wioland, H., Lushi, E. & Goldstein, R. E. Directed collective motion of bacteria under channel confinement. New J. Phys. 18, 075002 (2016). You, Z., Pearce, D. J., Sengupta, A. & Giomi, L. Geometry and mechanics of microdomains in growing bacterial colonies. Phys. Rev. X 8, 031065 (2018). Semmler, A. B., Whitchurch, C. B. & Mattick, J. S. A re-examination of twitching motility in Pseudomonas aeruginosa. Microbiology 145, 2863–2873 (1999). Ramaswamy, S., Simha, R. A. & Toner, J. Active nematics on a substrate: Giant number fluctuations and long-time tails. Europhys. Lett. 62, 196 (2003). Chaté, H., Ginelli, F., Grégoire, G., Peruani, F. & Raynaud, F. Modeling collective motion: Variations on the Vicsek model. The Eur. Phys. J. B 64, 451–456 (2008). Chaté, H., Ginelli, F., Grégoire, G. & Raynaud, F. Collective motion of self-propelled particles interacting without cohesion. Phys. Rev. E 77, 046113 (2008). Narayan, V., Ramaswamy, S. & Menon, N. Long-lived giant number fluctuations in a swarming granular nematic. Science 317, 105–108 (2007). Overhage, J., Bains, M., Brazas, M. D. & Hancock, R. E. Swarming of Pseudomonas aeruginosa is a complex adaptation leading to increased production of virulence factors and antibiotic resistance. J. Bacteriol. 190, 2671–2679 (2008). Stenhammar, J., Tiribocchi, A., Allen, R. J., Marenduzzo, D. & Cates, M. E. Continuum theory of phase separation kinetics for active Brownian particles. Phys. Rev. Lett. 111, 145702 (2013). Gonnella, G., Marenduzzo, D., Suma, A. & Tiribocchi, A. Motility-induced phase separation and coarsening in active matter. Comptes Rendus Physique 16, 316–331 (2015). Digregorio, P. et al. Full phase diagram of active Brownian disks: From melting to motility-induced phase separation. Phys. Rev. Lett. 121, 098003 (2018). Klamser, J. U., Kapfer, S. C. & Krauth, W. Thermodynamic phases in two-dimensional active matter. Nat. Commun. 9, 1–8 (2018). Mishra, S. Giant number fluctuation in the collection of active apolar particles: From spheres to long rods. J. Stat. Mech. Theory Exp. 2014, P07013 (2014). MathSciNet Google Scholar Blaschke, J., Maurer, M., Menon, K., Zöttl, A. & Stark, H. Phase separation and coexistence of hydrodynamically interacting microswimmers. Soft Matter 12, 9821–9831 (2016). Schwarzendahl, F. J. & Mazza, M. G. Hydrodynamic interactions dominate the structure of active swimmers' pair distribution functions. The J. Chem. Phys. 150, 184902 (2019). Schwarzendahl, F. J. & Mazza, M. G. Percolation transition of pusher-type microswimmers. arXiv preprint arXiv:1908.10631 (2019). Yang, A., Tang, W. S., Si, T. 
& Tang, J. X. Influence of physical effects on the swarming motility of Pseudomonas aeruginosa. Biophys. J. 112, 1462–1471 (2017). Mukherjee, S., Jemielita, M., Stergioula, V., Tikhonov, M. & Bassler, B. L. Photosensing and quorum sensing are integrated to control Pseudomonas aeruginosa collective behaviors. PLOS Biol. 17, 1–26 (2019). Gloag, E. S., Turnbull, L. & Whitchurch, C. B. Bacterial stigmergy: An organising principle of multicellular collective behaviours of bacteria. Scientifica 2015 (2015). We thank Lori Burrows, John Dutcher, Christina Kurzthaler, Dominique Limoli, Maxime Deforet, and Robert Wickham for useful discussions. H.W.d.H. gratefully acknowledges funding from the Natural Sciences and Engineering Research Council (NSERC) in the form of Discovery Grant No. 2014-06091. This research has received funding (TNS) from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 851196). University of Ontario Institute of Technology, Faculty of Science, 2000 Simcoe Street North, Oshawa, Ontario, L1H 7K4, Canada Andrew M. Nagel, Michael Greenberg & Hendrick W. de Haan Interdisciplinary Centre for Mathematical Modelling and Department of Mathematical Sciences, Loughborough University, Loughborough, Leicestershire, LE11 3TU, UK Tyler N. Shendruk School of Physics and Astronomy, The University of Edinburgh, Peter Guthrie Tait Road, Edinburgh, EH9 3FD, UK Andrew M. Nagel Hendrick W. de Haan H.W.d.H. conceived the research. A.N. and M.G. designed and implemented the model. A.N. conducted the statistical measurements. A.N., T.N.S. and H.W.d.H. analyzed the results, and wrote and reviewed the manuscript. Corresponding authors Correspondence to Tyler N. Shendruk or Hendrick W. de Haan. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary Information. Supplementary Information2. Nagel, A.M., Greenberg, M., Shendruk, T.N. et al. Collective Dynamics of Model Pili-Based Twitcher-Mode Bacilliforms. Sci Rep 10, 10747 (2020). https://doi.org/10.1038/s41598-020-67212-1 Accepted: 27 May 2020
Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology 25.07.2017 | Review | Issue 5/2017 | Open Access Zu-De Zhou, Lin Gui, Yue-Gang Tan, Ming-Yao Liu, Yi Liu, Rui-Ya Li Supported by National Natural Science Foundation of China (Grant No. 51475343), and International Science and Technology Cooperation Program of China (Grant No. 2015DFA70340). The history of the study of the machine tool thermal error is close to a century long. There is still no solution to the thermal error problem with modern high precision CNC machine tools. Most research about the machine tool thermal error has focused on establishing the relationship between the temperature field and thermal error of machine tools, but no solution has yet proven itself in industrial application. Since there have been no new technological breakthroughs in the experimental studies on the thermal error, traditional electrical testing and laser measurement technology are commonly used. The research objects in thermal error testing are usually small and medium-sized CNC machine tools. There is relatively less research on heavy-duty CNC machine tools. Heavy-duty CNC machine tools are pivotal pieces of equipment in many advanced manufacturing industries, such as aerospace, energy, petrochemicals, rail transport, shipbuilding, and ocean engineering. They are widely used in the machining of large parts and high-end equipment, such as steam turbines, large nuclear pumps, marine propellers, and large aircraft wings [1]. Improving the machining precision of heavy-duty CNC machine tools is of great significance to comprehensively improving the efficiency of steam turbine units, extending the life of the nuclear power shaft system, reducing the noise of submarine propulsion, reducing the resistance of flight, and so on. The machining error of heavy-duty CNC machine tools can be classified into five parts: 1) Geometric errors produced by machine parts' manufacturing and assembly; 2) Thermal-induced deformation errors caused by internal and external heat sources; 3) Force-induced deformation errors caused by the cutting force, clamping force, machine tool's own gravity, etc.; 4) Control errors caused by issues such as the response lag, positioning detection error of the servo system, CNC interpolation algorithm, etc.; 5) Tool wear and the high-frequency chatter of the machine tool. The proportion of the thermal deformation error is often the largest for high precision CNC machine tools. In precision manufacturing, the thermal deformation error accounts for about 40%–70% of the total machining errors [2]. In 1933, the influence of heat on precision part processing was noticed for the first time [3]. A number of qualitative analyses and comparison tests were carried out between the 1930s and the 1960s. It was not until the 1970s that researchers used the finite element method (FEM) for machine tool thermal deformation calculations and the optimization of machine tool design. CNC thermal error compensation technology appeared in the late 1970s. After the 1990s, thermal error compensation technology rapidly developed, and many research institutions conducted in-depth studies on the thermal error compensation technology of CNC machine tools based on temperature measurements [4–14].
As shown in Figure 1, real-time compensation of thermal error for a CNC machine tool consists of two steps. First, extensive experiments are carried out on the CNC machine tool to collect the CNC data, the body temperature of the machine tool, the ambient temperature, and the thermal error of the cutting tool tip, in order to establish a thermal error prediction model, typically a multiple linear regression (MLR) model, an artificial neural network (ANN) model, or a genetic algorithm (GA) based model (shown in Figure 1(a)). Then, the established thermal error prediction model is applied to the CNC machine tool to compensate the error at the tool center point (TCP) using the real-time CNC data and temperature data (shown in Figure 1(b)). Ideology of real-time compensation of thermal error for CNC machine tool Over the past few decades, the International Organization for Standardization (ISO) promulgated a series of standards: ISO 230-3 (thermal deformation of the machine tool) [15], ISO 10791-10 (thermal deformation of the machining center) [16], and ISO 13041-8 (thermal distortion of the turning center) [17]. These standards provide systematic analysis methods for machine tool thermal behavior. Compared to small and medium-sized CNC machine tools, heavy-duty CNC machine tools have unique structural and thermal characteristics, including the following items: Larger and heavier moving parts, like the spindle box, moving beam, and moving workbench; Larger and more complex support structures, such as the machine tool base, column, and beam; More decentralized internal heat sources in 3-D (3-dimensional) space; Greater susceptibility to environmental temperature shifts. As the temperature varies over time and the moving parts are heavy, the thermal and mechanical errors are strongly coupled, making the thermal deformation mechanism more complicated and the optimization of the structural design more difficult. As heavy-duty CNC machine tools are more susceptible to environmental temperature shifts (due to the large volume, small changes of environmental temperature can cause noteworthy accumulations of thermal expansion of the machine tool structure in 3-D space), the robustness of the thermal error prediction model of heavy-duty CNC machine tools is more difficult to control. Monitoring technologies related to the thermal error study of heavy-duty CNC machine tools are an important foundation of the research on the machine tool thermal error mechanism and the establishment of a thermal error prediction model. These monitoring technologies include the temperature field monitoring technologies and the thermal deformation monitoring technologies. Further, the thermal deformation monitoring consists of the position error monitoring of the cutting tool tip and the thermal deformation field monitoring of the large structural parts of the machine tool. Because of the unique structural and thermal characteristics of the heavy-duty CNC machine tools mentioned above, thermal error monitoring of heavy-duty machine tools differs from that of other machine tools in three main aspects: 1) In terms of temperature field monitoring, as heavy-duty CNC machine tools have a large volume and dispersed heat sources, more temperature measuring points are needed in order to establish an accurate temperature field distribution.
Additionally, the installation positions of temperature sensors are more difficult to determine, and the optimization of the temperature measuring points is more complex; 2) In terms of thermal deformation monitoring, position error monitoring of the cutting tool tip is largely similar between heavy-duty machine tools and other machine tools. However, for thermal deformation field monitoring of the large structural parts, heavy-duty CNC machine tools face greater challenges. The existing machine tool deformation detection techniques are mostly based on displacement detection instruments, which only detect the displacement of one point or a few points of the machine tool structure. These methods estimate the deformation by interpolation. As the structural parts of the heavy-duty CNC machine tool are larger, more conventional displacement sensors, or displacement measurement instruments with a wide spatial measurement range, are needed to reconstruct the whole thermal deformation of the structures. Additionally, as the moving parts of the heavy-duty CNC machine tool are much heavier than those of small and medium-sized CNC machine tools, when the machine tool works, the settlement deformation and vibration of the reinforced concrete foundation are more serious and intractable, which directly reduces the displacement measurement accuracy; 3) The processing environment of the heavy-duty CNC machine tool is generally worse than that of small and medium-sized CNC machine tools. Traditional electric sensors can be easily influenced by the work environment. Humidity, dust, oil pollution, and electromagnetic interference all reduce the sensors' performance stability and reliability. The long-term thermal error monitoring of heavy-duty CNC machine tools requires better environmental adaptability and higher reliability of the related sensors. In order to solve the thermal issues of heavy-duty CNC machine tools, we need to analyze the causes of thermal error of the machine tool and then carry out an in-depth study on the thermal deformation mechanism based on the existing theory and thermal deformation detection technology. In addition, we need to summarize the existing monitoring technologies and provide new technical support for thermal error research on heavy-duty CNC machine tools. Currently, there are many reviews on the thermal error of CNC machine tools [2, 18–26], but these papers mainly focus on the thermal issues in small and medium-sized CNC machine tools and seldom introduce thermal error monitoring technologies. This paper focuses on the study of thermal error of the heavy-duty CNC machine tool and emphasizes its thermal error monitoring technology. First, the causes of thermal error of the heavy-duty CNC machine tool are discussed in Section 2, where the heat generation in the spindle and feed system and the environmental temperature influence are introduced. Then, the temperature monitoring technology and thermal deformation monitoring technology are reviewed in detail in Sections 3 and 4, respectively. Finally, in Section 5, the application of a new optical measurement technology, "fiber Bragg grating distributed sensing technology", to heavy-duty CNC machine tools is discussed. This technology forms the basis of an intelligent sensing and monitoring system for heavy-duty CNC machine tools and opens up new areas of research on the heavy-duty CNC machine tool thermal error.
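Before turning to the causes of thermal error, the two-step compensation scheme of Figure 1 can be made concrete with a small sketch. The following fits an MLR thermal error model from temperature readings at a few measuring points and the measured tool-tip drift, and then predicts the drift to be compensated; the data, variable names and least-squares formulation are illustrative assumptions, not taken from any of the cited studies.

```python
import numpy as np

def fit_mlr(T_train, z_train):
    """Fit delta_z ~ b0 + sum_k b_k * T_k by ordinary least squares.

    T_train : (n_samples, n_sensors) temperatures at the selected measuring points
    z_train : (n_samples,) measured thermal drift of the cutting tool tip
    """
    X = np.hstack([np.ones((len(T_train), 1)), T_train])   # add intercept column
    coef, *_ = np.linalg.lstsq(X, z_train, rcond=None)
    return coef

def predict_drift(coef, T_now):
    """Predicted drift for real-time compensation at the TCP."""
    return coef[0] + T_now @ coef[1:]

# Illustrative synthetic example with 4 temperature measuring points
rng = np.random.default_rng(0)
T_train = 20.0 + 5.0 * rng.random((200, 4))
z_train = 2.0 + 1.5 * T_train[:, 0] - 0.8 * T_train[:, 2] + rng.normal(0, 0.1, 200)
coef = fit_mlr(T_train, z_train)
print(predict_drift(coef, np.array([24.0, 22.5, 23.1, 21.8])))
```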
2 Causes of Thermal Error of Heavy-Duty CNC Machine Tool 2.1 Classification of the Heat Sources The fundamental causes of thermal error of the heavy-duty CNC machine tool are related to the internal and external heat sources. Internal heat sources Heat generated from friction in the spindle, ball screws, gearbox, guides, and other machine parts; Heat generated from the cutting process; Heat generated from energy loss in the motors, electric circuits, and hydraulic system; Cooling influences provided by the various cooling systems. External heat sources Environmental temperature variation; Thermal radiation from the sun and other light sources. For the internal heat sources, the heat generated from the spindle and ball screws has a significant influence on the heavy-duty CNC machine tools and appears frequently in the literature. The heating mechanism, thermal distribution, and thermally induced deformation are often researched by theoretical and experimental methods. For the external heat sources, the dynamic variation pattern of the environmental temperature and its individual influence and combined effects with internal heat sources on thermal error of heavy-duty CNC machine tools are studied. 2.2 Heat Generated in the Spindle 2.2.1 Thermal Model of the Supporting Bearing The thermogenesis of the rolling bearings, namely the rolling bearing power loss \(N_{\text{f}}\), is generally calculated by taking the rolling bearing as a whole. It is the product of the rolling bearing friction torque \(M_{\text{f}}\) and the angular velocity of the inner ring of the bearing: $$N_{\text{f}} = M_{\text{f}} \cdot \pi \cdot n_{i} /30,$$ where \(n_{i}\) is the rotating speed of the spindle in Eq. (1). Palmgren [27, 28] developed the empirical formula for the rolling bearing friction torque \(M_{\text{f}}\) based on experimental tests: $$M_{\text{f}} = M_{\text{l}} + M_{\text{v}},$$ $$M_{\text{l}} = f_{1} P_{1} D_{\text{m}},$$ $$M_{\text{v}} = \left\{ \begin{array}{ll} 10^{3} f_{0} (vn_{i})^{2/3} D_{\text{m}}^{3}, & vn_{i} \ge 2 \times 10^{-3},\\ 16 f_{0} D_{\text{m}}^{3}, & vn_{i} < 2 \times 10^{-3}, \end{array} \right.$$ where \(M_{\text{l}}\) and \(M_{\text{v}}\) are the load friction torque and viscous friction torque, respectively, in Eq. (2), \(D_{\text{m}}\) is the pitch diameter of the bearing, and \(v\) is the kinematic viscosity of the lubricating oil. \(f_{0}\), \(f_{1}\) and \(P_{1}\) are related to the bearing type in Eqs. (3) and (4). Atridage [29] modified Palmgren's equation to take the effect of the lubricating oil flow into consideration. Stein and Tu [30] modified Palmgren's equation to consider the effect of the induced thermal preload. The above models calculate the thermogenesis value of the rolling bearing as a whole, and they do not involve the surface friction power loss calculation of the individual internal components of the bearing. Rumbarger, et al. [31], used the established fluid traction torque model to calculate the friction power loss of the bearing roller, cage, and inner and outer ring raceway, respectively, but their model ignored the heating mechanism differences between the local heat sources. Chen, et al. [32], calculated the total thermogenesis of the bearing from the local heat sources with different heating mechanisms.
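A small numerical sketch of Eqs. (1)–(4) is given below. The coefficients f0, f1 and P1 and the unit conventions must be taken from bearing catalogues and the original references for any real case; the values used in the example call are placeholders only.

```python
import math

def bearing_power_loss(n_i, D_m, v, f0, f1, P1):
    """Rolling-bearing power loss N_f from Palmgren's friction torque model, Eqs. (1)-(4).

    n_i : spindle speed (r/min); D_m : bearing pitch diameter;
    v   : kinematic viscosity of the lubricating oil;
    f0, f1, P1 : bearing-type and load dependent coefficients (placeholders here).
    """
    M_l = f1 * P1 * D_m                                    # load friction torque, Eq. (3)
    if v * n_i >= 2e-3:                                    # viscous friction torque, Eq. (4)
        M_v = 1e3 * f0 * (v * n_i) ** (2.0 / 3.0) * D_m ** 3
    else:
        M_v = 16.0 * f0 * D_m ** 3
    M_f = M_l + M_v                                        # total friction torque, Eq. (2)
    return M_f * math.pi * n_i / 30.0                      # power loss, Eq. (1)

# Example call with placeholder values
print(bearing_power_loss(n_i=3000.0, D_m=0.065, v=3.2e-5, f0=2.0, f1=0.0009, P1=1500.0))
```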
Moorthy and Raja [33] calculated the thermogenesis value of the local heat sources; they also took into consideration the change in the diametral clearance after assembly and during operation that was attributed to the thermal expansion of the bearing parts, which influenced the gyroscopic and spinning moments contributing to the heat generation. Hannon [34] detailed the existing thermal models for the rolling-element bearing. 2.2.2 Thermal Distribution in the Spindle When studying the temperature field distribution of the spindle, the cause-and-effect model and power flow model should first be analyzed, to determine the heat sources and the heat transfer network. Then the heat transfer parameters should be determined, including the heat transfer coefficients of the materials, the thermal contact resistances between the contact surfaces, and the heat transfer film coefficients. The heat transfer coefficients are relatively easy to obtain. The thermal contact resistances between the contact surfaces are concerned with the surface roughness and contact force, and are often obtained by experimental methods [35]. Heat convection within the housing is the most difficult to describe, so a rough approximation is often used for the heat transfer film coefficient [28]: $$h_{\text{v}} = 0.0332 k Pr^{1/3} \left( \frac{u_{\text{s}}}{v_{0} x} \right)^{1/2},$$ where \(u_{\text{s}}\) equals the bearing cage surface velocity, \(x\) equals the bearing pitch diameter, \(v_{0}\) represents the kinematic viscosity and \(Pr\) is the Prandtl number of the oil in Eq. (5). It is important to note that for different heat convection objects, the transfer film coefficients have different expressions. Bossmanns and Tu [36, 37] illustrated the detailed causes and effects of the spindle variables (shown in Figure 2), presented the power flow model (shown in Figure 3), and developed a finite difference thermal model to characterize the power distribution of a high speed motorized spindle. The heat transfer coefficients, thermal contact resistances, and heat convection coefficients were all calculated in their analytical method in detail. More research about the thermal resistance network of the bearing assembly can be found in Refs. [38, 39]. Cause-and-effect model for spindle [36] Finite element spindle model marked with the heat sources and sinks [37] As the real structure of a spindle box is complicated, the finite difference method (FDM) and FEM are often preferred to obtain accurate results. Jedrzejewski, et al. [40], set up a thermal analysis model of a high precision CNC machining center spindle box using a combination of the FEM and the FDM. Refs. [41, 42] created an axially symmetric model for a single shaft system with one pair of bearings using the FEM to estimate the temperature distribution of the whole spindle system. 2.3 Heat Generated in the Feed Screw Nuts The thermal deformation of the feed screw nuts affects the linear position error of heavy-duty machine tools. The axial thermal errors become larger as the runtime of the feed system increases. However, after running for a period of time, the feed system approaches thermal balance and reaches an approximately steady state, and the variations of thermal errors ease. The variation of radial thermal expansion of the feed screw nuts is so minor that it may be ignored [43].
In the ball screw system, the heat generation sources are the nuts and 2 bearings, and the heat loss sources are liquid cooling and surface convection (shown in Figure 4) [44, 45]. The thermal balance equation can be expressed by $$Q_{\text{b1}} + Q_{\text{b2}} + Q_{\text{n}} - Q_{\text{sc}} - Q_{\text{c}} = \rho c V \frac{\partial T}{\partial t},$$ where \(Q_{\text{b1}}\) and \(Q_{\text{b2}}\) are the conduction heat from the 2 support bearings, \(Q_{\text{n}}\) is the conduction heat from the nut, \(Q_{\text{sc}}\) is the convection heat from rotation of the ball screw shaft, and \(Q_{\text{c}}\) is the convection heat lost to the cooling liquid. The material density is noted as \(\rho \), \(c\) is the specific heat, \(V\) is the volume, \(T\) is the temperature, and \(t\) is the time. Schematic diagram of the screw nuts thermal model The conduction heat from the 2 support bearings, \(Q_{\text{b1}}\) and \(Q_{\text{b2}}\), can be calculated as shown in Section 2.2. The conduction heat from the nut \(Q_{\text{n}}\) can be defined as below: $$Q_{\text{n}} = 0.12\pi f_{0\text{n}} v_{0\text{n}} n_{\text{n}} M_{\text{n}},$$ where \(f_{0\text{n}}\) is a factor related to the nut type and method of lubrication, \(v_{0\text{n}}\) is the kinematic viscosity of the lubricant, \(n_{\text{n}}\) is the screw rotation velocity, and \(M_{\text{n}}\) is the total frictional torque of the nut (preload and dynamic load) [44]. Mayr, et al. [46], established the equivalent thermal network model of the ball screw with an analytical method. Xu, et al. [44, 47], discovered that, in the case of a large stroke, the heat produced by the moving nut was dispersed on a larger scale than in other cases, so the screw cooling method has better deformation performance than the nut cooling method. Conversely, in the case of a small stroke, the thermal deformation performance of the nut cooling method is better than that of the screw cooling method. Some researchers [4, 48, 49] developed FEM models for the screw, in which the strength of the heat source measured by the temperature sensors was applied to the FEM model to calculate the thermal errors of the feed drive system. Jin, et al. [50–52], presented an analytical method to calculate the heat generation rate of a ball bearing in the ball screw/nut system with respect to the rotational speed and load applied to the feed system. 2.4 Environmental Temperature Effects Environmental temperature fluctuation changes the temperature of the heavy-duty CNC machine tool globally, and affects its machining accuracy more strongly than that of small and medium-sized machine tools [1]. Environmental temperature fluctuations have daily periodicity and seasonal periodicity simultaneously. Tan, et al. [53], decomposed the environmental temperature fluctuations into Fourier series form (shown in Figure 5). \(x\) represents time in minutes. The basic angular frequency is \(\omega_{0} = 2\pi /T_{0}\), where \(T_{0} = 1440\) min. \(A_{0}\) is the average value of the daily temperature cycle to which the current temperature belongs, and it can be obtained from the temperature history through time series analysis. \(A_{n}\) represents the amplitude of the temperature fluctuation for each order, and the orders are multiples of the basic frequency \(\omega_{0}\). \(\phi_{n}\) is the initial phase of each order. Tan, et al.'s experiments verify that the environmental temperature has a significant impact on the thermal error of the heavy-duty CNC machine tool, and that there exists a hysteresis time between the environmental temperature and the corresponding thermal deformation that changes with climate and seasonal weather.
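To illustrate the kind of decomposition described above, the following is a minimal sketch of extracting daily-cycle harmonics from an ambient temperature record by least-squares fitting to \(A_{0} + \sum_{n} A_{n}\cos(n\omega_{0} x + \phi_{n})\) with \(\omega_{0} = 2\pi/1440\) min\(^{-1}\). The exact functional form and fitting procedure used in Ref. [53] may differ; this is only one plausible reading of the variables defined in the text.

```python
import numpy as np

def daily_harmonics(t_min, temp, n_orders=3, T0=1440.0):
    """Least-squares fit of ambient temperature to A0 + sum_n An*cos(n*w0*t + phi_n)."""
    w0 = 2.0 * np.pi / T0
    cols = [np.ones_like(t_min)]
    for n in range(1, n_orders + 1):
        cols += [np.cos(n * w0 * t_min), np.sin(n * w0 * t_min)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
    A0 = coef[0]
    A = np.hypot(coef[1::2], coef[2::2])           # amplitude A_n of each order
    phi = np.arctan2(-coef[2::2], coef[1::2])      # initial phase phi_n of each order
    return A0, A, phi

# Example: two days of synthetic ambient temperature sampled every 10 minutes
t = np.arange(0, 2 * 1440, 10.0)
temp = 21.0 + 2.5 * np.cos(2 * np.pi * t / 1440 + 0.6) + 0.1 * np.random.randn(t.size)
print(daily_harmonics(t, temp))
```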
Time-frequency characteristics of environmental temperature [53] Zhang, et al. [54], established the thermal error transfer function of each object of the machine tool based on the heat transfer mechanism. Then, based on the assembly dimension chain principle, the thermal error transfer function of the whole machine tool was obtained. As the thermal error transfer function can be deduced using the Laplace transform, the thermal error characteristics of the machine tool can be studied with both time domain and frequency domain methods. Taking the environmental temperature fluctuations as input, based on the thermal error transfer function, the environmental temperature induced thermal error can be obtained. 2.5 Thermal Analysis of the Global Machine Tool The heat generated in the spindle and feed screw nuts is discussed in Sections 2.2 and 2.3. The global thermal deformation of heavy-duty machine tools is influenced by a variety of heat sources, as noted at the beginning of Section 2. The FEM is often utilized for the thermal analysis of the global machine tool. Mian, et al. [55], presented a novel offline technique using finite element analysis (FEA) to simulate the effects of the major internal heat sources, such as bearings, motors and belt drives, and the effects of the ambient temperature variation during the machine's operation. For this FEA model, the thermal boundary conditions were tested using 71 temperature sensors. To ensure the accuracy of the results, experiments were conducted to obtain the thermal contact conductance values. Mian, et al. [56], further studied the influence of the ambient temperature variation on the deformation of the machine tool using FEM. The validation work was carried out over a period of more than a year to establish the robustness to seasonal changes and daily changes in order to improve the accuracy of the thermal simulation of machine tools. Zhang, et al. [57], proposed a whole-machine temperature field and thermal deformation modeling and simulation method for vertical machining centers. Mayr, et al. [58–60], combined the advantages of the FDM and FEA to simulate the thermo-mechanical behavior of machine tools (shown in Figure 6). Schematic of the FDEM [58] The transient 3-D temperature distribution at discrete points of time during the simulated period was calculated using the FDM. These distributions were then used as the temperature field input for the FEM to calculate the thermally induced deformations. 3 Temperature Field Monitoring Technology for Heavy-Duty CNC Machine Tools The formation process of thermal errors in heavy-duty CNC machine tools occurs in the following steps: heat sources→temperature field→thermal deformation field→thermal error. It is obvious that the relationship between the thermal deformation field and the thermal error is more direct than the relationship between the temperature field and the thermal error. However, it is quite difficult to measure the micro thermal deformation of the whole machine structure directly, whereas the surface temperature of the machine tool is much easier to obtain. Existing thermal error prediction models are mostly based on temperature measurements from the surface of the machine tool, establishing the relationship between the thermal drift of the cutting tool tip and the temperatures at critical measuring points. Therefore, the temperature monitoring of the machine tool is a key technology in the thermal error research of CNC machine tools.
It can be divided into contact-type temperature measurement and non-contact temperature measurement, according to the installation form of the temperature sensor. 3.1 Contact-Type Temperature Measurement of Heavy-Duty CNC Machine Tools The contact-type surface temperature measurement sensors used in the temperature monitoring of CNC machine tools are mainly thermocouples and platinum resistance temperature detectors (RTDs). Their installation can be divided into the paste-type, pad-type, and screw-type. Thermocouples and platinum resistance temperature detectors are mostly used for discrete surface temperature measurement. Heavy-duty CNC machine tools have a large volume and decentralized internal heat sources. Delbressine, et al. [61], realized that it was difficult to determine the locations and quantities of the temperature sensors, so numerous temperature sensors should be arranged on the surface of the machine tools. Mian, et al. [55], used 65 temperature sensors to measure the detailed temperature gradient caused by the internal heat sources, and applied them to FEM. Zhang, et al. [57], used 32 platinum resistance temperature sensors to establish a machine tool temperature field. In 2014, Tan, et al. [53], installed 33 temperature sensors on the heavy-duty gantry type machine tool XK2650 (shown in Figure 7), and established a thermal error prediction model that considered the influence of the environmental temperature. This model can predict 85% of the thermal error and has good robustness. In addition, Refs. [62–68] used thermal resistances to measure the machine tool surface temperature, and Refs. [69, 70] used thermocouples. Werschmoeller and Li [71] embedded 10 miniature sheet-type thermocouples into the cutting tool to monitor the cutting tool temperature field. Liu, et al. [72], embedded thermocouples into the workpiece to investigate the workpiece temperature variations that resulted from helical milling. Temperature monitoring of the heavy-duty gantry type machine tool XK2650 [53] These electrical temperature-sensing technologies mainly utilize the linear relationship between the values of the potential, resistance, or other electrical parameters of the sensing materials and temperature to detect the temperature. They have a simple structure, fast response capability, high sensitivity, and good stability. So they play an important role in the experimental research on thermal error of heavy-duty CNC machine tools. However, there are some common flaws in electrical temperature measurement sensors. These include the following: Poor environmental adaptability Many parts of the heavy-duty CNC machine tools are in environments exposed to oil, metal cutting chip dust, and coolant. The wires and sensitive components of electrical temperature sensors are all made from metal materials, which are susceptible to corrosion and damage, and have a short working life in a relatively harsh environment. Weak ability to resist electromagnetic interference Heavy-duty CNC machine tools have many inductance elements, like the motors and the electric control cabinet, which form strong time-varying electromagnetic fields. The testing signals of electrical temperature sensors are easily interfered with by electromagnetic fields during transmission, reducing the signal-to-noise ratio (SNR), accuracy and reliability of the test data. Large number of signal transmission wires The principle of electrical temperature sensors is that of an electrically closed circuit.
However, electrical temperature sensors share some common shortcomings:

1) Poor environmental adaptability. Many parts of a heavy-duty CNC machine tool are exposed to oil, metal cutting chips, dust, and coolant. The wires and sensitive components of electrical temperature sensors are made of metal, are susceptible to corrosion and damage, and have a short working life in such a harsh environment.

2) Weak resistance to electromagnetic interference. Heavy-duty CNC machine tools contain many inductive elements, such as the motors and the electrical control cabinet, which create strong time-varying electromagnetic fields. The signals of electrical temperature sensors are easily disturbed by these fields during transmission, reducing the signal-to-noise ratio (SNR), accuracy, and reliability of the test data.

3) Large number of signal transmission wires. An electrical temperature sensor works as a closed electrical circuit: each sensor needs two conductor wires, and multiple sensors cannot be connected in series. N electrical sensors therefore require 2N wires, and routing such a large number of wires through a heavy-duty CNC machine tool is difficult.

The above electrical sensors provide the temperature only at discrete points on the surface of the heavy-duty CNC machine tool; the whole temperature field can then be reconstructed using the FDM. Because only a few discrete temperature points are used, it is difficult to establish an accurate integral temperature field of a heavy-duty CNC machine tool, and in particular to calculate its internal temperatures. Current prediction models, such as multiple regression or neural networks, are all built on discrete temperature points, so there is little research on integral temperature field reconstruction for heavy-duty CNC machine tools. Obtaining the 3-D temperature field of a CNC machine tool is nevertheless of great significance for studying the thermal error mechanism.
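One simple way to turn such sparse point measurements into an approximate surface map is scattered-data interpolation. The sketch below is purely illustrative: the sensor coordinates and readings are invented, and the FDM- or FEM-based reconstructions referred to above are separate, physics-based approaches.

```python
# Illustrative sketch: interpolating a column-surface temperature map from a handful
# of discrete contact-sensor readings (all values invented for the example).
import numpy as np
from scipy.interpolate import RBFInterpolator

# (x, z) positions of contact sensors on a column face (m) and their readings (degC)
sensor_xy = np.array([[0.1, 0.2], [0.9, 0.2], [0.5, 1.5], [0.1, 2.8], [0.9, 2.8]])
sensor_T = np.array([24.5, 25.1, 23.8, 22.9, 23.2])

interp = RBFInterpolator(sensor_xy, sensor_T, kernel="thin_plate_spline")

# Evaluate on a regular grid to obtain an approximate surface temperature map
gx, gz = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 3, 150))
grid_pts = np.column_stack([gx.ravel(), gz.ravel()])
T_map = interp(grid_pts).reshape(gx.shape)
print(T_map.shape, T_map.min(), T_map.max())
```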
3.2 Non-Contact Temperature Measurement of Heavy-Duty CNC Machine Tools

Infrared thermal imaging, a radiation-based method, is currently the non-contact temperature measurement technique most often applied in thermal error studies of heavy-duty CNC machine tools. A thermal infrared imager gathers infrared radiant energy through its optical system and delivers it to an infrared detector to produce a thermal image. From the image, key temperature points can be selected to establish a thermal error model. Qiu et al. [73] measured the spindle box temperature field with a FLIR thermal imager and symmetrically selected 18 temperature points to model the spindle thermal deformation using multiple linear regression. Infrared thermal imaging is well suited to studying the thermal characteristics of key parts of heavy-duty CNC machine tools because it visualizes the global surface temperature field with high temperature resolution. Wu et al. [74] studied the thermal behavior of the support bearings (shown in Figure 8) and the screw nut (shown in Figure 9) of ball screws using infrared thermographs. Uhlmann and Hu [75] captured the temperature field while the spindle was running at 15,000 r/min for 150 min and compared the measurements with simulated temperature fields (shown in Figure 10). Xu et al. [47] examined the heat generation and conduction of ball screws and investigated how different cooling methods affect the temperature distribution using infrared thermal imaging. Zhang et al. [76] studied temperature variable optimization for precision machine tool thermal error compensation using an infrared thermometer.

Thermal imaging of the screw support bearings [74]

Thermal imaging of the screw nut [74]

Thermal imaging of the headstock temperature field [75]

The infrared thermal imager visualizes the temperature distribution of CNC machine tools and plays an important role in thermal error studies. However, it is a two-dimensional imaging system: a single imager cannot capture the overall temperature field of a heavy-duty CNC machine tool, and even with multiple expensive infrared cameras it remains difficult to track the temperature field of the moving parts during actual machining. The shortcomings described in Sections 3.1 and 3.2 limit the use of electrical temperature sensors and infrared imaging for long-term, real-time temperature monitoring of heavy-duty CNC machine tools. Breakthroughs in temperature field measurement are needed before a highly intelligent temperature measurement and thermal error compensation system suitable for commercial heavy-duty CNC machine tools can be developed.

4 Thermal Deformation Monitoring Technology for Heavy-Duty CNC Machine Tools

4.1 Thermal Error Monitoring of the Cutting Tool Tip

Three categories of instrument are mainly used to detect the thermal error of the cutting tool tip: non-contact displacement sensors, the high-precision double ball gauge, and the laser interferometer. The non-contact displacement sensors used on machine tools include eddy current transducers, capacitive transducers, and laser displacement sensors; although their sensing principles differ, their installation and error detection methods are essentially the same. The high-precision double ball gauge and the laser interferometer are mainly used to detect the dynamic geometric errors of the machine tool, but they are also suitable for thermal error detection.

4.1.1 Five-Point Detection Method

The five-point detection method (shown in Figure 11) is applicable only when the spindle box is not moving; it detects the thermal deformation caused by the ambient temperature or by the rotation of the spindle, as specified in ISO 230-3. The method measures the three position errors δpx, δpy, and δpz of the cutting tool tip in the X, Y, and Z directions and the two angle errors εpx and εpy about the X and Y axes. Their values can be calculated by Eq. (8):

$$\left\{ \begin{aligned} \delta_{\mathrm{p}x} &= \delta_{x1} + L\,\varepsilon_{\mathrm{p}x}, \\ \delta_{\mathrm{p}y} &= \delta_{y1} + L\,\varepsilon_{\mathrm{p}y}, \\ \delta_{\mathrm{p}z} &= \delta_{z}, \\ \varepsilon_{\mathrm{p}x} &= (\delta_{y1} - \delta_{y2})/d, \\ \varepsilon_{\mathrm{p}y} &= (\delta_{x1} - \delta_{x2})/d, \end{aligned} \right.$$

where δx1, δx2, δy1, δy2, and δz are the displacements detected by the displacement sensors SX1, SX2, SY1, SY2, and SZ, d is the distance between sensors SX1 and SX2, and L is the effective length of the test mandrel, which is usually made of steel or invar alloy to obtain higher testing precision.

Five-point detection method

Because the test mandrel is a cylinder, a shift in one direction causes a test error in the other direction. Taking δx1 and δy1 as an example (shown in Figure 12), when the cutting tool tip moves from point O to point O′ in the XOY plane, the real position errors are δx1 and δy1, whereas the position errors detected by the displacement sensors are δ′x1 and δ′y1, respectively. The relationship between them can be expressed by Eq. (9):
$$\left\{ \begin{aligned} \delta_{x1}^{2} + (R - \delta^{\prime}_{y1} + \delta_{y1})^{2} &= R^{2}, \\ \delta_{y1}^{2} + (R - \delta^{\prime}_{x1} + \delta_{x1})^{2} &= R^{2}, \end{aligned} \right.$$

where R is the radius of the test mandrel. The real position errors δx1 and δy1 can thus be expressed in terms of the data detected by the displacement sensors. It should be noted that δpx, δpy, δpz, εpx, and εpy interact with each other, and that Eq. (9) does not consider εpx and εpy.

Test error of five-point detection method

4.1.2 High-Precision Double Ball Gauge Method

A double ball gauge consists of two precision metal spheres and a telescoping bar fitted with a grating ruler that detects the change in length (shown in Figure 13). The double ball gauge method is recommended in ASME B5.54 [77] for detecting the comprehensive error of machine tools. Its advantage is that it captures the tool tip trajectory error caused by both geometric error and thermal deformation; however, thermal expansion and bending of the telescoping bar, or a small displacement of the stand, affect the test accuracy of the method.

Double ball gauge method

4.1.3 Laser Measurement Method

The laser interferometer utilizes the Doppler frequency shift to detect the linear position error (shown in Figure 14) and angle error (shown in Figure 15) of the machine tool moving along its guideway [78]. A dual-frequency laser interferometer is a heterodyne interferometer developed from the single-frequency laser interferometer; it has a large gain and a high SNR and is especially suitable for measuring the thermal error of heavy-duty CNC machine tools. Laser measurement methods are widely used for geometric accuracy calibration and for measuring the heat-induced position, angle, and straightness errors of heavy-duty CNC machine tools. Ruiz et al. [79] designed an optical measuring system based on the laser interference principle to track and locate the tool tip of a machine tool.

Tool tip's linear position error measurement

Tool tip's angle error measurement
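To make the five-point computation of Section 4.1.1 concrete, the short sketch below evaluates Eq. (8) for a set of invented sensor readings and example geometry; the cylindrical-mandrel correction of Eq. (9) and the coupling between the error components are deliberately left out for brevity.

```python
# Numerical sketch of the five-point computation in Eq. (8) of Section 4.1.1.
# Sensor readings (um) are invented; d and L are example geometry values.

d = 80.0    # mm, spacing between the two X (and two Y) sensors along the mandrel
L = 120.0   # mm, effective mandrel length to the cutting tool tip

# Example displacement readings from sensors S_X1, S_X2, S_Y1, S_Y2, S_Z (um)
dx1, dx2, dy1, dy2, dz = 12.0, 9.0, -7.0, -4.0, 18.0

eps_px = (dy1 - dy2) / (d * 1000.0)   # rad (um / um after converting d to um)
eps_py = (dx1 - dx2) / (d * 1000.0)   # rad

delta_px = dx1 + L * 1000.0 * eps_px  # um, following the sign convention of Eq. (8)
delta_py = dy1 + L * 1000.0 * eps_py  # um
delta_pz = dz                         # um

print(delta_px, delta_py, delta_pz, eps_px, eps_py)
```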
4.2 Thermal Deformation Monitoring of Large Structural Parts of the Machine Tool

Currently, the displacement detection of large structural parts of heavy-duty CNC machine tools relies on laser displacement sensors, eddy current sensors, and capacitive sensors. For instance, Gomez-Acedo et al. [80] used an inductive sensor array to measure the thermal deformation of a large gantry-type machine tool (shown in Figure 16). In addition, the laser interferometer with different accessories can measure a wide range of quantities, including precise position, straightness, verticality, yaw angle, parallelism, flatness, and turntable accuracy, and it plays an important role in detecting the thermal deformation of heavy-duty CNC machine tools. However, because some of these instruments are very large, sensitive to demanding environments, or limited in measurement range, they are difficult to use for long-term monitoring of heavy-duty CNC machine tools. The direct measurement method also requires installing the displacement sensors on a fixed base that serves as a benchmark, and for CNC machine tools, especially heavy-duty ones, it is difficult to find a large, stable benchmark: any large base itself undergoes thermal or force-induced deformation, which degrades the measurement accuracy. Direct displacement measurement therefore cannot completely reconstruct the real-time deformation of heavy-duty CNC machine tools, and researchers are seeking more reliable and practical measuring principles and methods for monitoring the deformation of heavy-duty machine tool structures, for example using laser interferometers [81].

Thermal deformation detection of a heavy-duty CNC machine tool based on the inductive sensor array [80]

5 Application of FBG Sensors in Heavy-Duty CNC Machine Tools

5.1 Principle and Characteristics of Fiber Bragg Grating Sensors

The fiber Bragg grating (FBG) sensor is an optical sensor that has been studied and used for nearly forty years. It has a number of unparalleled characteristics: it is small and explosion-proof, provides electrical insulation, is immune to electromagnetic interference, and offers high precision and high reliability, and multiple FBG sensors can be arranged along a single fiber. It has therefore been widely used in many engineering fields and mechanical systems [82]. The sensing principle of an FBG is based on a periodic perturbation of the refractive index along the fiber axis, formed by exposing the fiber core to an intense ultraviolet interference pattern. When broad-band light propagates along the optical fiber to a grating, a single wavelength is reflected back while the rest of the signal is transmitted with only a small attenuation (shown in Figure 17).

Distributed detection principle of the fiber Bragg grating sensors

The reflected wavelength is the Bragg wavelength and can be expressed by the following equation:

$$\lambda_{\mathrm{B}} = 2n_{\mathrm{eff}} \Lambda,$$

where λB is the Bragg wavelength, neff is the effective refractive index of the fiber core, and Λ is the grating period. An FBG is highly sensitive to external perturbations, especially strain and temperature: any change of strain or temperature changes neff or Λ and shifts λB. Hence, by monitoring the Bragg wavelength shift, the strain or temperature can be determined. The wavelength response to an axial strain change Δε and a temperature change ΔT is given by

$$\frac{\Delta \lambda_{\mathrm{B}}}{\lambda_{\mathrm{B}}} = (1 - p_{\mathrm{e}})\Delta \varepsilon + (\alpha_{\mathrm{f}} + \zeta)\Delta T,$$

where pe, αf, and ζ are, respectively, the effective photoelastic coefficient, the thermal expansion coefficient, and the thermo-optic coefficient of the fused silica fiber. In the literature there are few applications of fiber grating sensors in the manufacturing industry, and almost none concerning machine tool temperature detection and thermal error monitoring.
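As a simple illustration of the sensitivity relation above for a strain-isolated grating (Δε = 0), the sketch below converts a measured Bragg wavelength shift into a temperature change. The coefficient values are typical figures quoted for fused-silica fibre and are assumed here purely for illustration; packaged sensors are normally calibrated directly.

```python
# Sketch: temperature change from the relative Bragg wavelength shift, strain-free case.
# ALPHA_F and ZETA are typical fused-silica values, assumed here for illustration only.
ALPHA_F = 0.55e-6   # /degC, thermal expansion coefficient of the fibre (typical)
ZETA = 6.7e-6       # /degC, thermo-optic coefficient (typical)

def fbg_delta_t(lambda_measured_nm: float, lambda_reference_nm: float) -> float:
    """Temperature change (degC) inferred from the relative Bragg wavelength shift."""
    rel_shift = (lambda_measured_nm - lambda_reference_nm) / lambda_reference_nm
    return rel_shift / (ALPHA_F + ZETA)

# Example: a 1550.000 nm grating whose peak moves to 1550.112 nm has warmed by ~10 degC
print(f"{fbg_delta_t(1550.112, 1550.000):.1f} degC")
```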
Nevertheless, detection technology based on fiber Bragg grating sensing is especially suitable for the thermal error monitoring of heavy-duty CNC machine tools and offers a number of advantages over traditional detection technologies:

1) An FBG sensor has a small volume, light weight, and high measurement precision, and a series of FBG sensors detecting a variety of physical parameters can be distributed along a single fiber. This suits the large volume, multiple heat sources, and complex structure of heavy-duty CNC machine tools.

2) An FBG sensor is highly resistant to corrosion and high temperature, making it suitable for processing conditions involving high temperature, high humidity, heavy vibration, dust, and other harsh environments, and it meets the long-term stability and reliability requirements of machine tool monitoring.

3) An FBG sensor is electrically insulating and immune to electromagnetic interference (EMI), so it remains suitable for the harsh processing conditions of the heavy-duty CNC machine tool and can measure its thermal error accurately.

5.2 Temperature Field Monitoring of Heavy-Duty CNC Machine Tools Based on Fiber Bragg Grating Sensors

5.2.1 Fiber Bragg Grating Temperature Sensors for the Surface Temperature Measurement of Machine Tools

Fiber Bragg grating temperature measurement technology has matured, but research in this field is mainly concentrated on extremely high or low temperature measurement and on sensitivity-enhancement techniques. By packaging form, FBG temperature sensors can be divided into five types: tube-type [83, 84], substrate-type [85, 86], polymer-packaged [87], metal-coated [88–93], and sensitization-type [94]. To allow easy installation without damaging the internal structure of the machine tool, the surface temperature of the machine is usually measured and used as the basic data for thermal error compensation. Measuring the surface temperature accurately in a high-gradient temperature field is a challenging technical problem. Existing studies using traditional electrical methods to measure machine tool surface temperatures rarely consider this measurement accuracy problem, and although FBGs are widely used for temperature measurement, there is little research on the error of FBG surface temperature measurement. The surface temperature measurement error can be divided into three parts:

1) When the sensor surface contacts the machine tool, the heat flow concentrates at the testing point, producing measurement error ΔT1.

2) The thermal contact resistance between the sensor surface and the machine tool surface causes a temperature drop ΔT2.

3) There is a certain distance between the sensing point of the sensor and the machine tool surface, which produces measurement error ΔT3.

Optical fibers are mainly made of quartz and organic resin, whose thermal conductivities are lower than those of the metal wires of thermocouples and thermal resistors. The first error ΔT1 is therefore significantly smaller for an FBG than for the latter two sensor types, and the main error sources are the thermal contact resistance and the stand-off distance of the temperature sensing point. A high-gradient temperature field model of a heated surface was established by the FEM [95]: with a hot surface temperature of 90.2 °C and an air temperature of 22 °C, the temperature gradient near the hot surface is −46.4 °C/mm (shown in Figure 18).
Temperature gradient distribution of surface temperature measurement [95]

Because of the coating layer on the fiber Bragg grating sensor, there is a gap of about 0.15 mm between the machine tool surface and the FBG temperature sensing point. In a temperature gradient of −46.4 °C/mm, even this small gap is enough to produce a large temperature measurement error. Using thermally conductive paste improves the temperature uniformity at the surface, and the FBG surface temperature measurement error can then be significantly reduced compared with that of a commercial thermal-resistance surface temperature sensor. Ref. [96] studied the influence of the installation type on surface temperature measurement with an FBG sensor: the measurement errors for single-ended, double-ended, and fully adhered fixation were analyzed theoretically and studied experimentally. Single-ended fixation produces a positive error that grows linearly with surface temperature, whereas double-ended and fully adhered fixation both produce non-linear errors that are affected by the thermal expansion strain of the tested surface material. Owing to its linear error and its insensitivity to strain, single-ended fixation is likely to play an important role in the encapsulation design of FBG surface temperature sensors.

5.2.2 Temperature Measurement of the Machine Tool Spindle Bearing Based on Fiber Bragg Grating Sensors

The spindle is a core component of the heavy-duty CNC machine tool with a complex assembled mechanical structure, consisting of the rotating shaft, the front and rear bearings, and the spindle base; a motorized spindle also includes the rotor and stator. Because the spindle structure is compact and narrow, fixing temperature sensors inside it is difficult. The heat generation of the spindle's front bearings, which strongly influences the thermal error of the heavy-duty CNC machine tool, is a research hotspot. Liu et al. [97] installed two FBG temperature sensors (positions 1 and 3) and four thermal resistors (positions 1, 2, 3, and 4) on the bearing support surface of a spindle (shown in Figure 19). With the shaft rotating freely, the temperature rise amplitudes at positions 1, 2, 3, and 4 were consistent with one another, and the temperatures measured by the FBG sensors and the thermal resistors agreed. Because an FBG temperature sensor is much smaller than commercial thermal resistors and thermocouples, it has natural advantages for measuring the internal temperature of the spindle.

Temperature field measurement of the lathe spindle [97]

Dong et al. [98] embedded six FBGs connected along one fiber into the spindle housing, installed equidistantly in the circumferential direction on the outer ring surface of the front bearing (shown in Figure 20). With the shaft rotating freely or under radial force, the corresponding uniform or non-uniform temperature field of the outer ring was measured, and the influence of the bearing preload on the bearing temperature rise was also studied on the basis of these tests.

FBG sensor installation locations [98]

5.2.3 Thermal Error Measurement of a Heavy-Duty CNC Machine Tool Based on Fiber Bragg Grating Sensors
Huang et al. [99] measured the surface temperature field of a heavy-duty machine tool using fiber Bragg grating temperature sensors (shown in Figure 21). Three fibers carrying 27 FBG sensors were arranged on the bed, column, motor, spindle box, and gear box (shown in Figure 22), and the temperature was monitored for 24 h. Laser displacement sensors were used to measure the offset of the cutting tool tip in the X, Y, and Z directions. In Figure 23, channels CH1-10, CH2-1, CH2-7, and CH3-6 show the air temperature at different locations in the environment near the machine tool, while the remaining channels show the temperatures at different locations on the machine structure surface. All locations followed the same overall trend, but the surrounding environment affected the various parts of the machine differently, and there was a large temperature gradient across the structure surface. Figure 24 shows the relationship between the thermal drift of the tool tip in the three directions and the variation of the environmental temperature and the machine surface temperature. The thermal drift in all three directions followed the environmental and surface temperatures, with the largest drift in the Y direction: when the ambient temperature shift reached about −6 °C, the error in the Y direction reached about 15 μm.

Temperature field measurement of a heavy-duty machine tool based on the FBG [99]

Locations of the FBG temperature sensors [99]

Diurnal variation of the temperature [99]

Changes of temperature and the thermal drift [99]

Because fiber Bragg gratings support multi-point temperature measurement, measurement points can be laid out over the large surface area of a heavy-duty CNC machine tool, allowing the temperature field of the machine to be reconstructed more accurately.

5.3 Heavy-Duty CNC Machine Tool Thermal Deformation Monitoring Based on Fiber Bragg Grating Sensors

A number of achievements have been made in applying fiber Bragg grating strain sensors to the deformation measurement of large structures. Applying classical beam theory, Kim and Cho [100] rearranged the formulae to estimate a continuous deflection profile from strains measured at several points equipped with FBG sensors; the method can be used to measure the deflection curve of bridges, which represents the global behavior of civil structures [101]. Kang et al. [102] investigated the estimation of dynamic structural displacements from strain data measured with FBGs, using the displacement-strain relationship, and confirmed that structural displacements can be estimated from strain data without direct displacement measurement. Kang et al. [103] presented an integrated monitoring scheme for maglev guideway deflection using wavelength-division-multiplexing (WDM) based fiber Bragg grating sensors, which effectively avoids EMI in the maglev guideway. Yi et al. [104] proposed a spatial shape reconstruction method using an orthogonal fiber Bragg grating sensor array. Fiber Bragg grating sensing technology thus opens up a new line of research for the real-time thermal deformation monitoring of heavy-duty CNC machine tool structures.
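A minimal sketch of the strain-to-deflection idea behind the works cited above is given below: surface strains measured by FBGs along a column are converted to bending curvature (κ = ε/c) and integrated twice to estimate the deflection profile. The strain values, the distance c from the neutral axis, and the built-in boundary condition at the base are all assumed for illustration; the cited studies use more complete beam-theory and mode-shape formulations.

```python
# Sketch: estimating a deflection profile from discrete FBG surface strains by
# converting strain to curvature and numerically integrating twice.
import numpy as np

z = np.linspace(0.0, 3.0, 7)                                # m, FBG positions along the column
eps = np.array([35., 28., 22., 16., 11., 6., 2.]) * 1e-6    # surface strains (assumed values)
c = 0.25                                                    # m, distance from neutral axis to FBGs

kappa = eps / c                                             # bending curvature at each station, 1/m

# Integrate curvature twice (trapezoidal rule), with slope = 0 and deflection = 0 at z = 0
slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(z))))
defl = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(z))))

print(f"estimated tip deflection: {defl[-1] * 1e6:.1f} um")
```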
The earliest work was done by Bosetti et al. [105–107], who proposed a reticular displacement measurement system (RDMS) based on a reticular array of fiber Bragg strain sensors for real-time monitoring of the deformations of machine tool structural components (shown in Figure 25).

Scheme of a Cartesian milling machine equipped with three RDMSs (typical column height of about 4 m) [105]

For a planar, isostatic reticular structure (using the numbering conventions shown in Figure 26), the position of the ith node ni = (xi, yi) can be expressed as a function of the coordinates of nodes ni−1 and ni−2 and of the lengths of the two connecting beams L2i−3 and L2i−4:

$$\left\{ \begin{aligned} (x_{i} - x_{i - 1})^{2} + (y_{i} - y_{i - 1})^{2} &= L_{2i - 3}^{2}, \\ (x_{i} - x_{i - 2})^{2} + (y_{i} - y_{i - 2})^{2} &= L_{2i - 4}^{2}. \end{aligned} \right.$$

Numbering of the beams and nodes of the lattice [105]

Figure 27 shows the bending deformation of the RDMS prototype and the deformed shape reconstructed by the measurement system, which agree well. To allow more general, three-dimensional structures, a further algorithm was proposed [105] in which the calculation of the nodal positions from the beam lengths measured by the FBG sensors is reformulated as a minimization problem.

3-point bending of the RDMS prototype and deformed shape as reconstructed by the measurement system [105]

Liu et al. [108] detected the thermal deformation of the column of a heavy-duty CNC machine tool with an integral method based on an FBG sensor array (shown in Figure 28). Strain data gauged by multiple FBG sensors glued at specified locations on the machine tool were transformed into deformation, and the displacement of the machine tool spindle was also measured for evaluation; the calculated results agree with the measurements obtained from a laser displacement sensor (shown in Figure 29). Refs. [109, 110] studied the deformation measurement of a heavy-duty CNC machine tool base using a fiber Bragg grating array and designed an FBG-based force transducer for measuring the anchor supporting force of the base. This research can be extended to the analysis of the thermal deformation mechanism and the thermal deformation measurement of heavy-duty CNC machine tools.

3D models and FBG sensor locations

Results calculated by FBG data and results measured by the displacement sensor of the column extension [108]

6 Conclusions and Outlook

Thermal error compensation technology for CNC machine tools has been developed over decades, but its successful application to commercial machine tools remains limited; to some extent it is still at the laboratory stage. Heavy-duty CNC machine tools play an important role in national economic development and national defense modernization, yet their thermal error problems are extremely difficult to overcome because of their more complex thermal deformation mechanisms and the monitoring difficulties caused by their huge volume. Fiber Bragg grating sensing technology opens up a new area of research for the thermal error monitoring of heavy-duty CNC machine tools. We need to exploit its advantages in measuring the global temperature field and thermal deformation field of heavy-duty CNC machine tools in order to study their thermal error mechanisms.
Such studies can provide technological support for the thermal structure optimization design of heavy-duty CNC machine tools. We also need to improve thermal error prediction models, especially with regard to robustness. Intelligent manufacturing is an important trend in manufacturing technology, and Industry 4.0 promises to create the smart factory [111, 112]. Intelligent sensing technology is one of the indispensable foundations for realizing intelligent manufacturing, and the fusion of optical fiber sensing technology with high-end manufacturing technology is an important research direction that will play an important role in Industry 4.0.

References

[1] L Uriarte, M Zatarain, D Axinte, et al. Machine tools for large parts. CIRP Annals - Manufacturing Technology, 2013, 62(2): 731–750.
[2] J Bryan. International status of thermal error research. CIRP Annals - Manufacturing Technology, 1990, 39(2): 645–656.
[3] J G Yang. Present situation and prospect of error compensation technology for NC machine tool. Aeronautical Manufacturing Technology, 2012, 48(5): 40–45. (in Chinese)
[4] C H Wu, Y T Kung. Thermal analysis for the feed drive system of a CNC machine center. International Journal of Machine Tools and Manufacture, 2003, 43(15): 1521–1528.
[5] J H Lee, S H Yang. Statistical optimization and assessment of a thermal error model for CNC machine tools. International Journal of Machine Tools and Manufacture, 2002, 42(1): 147–155.
[6] J S Chen, W Y Hsu. Characterizations and models for the thermal growth of a motorized high speed spindle. International Journal of Machine Tools and Manufacture, 2003, 43(11): 1163–1170.
[7] S Yang, J Yuan, J Ni. The improvement of thermal error modeling and compensation on machine tools by CMAC neural network. International Journal of Machine Tools and Manufacture, 1996, 36(4): 527–537.
[8] C D Mize, J C Ziegert. Neural network thermal error compensation of a machining center. Precision Engineering, 2000, 24(4): 338–346.
[9] D S Lee, J Y Choi, D H Choi. ICA based thermal source extraction and thermal distortion compensation method for a machine tool. International Journal of Machine Tools and Manufacture, 2003, 43(6): 589–597.
[10] H Yang, J Ni. Dynamic neural network modeling for nonlinear, nonstationary machine tool thermally induced error. International Journal of Machine Tools and Manufacture, 2005, 45(4-5): 455–465.
[11] Y Kang, C W Chang, Y Huang, et al. Modification of a neural network utilizing hybrid filters for the compensation of thermal deformation in machine tools. International Journal of Machine Tools and Manufacture, 2007, 47(2): 376–387.
[12] H Wu, H T Zhang, Q J Guo, et al. Thermal error optimization modeling and real-time compensation on a CNC turning center. Journal of Materials Processing Technology, 2008, 207(1-3): 172–179.
[13] Q J Guo, J G Yang, H Wu. Application of ACO-BPN to thermal error modeling of NC machine tool. The International Journal of Advanced Manufacturing Technology, 2010, 50(5): 667–675.
[14] Y Zhang, J G Yang, H Jiang. Machine tool thermal error modeling and prediction by grey neural network. The International Journal of Advanced Manufacturing Technology, 2012, 59(9): 1065–1072.
[15] International Organization for Standardization Technical Committees. ISO 230-3-2007 Test code for machine tools - Part 3: Determination of thermal effects. Geneva: International Organization for Standardization, 2007.
[16] International Organization for Standardization Technical Committees. ISO 10791-10-2007 Test conditions for machining centres - Part 10: Evaluation of thermal distortion. Geneva: International Organization for Standardization, 2007.
[17] International Organization for Standardization Technical Committees. ISO 13041-8-2004 Test conditions for numerically controlled turning machines and turning centres - Part 8: Evaluation of thermal distortions. Geneva: International Organization for Standardization, 2004.
[18] M Weck, P Mckeown, R Bonse, et al. Reduction and compensation of thermal errors in machine tools. CIRP Annals - Manufacturing Technology, 1995, 44(2): 589–598.
[19] R Ramesh, M A Mannan, A N Poo. Error compensation in machine tools - a review: Part II: thermal errors. International Journal of Machine Tools and Manufacture, 2000, 40(9): 1257–1284.
[20] R Ramesh, M A Mannan, A N Poo. Thermal error measurement and modelling in machine tools. Part I. Influence of varying operating conditions. International Journal of Machine Tools and Manufacture, 2003, 43(4): 391–404.
[21] R Ramesh, M A Mannan, A N Poo, et al. Thermal error measurement and modelling in machine tools. Part II. Hybrid Bayesian network-support vector machine model. International Journal of Machine Tools and Manufacture, 2003, 43(4): 405–419.
[22] J W Li, W J Zhang, G S Yang, et al. Thermal-error modeling for complex physical systems: the-state-of-arts review. The International Journal of Advanced Manufacturing Technology, 2009, 42(1): 168–179.
[23] J Z Fu, X Y Yao, Y He, et al. Development of thermal error compensation technology for NC machine tool. Aeronautical Manufacturing Technology, 2010(4): 64–66. (in Chinese)
[24] J Mayr, J Jedrzejewski, E Uhlmann, et al. Thermal issues in machine tools. CIRP Annals - Manufacturing Technology, 2012, 61(2): 771–791.
[25] Y Li, W H Zhao, S H Lan, et al. A review on spindle thermal error compensation in machine tools. International Journal of Machine Tools and Manufacture, 2015, 95: 20–38.
[26] H T Wang, T M Li, L P Wang, et al. Review on thermal error modeling of machine tools. Journal of Mechanical Engineering, 2015, 51(9): 119–128. (in Chinese)
[27] A Palmgren, B Ruley. Ball and roller bearing engineering. Philadelphia: SKF Industries, Inc., 1945.
[28] T A Harris. Rolling bearing analysis. 4th edition. New York: Wiley, 2001.
[29] Z Q Liu, Y H Zhang, H Su. Thermal analysis of high speed rolling bearing. Lubrication and Sealing, 1998, 4: 66–68. (in Chinese)
[30] J L Stein, J F Tu. A state-space model for monitoring thermally induced preload in anti-friction spindle bearings of high-speed machine tools. Journal of Dynamic Systems Measurement and Control, 1994, 116(3): 372–386.
[31] J H Rumbarger, E G Filetti, D Gubernick, et al. Gas turbine engine main shaft roller bearing system analysis. Journal of Lubrication Technology, 1973, 95(4): 401–416.
[32] G C Chen, L Q Wang, L Gu, et al. Heating analysis of the high speed ball bearing. Journal of Aerospace Power, 2007, 22(1): 163–168. (in Chinese)
[33] R S Moorthy, V P Raja. An improved analytical model for prediction of heat generation in angular contact ball bearing. Arabian Journal for Science and Engineering, 2014, 39(11): 8111–8119.
[34] W M Hannon. Rolling-element bearing heat transfer - Part I: Analytic model. Journal of Tribology, 2015, 137(3): 031102.
[35] F P Incropera, D P Dewitt, T L Bergman, et al. Fundamentals of heat and mass transfer. 6th ed. Beijing: Chemical Industry Press, 2011. (in Chinese)
[36] B Bossmanns, J F Tu. A thermal model for high speed motorized spindles. International Journal of Machine Tools and Manufacture, 1999, 39(9): 1345–1366.
[37] B Bossmanns, J F Tu. A power flow model for high speed motorized spindles - heat generation characterization. Journal of Manufacturing Science and Engineering, 2001, 123(3): 494–505.
[38] T Holkup, H Cao, P Kolář, et al. Thermo-mechanical model of spindles. CIRP Annals - Manufacturing Technology, 2010, 59(1): 365–368.
[39] J Takabi, M M Khonsari. Experimental testing and thermal analysis of ball bearings. Tribology International, 2013, 60(7): 93–103.
[40] J Jędrzejewski, Z Kowal, W Kwaśny, et al. High-speed precise machine tools spindle units improving. Journal of Materials Processing Technology, 2005, 162-163: 615–621.
[41] K S Kim, D W Lee, S M Lee, et al. A numerical approach to determine the frictional torque and temperature of an angular contact ball bearing in a spindle system. International Journal of Precision Engineering and Manufacturing, 2015, 16(1): 135–142.
[42] Z C Du, S Y Yao, J G Yang. Thermal behavior analysis and thermal error compensation for motorized spindle of machine tools. International Journal of Precision Engineering and Manufacturing, 2015, 16(7): 1571–1581.
[43] J Y Xia, B Wu, Y M Hu, et al. Experimental research on factors influencing thermal dynamics characteristics of feed system. Precision Engineering, 2010, 34(2): 357–368.
[44] Z Z Xu, X J Liu, C H Choi, et al. A study on improvement of ball screw system positioning error with liquid-cooling. International Journal of Precision Engineering and Manufacturing, 2012, 13(12): 2173–2181.
[45] W S Yun, S K Kim, D W Cho. Thermal error analysis for a CNC lathe feed drive system. International Journal of Machine Tools and Manufacture, 1999, 39(7): 1087–1101.
[46] J Mayr, M Ess, S Weikert, et al. Thermal behaviour improvement of linear axis. Proceedings of the 11th euspen International Conference, Como, Italy, May 23-26, 2011: 291–294.
[47] Z Z Xu, X J Liu, S K Lyu. Study on positioning accuracy of nut/shaft air cooling ball screw for high-precision feed drive. International Journal of Precision Engineering and Manufacturing, 2014, 15(1): 123–128.
[48] S K Kim, D W Cho. Real-time estimation of temperature distribution in a ball-screw system. International Journal of Machine Tools and Manufacture, 1997, 37(4): 451–464.
[49] M F Zaeh, T Oertli, J Milberg. Finite element modelling of ball screw feed drive systems. CIRP Annals - Manufacturing Technology, 2004, 53(2): 289–292.
[50] C Jin, B Wu, Y M Hu. Heat generation modeling of ball bearing based on internal load distribution. Tribology International, 2012, 45(1): 8–15.
[51] C Jin, B Wu, Y M Hu, et al. Temperature distribution and thermal error prediction of a CNC feed system under varying operating conditions. Precision Engineering, 2015, 77(9-12): 1979–1992.
[52] C Jin, B Wu, Y M Hu, et al. Thermal characteristics of a CNC feed system under varying operating conditions. Precision Engineering, 2015, 42(9-12): 151–164.
[53] B Tan, X Y Mao, H Q Liu, et al. A thermal error model for large machine tools that considers environmental thermal hysteresis effects. International Journal of Machine Tools and Manufacture, 2014, 82-83(7): 11–20.
[54] C X Zhang, F Gao, Y Li. Thermal error characteristic analysis and modeling for machine tools due to time-varying environmental temperature. Precision Engineering, 2017, 47: 231–238.
[55] N S Mian, S Fletcher, A P Longstaff, et al. Efficient thermal error prediction in a machine tool using finite element analysis. Measurement Science and Technology, 2011, 22(8): 085107.
[56] N S Mian, S Fletcher, A P Longstaff, et al. Efficient estimation by FEA of machine tool distortion due to environmental temperature perturbations. Precision Engineering, 2013, 37(2): 372–379.
[57] J F Zhang, P F Feng, C Chen, et al. A method for thermal performance modeling and simulation of machine tools. The International Journal of Advanced Manufacturing Technology, 2013, 68(5): 1517–1527.
[58] J Mayr, S Weikert, K Wegener, et al. Comparing the thermo-mechanical-behaviour of machine tool frame designs using a FDM-FEA simulation approach. Proceedings of the 22nd Annual ASPE Meeting, Dallas, TX, United States, October 14-19, 2007: 17–20.
[59] J Mayr, M Ess, S Weikert, et al. Calculating thermal location and component errors on machine tools. Proceedings of the 24th Annual ASPE Meeting, Monterey, CA, United States, October 4-9, 2009.
[60] J Mayr, M Ess, S Weikert, et al. Compensation of thermal effects on machine tools using a FDEM simulation approach. 9th International Conference and Exhibition on Laser Metrology, Machine Tool, CMM and Robotic Performance, Uxbridge, United Kingdom, June 30-July 2, 2009: 38–47.
[61] F L M Delbressine, G H J Florussen, L A Schijvenaars, et al. Modelling thermomechanical behaviour of multi-axis machine tools. Precision Engineering, 2006, 30(1): 47–53.
[62] J Yang, X S Mei, B Feng, et al. Experiments and simulation of thermal behaviors of the dual-drive servo feed system. Chinese Journal of Mechanical Engineering, 2015, 28(1): 76–87.
[63] C Jin, B Wu, Y M Hu. Wavelet neural network based on NARMA-L2 model for prediction of thermal characteristics in a feed system. Chinese Journal of Mechanical Engineering, 2011, 24(1): 33–41.
[64] J Zhu, J Ni, A J Shih. Robust machine tool thermal error modeling through thermal mode concept. Journal of Manufacturing Science and Engineering, 2008, 130(6): 061006.
[65] F C Li, H T Wang, T M Li. Research on thermal error modeling and prediction of heavy CNC machine tools. Journal of Mechanical Engineering, 2016, 52(11): 154–160. (in Chinese)
[66] C Chen, J F Zhang, Z J Wu, et al. A real-time measurement method of temperature fields and thermal errors in machine tools. Proceedings of the 2010 International Conference on Digital Manufacturing and Automation, Changsha, China, 2010, 1: 100–103.
[67] O Horejš, M Mareš, L Novotný, et al. Advanced modeling of thermally induced displacements and its implementation into standard CNC controller of horizontal milling center. Procedia CIRP, 2012, 4: 67–72.
[68] J Vyroubal. Compensation of machine tool thermal deformation in spindle axis direction based on decomposition method. Precision Engineering, 2012, 36(1): 121–127.
[69] H J Pahk, S W Lee. Thermal error measurement and real time compensation system for the CNC machine tools incorporating the spindle thermal error and the feed axis thermal error. The International Journal of Advanced Manufacturing Technology, 2002, 20(7): 487–494.
[70] H Yang, J Ni. Dynamic modeling for machine tool thermal error compensation. Journal of Manufacturing Science and Engineering, 2003, 125(2): 245–254.
[71] D Werschmoeller, X C Li. Measurement of tool internal temperatures in the tool - chip contact region by embedded micro thin film thermocouples. Journal of Manufacturing Processes, 2011, 13(2): 147–152.
[72] J Liu, G Chen, C H Ji, et al. An investigation of workpiece temperature variation of helical milling for carbon fiber reinforced plastics (CFRP). International Journal of Machine Tools and Manufacture, 2014, 86(11): 89–103.
[73] J Qiu, C S Liu, Q W Liu, et al. Thermal errors of planer type NC machine tools and its improvement measures. Journal of Mechanical Engineering, 2012, 48(21): 149–157. (in Chinese)
[74] C W Wu, C H Tang, C F Chang, et al. Thermal error compensation method for machine center. International Journal of Advanced Manufacturing Technology, 2012, 59(5): 681–689.
[75] E Uhlmann, J Hu. Thermal modelling of a high speed motor spindle. Procedia CIRP, 2012, 1: 313–318.
[76] T Zhang, W H Ye, R J Liang, et al. Study on thermal behavior analysis of nut/shaft air cooling ball screw for high-precision feed drive. Chinese Journal of Mechanical Engineering, 2013, 26(1): 158–165.
[77] American National Standards Institute. ANSI/ASME B5.54-2005 Methods for Performance Evaluation of Computer Numerically Controlled Machining Centers. Washington: American National Standards Institute, 2005.
[78] H Schwenke, W Knapp, H Haitjema, et al. Geometric error measurement and compensation of machines: an update. CIRP Annals - Manufacturing Technology, 2008, 57(2): 660–675.
[79] A R J Ruiz, J G Rosas, F S Granja, et al. A real-time tool positioning sensor for machine-tools. Sensors, 2009, 9(10): 7622–7647.
[80] E Gomez-Acedo, A Olarra, L N L D L Calle. A method for thermal characterization and modeling of large gantry-type machine tools. The International Journal of Advanced Manufacturing Technology, 2012, 62(9): 875–886.
[81] S K Lee, J H Yoo, M S Yang. Effect of thermal deformation on machine tool slide guide motion. Tribology International, 2003, 36(1): 41–47.
[82] Z D Zhou, Y G Tan, M Y Liu, et al. Actualities and development on dynamic monitoring and diagnosis with distributed fiber Bragg grating in mechanical systems. Journal of Mechanical Engineering, 2013, 49(19): 55–69. (in Chinese)
Actualities and development on dynamic monitoring and diagnosis with distributed fiber Bragg Grating in mechanical systems. Journal of Mechanical Engineering, 2013, 49(19): 55–69. (in Chinese) Zurück zum Zitat H N Li, L Ren. Structural health monitoring based on fiber grating sensing technology. Beijing: China Building Industry Press, 2008. (in Chinese) H N Li, L Ren. Structural health monitoring based on fiber grating sensing technology. Beijing: China Building Industry Press, 2008. (in Chinese) Zurück zum Zitat N Hirayama, Y Sano. Fiber Bragg grating temperature sensor for practical use. ISA Trans, 2000, 39(2): 169–173. N Hirayama, Y Sano. Fiber Bragg grating temperature sensor for practical use. ISA Trans, 2000, 39(2): 169–173. Zurück zum Zitat D G Kim, H C Kang, J K Pan, et al. Sensitivity enhancement of a fiber Bragg grating temperature sensor combined with a bimetallic strip. Microwave and Optical Technology Letters, 2014, 56(8): 1926–1929. D G Kim, H C Kang, J K Pan, et al. Sensitivity enhancement of a fiber Bragg grating temperature sensor combined with a bimetallic strip. Microwave and Optical Technology Letters, 2014, 56(8): 1926–1929. Zurück zum Zitat Y G Zhan. Study on high resolution optical fiber grating temperature sensor research. Chinese Journal of Lasers, 2005, 32(1): 83–86. (in Chinese) Y G Zhan. Study on high resolution optical fiber grating temperature sensor research. Chinese Journal of Lasers, 2005, 32(1): 83–86. (in Chinese) Zurück zum Zitat W He, X D Xu, D S Jiang. High-sensitivity fiber Bragg grating temperature sensor with polymer jacket and its low-temperature characteristic. Acta Optica Sinica, 2004, 24(10): 1316–1319. (in Chinese) W He, X D Xu, D S Jiang. High-sensitivity fiber Bragg grating temperature sensor with polymer jacket and its low-temperature characteristic. Acta Optica Sinica, 2004, 24(10): 1316–1319. (in Chinese) Zurück zum Zitat C H Lee, M K Kim, K T Kim, et al. Enhanced temperature sensitivity of fiber Bragg grating temperature sensor using thermal expansion of copper tube. Microwave and Optical Technology Letters, 2011, 53(7): 1669–1671. C H Lee, M K Kim, K T Kim, et al. Enhanced temperature sensitivity of fiber Bragg grating temperature sensor using thermal expansion of copper tube. Microwave and Optical Technology Letters, 2011, 53(7): 1669–1671. Zurück zum Zitat C Lupi, F Felli, A Brotzu, et al. Improving FBG sensor sensitivity at cryogenic temperature by metal coating. IEEE Sensors Journal, 2008, 8(7): 1299–1304. C Lupi, F Felli, A Brotzu, et al. Improving FBG sensor sensitivity at cryogenic temperature by metal coating. IEEE Sensors Journal, 2008, 8(7): 1299–1304. Zurück zum Zitat Y L Li, H Zhang, Y Feng, et al. Metal coating of fiber Bragg grating and the temperature sensing character after metallization. Optical Fiber Technology, 2009, 15(4): 391–397. Y L Li, H Zhang, Y Feng, et al. Metal coating of fiber Bragg grating and the temperature sensing character after metallization. Optical Fiber Technology, 2009, 15(4): 391–397. Zurück zum Zitat Y Feng, H Zhang, Y L Li, et al. Temperature sensing of metal-coated fiber Bragg grating. IEEE/ASME Transactions on Mechatronics, 2010, 15(4): 511–519. Y Feng, H Zhang, Y L Li, et al. Temperature sensing of metal-coated fiber Bragg grating. IEEE/ASME Transactions on Mechatronics, 2010, 15(4): 511–519. Zurück zum Zitat R S Shen, J Zhang, Y Wang, et al. Study on high-temperature and high-pressure measurement by using metal-coated FBG. Microwave and Optical Technology Letters, 2008, 50(5): 1138–1140. 
R S Shen, J Zhang, Y Wang, et al. Study on high-temperature and high-pressure measurement by using metal-coated FBG. Microwave and Optical Technology Letters, 2008, 50(5): 1138–1140. Zurück zum Zitat M J Guo, D S Jiang. Low temperature properties of fiber Bragg grating temperature sensor with plating gold. Chinese Journal of Low Temperature Physics, 2006, 28(2): 138–141. (in Chinese) M J Guo, D S Jiang. Low temperature properties of fiber Bragg grating temperature sensor with plating gold. Chinese Journal of Low Temperature Physics, 2006, 28(2): 138–141. (in Chinese) Zurück zum Zitat Y G Zhan, S L Xue, Q Y Yang, et al. A novel fiber Bragg grating high-temperature sensor. Optik - International Journal for Light and Electron Optics, 2008, 119(11): 535–539. Y G Zhan, S L Xue, Q Y Yang, et al. A novel fiber Bragg grating high-temperature sensor. Optik - International Journal for Light and Electron Optics, 2008, 119(11): 535–539. Zurück zum Zitat Y Liu, Z D Zhou, E L Zhang, et al. Measurement error of surface-mounted fiber Bragg grating temperature sensor. Review of Scientific Instruments, 2014, 85(6): 064905. Y Liu, Z D Zhou, E L Zhang, et al. Measurement error of surface-mounted fiber Bragg grating temperature sensor. Review of Scientific Instruments, 2014, 85(6): 064905. Zurück zum Zitat Y Liu, J Zhang. Model Study of the Influence of ambient temperature and installation types on surface temperature measurement by using a fiber Bragg grating sensor. Sensors, 2016, 16(7): 975. Y Liu, J Zhang. Model Study of the Influence of ambient temperature and installation types on surface temperature measurement by using a fiber Bragg grating sensor. Sensors, 2016, 16(7): 975. Zurück zum Zitat M Y Liu, E L Zhang, Z D Zhou, et al. Measurement of temperature field for the spindle of machine tool based on optical fiber Bragg grating sensors. Advances in Mechanical Engineering, 2013, 2: 940626. M Y Liu, E L Zhang, Z D Zhou, et al. Measurement of temperature field for the spindle of machine tool based on optical fiber Bragg grating sensors. Advances in Mechanical Engineering, 2013, 2: 940626. Zurück zum Zitat Y F Dong, Z D Zhou, Z C Liu, et al. Temperature field measurement of spindle ball bearing under radial force based on fiber Bragg grating sensors. Advances in Mechanical Engineering, 2015, 7(12): 1–6. Y F Dong, Z D Zhou, Z C Liu, et al. Temperature field measurement of spindle ball bearing under radial force based on fiber Bragg grating sensors. Advances in Mechanical Engineering, 2015, 7(12): 1–6. Zurück zum Zitat J Huang, Z D Zhou, M Y Liu, et al. Real-time measurement of temperature field in heavy-duty machine tools using fiber Bragg grating sensors and analysis of thermal shift errors. Mechatronics, 2015, 31: 16–21. J Huang, Z D Zhou, M Y Liu, et al. Real-time measurement of temperature field in heavy-duty machine tools using fiber Bragg grating sensors and analysis of thermal shift errors. Mechatronics, 2015, 31: 16–21. Zurück zum Zitat N S Kim, N S Cho. Estimating deflection of a simple beam model using fiber optic bragg-grating sensors. Experimental Mechanics, 2004, 44(4): 433–439. N S Kim, N S Cho. Estimating deflection of a simple beam model using fiber optic bragg-grating sensors. Experimental Mechanics, 2004, 44(4): 433–439. Zurück zum Zitat S J Chang, N S Kim. Estimation of displacement response from FBG strain sensors using empirical mode decomposition technique. Experimental Mechanics, 2012, 52(6): 573–589. S J Chang, N S Kim. 
Estimation of displacement response from FBG strain sensors using empirical mode decomposition technique. Experimental Mechanics, 2012, 52(6): 573–589. Zurück zum Zitat L H Kang, D K Kim, J H Han, Estimation of dynamic structural displacements using fiber Bragg grating strain sensors. Journal of Sound and Vibration, 2007, 305(3): 534–542. L H Kang, D K Kim, J H Han, Estimation of dynamic structural displacements using fiber Bragg grating strain sensors. Journal of Sound and Vibration, 2007, 305(3): 534–542. Zurück zum Zitat D Kang, W Chung. Integrated monitoring scheme for a maglev guideway using multiplexed FBG sensor arrays. NDT & E International, 2009, 42(4): 260–266. D Kang, W Chung. Integrated monitoring scheme for a maglev guideway using multiplexed FBG sensor arrays. NDT & E International, 2009, 42(4): 260–266. Zurück zum Zitat J C Yi, X J Zhu, H S Zhang, et al. Spatial shape reconstruction using orthogonal fiber Bragg grating sensor array. Mechatronics, 2012, 22(6): 679–687. J C Yi, X J Zhu, H S Zhang, et al. Spatial shape reconstruction using orthogonal fiber Bragg grating sensor array. Mechatronics, 2012, 22(6): 679–687. Zurück zum Zitat P Bosetti, S Bruschi. Enhancing positioning accuracy of CNC machine tools by means of direct measurement of deformation. The International Journal of Advanced Manufacturing Technology, 2012, 58(5-8): 651–662. P Bosetti, S Bruschi. Enhancing positioning accuracy of CNC machine tools by means of direct measurement of deformation. The International Journal of Advanced Manufacturing Technology, 2012, 58(5-8): 651–662. Zurück zum Zitat F Biral, P Bosetti, R Oboe, et al. A new direct deformation sensor for active compensation of positioning errors in large milling machines .9th IEEE International Workshop on Advanced Motion Control, Istanbul, Turkey, March 27-29, 2006: 126–131. F Biral, P Bosetti, R Oboe, et al. A new direct deformation sensor for active compensation of positioning errors in large milling machines .9th IEEE International Workshop on Advanced Motion Control, Istanbul, Turkey, March 27-29, 2006: 126–131. Zurück zum Zitat F Biral, P Bosetti. On-line measurement and compensation of geometrical errors for Cartesian numerical control machines . 9th IEEE International Workshop on Advanced Motion Control, Istanbul, Turkey, March 27-29, 2006: 120–125. F Biral, P Bosetti. On-line measurement and compensation of geometrical errors for Cartesian numerical control machines . 9th IEEE International Workshop on Advanced Motion Control, Istanbul, Turkey, March 27-29, 2006: 120–125. Zurück zum Zitat Y Liu, M Y Liu, C X Yi, et al. Measurement of the deformation field for machine tool based on optical fiber Bragg grating sensors . 2014 International Conference on Innovative Design and Manufacturing, Quebec, Canada, August 13-15, 2014: 222–226. Y Liu, M Y Liu, C X Yi, et al. Measurement of the deformation field for machine tool based on optical fiber Bragg grating sensors . 2014 International Conference on Innovative Design and Manufacturing, Quebec, Canada, August 13-15, 2014: 222–226. Zurück zum Zitat R Y Li, Y G Tan, Y Liu, et al. A new deformation measurement method for heavy-duty machine tool base by multipoint distributed FBG sensors . Applied Optics and Photonics, China: Optical Fiber Sensors and Applications (AOPC 2015), Beijing, China, May 5-7, 2015: 967903. R Y Li, Y G Tan, Y Liu, et al. A new deformation measurement method for heavy-duty machine tool base by multipoint distributed FBG sensors . 
Applied Optics and Photonics, China: Optical Fiber Sensors and Applications (AOPC 2015), Beijing, China, May 5-7, 2015: 967903. Zurück zum Zitat R Y Li, Y G Tan, L Hong, et al. A temperature-independent force transducer using one optical fiber with multiple Bragg gratings. IEICE Electronic Express, 2016, 13(10): 20160198. R Y Li, Y G Tan, L Hong, et al. A temperature-independent force transducer using one optical fiber with multiple Bragg gratings. IEICE Electronic Express, 2016, 13(10): 20160198. Zurück zum Zitat Y Li, Q Liu, R Tong, et al. Shared and service-oriented CNC machining system for intelligent manufacturing process. Chinese Journal of Mechanical Engineering, 2015, 28(6): 1100–1108. Y Li, Q Liu, R Tong, et al. Shared and service-oriented CNC machining system for intelligent manufacturing process. Chinese Journal of Mechanical Engineering, 2015, 28(6): 1100–1108. Zurück zum Zitat R Harrison, D Vera, B Ahmad. Engineering the smart factory. Chinese Journal of Mechanical Engineering, 2016, 29(6): 1046–1051. R Harrison, D Vera, B Ahmad. Engineering the smart factory. Chinese Journal of Mechanical Engineering, 2016, 29(6): 1046–1051. Zu-De Zhou Lin Gui Yue-Gang Tan Ming-Yao Liu Yi Liu Rui-Ya Li Research and Development Trend of Shape Control for Cold Rolling Strip Structure of Micro-nano WC-10Co4Cr Coating and Cavitation Erosion Resistance in NaCl Solution Exploring Challenges in Developing a Smart and Effective Assistive System for Improving the Experience of the Elderly Drivers Design and Dynamic Model of a Frog-inspired Swimming Robot Powered by Pneumatic Muscles Future Digital Design and Manufacturing: Embracing Industry 4.0 and Beyond
CommonCrawl
Does visual saliency affect decision-making?
Goran Milutinović, Ulla Ahonen-Jonnarth & Stefan Seipel
Journal of Visualization, Issue 6/2021. Regular Paper, Open Access, 11-06-2021.

A number of studies (e.g., Jarvenpaa 1990; Glaze et al. 1992; Lohse 1997; Speier 2006; Lurie and Mason 2007) have shown that more vividly presented information is likely to be acquired and processed before less vividly presented information. Increasing the use of salient information may come at the expense of ignoring other relevant information (Glaze et al. 1992), which may have significant implications in the context of decision-making. As far as we know, though, there are no previous studies in which the influence of visual saliency has been evaluated for its impact on performance, i.e., on the quality of choice in multi-criteria decision-making (MCDM). Indeed, this is true not only for visual saliency, but for the impact of almost any aspect of visualization on MCDM. One of the few exceptions is the study by Dimara et al. (2018), in which the authors attempt to evaluate three different visualization techniques (scatterplot matrix, parallel coordinates, and tabular visualization) for their ability to support decision-making tasks. They use a novel approach, defining the quality of decisions as the consistency between the choice made and the self-reported preferences for criteria. The authors observed no indication of differences between the visualization techniques. This, at least in part, may be due to the shortcomings of the method they used to elicit participants' preferences.

1.1 Objectives and research questions

The main goal of our study is to investigate potential effects of visual saliency on multi-criteria decision-making. Our first objective was to evaluate the effects of saliency on the outcome of a decision process, i.e., on the quality of decisions. The second objective was to evaluate in what way visual saliency may affect users' attention during the decision process. These objectives are achieved by answering the following research questions:

How do the introduced saliency modes (no saliency, color saliency, size saliency) compare with regard to quality of decisions?
How do the introduced saliency modes compare with regard to users' attention to the most preferred criterion?
How do the introduced saliency modes compare with regard to time spent on decision tasks?
How do the introduced saliency modes compare with regard to users' confidence in decisions?

To our knowledge, there are no previous studies on the impact of visual saliency on decision-making. In that respect, our study makes an important contribution to the research concerned with the role of visualization in the context of multi-criteria decision-making. Furthermore, we suggest an alternative method for the elicitation of users' preferences, which we believe improves the reliability of the presumably accurate ranking of alternatives. We use the same approach as suggested in Dimara et al. (2018) to obtain an indicative measure of the quality of decisions.
However, we use a different method, SWING weighting, to assess participants' preferences for criteria. In SWING weighting, preferences for criteria are obtained by considering the ranges of values in the criteria, instead of rating the importance of criteria without considering the values of actual alternatives.

2 Theoretical background

The terms necessary for understanding the concept of multi-criteria decision-making and decision tasks are explained in Sect. 2.1. In Sect. 2.2 we give some examples of how visualization is used in today's decision support systems, and in Sect. 2.3 we address relevant issues regarding the evaluation of visual decision support tools. We explain the concept of visual saliency and give a brief overview of studies concerning the impact of saliency on decision-making in Sect. 2.4.

2.1 Multi-criteria decision-making

The central task of multi-criteria decision-making, sometimes referred to as multi-criteria decision analysis (MCDA), is evaluating a set of alternatives in terms of a number of conflicting criteria (Zavadskas et al. 2014). Keeney and Raiffa (1993) define MCDA as "... a methodology for appraising alternatives on individual, often conflicting criteria, and combining them into an overall appraisal.", and summarize the paradigm of decision analysis in a five-step process:

1. Preanalysis. Identify the problem and the viable action alternatives.
2. Structural analysis. Create a decision tree to structure the qualitative anatomy of the problem: what are the choices, how they differ, what experiments can be performed, what can be learned.
3. Uncertainty analysis. Assign probabilities to the branches emanating from chance nodes.
4. Utility analysis. Assign utility values to consequences associated with paths through the tree.
5. Optimization analysis. Calculate the optimal strategy, i.e., the strategy that maximizes expected utility.

Multi-criteria decision-making is often classified as either multi-attribute (MADM) or multi-objective (MODM). Colson and de Bruyn (1989) define MADM as "...concerned with choice from a moderate/small size set of discrete actions (feasible alternatives)", while MODM is defined as the method that "... deals with the problem of design (finding a Pareto-optimal solution) in a feasible solution space bounded by the set of constraints". One of the most popular MADM methods is the Analytic Hierarchy Process (AHP) (Saaty 1980), a method based on decomposition of a decision problem into a hierarchy (goal, objectives, criteria, alternatives), pairwise comparisons of the elements on each level of the hierarchy, and synthesis of priorities. Ideal point methods, such as the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) (Hwang and Yoon 1981), evaluate alternatives in relation to a specific target or goal (ideal point). Another frequently used family of methods are outranking methods, such as ELECTRE (Benayoun et al. 1966) and PROMETHEE (Brans and Vincke 1985), which are based on pairwise comparison of alternatives for each criterion. Weighted Linear Combination (WLC) and its extension Ordered Weighted Averaging (OWA) are methods based on simple additive summation of the products of criteria weights and criteria values for each alternative.

It is important to emphasize that we in this paper use the term criteria weight for the weight coefficients of the utility functions of criteria. These criteria weights are scaling constants as described in Keeney and Raiffa (1993).
The basis for the criteria weights is participants' preference evaluations of criteria ranges, and thus not a ranking of criteria or answers to questions about the importance of criteria. The calculation of the weight coefficients of the utility functions is explained in Sect. 3.7. In this paper, we refer to the criterion with the highest weight as the most preferred criterion.

In this study we are concerned with visualization as a support for multi-criteria decision-making, where visual features are used to represent the alternatives in the attribute space. Regardless of the decision method used in a particular decision task, visualization can help the decision-maker to get insight into the distribution of alternatives, to get a better understanding of the relations between criteria and of potential trends that are difficult to detect in raw data, to detect potential outliers which may lead to reassessment of the criteria weights, etc.

2.2 Use of visualization in decision support systems

Virtually all of today's decision support systems rely in one way or another on interactive visualizations to present not only a decision space with available alternatives or outcomes but also more abstract variables, such as criteria weights, utility differences between different outcomes, and the decision-maker's preferences. Dimara et al. (2018) listed a number of decision support tools designed to aid multi-criteria choice using different visualizations, such as parallel coordinates (Riehmann et al. 2012; Pu and Faltings 2000; Pajer et al. 2017), scatterplots or scatterplot matrices (Pu and Faltings 2000; Ahlberg and Shneiderman 2003; Elmqvist et al. 2008), or tabular visualizations (Carenini and Loyd 2004; Gratzl et al. 2013). Many recently developed decision support tools use combinations of the mentioned visualizations for different purposes. PriEsT (Siraj et al. 2015), based on the Analytic Hierarchy Process (AHP) (Saaty 1980), uses table views and graph views to show inconsistencies in the decision-maker's judgments regarding the importance of criteria (judgments which violate the transitive property of ratio judgments are considered inconsistent). Pareto Browser (Vallerio et al. 2015) uses three-dimensional graphs to visualize the Pareto front, two-dimensional graphs for states and controls, scatterplots for visualization of objective functions, and parallel coordinates for visualization of Pareto optimal solutions. Visual GISwaps (Milutinovic and Seipel 2018), a domain-specific tool for geo-spatial decision-making, uses interactive maps to visualize alternatives in geographical space, a scatterplot to visualize alternatives in attribute space, and a multi-line chart for visual representation of trade-off value functions. Apart from the mentioned visual representations, other visualizations have been used in the decision-making context. Decision Ball (Li and Ma 2008) is a model based on the even swaps method (Hammond et al. 1998); it visualizes a decision process as moving trajectories of alternatives on spheres. VIDEO (Kollat and Reed 2007) uses a 3D scatterplot to visualize up to four dimensions, where the fourth dimension is color coded, and in AHP-GAIA (Ishizaka et al. 2016), an n-star graph view is used to visualize the decision-maker's preferences.

2.3 Evaluation issues

Regardless of what method or tool is used as support in a decision-making process, the outcome is ultimately dependent on the decision-maker's preferences, expectations, and knowledge.
The fact that decision tasks by definition do not come with an objectively best alternative makes comparative evaluations of these tools and methods difficult, as there exists no generally best outcome, nor are there reliable metrics for measuring their efficiency. Evaluation of visual decision support tools is even more difficult, as evaluating visualizations is in itself a demanding task. This is one of the main reasons that such non-comparative evaluations are usually performed through qualitative studies, focusing on user opinion and perception (e.g., Pajer et al. 2017; Salter et al. 2009; Andrienko and Andrienko 2003; Jankowski et al. 2001). Andrienko et al. (2003) used a process tracing-based approach to evaluate tools and techniques in CommonGIS, observing the participants while working with appropriate tools for different tasks. Arciniegas et al. (2011) performed an experiment to assess usefulness and clarity of tool information in a set of collaborative decision support tools. The assessment was based on participants' ratings of their experience with the tool as well as their answers to a number of questions related to their understanding of the tool. In Gratzl et al. (2013) an experimental study was used for qualitative evaluation of LineUp, a visualization technique based on bar charts. The tool was evaluated using a 7-point Likert scale, based on the questionnaire provided to the participants.

Use of quantitative evaluation methods is more common in comparative studies. For example, in Carenini and Loyd (2004) a quantitative usability study was performed to compare two different versions of ValueCharts based on user performance in terms of completion time and the quality of choices on low-level tasks. Andrienko et al. (2002) performed a quantitative study to test five different geovisualization tools implemented in CommonGIS for learnability, memorability and user satisfaction. Even when quantitative methods are used in evaluations of MCDM decision support tools and methods, objective measurement of performance is rarely used to assess the effectiveness of a tool, as there are no objective metrics for measuring the quality of a choice, and constructing reliable performance metrics is an extremely demanding and difficult task. The only study known to us in which such a performance metric was used to assess the impact of a decision support tool on the quality of decisions was presented in Arciniegas et al. (2013). In their study, the authors measured the impact on decisions of three different decision support tools. Quality of choice was used as the metric to assess the impact on decisions. The quality of a choice was determined by comparing the made choice with the utility values of different choices based on expert judgment. However, one obvious problem with this approach is that the participants' preferences and knowledge were not taken into consideration. Instead, an objective ranking of the different choices, and thus the existence of an objectively best choice, is assumed. It may then be argued that the task performed by the participants was not a proper decision-making task; it was de facto to find the best solution, rather than to make an informed choice.

2.4 Visual saliency

Looking at Fig. 1 exemplifies that attention will most certainly be drawn to the green circle in image 1, and to the larger circle in image 2. This is because those two visual elements differ from their surroundings—they pop out.
Indeed, visual attention is attracted to parts of an image which differ from their surroundings, be it in color, contrast, intensity, speed or orientation of movement, etc. This attraction, which is the effect of bottom-up visual selective attention, is unrelated to the actual relevance of the salient object—it is not voluntary, but purely sensory-driven.

Fig. 1 The green circle in image 1 and the larger circle in image 2 are likely to attract the viewer's attention

Psychophysical and physiological aspects of visual attention have been the subject of many studies (e.g., Koch and Ullman 1985; Moran and Desimone 1985; Treisman and Gelade 1980; Treisman 1988; Treisman and Sato 1990; Desimone and Duncan 1995). Koch and Ullman (1985) suggest that early selective visual attention emerges from selective mapping from the early representation into a non-topographic central representation. The early representation consists of different topographic maps, in which elementary features, such as color, orientation, direction of movement, etc., are represented in parallel. At any instant, the central representation contains the properties of a single location in the scene—the selected location.

2.4.1 Saliency maps

The concept of the saliency map was first introduced in Koch and Ullman (1985), on the assumption that the conspicuity of a location in a scene determines the level of activity of the corresponding units in the elementary maps. An early model of saliency-based visual attention for rapid scene analysis by Itti et al. (1998) was built on this strict hypothesis of a saliency map, that low-level visual features attract visual attention and determine eye movements in the initial inspection of a scene, regardless of cognitive demands. In Itti and Koch (2001), however, the authors argue that a more advanced attentional control model must also include top-down, i.e., cognition-based, influences, as a simple architecture based solely on bottom-up selective attention can only describe the deployment of attention within the first few hundreds of milliseconds. A majority of researchers today agree that both top-down and bottom-up processes influence the allocation of attention. However, there is no agreement regarding the question of to what extent those processes influence attentional selections. The results of an experiment deploying eye-tracking, presented in Underwood et al. (2006), confirmed that the observer's goals and expectations do influence fixation patterns, and that task demands can override the saliency map. The study presented in Donk and van Zoest (2008) showed similar results. The authors found that saliency is not persistently represented in the visual system, but only for a few hundreds of milliseconds. After this interval has passed, the visual system only holds information concerning object presence, but not information concerning the relative salience of objects, and top-down control overrides bottom-up control. Parkhurst et al. (2002), on the other hand, reported different results: while attention was most stimulus-driven just after a visual content was presented, it remained stimulus-driven to a smaller extent even after the activation of top-down influences. Similarly, the analysis presented in Orquin et al. (2018) showed that bottom-up and top-down processes do not operate in different time windows, but are active simultaneously.
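The center-surround contrast principle underlying such saliency maps can be illustrated with a small computational sketch. The following JavaScript function is a toy illustration only: it works on a single intensity channel and scores each pixel by how much its value deviates from the mean of its neighborhood, normalizing the result to [0, 1]. A full model such as that of Itti et al. (1998) combines several feature channels (color, intensity, orientation) across multiple scales; the function and parameter names below are ours, not taken from any published implementation.

```javascript
// Toy center-surround saliency: each pixel's conspicuity is the absolute
// difference between its intensity and the mean intensity of its surrounding
// neighborhood, normalized so the most conspicuous location gets value 1.
// This sketches the contrast idea only; it is not the Itti-Koch model.
function toySaliencyMap(intensity, radius = 2) {
  const rows = intensity.length;
  const cols = intensity[0].length;
  const saliency = intensity.map(row => row.map(() => 0));
  let maxVal = 0;

  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      let sum = 0;
      let count = 0;
      // Mean of the surrounding neighborhood (the "surround")
      for (let dy = -radius; dy <= radius; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          const ny = y + dy, nx = x + dx;
          if (ny >= 0 && ny < rows && nx >= 0 && nx < cols && !(dy === 0 && dx === 0)) {
            sum += intensity[ny][nx];
            count++;
          }
        }
      }
      const surroundMean = count > 0 ? sum / count : 0;
      const value = Math.abs(intensity[y][x] - surroundMean); // "center" vs. "surround"
      saliency[y][x] = value;
      if (value > maxVal) maxVal = value;
    }
  }
  return saliency.map(row => row.map(v => (maxVal > 0 ? v / maxVal : 0)));
}
```

In such a sketch, a bright dot on a dark background or an isolated high-contrast region receives a value near 1, which is the kind of pop-out effect the bottom-up models described above aim to capture.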
2.4.2 Saliency and decision-making

In an early study concerning the impact of visual saliency on decision-making, Glaze et al. (1992) found that the vividness of graphic information may increase its use in decision-making, and that components of decision-making that are most accessible, i.e., most clearly addressed by the information, are likely to be the focus of decision-making. The assessment of the impact of framing effects on decision-making presented in Lurie and Mason (2007) showed that visual saliency moderates the effect of positive versus negative frames on judgment. An interesting finding presented in this study was that the attraction effect is more likely to influence decision-making if the visual representation displays information by criteria rather than by alternative. The influence of criteria saliency in graphical representations was also demonstrated in Sun et al. (2010). Kelton et al. (2010) found that information presentation can affect the decision-maker by influencing his/her mental representation of the problem, and by influencing his/her characteristics such as involvement and task knowledge. A study by Orquin et al. (2018) showed that visual biases such as saliency may lead decision-makers to focus their attention in ways that are arbitrary to their decision goals. The results of experiments presented in Lohse (1997) demonstrated the importance of attention for choice behavior. The authors found that consumers choosing businesses from telephone directories viewed color ads 21% longer than non-color ones, and that they viewed 42% more bold listings than plain listings, spending on average 54% more time viewing ads for businesses they ended up choosing. Similar results were obtained in Milosavljevic et al. (2012), showing that, when making fast decisions, visual saliency influences choices more than preferences do, and that the bias is particularly strong when the preferences among the options are weak.

The study is based on a user performance experiment, carried out in order to obtain data for rigorous quantitative analysis. Participants worked on a simple multi-criteria decision task using a web application developed for the purpose. In this section, we present the decision problem (3.1), the experiment design (3.2), the data sets (3.3), a brief overview of the web application structure and features (3.4), the type of collected data (3.5), the details of the visual representations used in the evaluation (3.6), and the explanation of the performance metrics used to assess choice quality (3.7).

3.1 Decision problem scenario

When choosing a decision task for evaluation studies, it is first and foremost important to provide a task to which all participants can relate. The decision task we used in this study was to choose a hotel for a holiday stay. Participants were presented with 50 different alternatives, i.e., 50 hotels, and asked to choose the most preferred alternative. Regarding the complexity of the task in terms of the number of criteria, we opted to keep it low, as increased complexity has been shown to lead to the use of simplifying decision strategies (Timmermans 1993). Payne (1976) found that increased complexity often leads to decision-makers resorting to heuristics, such as elimination-by-aspects. In the present study, each alternative was described in terms of five criteria: Price, Distance to city center, Cleanliness, Service and Breakfast.
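To make the structure of the task concrete, each alternative can be thought of as a record holding one value per criterion, together with the direction of preference for that criterion (the directions follow the data set description in Sect. 3.3). The snippet below is a purely hypothetical illustration: the hotel name and the values are invented and do not correspond to any entry in the actual data sets.

```javascript
// Criterion names and preference directions as used in the study; the example
// alternative is invented for illustration only.
const criteria = [
  { name: "Price",       lowerIsBetter: true  },   // price per night
  { name: "Distance",    lowerIsBetter: true  },   // distance to city center
  { name: "Cleanliness", lowerIsBetter: false },
  { name: "Service",     lowerIsBetter: false },
  { name: "Breakfast",   lowerIsBetter: false }
];

const exampleAlternative = {
  name: "Hotel A",                                   // hypothetical
  Price: 95, Distance: 1.2,                          // lower is better
  Cleanliness: 8.4, Service: 7.9, Breakfast: 8.1     // higher is better
};
```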
3.2 Experiment

The experiment was run on the Amazon Mechanical Turk crowd-sourcing platform. A total of 153 participants took part in the experiment. We did not impose any requirements regarding participants' background, knowledge or skills. At the beginning of the experiment, participants were presented with an explanation of the process of assigning SWING rating values to virtual alternatives (see Sect. 3.7). They were then asked to assign rating values to the virtual alternatives representative of both data sets. Those rating values were then used to calculate criteria weights based on the participants' preferences. After completing the rating process, participants were presented with an explanation and examples of either parallel coordinates or scatterplot matrices, depending on which of the two techniques was randomly assigned first. After getting familiar with the technique, participants proceeded to the first task. After completing the first task, the participants were familiarized with the second technique and then performed the second task. After completing both tasks, the participants answered a questionnaire.

The experiment followed a two-factor design with visualization as a within-subject factor and saliency as a between-subject factor. In order to counterbalance the order of the within-subject factor and to maintain comparable group sizes across the between-subject factor, participants were quasi-randomly assigned to one of the following test sequences:

PC with no saliency (PC_N) followed by SPM with no saliency (SPM_N)
SPM with no saliency (SPM_N) followed by PC with no saliency (PC_N)
PC with color saliency (PC_C) followed by SPM with color saliency (SPM_C)
SPM with color saliency (SPM_C) followed by PC with color saliency (PC_C)
PC with size saliency (PC_S) followed by SPM with size saliency (SPM_S)
SPM with size saliency (SPM_S) followed by PC with size saliency (PC_S)

3.3 Data sets

One potential issue with participants working on the same decision task using different visualization techniques is a possible impact of learning bias. In order to avoid it, we used two different data sets. The list of hotels, as well as the relevant information regarding price and location, was obtained through the Trivago web site. Values in terms of price were stated in Euro (the less, the better), and values in terms of distance were given in kilometers (the closer, the better). Values in terms of the remaining three criteria, obtained from the TripAdvisor web site, were expressed as ratings on a scale from 1 to 10 (the higher, the better). The first data set contained fifty alternatives (hotels) in Berlin, Germany, and it was used when the participants worked with parallel coordinates. The second set contained fifty hotels in London, UK, and it was used when the participants worked with scatterplot matrices. Minor adjustments to the values in the second data set were made in order to fit them into the same ranges of values across the criteria as in the first data set.

3.4 Software

The web application used in this study was implemented using the D3.js JavaScript library. It consists of three conceptual units. The first unit is the preference assessment unit, used to elicit a participant's preferences, which are then used to calculate the weight for each criterion (Fig. 2). These weights are used to calculate utility values for the alternatives (see Sect. 3.7). The decision unit is the main unit, where participants make their choices.
There are six different visual representations of the decision space: PC, PC with color saliency, PC with size saliency, SPM, SPM with color saliency, and SPM with size saliency. Finally, the choice assessment unit is used to obtain a participant's own subjective assessment of the made choice.

Fig. 2 The preference assessment unit

3.5 Data collection

The saved data for each participant include:

The selected alternative.
Utility values of all alternatives, calculated based on the participant's preference assessment (see Sect. 3.7).
The time that the participant spent actively choosing the most preferred alternative.
An ordered detail-on-demand sequence, containing all the alternatives on which the participant clicked.
A binary value for each click. For scatterplot matrices: 1 if the click occurred inside a scatterplot concerning the most preferred criterion; 0 otherwise. For parallel coordinates: 1 if the click occurred closer to the coordinate representing the most preferred criterion than to any other coordinate; 0 otherwise.
The technique with which the participant worked first (PC or SPM).

as well as how confident, on a scale 1–10, the participant is that he/she:

understood the decision task.
understood the process of rating virtual alternatives.
understood parallel coordinates and used them correctly.
understood scatterplot matrices and used them correctly.
made the best possible choice with parallel coordinates.
made the best possible choice with scatterplot matrices.

3.6 Visual representation

In our implementation, we use the full matrix for scatterplot matrices, and Inselberg's (Inselberg 1985) original representation of parallel coordinates, where parallel axes represent criteria (dimensions) and polylines represent alternatives. The point at which a polyline intersects an axis represents the value of the alternative represented by the polyline in terms of the criterion represented by the axis. To avoid visual clutter and to utilize screen estate, axes were automatically scaled to the value ranges in the dataset, both for scatterplots and parallel coordinates. We used a static layout with no interactive reordering of axes, rows, and columns, not least to minimize biasing factors between subjects. Visual appearance is consistent in terms of size and color across all six visualizations (compare Sect. 3.2). The default color for alternatives, polylines in PC and dots in SPM, was a medium light yellow with the coordinates [44°, 0.98, 0.55] in terms of the HSL color space. In the visualizations where saliency was used to emphasize the most preferred criterion, either a deviating color or size was used to mark alternatives along the corresponding criterion axes (both in PC and SPM).

3.6.1 Salient color

Fig. 3 The decision unit using parallel coordinates (left) and scatterplot matrices (right) with color saliency. Blue marks the most preferred criterion

For the visualizations deploying color saliency, we chose to show values of alternatives with respect to the most preferred criterion in blue. The choice of blue as the salient color is motivated by the fact that it does not have any apparently misleading connotation in the context of the decision task. Also, according to opponent color theory, blue is well contrasted against the default color yellow. To assure comparable contrast with the white background, a lightness value close to that of the yellow was chosen for the blue.
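As an illustration of how such a yellow-to-blue color scheme can be computed, the sketch below linearly interpolates HSL coordinates between the default yellow reported above and a salient blue, and emits CSS color strings, for instance for gradient stops along a PC polyline segment that approaches the most preferred axis. This is a simplified illustration and not the study's actual D3.js implementation: the blue endpoint is assumed here to be the salient blue specified in the following paragraph, and interpolating the hue as a plain number is only one of several possible paths around the hue circle.

```javascript
// Minimal sketch: linear interpolation in HSL between the default yellow and
// an assumed salient blue (the study's blue is given in the next paragraph).
const defaultYellow = { h: 44,  s: 0.98, l: 0.55 };
const salientBlue   = { h: 240, s: 0.97, l: 0.59 };

function mixHsl(c0, c1, t) {               // t = 0 -> c0, t = 1 -> c1
  const lerp = (a, b) => a + (b - a) * t;
  return { h: lerp(c0.h, c1.h), s: lerp(c0.s, c1.s), l: lerp(c0.l, c1.l) };
}

function toCss({ h, s, l }) {
  return `hsl(${h.toFixed(1)}, ${(s * 100).toFixed(0)}%, ${(l * 100).toFixed(0)}%)`;
}

// Example: five gradient stops from a neighboring axis (t = 0) toward the
// axis of the most preferred criterion (t = 1).
const stops = [0, 0.25, 0.5, 0.75, 1].map(t => toCss(mixHsl(defaultYellow, salientBlue, t)));
console.log(stops); // ["hsl(44.0, 98%, 55%)", ..., "hsl(240.0, 97%, 59%)"]
```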
In the color literature, saturation is often discussed as a perceptual dimension of color that is associated with uncertainty of a variable (for a comprehensive overview, see, e.g., Seipel and Lim 2017). Therefore we also maintained an almost equal saturation level for the blue color of the salient criterion, which has the coordinates [240°, 0.97, 0.59] in terms of the HSL color space. For SPM, the alternatives (dots) are simply colored blue in scatterplots concerning that criterion. In PC, since the representation of an alternative is a continuous polyline, we opted for a linear transition from the default yellow to the blue color, starting from the neighboring axes toward the axis representing the most preferred criterion (Fig. 3).

We used the Data Visualization Saliency (DVS) model by Matzen et al. (2017) to assess whether our color enhanced representation is suitable for the purpose, i.e., whether visually emphasized areas would draw a viewer's attention as intended. We chose this model as it is tailored to perform well for abstract data visualizations. It has also been shown to agree well with experimental validation using eye-tracking data (Matzen et al. 2017). Saliency maps of our color enhanced visualization obtained by applying the DVS model are shown in Fig. 4.

Fig. 4 Saliency maps for parallel coordinates and scatterplot matrices using color saliency

3.6.2 Salient size

For parallel coordinates, the most preferred criterion is accentuated by increasing the size of its axis by 100% compared with the size of the coordinates representing the other four criteria. For scatterplot matrices, for each scatterplot concerning the most preferred criterion, the axis on which that criterion is plotted is increased in length by 100% compared to the remaining axes. Furthermore, the size of dots in the plots concerning the most preferred criterion is set to 4 pixels, compared with a dot size of 3 pixels for the remaining plots. One example of each visualization is shown in Fig. 5.

Fig. 5 The decision unit using parallel coordinates (left) and scatterplot matrices (right) with size saliency

3.6.3 Interaction

During the pilot studies prior to the experiment, we noticed that a majority of participants, regardless of the visualization technique and the saliency enhancement they were working with, concentrated almost exclusively on the filtering feature and made their choices by adjusting the thresholds until a single alternative was left. For that reason, although data filtering (PC and SPM) and dimension reordering (PC) are useful and frequently used interaction features, we opted not to enable them in the final version of the web application used in the experiment.

3.7 Performance metrics

Due to the subjective nature of decision-making, there is never an objectively best outcome, i.e., an outcome which would be best for every decision-maker. In addition, the quality of a choice is difficult to assess accurately. Dimara et al. (2018) calculated desirability scores representing the consistency between a participant's choice and his/her self-reported preferences as an indicative measure of accuracy. We deploy the same principle; however, we use a different method to elicit participants' preferences. Dimara et al. (2018) used ratings of criteria importance (0–10) to calculate criteria weights. Comparing the importance of different criteria without considering the actual degree of variation among the alternatives has been criticized by many (e.g., Hammond et al. 1998; Keeney 2013; Korhonen et al. 2013).
It introduces a level of abstraction which, together with the possibility of participants not being able to perfectly express their preferences, is likely to introduce further noise into the accuracy metrics, as pointed out by Dimara et al. (2018). To eliminate this level of abstraction and minimize the risk of further biases, we use SWING weighting (Parnell 2009), which considers the value ranges in the criteria, to collect data about participants' preferences. These data are then used to calculate weight coefficients for the utility functions (criteria weights) of all criteria (see Clemen and Reilly 2013).

SWING weighting is based on a comparison of \(n+1\) hypothetical alternatives, where n is the number of criteria. One of the alternatives, the benchmark alternative, has the worst value in terms of all n criteria, and its grading value is set to zero. Each of the remaining n alternatives has the best value in terms of one of the criteria, and the worst value in terms of the others. The decision-maker assigns a grading value of 100 to the most preferred alternative. In the example in Fig. 6, it is alternative A2. Then the decision-maker assigns the grading values for the other alternatives in a way that reflects his or her preferences. In the example, the decision-maker assigned the following values: A1: 85; A2: 100; A3: 60; A4: 40; A5: 75.

Fig. 6 The preference assessment unit after a participant has assigned grading values to the alternatives

The grading values of the virtual alternatives, \(g_i\), are used to calculate the weight coefficients by normalization (values between 0 and 1),

$$w_i = \frac{g_i}{\sum _{j=1}^{n}g_j} \quad (1)$$

where n is the number of criteria. For example, the weight coefficient for the utility function of the criterion Price in the example above is

$$w_{Price} = \frac{85}{100+85+75+60+40} = 0.24 \quad (2)$$

We assume that the utility is linear for all criteria and calculate the utility values of the actual alternatives by normalizing the criteria values, \(v_i\). For Cleanliness, Service and Breakfast, the utility values of the alternative a are obtained as

$$u_i(a) = \frac{v_i(a) - v_{i_\mathrm{min}}}{v_{i_\mathrm{max}} - v_{i_\mathrm{min}}} \quad (3)$$

and for Price and Distance to city center, which are "the less, the better" types of criteria, the rescaled values are calculated as

$$u_i(a) = 1 - \frac{v_i(a) - v_{i_\mathrm{min}}}{v_{i_\mathrm{max}} - v_{i_\mathrm{min}}} \quad (4)$$

where \(v_{i_\mathrm{max}}\) is the maximum value for criterion i, and \(v_{i_\mathrm{min}}\) is the minimum value for criterion i. The weighted summation method is then used to calculate the total utility value u for each alternative as

$$u(a) = \sum _{i=1}^{n}w_iu_i(a) \quad (5)$$

For our evaluations we use two metrics, denoted Q and R. The value of Q expresses how consistent the participant's choice is with his/her self-reported preferences. It expresses the closeness between the selected alternative A and the alternative H that has the highest utility value based on Eqs. (3)–(5). Q is calculated as the proportion of the total utility of the selected alternative, \(u_A\), out of the total utility of the best alternative according to the participant's preferences, \(u_H\), i.e.,

$$Q = \frac{u_A}{u_H} \quad (6)$$

As such, Q is indicative of the quality of choice. The value of R is calculated considering only the most preferred criterion.
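Before turning to the details of R, the computations described so far (SWING-based weights, linear utilities, weighted summation, and the quality metric Q, i.e., Eqs. 1 and 3-6) can be summarized in a short script. The sketch below is plain JavaScript and is not the study's implementation; the three alternatives are hypothetical, and only the grading values for Price (85) and Distance (100) are taken from the worked example above, while the assignment of the remaining grades to specific criteria is assumed for illustration.

```javascript
// Minimal sketch of the performance-metric pipeline: SWING grading values ->
// weights (Eq. 1), min-max normalized linear utilities (Eqs. 3-4), weighted
// summation (Eq. 5), and the quality metric Q (Eq. 6).
const criteria = [
  { name: "Price",       lowerIsBetter: true  },
  { name: "Distance",    lowerIsBetter: true  },
  { name: "Cleanliness", lowerIsBetter: false },
  { name: "Service",     lowerIsBetter: false },
  { name: "Breakfast",   lowerIsBetter: false }
];

// Only Price (85) and Distance (100) follow the worked example in the text;
// the remaining grade-to-criterion assignments are assumed for illustration.
const grades = { Price: 85, Distance: 100, Cleanliness: 75, Service: 60, Breakfast: 40 };

// Eq. (1): weights as normalized grading values
function weights(grades) {
  const total = Object.values(grades).reduce((a, b) => a + b, 0);
  return Object.fromEntries(Object.entries(grades).map(([k, g]) => [k, g / total]));
}

// Eqs. (3)-(4): linear utility via min-max normalization over the data set,
// reversed for "the less, the better" criteria
function utility(value, min, max, lowerIsBetter) {
  const u = (value - min) / (max - min);
  return lowerIsBetter ? 1 - u : u;
}

// Eq. (5): total utility of one alternative
function totalUtility(alt, w, ranges) {
  return criteria.reduce((sum, c) =>
    sum + w[c.name] * utility(alt[c.name], ranges[c.name].min, ranges[c.name].max, c.lowerIsBetter), 0);
}

// Eq. (6): Q = total utility of the selected alternative / highest total utility
function qualityQ(alternatives, selectedIndex, grades) {
  const w = weights(grades);
  const ranges = {};
  for (const c of criteria) {
    const vals = alternatives.map(a => a[c.name]);
    ranges[c.name] = { min: Math.min(...vals), max: Math.max(...vals) };
  }
  const u = alternatives.map(a => totalUtility(a, w, ranges));
  return u[selectedIndex] / Math.max(...u);
}

// Hypothetical mini data set (three alternatives instead of fifty):
const alternatives = [
  { Price: 95,  Distance: 0.8, Cleanliness: 8.4, Service: 7.9, Breakfast: 8.1 },
  { Price: 60,  Distance: 4.5, Cleanliness: 7.0, Service: 6.5, Breakfast: 6.0 },
  { Price: 140, Distance: 0.3, Cleanliness: 9.2, Service: 9.0, Breakfast: 8.8 }
];
console.log(weights(grades).Price.toFixed(2));   // "0.24", matching Eq. (2)
console.log(qualityQ(alternatives, 0, grades));  // Q if the first alternative is chosen
```

Note that the per-criterion score R introduced next is simply the normalized utility \(u_i(a)\) of the chosen alternative on the most preferred criterion, so it can be obtained with the same utility function used in this sketch.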
The R-value is based on the highest and the lowest values of that criterion, \(v_{i_\mathrm{max}}\) and \(v_{i_\mathrm{min}}\), respectively, and on the value, in terms of that criterion, of the alternative the participant selected, \(v_i(a)\). For example, if a participant chose the hypothetical alternative A2 from the example in Fig. 2 as the best one, R is calculated for the criterion Distance. The best value for Distance is the lowest value, \(v_{D_\mathrm{min}} = 0.1\) km, and the worst value is the largest value, i.e., \(v_{D_\mathrm{max}} = 13.2\) km. Suppose that the value for Distance is 0.8 km for the alternative which the participant selected, i.e., \(v_D(a) = 0.8\) km. In this example R is calculated as

$$R = 1 - \frac{v_D(a) - v_{D_\mathrm{min}}}{v_{D_\mathrm{max}} - v_{D_\mathrm{min}}} = 1 - \frac{0.8 - 0.1}{13.2 - 0.1} = 0.95 \quad (7)$$

When the highest criterion value is the best one (Cleanliness, Service and Breakfast), R is calculated as

$$R = \frac{v_i(a) - v_{i_\mathrm{min}}}{v_{i_\mathrm{max}} - v_{i_\mathrm{min}}} \quad (8)$$

In other words, R measures the score of the chosen alternative a in terms of the most preferred criterion, and as such, it is indicative of the participant's attachment to that criterion. It is important to note that R does not tell us anything about the total utility of a.

Prior to the experiment, we carried out a pilot study. The results of the pilot study and post-experiment conversations with the pilot participants indicated that the minimum time needed to complete a task was twenty seconds per decision scenario. Based on that, we decided that the results for 20 out of the 153 participants, who spent less than twenty seconds working on either of the two tasks, could not be considered reliable and should be discarded. Of the remaining participants, 44 worked with the plain representation, 45 worked with the representation with color saliency, and 44 worked with the representation with size saliency.

Statistical analysis of the results was carried out using an estimation approach instead of the commonly used null hypothesis significance testing, offering nuanced interpretations of results (see Cumming 2014; Dragicevic 2016). Our estimations are based on confidence intervals and effect sizes. We followed recommendations by Cumming (2014), based partly on Coulson et al. (2010), and neither reported nor made any conclusions based on p values. We used R for inferential statistics, with the bootES package (Kirby and Gerlanc 2013) for calculation of bootstrap confidence intervals. For calculations and plotting, we used modified R code developed by Dimara et al. (2018), available at https://aviz.fr/dm. Inferential statistics with regard to decision quality are given in Sect. 4.1, with regard to participants' attention in Sect. 4.2, with regard to time in Sect. 4.3, and with regard to participants' perception of the techniques and confidence in Sect. 4.4.

4.1 Decision quality

No noticeable difference in performance was observed between the groups working with scatterplot matrices with different saliency modes. For parallel coordinates, the results showed clearly better performance in the group working with color saliency, compared to the group working with the basic visualization with no saliency and the group working with size saliency. For PC_C – PC_N, the average increase in decision quality was 0.135, and with 95% probability not lower than 0.043.
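The interval statements of the form "with 95% probability not lower than ..." used here and in the remainder of this section are lower bounds of bootstrap confidence intervals, which the study computed with the bootES package in R. The following plain-JavaScript sketch illustrates the general idea of a percentile bootstrap confidence interval for a difference in group means; it uses made-up data and our own function names, and is a schematic illustration of the technique rather than the analysis code used in the study.

```javascript
// Schematic percentile-bootstrap CI for the difference in group means
// (e.g., Q-values under color saliency vs. no saliency). Purely illustrative;
// the study's actual analysis used the bootES package in R.
function bootstrapMeanDiffCI(groupA, groupB, nBoot = 10000, level = 0.95) {
  const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
  const resample = xs => xs.map(() => xs[Math.floor(Math.random() * xs.length)]);

  const diffs = [];
  for (let i = 0; i < nBoot; i++) {
    diffs.push(mean(resample(groupA)) - mean(resample(groupB)));
  }
  diffs.sort((a, b) => a - b);

  const alpha = (1 - level) / 2;
  return {
    observed: mean(groupA) - mean(groupB),
    lower: diffs[Math.floor(alpha * nBoot)],
    upper: diffs[Math.ceil((1 - alpha) * nBoot) - 1]
  };
}

// Hypothetical usage with made-up Q-values for two groups:
const qColor = [0.81, 0.77, 0.92, 0.68, 0.85];
const qNone  = [0.64, 0.71, 0.59, 0.66, 0.73];
console.log(bootstrapMeanDiffCI(qColor, qNone));
```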
For PC_C – PC_S, the average increase was 0.122, and with 95% probability not lower than 0.034 (Figs. 7, 8).

Fig. 7 Q-value means for each saliency mode (no saliency, color saliency, and size saliency) for parallel coordinates (PC) and scatterplot matrices (SPM), respectively

Fig. 8 Mean differences in Q-value between each pair of saliency modes for PC and SPM, respectively. Confidence intervals 95%

A comparison of the results for the within-subject variable visualization (parallel coordinates and scatterplot matrices) across all types of saliency reveals a clear difference. Participants performed noticeably better when working with parallel coordinates compared with the scatterplot matrix (Figs. 9, 10).

Fig. 9 Q-value means for parallel coordinates and scatterplot matrices, regardless of saliency mode

Fig. 10 Mean difference in Q-values between parallel coordinates and scatterplot matrices. Confidence interval 95%

4.2 Attention

Attention to salient parts of a visualization can be measured with gaze tracking. However, due to the design of our study as a web experiment, this is not a viable approach. We therefore characterize users' attention indirectly in terms of their attachment to the most preferred criterion (R), and by quantifying their interaction with the visualization close to this attribute.

Results for the R-value of the chosen alternative show a similar pattern as the results regarding decision quality. There is a strong indication of a difference in R-value between participants working with parallel coordinates with color saliency and participants working with parallel coordinates with no saliency. The average increase in R for PC_C – PC_N is 0.092, and with 95% probability not lower than 0.002. However, there is no noticeable difference between color saliency and size saliency. For the visualizations with scatterplot matrices there are no clearly evident differences between the modes of saliency (Figs. 11, 12). A comparison of the results for the within-subject variable visualization (parallel coordinates and scatterplot matrices) across all types of saliency shows no notable differences (Figs. 13, 14).

Fig. 11 R-value means for each saliency mode (no saliency, color saliency, and size saliency) for parallel coordinates (PC) and scatterplot matrices (SPM), respectively

Fig. 12 Mean differences in R-values between each pair of saliency modes for PC and SPM, respectively. Confidence intervals 95%

Fig. 13 R-value means for PC and SPM

Fig. 14 Mean differences in R-values for PC and SPM

To quantify users' interaction we analyzed the recorded mouse data, which comprised timestamps and positions of the mouse when clicked. Based on spatial proximity to the visualized variable with the highest weight, such mouse interactions were classified as near the salient coordinate. The analysis of the click tracking data for parallel coordinates shows an indication of a difference between participants working with color saliency and participants working with no saliency. Participants were more likely to concentrate their clicks near the coordinate representing the most preferred criterion when working with color saliency. On average, 47% of all clicks in the PC_N group were near the coordinate with the highest weight, compared to the PC_C group, where 65% of clicks were near that coordinate. No noticeable difference was detected between the saliency modes for SPM (Figs. 15, 16).
However, the percentage of clicks in a plot concerning the most preferred criterion when working with SPM is clearly higher than the percentage of clicks near the coordinate representing that criterion when working with PC (Figs. 17, 18).

Fig. 15 Means of the percentage of clicks which occurred near the coordinate representing the most preferred criterion (PC), or in the plot concerning the most preferred criterion (SPM), for each saliency mode for PC and SPM

Fig. 16 Mean differences in the percentage of clicks which occurred near the coordinate representing the most preferred criterion (PC), or in the plot concerning the most preferred criterion (SPM), for each saliency mode for PC and SPM

Fig. 17 Means for the percentage of clicks which occurred near the coordinate representing the most preferred criterion for PC and SPM

Fig. 18 Mean differences for the percentage of clicks which occurred near the coordinate representing the most preferred criterion for PC and SPM

4.3 Time

In terms of the time spent on the task, the results indicated no difference between representation types for parallel coordinates. For participants working with scatterplot matrices, there is a weak indication that participants may tend to spend more time on a task when working with color saliency, compared to size saliency or no saliency (Figs. 19, 20). On average, participants spent 15% more time working with SPM, compared to working with PC (Figs. 21, 22).

Fig. 19 Means for time in seconds spent on the task for each saliency mode (no saliency, color saliency, and size saliency) for parallel coordinates (PC) and scatterplot matrices (SPM), respectively

Fig. 20 Mean differences in time spent on the task between each pair of saliency modes for PC and SPM, respectively. Confidence intervals 95%

Fig. 21 Means for time in seconds spent on the task for PC and SPM

Fig. 22 Mean differences in time spent on the task for PC and SPM

4.4 Perception and confidence

Participants' ratings show that, on average, participants understood the parallel coordinates technique better than scatterplot matrices, and that they were more confident in their decisions when working with parallel coordinates (Figs. 23, 24, 25, 26). This is consistent with the results concerning decision quality (Sect. 4.1).

Fig. 23 Means for ratings of understanding of the presentation techniques for PC and SPM, respectively

Fig. 24 Mean difference in ratings of understanding of the presentation techniques for PC and SPM. Confidence interval 95%

Fig. 25 Means for ratings of confidence in decisions for PC and SPM, respectively

Fig. 26 Mean difference in ratings of confidence in decisions for PC and SPM. Confidence interval 95%

5 Discussion and conclusion

Wouldn't it be appealing to use visual saliency in visualizations for MCDA to direct decision-makers' attention toward the criteria of their highest preference, if that would help them to arrive at better decision outcomes? On the other hand, given humans' limited cognitive capacity, wouldn't too much attention on some preferred criteria also come with the risk of overlooking, or at least underestimating, the value of the remaining criteria for the total utility of the chosen alternative? The overarching goal of the study presented here was to investigate whether preference-controlled saliency in visualizations of multiple-attribute datasets has an effect—either positive or negative—on the quality of the decisions made in multiple-attribute decision tasks.
5 Discussion and conclusion

Wouldn't it be appealing to use visual saliency in visualizations for MCDA to direct decision-makers' attention toward the criteria they prefer most, if that helped them arrive at better decision outcomes? On the other hand, given humans' limited cognitive capacity, wouldn't too much attention on some preferred criteria also carry the risk of overlooking, or at least underestimating, the value of the remaining criteria for the total utility of the chosen alternative? The overarching goal of the study presented here was to investigate whether preference-controlled saliency in visualizations of multiple-attribute datasets has an effect—either positive or negative—on the quality of the decisions made in multiple-attribute decision tasks.

Altogether, the results from our experiment show that the quality of decision outcomes differed not only with the mode of visual saliency used (or the absence of saliency), but also with the employed visualization technique. We feel confident in stating that saliency-based enhancement of the most preferred criterion did not lead to any adverse effect, i.e., decision quality did not degrade, regardless of whether color or size was used as the facilitating visual variable and regardless of the chosen visualization technique (scatterplot matrices or parallel coordinates). On the other hand, we observed favorable effects, i.e., improved decision quality, under certain conditions. More specifically, visual saliency facilitated by color led to a substantial improvement of decision quality in terms of our quality metric Q, but only when parallel coordinates were used for visualization. By comparison, scale as a visual variable to accomplish saliency hardly exhibited any positive effect on decision outcome in any of the visualizations in our study. This is unexpected, considering that the 100% scaled-up attribute axis/scatterplots consumed more screen real estate, leading to less cluttered representations of these attributes. Evidently, the degree to which visual saliency influences the quality of the outcome in MCDA tasks as studied here varies with the visual variable used to facilitate saliency. Effect sizes in terms of increased decision quality are also most likely a matter of parameter tuning, i.e., optimal choices of chromaticity differences and scaling ratios. Regarding our choice of salient color, we made a perceptually informed best attempt by choosing opponent colors and considering other constraints. As for the chosen 100% up-scale factor, there seems to be room for improvement. More research will be needed to establish the relationship between those parameters and effect size, as well as their sensitivity to other factors such as task complexity.

The total absence of saliency effects (both color and scale) in the scatterplot matrix visualizations may, at least to some extent, be explained by the observed longer task completion times (89 seconds on average for parallel coordinates, 123 seconds for scatterplot matrices). From a practical point of view, the increased times for SPM are most likely not relevant; however, they suggest that with scatterplot matrices users had to put more effort—by interacting and thinking—into the task. Indeed, the scatterplot matrices were also rated as more difficult to understand by the subjects in our study (see also Sect. 4.4), users interacted more with them in terms of mouse clicks, and yet they reported being less confident in their choices. Altogether, this leads us to conclude that users spent more cognitive effort on the task when working with SPM. This increased amount of top-down processing, compared with parallel coordinates, is in our view a factor that overrides, or at least counteracts, the effects gained from increased attention through visual saliency in the short-lived bottom-up processing phase of visual stimuli, as discussed in Donk and van Zoest (2008). From this we lean toward the conclusion that visual saliency is probably more effective in multiple-criteria decision tasks that require fast user responses, such as crisis management or alarm handling.
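To make the two saliency modes discussed above concrete: the study reports using opponent colors for color saliency and a 100% up-scale factor for size saliency, applied to the axis (PC) or the scatterplots (SPM) of the most preferred criterion. The sketch below is a hypothetical parameterization of that idea; the specific color value and default style are assumptions, not values taken from the paper.

```python
# Illustrative sketch of preference-controlled saliency: the axis (PC) or
# scatterplot row/column (SPM) of the most preferred criterion is emphasized
# either by a salient color or by scaling it up. Concrete values are made up;
# the study reports opponent colors and a 100% (i.e., 2x) up-scale factor.

DEFAULT_STYLE = {"color": "#4a4a4a", "stroke_width": 1.0, "scale": 1.0}

def axis_style(axis_index, salient_index, mode):
    """Return the drawing style for one attribute axis under a saliency mode."""
    style = dict(DEFAULT_STYLE)
    if axis_index != salient_index or mode == "none":
        return style
    if mode == "color":
        style["color"] = "#d95f02"   # hypothetical salient (opponent-like) hue
    elif mode == "size":
        style["scale"] = 2.0         # 100% up-scaling of the salient axis
    return style

# Usage: style all five axes with the most preferred criterion on axis 2.
styles = [axis_style(i, salient_index=2, mode="color") for i in range(5)]
```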
Decision quality Q in our study was measured in terms of how close (in percent) the subject's choice is to the best alternative given the subject's own preferences. Except for the parallel coordinate visualization with color saliency, these values are around 0.62–0.66 on average (see Fig. 7). These numbers are surprisingly low, and they illustrate that choosing the best alternative is difficult even in limited multi-objective decision-making situations. For parallel coordinates with color saliency, decision quality was close to 80% on average. For the chosen alternative this is an improvement that in practice can make a considerable difference in terms of criteria values. Therefore, and given that none of the visualizations with saliency introduced any adverse effects on decision quality, we consider it a rational design choice to employ preference-controlled visual saliency in visual tools for multi-criteria decision-making.

Another result of our study relates to how visual saliency affects users' attention to the most preferred criterion. Because our experiment was conducted on the web, gaze tracking was not a viable option for validating users' attention. Instead, we first used the Data Visualization Saliency (DVS) model by Matzen et al. (2017) to qualitatively assess whether the intended visual saliency is maintained in our visualizations. For the experimental evaluation, we devised two indirect measures of users' attention to their most preferred criterion. The R-value describes the chosen alternative's score with respect to this criterion only. In addition, we analyzed how much users interacted with the visual elements representing this criterion by determining the percentage of mouse clicks near those elements. We note that visual saliency, regardless of the visualization method, led users to choices that favor the most preferred criterion (in terms of high R-values), which is consistent with a strategy of maximizing the score on this criterion. Significantly increased scores were, however, only observed for the parallel coordinates visualization with color saliency (see Fig. 12), consistent with the pattern already found for decision quality. Increased attention to the most preferred criterion under visual saliency also became evident in the percentage of mouse clicks near that attribute. However, although the differences are on average as large as 20% (see Fig. 16), they are not significant in terms of a 95% confidence interval.

Assessing decision quality in MCDA tasks in an objective way is a delicate undertaking due to the inherently subjective nature of individuals' preferences. The approach of Dimara et al. (2018), who suggested a metric based on subjects' compliance with their own preferences, is very appealing in this respect. In their work the authors used ratings on a normalized scale for direct elicitation of user preferences, and they point out the risk of bias caused by users' difficulties in expressing their criteria preferences. We largely agree with their reasoning, and we believe that some of these difficulties arise from the abstraction induced by direct criteria ranking on standardized (abstract) scales. To alleviate this, we proposed an alternative approach, SWING weighting, to elicit users' criteria preferences, whereby users relate to the real value ranges (and units) of the attributes. In this way, we believe we reduce one level of abstraction and thus reduce inherent bias in the preference elicitation phase.
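For concreteness, the following sketch computes the two metrics discussed here, Q and R, for a chosen alternative. The paper does not spell out its aggregation formula, so a weighted additive value model over attributes normalized to [0, 1] is assumed purely for illustration, with weights of the kind a SWING elicitation would produce; all numbers are invented.

```python
# Illustrative sketch (assumed additive value model, normalized attributes):
# Q = utility of the chosen alternative relative to the best achievable utility,
# R = the chosen alternative's normalized score on the most preferred criterion.

def utility(alternative, weights):
    """alternative: list of attribute values normalized to [0, 1]."""
    return sum(w * v for w, v in zip(weights, alternative))

def decision_quality_q(chosen, alternatives, weights):
    best = max(utility(a, weights) for a in alternatives)
    return utility(chosen, weights) / best if best > 0 else 0.0

def attachment_r(chosen, weights):
    most_preferred = max(range(len(weights)), key=lambda i: weights[i])
    return chosen[most_preferred]

# Hypothetical SWING-derived weights (normalized) and three alternatives.
weights = [0.40, 0.25, 0.20, 0.15]
alternatives = [
    [0.9, 0.4, 0.7, 0.2],
    [0.6, 0.8, 0.5, 0.9],
    [0.3, 0.9, 0.9, 0.6],
]
chosen = alternatives[2]
print(decision_quality_q(chosen, alternatives, weights), attachment_r(chosen, weights))
```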
However, based on the results of our study, we cannot rule out that participants, knowingly or not, had difficulties using SWING weighting correctly to express their preferences. More work is needed, in the field of MCDA rather than within visualization, to study the sensitivity of alternative preference elicitation methods in the context of assessing decision quality. Another critical aspect of our methodology is the potential risk that participants, knowingly or not, would significantly reassess their preferences if the visualizations they worked with revealed unanticipated patterns in the data, which is usually the case in a real application. To prevent this, we designed decision scenarios that exhibited no unanticipated relations or trends between criteria, nor clear outliers in the data sets. This supports our assumption that participants acted in agreement with their stated preferences, which underlies our quality metric.

Revisiting the questions at the beginning of this section, we conclude that in our study no adverse effects of using visual saliency in the form of color or size were observed, neither in terms of reduced decision quality nor in terms of efficiency (i.e., no notably longer time on task). Instead, specific combinations of saliency form and visualization method appear favorable in terms of gained decision quality and attribute attachment. Without drawing too far-reaching conclusions, we consider the results very encouraging, and we see saliency as relevant to visualizations for MCDA in two ways. Firstly, by creating an awareness of saliency effects in visualizations, using saliency analysis according to, e.g., Matzen et al. (2017), designers can reveal potential risks of bias in visual MCDA. Secondly, this research can inform the design of novel visual MCDA tools and their evaluation, including forthcoming studies of the effectiveness of saliency in other MCDA tasks. In this context, devising general guidelines on how to design visualizations for saliency is an interesting direction for future research, which from a more general perspective should analyze the effects of spatial layout and the use of visual variables on saliency in visualizations.

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Footnotes: https://www.mturk.com/; https://d3js.org/.

References
Ahlberg C, Shneiderman B (2003) Visual information seeking: tight coupling of dynamic query filters with starfield displays. In: Bederson BB, Shneiderman B (eds) The craft of information visualization, interactive technologies. Morgan Kaufmann, San Francisco, pp 7–13
Andrienko G, Andrienko N, Jankowski P (2003) Building spatial decision support tools for individuals and groups. J Decis Syst 12(2):193–208
Andrienko N, Andrienko G (2003) Informed spatial decisions through coordinated views. Inf Vis 2(4):270–285
Andrienko N, Andrienko G, Voss H, Hipolito J, Bernardo F, Kretchmer U (2002) Testing the usability of interactive maps in CommonGIS. Cartogr Geogr Inf Sci 29(4):325–342
Arciniegas G, Janssen R, Omtzigt N (2011) Map-based multicriteria analysis to support interactive land use allocation. Int J Geogr Inf Sci 25(12):1931–1947
Arciniegas G, Janssen R, Rietveld P (2013) Effectiveness of collaborative map-based decision support tools: results of an experiment. Environ Model Softw 39:159–175
Benayoun R, Roy B, Sussman B (1966) Une méthode pour guider le choix en présence de points de vue multiples. Note de Travail, 49
Brans JP, Vincke P (1985) A preference ranking organisation method. Manag Sci 31(6):647–657
Carenini G, Loyd J (2004) Valuecharts: analyzing linear models expressing preferences and evaluations. In: Proceedings of the working conference on advanced visual interfaces, AVI '04. New York, NY, USA. ACM, pp 150–157
Clemen RT, Reilly T (2013) Making hard decisions with decision tools. South-Western College Publishing, Mason
Colson G, de Bruyn C (1989) Models and methods in multiple objectives decision making. Math Comput Model 12(10–11):1201–1211
Coulson M, Healey M, Fidler F, Cumming G (2010) Confidence intervals permit, but do not guarantee, better inference than statistical significance testing. Frontiers Psychol 1(JUL):1–9
Cumming G (2014) The new statistics: why and how. Psychol Sci 25(1):7–29
Desimone R, Duncan J (1995) Neural mechanisms of selective visual attention. Ann Rev Neurosci 18(1):193–222
Dimara E, Bezerianos A, Dragicevic P (2018) Conceptual and methodological issues in evaluating multidimensional visualizations for decision support. IEEE Trans Vis Comput Graph 24(1):749–759
Donk M, van Zoest W (2008) Effects of salience are short-lived. Psychol Sci 19(7):733–739
Dragicevic P (2016) Fair statistical communication in HCI. In: Robertson J, Kaptein M (eds) Modern statistical methods for HCI. Springer, Berlin, pp 291–330
Elmqvist N, Dragicevic P, Fekete JD (2008) Rolling the dice: multidimensional visual exploration using scatterplot matrix navigation. IEEE Trans Vis Comput Graph 14(6):1141–1148
Glaze R, Steckel JH, Winer RS (1992) Locally rational decision making: the distracting effect of information on managerial performance. Manag Sci 38(2):212–226
Gratzl S, Lex A, Gehlenborg N, Pfister H, Streit M (2013) LineUp: visual analysis of multi-attribute rankings. IEEE Trans Vis Comput Graph 19(12):2277–2286
Hammond JS, Keeney RL, Raiffa H (1998) Even swaps: a rational method for making trade-offs. Harv Bus Rev 76(2):137–149
Hwang C-L, Yoon K (1981) Multiple attribute decision making: methods and applications. Springer-Verlag, Heidelberg
Inselberg A (1985) The plane with parallel coordinates. Vis Comput 1(4):69–91
Ishizaka A, Siraj S, Nemery P (2016) Which energy mix for the UK (United Kingdom)? An evolutive descriptive mapping with the integrated GAIA (graphical analysis for interactive aid)-AHP (analytic hierarchy process) visualization tool. Energy 95:602–611
Itti L, Koch C (2001) Computational modelling of visual attention. Nat Rev Neurosci 2(3):194–203
Itti L, Koch C, Niebur E (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Mach Intell 20(11):1254–1259
Jankowski P, Andrienko N, Andrienko G (2001) Map-centred exploratory approach to multiple criteria spatial decision making. Int J Geogr Inf Sci 15(2):101–127
Jarvenpaa SL (1990) Graphic displays in decision making—the visual salience effect. J Behav Decis Mak 3(4):247–262
Keeney RL (2013) Identifying, prioritizing, and using multiple objectives. EURO J Decis Process 1(1–2):45–67
Keeney RL, Raiffa H (1993) Decisions with multiple objectives: preferences and value tradeoffs. Cambridge University Press, Cambridge
Kelton AS, Pennington RR, Tuttle BM (2010) The effects of information presentation format on judgment and decision making: a review of the information systems research. J Inf Syst 24(2):79–105
Kirby KN, Gerlanc D (2013) BootES: an R package for bootstrap confidence intervals on effect sizes. Behav Res Methods 45(4):905–927
Koch C, Ullman S (1985) Shifts in selective visual attention: towards the underlying neural circuitry. Hum Neurobiol 4(4):219–227
Kollat JB, Reed P (2007) A framework for visually interactive decision-making and design using evolutionary multi-objective optimization (video). Environ Model Softw 22(12):1691–1704
Korhonen PJ, Silvennoinen K, Wallenius J, Öörni A (2013) A careful look at the importance of criteria and weights. Ann Oper Res 211(1):565–578
Li HL, Ma LC (2008) Visualizing decision process on spheres based on the even swap concept. Decis Support Syst 45(2):354–367
Lohse GL (1997) Consumer eye movement patterns on yellow pages advertising. J Advert 26(1):61–73
Lurie NH, Mason CH (2007) Visual representation: implications for decision making. J Mark 71(1):160–177
Matzen LE, Haass MJ, Divis KM, Wang Z, Wilson AT (2017) Data visualization saliency model: a tool for evaluating abstract data visualizations. IEEE Trans Vis Comput Graph 24(1):563–573
Milosavljevic M, Navalpakkam V, Koch C, Rangel A (2012) Relative visual saliency differences induce sizable bias in consumer choice. J Consum Psychol 22(1):67–74
Milutinovic G, Seipel S (2018) Visual GISwaps—an interactive visualization framework for geospatial decision making. In: Bechmann D, Cláudio AP, Braz J (eds) Proceedings of the 13th international joint conference on computer vision, imaging and computer graphics theory and applications. SCITEPRESS
Moran J, Desimone R (1985) Selective attention gates visual processing in the extrastriate cortex. Science 229(4715):782–784
Orquin JL, Perkovic S, Grunert KG (2018) Visual biases in decision making. Appl Econ Perspect Policy 40(4):523–537
Pajer S, Streit M, Torsney-Weir T, Spechtenhauser F, Möller T, Piringer H (2017) WeightLifter: visual weight space exploration for multi-criteria decision making. IEEE Trans Vis Comput Graph 23(1):611–620
Parkhurst D, Law K, Niebur E (2002) Modeling the role of salience in the allocation of overt visual attention. Vis Res 42(1):107–123
Parnell GS, Trainor TE (2009) Using the swing weight matrix to weight multiple objectives. In: 19th annual international symposium of the international council on systems engineering, INCOSE 2009, 1(July 2018), pp 283–298
Payne JW (1976) Task complexity and contingent processing in decision making: An information search and protocol analysis. Organ Behav Hum Perform 16(2):366–387
Pu P, Faltings B (2000) Enriching buyers' experiences: the smartclient approach. In: Conference on human factors in computing systems—proceedings, pp 289–296
Riehmann P, Opolka J, Froehlich B (2012) The product explorer: decision making with ease. In: AVI
Saaty TL (1980) The analytic hierarchy process. McGraw-Hill, New York
Salter JD, Campbell C, Journeay M, Sheppard SRJ (2009) The digital workshop: exploring the use of interactive and immersive visualisation tools in participatory planning. J Environ Manag 90(6):2090–2101
Seipel S, Lim NJ (2017) Color map design for visualization in flood risk assessment. Int J Geogr Inf Sci 31(11):2286–2309
Siraj S, Mikhailov L, Keane JA (2015) PriEsT: an interactive decision support tool to estimate priorities from pairwise comparison judgments. Int Trans Oper Res 22(2):217–235
Speier C (2006) The influence of information presentation formats on complex task decision-making performance. Int J Hum Comput Stud 64(11):1115–1131
Sun Y, Li S, Bonini N (2010) Attribute salience in graphical representations affects evaluation. Judgm Decis Mak 5(3):151–158
Timmermans D (1993) The impact of task complexity on information use in multi-attribute decision making. J Behav Decis Mak 6(2):95–111
Treisman A (1988) Features and objects: the fourteenth Bartlett memorial lecture. Q J Exp Psychol 40A(2):201–237
Treisman A, Sato S (1990) Conjunction search revisited. J Exp Psychol Hum Percept Perform 16(3):459–478
Treisman AM, Gelade G (1980) A feature-integration theory of attention. Cognit Psychol 12:97–136
Underwood G, Foulsham T, van Loon E, Humphreys L, Bloyce J (2006) Eye movements during scene inspection: a test of the saliency map hypothesis. Eur J Cognit Psychol 18(3):321–342
Vallerio M, Hufkens J, Van Impe J, Logist F (2015) An interactive decision-support system for multi-objective optimization of nonlinear dynamic processes with uncertainty. Expert Syst Appl 42(21):7710–7731
Zavadskas EK, Turskis Z, Kildiene S (2014) State of art surveys of overviews on MCDM/MADM methods. Technol Econ Dev Econ 20(1):165–179

Goran Milutinović, Ulla Ahonen-Jonnarth, Stefan Seipel. Journal of Visualization, Regular Paper.
The effect of phase change on stability of convective flow in a layer of volatile liquid driven by a horizontal temperature gradient Roman O. Grigoriev, Tongran Qin Journal: Journal of Fluid Mechanics / Volume 838 / 10 March 2018 Published online by Cambridge University Press: 12 January 2018, pp. 248-283 Print publication: 10 March 2018 Buoyancy–thermocapillary convection in a layer of volatile liquid driven by a horizontal temperature gradient arises in a variety of situations. Recent studies have shown that the composition of the gas phase, which is typically a mixture of vapour and air, has a noticeable effect on the critical Marangoni number describing the onset of convection as well as on the observed convection pattern. Specifically, as the total pressure or, equivalently, the average concentration of air is decreased, the threshold of the instability leading to the emergence of convective rolls is found to increase rather significantly. We present a linear stability analysis of the problem which shows that this trend can be readily understood by considering the transport of heat and vapour through the gas phase. In particular, we show that transport in the gas phase has a noticeable effect even at atmospheric conditions, when phase change is greatly suppressed. Bifurcations in a quasi-two-dimensional Kolmogorov-like flow Jeffrey Tithof, Balachandra Suri, Ravi Kumar Pallantla, Roman O. Grigoriev, Michael F. Schatz Journal: Journal of Fluid Mechanics / Volume 828 / 10 October 2017 Print publication: 10 October 2017 We present a combined experimental and theoretical study of the primary and secondary instabilities in a Kolmogorov-like flow. The experiment uses electromagnetic forcing with an approximately sinusoidal spatial profile to drive a quasi-two-dimensional (Q2D) shear flow in a thin layer of electrolyte suspended on a thin lubricating layer of a dielectric fluid. Theoretical analysis is based on a two-dimensional (2D) model (Suri et al., Phys. Fluids, vol. 26 (5), 2014, 053601), derived from first principles by depth-averaging the full three-dimensional Navier–Stokes equations. As the strength of the forcing is increased, the Q2D flow in the experiment undergoes a series of bifurcations, which is compared with results from direct numerical simulations of the 2D model. The effects of confinement and the forcing profile are studied by performing simulations that assume spatial periodicity and strictly sinusoidal forcing, as well as simulations with realistic no-slip boundary conditions and an experimentally validated forcing profile. We find that only the simulation subject to physical no-slip boundary conditions and a realistic forcing profile provides close, quantitative agreement with the experiment. Our analysis offers additional validation of the 2D model as well as a demonstration of the importance of properly modelling the forcing and boundary conditions.
Tautological rings for high-dimensional manifolds Fiber spaces and bundles Differential topology Søren Galatius, Ilya Grigoriev, Oscar Randal-Williams Journal: Compositio Mathematica / Volume 153 / Issue 4 / April 2017 Published online by Cambridge University Press: 13 March 2017, pp. 851-866 We study tautological rings for high-dimensional manifolds, that is, for each smooth manifold $M$ the ring $R^{\ast }(M)$ of those characteristic classes of smooth fibre bundles with fibre $M$ which is generated by generalised Miller–Morita–Mumford classes. We completely describe these rings modulo nilpotent elements, when $M$ is a connected sum of copies of $S^{n}\times S^{n}$ for $n$ odd. Preliminary paleoenvironmental analysis of permafrost deposits at Batagaika megaslump, Yana Uplands, northeast Siberia Julian B. Murton, Mary E. Edwards, Anatoly V. Lozhkin, Patricia M. Anderson, Grigoriy N. Savvinov, Nadezhda Bakulina, Olesya V. Bondarenko, Marina V. Cherepanova, Petr P. Danilov, Vasiliy Boeskorov, Tomasz Goslar, Semyon Grigoriev, Stanislav V. Gubin, Julia A. Korzun, Alexei V. Lupachev, Alexei Tikhonov, Valeriya I. Tsygankova, Galina V. Vasilieva, Oksana G. Zanina Journal: Quaternary Research / Volume 87 / Issue 2 / March 2017 Published online by Cambridge University Press: 16 February 2017, pp. 314-330 Print publication: March 2017 A megaslump at Batagaika, in northern Yakutia, exposes a remarkable stratigraphic sequence of permafrost deposits ~50–80 m thick. To determine their potential for answering key questions about Quaternary environmental and climatic change in northeast Siberia, we carried out a reconnaissance study of their cryostratigraphy and paleoecology, supported by four rangefinder 14C ages. The sequence includes two ice complexes separated by a unit of fine sand containing narrow syngenetic ice wedges and multiple paleosols. Overall, the sequence developed as permafrost grew syngenetically through an eolian sand sheet aggrading on a hillslope. Wood remains occur in two forest beds, each associated with a reddened weathering horizon. The lower bed contains high amounts of Larix pollen (>20%), plus small amounts of Picea and Pinus pumila, and is attributed to interglacial conditions. Pollen from the overlying sequence is dominated by herbaceous taxa (~70%–80%) attributed to an open tundra landscape during interstadial climatic conditions. Of three hypothetical age schemes considered, we tentatively attribute much of the Batagaika sequence to Marine Oxygen Isotope Stage (MIS) 3. The upper and lower forest beds may represent a mid–MIS 3 optimum and MIS 5, respectively, although we cannot discount alternative attributions to MIS 5 and 7. Structural peculiarities of aged 238Pu-doped monazite Andrei A. Shiryaev, Maximillian S. Nickolsky, Alexey A. Averin, Mikhail S. Grigoriev, Yan V. Zubavichus, Irina E. Vlasova, V. G. Petrov, Boris E. Burakov Journal: MRS Advances / Volume 1 / Issue 63-64 / 2016 Published online by Cambridge University Press: 20 February 2017, pp. 4275-4281 Results of characterization of 238Pu-doped Eu- and La-monazites using single crystal XRD, Raman and XAFS spectroscopy and TEM are presented. It is shown that despite significant accumulated doses (up to 9×10^18 α-decays/gram) the Eu-monazite remains a single crystal. Unusual foamy structures are observed by TEM and are interpreted as recrystallisation of domains damaged by recoil U-ions. Partial recrystallisation of the surface material is also supported by Raman and luminescence data.
X-ray computed tomography of two mammoth calf mummies Daniel C. Fisher, Ethan A. Shirley, Christopher D. Whalen, Zachary T. Calamari, Adam N. Rountrey, Alexei N. Tikhonov, Bernard Buigues, Frédéric Lacombat, Semyon Grigoriev, Piotr A. Lazarev Journal: Journal of Paleontology / Volume 88 / Issue 4 / July 2014 Print publication: July 2014 Two female woolly mammoth neonates from permafrost in the Siberian Arctic are the most complete mammoth specimens known. Lyuba, found on the Yamal Peninsula, and Khroma, from northernmost Yakutia, died at ages of approximately one and two months, respectively. Both specimens were CT-scanned, yielding detailed information on the stage of development of their dentition and skeleton and insight into conditions associated with death. Both mammoths died after aspirating mud. Khroma's body was frozen soon after death, leaving her tissues in excellent condition, whereas Lyuba's body underwent postmortem changes that resulted in authigenic formation of nodules of the mineral vivianite associated with her cranium and within diaphyses of long bones. CT data provide the only comprehensive approach to mapping vivianite distribution. Three-dimensional modeling and measurement of segmented long bones permits comparison between these individuals and with previously recovered specimens. CT scans of long bones and foot bones show developmental features such as density gradients that reveal ossification centers. The braincase of Khroma was segmented to show the approximate morphology of the brain; its volume is slightly less (∼2,300 cm3) than that of neonate elephants (∼2,500 cm3). Lyuba's premaxillae are more gracile than those of Khroma, possibly a result of temporal and/or geographic variation but probably also reflective of their age difference. Segmentation of CT data and 3-D modeling software were used to produce models of teeth that were too complex for traditional molding and casting techniques. High-Resolution SEM with Coupled Transmission Mode and EDX for Quick Characterization of Micro- and Nanocapsules for self-Healing Anti-Corrosion Coatings V.-D. Hodoroaba, D. Akcakayiran, D. Grigoriev, D.G. Shchukin Published online by Cambridge University Press: 09 October 2013, pp. 374-375 Extended abstract of a paper presented at Microscopy and Microanalysis 2013 in Indianapolis, Indiana, USA, August 4 – August 8, 2013. Radiocarbon and Tree-Ring Dates of the Bes-Shatyr #3 Saka Kurgan in the Semirechiye, Kazakhstan Irina Panyushkina, Fedor Grigoriev, Todd Lange, Nursan Alimbay Journal: Radiocarbon / Volume 55 / Issue 3 / 2013 This study employs tree-ring crossdating and radiocarbon measurements to determine the precise calendar age of the Bes-Shatyr Saka necropolis (43°47′N, 81°21′E) built for wealthy tribe leaders in the Ili River Valley (Semirechiye), southern Kazakhstan. We developed a 218-yr tree-ring chronology and a highly resolved sequence of 14C from timbers of Bes-Shatyr kurgan #3. A 4-decadal-point 14C wiggle dates the Bes-Shatyr necropolis to 600 cal BC. A 47-yr range of cutting dates adjusted the kurgan date to ∼550 BC. This is the first result of high-resolution 14C dating produced for the Saka burials in the Semirechiye. The collective dating of Bes-Shatyr indicates the early appearance of the Saka necropolis in the Semirechiye eastern margins of the Saka dispersal. However, the date is a couple of centuries younger than previously suggested by single 14C dates. 
It is likely that the Shilbiyr sanctuary (location of the Bes-Shatyr) became a strategic and sacral place for the Saka leadership in the Semirechiye long before 550 BC. Another prominent feature of the Semirechiye burial landscape, the Issyk necropolis enclosing the Golden Warrior tomb, appeared a few centuries later according to 14C dating reported by other investigators. This study contributes to the Iron Age chronology of Inner Asia, demonstrating successful results of 14C calibration within the Hallstatt Plateau of the 14C calibration curve. It appears that the wide range of calibrated dates for the Saka occurrences in Kazakhstan (from 800 BC to AD 350) is the result of the calibration curve constraints around the middle of the 1st millennium BC. 6 - How do replication and transcription change genomes? from PART II - Gene Transcription and Regulation By Andrey Grigoriev, Rutgers State University of New Jersey Edited by Pavel Pevzner, University of California, San Diego, Ron Shamir, Tel-Aviv University Book: Bioinformatics for Biologists Print publication: 15 September 2011, pp 111-125 From the evolutionary standpoint, DNA replication and transcription are two fundamental processes enabling reliable passage of fitness advantages through generations (in DNA form) and manifestation of these advantages (in RNA form), respectively. Paradoxically, both of these basic mechanisms not only preserve genetic information but also apparently cause systematic genomic changes directly. Here, I show how genome-scale sequence analysis can help identify such effects, estimate their relative contributions, and find practical application (e.g. for predicting replication origins). Visualization of bioinformatics results is often the best way of connecting them to the underlying biological question and I describe the process of choosing the visual representation that would help compare different organisms, genomes, and chromosomes. A species' genome relies on faithful reproduction to reap the benefits of selection. The very fact that the "fine-tuned" genomes of previous generations carrying important fitness advantages can be preserved in the proliferating progeny is the basis of natural selection. That is how we currently understand evolution and life around us, and this grand scheme can operate only under stringent requirements for the precision with which DNA replicates. It is not surprising, therefore, that one observes higher replication fidelity in more complex organisms. For the sake of clarity, however, we leave the "more complex organisms" aside for the duration of this chapter. The higher fidelity mentioned above results from many additional processes (including advanced repair) taking place in a cell besides replication. Meaning as Hypothesis: Quine's Indeterminacy Thesis Revisited Serge Grigoriev Journal: Dialogue: Canadian Philosophical Review / Revue canadienne de philosophie / Volume 49 / Issue 3 / September 2010 ABSTRACT: Despite offering many formulations of his controversial indeterminacy of translation thesis, Quine has never explored in detail the connection between indeterminacy and the conception of meaning that he had supposedly derived from the work of Peirce and Duhem. 
The outline of such a conception of meaning, as well as its relationship to the indeterminacy thesis, is worked out in this paper; and its merits and implications are assessed both in the context of Quine's own philosophical agenda, and also with a view to a very different approach to meaning and understanding exemplified by the work of Gadamer. Structure of Complex Oxides in High Electric Fields A Grigoriev, R Sichel, HN Lee, CB Eom, B Adams, EM Dufresne, Z Cai, PG Evans Extended abstract of a paper presented at Microscopy and Microanalysis 2008 in Albuquerque, New Mexico, USA, August 3 – August 7, 2008 High energy density physics problems related to liquid jet lithium target for Super-FRS fast extraction scheme N.A. Tahir, V. Kim, I.V. Lomonosov, D.A. Grigoriev, A.R. Piriz, H. Weick, H. Geissel, D.H.H. Hoffmann Journal: Laser and Particle Beams / Volume 25 / Issue 2 / June 2007 Published online by Cambridge University Press: 19 June 2007, pp. 295-304 Print publication: June 2007 The new international facility for antiproton and ion research (FAIR), at Darmstadt, Germany, will accelerate beams of all stable isotopes from protons up to uranium with unprecedented intensities (of the order of 10^12 ions per spill). Planned future experiments include production of exotic nuclei by fragmentation/fission of projectile ions of different species with energies up to 1.5 GeV/u at the proposed super conducting fragment separator, Super-FRS. In such experiments, the production target must survive multiple irradiations over an extended period of time, which in case of such beam intensities is highly questionable. Previous work showed that with full intensity of the uranium beam, a solid graphite target will be destroyed after being irradiated once, unless the beam focal spot is made very large that will result in extremely poor transmission and resolution of the secondary isotopes. An alternative to a solid target could be a windowless liquid jet target. We have carried out three-dimensional numerical simulations to study the problem of target heating and propagation of pressure in a liquid Li target. These first calculations have shown that a liquid lithium target may survive the full uranium beam intensity for a reasonable size focal spot. Strained Rhombohedral Stripe Domains in BiFeO3 (001) Thin Films Rebecca Sichel, Alexei Grigoriev, Dal-Hyun Do, Rasmi R. Das, Dong Min Kim, Seung-Hyub Baek, Daniel Ortiz, Zhonghou Cai, Chang-Beom Eom, Paul G. Evans Published online by Cambridge University Press: 12 July 2019, 1000-L11-02 This is a copy of the slides presented at the meeting but not formally written up for the volume. Stripe domains in ferroelectric thin films form in order to minimize the total energy of the film. It has been known for some time that a stable configuration is reached when the decrease in elastic energy from domain formation is balanced by the energetic costs of domain wall formation, local elastic strains in the substrate, and internal electric field formation from domain polarizations. The size and strain of each domain is determined by the lattice mismatch and the energetic costs of interface formation. Recent piezoelectric force microscopy measurements have shown that BiFeO3 (BFO) films on SrRuO3/SrTiO3 (001) substrates form striped polarization domains. Since the details of the local structure and polarization cannot be measured at the same time with conventional techniques, we have used synchrotron x-ray microdiffraction to study these effects.
Probing only a few domains at a time with the submicron x-ray spot resulted in a diffraction pattern near the substrate (103) reflection consisting of several BFO peaks. We have unambiguously assigned these peaks to individual structural variants. Based on these results, we propose a physical model that includes the striped domains. The structural variants within the stripes are similar to those predicted by striped patterns in rhombohedral films which minimize elastic energy. The local piezoelectric properties were measured using time-resolved microdiffraction in order to examine the role of the striped domains in the linear responses of the film. The out of plane piezoelectric coefficient d33 was approximately 50 pm/V and the piezoelectric strain was proportional to electric field was up to 0.55%, the maximum strain we have measured. The projection of the in-plane piezoelectric coefficients onto the reciprocal space maps for different structural variants had vastly different values due to the differences in orientation of the domains. D043 Time-resolved X-ray microdiffraction imaging of nanosecond structural transformations in thin ferroelectric films A. Grigoriev, D.-H. Do, D. M. Kim, C.-B. Eom, P. G. Evans, B. Adams, E. M. Dufresne Journal: Powder Diffraction / Volume 21 / Issue 2 / June 2006 Published online by Cambridge University Press: 20 May 2016, p. 171 Microstructure mapping: a new method for imaging deformation-induced microstructural features of ice on the grain scale Sepp Kipfstuhl, Ilka Hamann, Anja Lambrecht, Johannes Freitag, Sérgio H. Faria, Dimitri Grigoriev, Nobuhiko Azuma This work presents a method of mapping deformation-related sublimation patterns, formed on the surface of ice specimens, at microscopic resolution (3–4 μm pixel−1). The method is based on the systematic sublimation of a microtomed piece of ice, prepared either as a thick or a thin section. The mapping system consists of an optical microscope, a CCD video camera and a computer-controlled xy-stage. About 1500 images are needed to build a high-resolution mosaic map of a 4.5 × 9 cm section. Mosaics and single images are used to derive a variety of statistical data about air inclusions (air bubbles and air clathrate hydrates), texture (grain size, shape and orientation) and deformation-related features (subgrain boundaries, slip bands, subgrain islands and loops, pinned and bulged grain boundaries). The most common sublimation patterns are described, and their relevance for the deformation of polar ice is briefly discussed. Structural dynamics of PZT thin films at the nanoscale Alexei Grigoriev, Dal-Hyun Do, Dong Min Kim, Chang-Beom Eom, Bernhard Adams, Eric M. Dufresne, Paul G. Evans Published online by Cambridge University Press: 26 February 2011, 0902-T06-09 When an electric field is applied to a ferroelectric the crystal lattice spacing changes as a result of the converse piezoelectric effect. Although the piezoelectric effect and polarization switching have been investigated for decades there has been no direct nanosecond-scale visualization of these phenomena in solid crystalline ferroelectrics. Synchrotron x-rays allow the polarization switching and the crystal lattice distortion to be visualized in space and time on scales of hundreds of nanometers and hundreds of picoseconds using ultrafast x-ray microdiffraction. 
Here we report the polarization switching visualization and polarization domain wall velocities for Pb(Zr0.45Ti0.55)O3 thin film ferroelectric capacitors studied by time-resolved x-ray microdiffraction. Iron Age society and chronology in South-east Kazakhstan Claudia Chang, Norbert Benecke, Fedor P. Grigoriev, Arlene M. Rosen, Perry A. Tourtellotte Journal: Antiquity / Volume 77 / Issue 296 / June 2003 This new view of Iron Age society in Kazakhstan breaks away from the old documentary and ethnic framework and offers an independent archaeological chronology. Excavated house types and new environmental data show that nomadism and cultivation were practised side by side. Scholars had previously tended to emphasise the ability of documented Saka leaders to plunder and collect tribute from sedentary agriculture groups through military aggression. But what really gave them a political and economic edge over other steppe groups was a dual economy based upon farming and herding. Plasma sources based on a low-pressure arc discharge YU.H. AKHMADEEV, S.V. GRIGORIEV, N.N. KOVAL, P.M. SCHANIN Journal: Laser and Particle Beams / Volume 21 / Issue 2 / April 2003 This article presents two types of a hollow-cathode plasma source based on an arc discharge where the electrons emitted either by a hot filament or by a surface-discharge-based trigger system initiate a gas arc discharge. The sources produce gas plasmas of densities 1010–1012 cm−3 in large volumes of up to 0.5 m3 at a discharge current of 100–200 A and at a pressure of 10−1–10−2 Pa. Consideration is given to some peculiarities of the operation of the plasma sources with various working gases (Ar, N2, O2). The erosion rate of the cold hollow cathode in the designed plasma sources is shown to be 10 times lower than that found in an ordinary one. The sources are employed for plasma-assisted surface modification of solids. Diffuse x-ray scattering from InGaAs/GaAs quantum dots Rolf Köhler, Daniil Grigoriev, Michael Hanke, Martin Schmidbauer, Peter Schäfer, Stanislav Besedin, Udo W. Pohl, Roman L. Sellin, Dieter Bimberg, Nikolai D. Zakharov, Peter Werner Published online by Cambridge University Press: 01 February 2011, T6.6/N8.6/Z6.6 Multi-fold stacks of In0.6Ga0.4As quantum dots embedded into a GaAs matrix were investigated by means of x-ray diffuse scattering. The measurements were done with synchrotron radiation using different diffraction geometries. Data evaluation was based on comparison with simulated distributions of x-ray diffuse scattering. For the samples under consideration ((001) surface) there is no difference in dot extension along [110] and [-110] and no directional ordering. The measurements easily allow the determination of the average indium amount in the wetting layers. Data evaluation by simulation of x-ray diffuse scattering gives an increase of Incontent from the dot bottom to the dot top. Published online by Cambridge University Press: 01 February 2011, N8.6.1/T6.6.1/Z6.6
Today (1 FEB 2023)

On Motivic Gamma Functions
Vasily Golyshev, IITP Moscow, Russia
I will give a survey of some problems and recent results concerning the link between the monodromies of structure connections on certain Frobenius manifolds and the numerology of the derived categories of Fano varieties.
https://zoom.us/j/92708332316?pwd=MGlBRWhTUy9wSzg5VGppTUNQVXN6dz09
To subscribe to the Mathematics Colloquium mailing list: https://groups.google.com/g/ipm-math-colloquium

Quantum Non-locality in Networks
Salman Beigi, IPM
Bell non-locality and its experimental realizations are celebrated results in quantum physics. Non-locality asserts that certain correlations in nature cannot be explained classically, i.e., by the local hidden variable model. In Bell's setting we study correlations between two or more distant parties who share a single source in common. Recently, with the development of quantum communication networks, the more general setup of non-locality in networks has also been studied. In this setup, the parties have several sources in common that are shared through a network, so we expect richer non-locality in networks compared with the standard Bell setup. In this talk, after an introduction to non-locality, I will explain some examples of non-locality in networks.

Algebraic Geometry Webinar
Serre Polynomials and Geometry of Character Varieties
Azizeh Nozad, IPM, Iran
With G a complex reductive group, let X_r(G) denote the G-character variety of the free group F_r of rank r, and let X_r^{irr}(G) ⊂ X_r(G) be the locus of irreducible representation conjugacy classes. In this talk we shall present a result showing that the mixed Hodge structures on the cohomology groups of X_r(SL_n) and of X_r(PGL_n), and on the compactly supported cohomology groups of the irreducible loci X_r^{irr}(SL_n) and X_r^{irr}(PGL_n), are isomorphic, for any n, r ∈ N. The proof uses a natural stratification of X_r(G) by polystable type coming from affine GIT and the combinatorics of partitions. In particular, this result would imply that their E-polynomials coincide, settling the question raised by Lawton-Muñoz. This is based on joint work with Carlos Florentino and Alfonso Zamora.
https://zoom.us/join Meeting ID: 9086116889

Sasaki-Einstein 5-manifolds and del Pezzo Surfaces (It's canceled.)
Jihun Park, IBS Center for Geometry and Physics, POSTECH, South Korea
This talk briefly explains how to find closed simply connected Sasaki-Einstein 5-manifolds from K-stable log del Pezzo surfaces. It then lists closed simply connected 5-manifolds that are known so far to admit Sasaki-Einstein metrics. It also presents possible candidates for Sasaki-Einstein 5-manifolds to complete the classification of closed simply connected Sasaki-Einstein 5-manifolds.

Finite Groups of Birational Transformations
Yuri Prokhorov, Steklov Mathematical Institute and Moscow State University, Russia
First, I survey known results on finite groups of birational transformations of higher-dimensional algebraic varieties. This theory has been significantly developed during the last 10 years due to the success of the minimal model program. Then I will talk about finite groups of birational transformations of surfaces over fields of positive characteristic. In particular, I will discuss a recent result on the Jordan property of Cremona groups over finite fields (joint with Constantin Shramov).
Commutative Algebra Webinars
On Self-dual Skew Cyclic Codes of Length $p^s$ over $\mathbb{F}_{p^m}+u\mathbb{F}_{p^m}$
Roghaye Mohammadi Hesari, Malayer University
The ring $\mathbb{F}_{p^m} + u\mathbb{F}_{p^m}$ is a finite chain ring of nilpotency index 2, characteristic p, and unique maximal ideal $u\mathbb{F}_{p^m}$. Dinh et al. (Discrete Mathematics 341 (2018) 324-335) obtained all self-dual constacyclic codes of length $p^s$ over $R_2 = \mathbb{F}_{p^m} + u\mathbb{F}_{p^m}$, where p is a prime number and $u^2 = 0$. In this work, we determine the structure of the (Euclidean) duals of some special skew cyclic codes of length $p^s$ over $R_2$, and establish which of them are self-dual. As a special case, we recover all of the results of the above paper for cyclic codes.
https://vmeeting.ipm.ir/b/mat-9px-g3c

The Hessian Map
Giorgio Ottaviani, University of Florence, Italy
In joint work with C. Ciliberto we study the Hessian map $h_{d,r}$, which associates to any hypersurface of degree $d \ge 3$ in $\mathbb{P}^r$ its Hessian hypersurface, the determinant of the Hessian matrix. We prove that $h_{d,r}$ is generically finite except for $h_{3,1}$, and that in the binary case $h_{d,1}$ is birational onto its image if $d \ge 5$, which is sharp. We conjecture that $h_{d,r}$ is birational onto its image except for $h_{3,1}$, $h_{4,1}$ and $h_{3,2}$; these exceptional cases were well known in classical geometry. The first evidence for our conjecture is given by $h_{3,3}$ (the case of cubic surfaces), which is again birational onto its image.

Symbolic Powers of Edge Ideals
Mousumi Mandal, IIT, India
In this talk I will discuss the symbolic powers of the edge ideals of graphs and the recent developments around Minh's conjecture. Then I will discuss a necessary and sufficient condition for the equality of the symbolic powers of edge ideals of weighted oriented graphs.

Heights and moments of abelian varieties
Farbod Shokrieh, University of Washington (USA)
Abstract: We give a formula which, for a principally polarized abelian variety $(A, \lambda)$ over a number field (or a function field), relates the stable Faltings height of $A$ with the Néron–Tate height of a symmetric theta divisor on $A$. Our formula involves invariants arising from tropical geometry. We also discuss the case of Jacobians in some detail, where graphs and electrical networks will play a key role. (Based on joint works with Robin de Jong.)

The Number Theory group at IPM is organizing a reading course on Fermat's Last Theorem. The School of Mathematics is pleased to announce the appointment of Dr. Eaman Eftekhary as the new head of the School.
Why is Voevodsky's motivic homotopy theory 'the right' approach?

Morel and Voevodsky developed what is now called motivic homotopy theory, which aims to apply techniques of algebraic topology to algebraic varieties and, more generally, to schemes. A simple way of stating the idea is that we wish to find a model structure on some algebraic category related to that of varieties or that of schemes, so as to apply homotopy theory in an abstract sense. The uninitiated will find the name of the theory intriguing, and will perceive the simple idea presented above as a very fair approach, but upon reading the details of the theory, he might get somewhat puzzled, if not worried, about some of its aspects. We begin with the following aspects, which are added for completeness.

The relevant category we start out with is not a category of schemes, but a much larger category of simplicial sheaves over the category of smooth schemes endowed with a suitable Grothendieck topology. A relevant MathOverflow topic is here.
The choice of Grothendieck topology is the Nisnevich topology, the reasons being discussed right over here.
The choice of smooth schemes rather than arbitrary schemes has been discussed here.

Let us now suppose the uninitiated has accepted the technical reasons that are mentioned inside the linked pages, and that he will henceforth ignore whatever aesthetic shortcomings he may still perceive. He continues reading through the introduction, but soon finds himself facing two more aspects which worry him even more.

The resulting homotopy theory does not behave as our rough intuition would like. In particular, something we might reasonably want to hold, such as the fact that a space ought to be homotopy equivalent to the product of itself with the affine line, is false. We solve the issue by simply forcing them to be homotopy equivalent, and hope that whatever theory rolls out is more satisfactory.
There are two intuitive analogues of spheres in algebraic geometry, and the theory does not manage to identify them. We solve the issue by just leaving both of them in the game, accepting that all homology and cohomology theories will be bigraded, and we hope that this doesn't cause issues.

The fact that the resulting theory is satisfactory has proved itself over time. But hopefully the reader will not find it unreasonable that the uninitiated perceives the two issues mentioned above as a warning sign that the approach is on the wrong track, if not the 'wrong' one altogether; moreover, he will perceive the solutions presented as 'naive', as though they were but a symptomatic treatment of the aforementioned warning signs.

Question. How would you convince the uninitiated that Voevodsky's approach to motivic homotopy theory is 'the right one'?

ag.algebraic-geometry at.algebraic-topology homotopy-theory model-categories motivic-homotopy
Patriot

Comments:

Someone here told me a few months back that there is some kind of attempt to do $\mathbf{P}^1$ homotopy theory, where algebraic K-theory might be actually homotopy invariant (it isn't $\mathbf{A}^1$ invariant). It's a vague comment but maybe someone else here can flesh it out. This seems to me to indicate that there's at least some way in which the motivic category isn't optimal. – Harry Gindi

I don't understand exactly the complaints, but the point is that one wants motives to have the properties shared by all the main known realisations (for instance, perverse sheaves).
So a contractible line, a (co)homological nature (coming from spectra after the stabilisation), and a bigrading (due to the Tate twists which are always present in the known (co)homologies) are all desirable features. The restriction to Nisnevich descent comes from its good behaviour in K-theory (motivic complexes were first conceived through K-theory by Beilinson and Lichtenbaum)...

@Patriot If I remember correctly, I heard about this from Federico Binda here at Regensburg. When I looked it up, he has a paper called 'Motivic Homotopy Theory without A1 invariance'. I could ask him in person next week, but maybe take a look at that paper?

(I'm looking through the paper now, and Federico's $\square$, which plays the role of the interval, is built out of $\mathbf{P}^1$.) Hope this helps!

Here's a video of a talk about it: m.youtube.com/watch?v=vb59NwOho9A

Answer:

(Don't be afraid about the word "$\infty$-category" here: they're just a convenient framework to do homotopy theory in.) I'm going to try a very naive answer, although I'm not sure I understand your question exactly.

The (un)stable motivic ($\infty$-)category has a universal property. To be precise, the following statements are true.

Theorem: Every functor $E:\mathrm{Sm}_S\to C$ to a(n $\infty$-)category $C$ that
- is $\mathbb{A}^1$-invariant (i.e. for which $E(X\times \mathbb{A}^1)\to E(X)$ is an equivalence), and
- satisfies "Mayer-Vietoris for the Nisnevich topology" (i.e. sends elementary Nisnevich squares to pushout squares)
factors uniquely through the unstable motivic ($\infty$-)category. (See Dugger, Universal Homotopy Theories, section 8.)

Theorem: Every symmetric monoidal functor $E:\mathrm{Sm}_S\to C$ to a pointed presentable symmetric monoidal ($\infty$-)category that
- is $\mathbb{A}^1$-invariant,
- satisfies "Mayer-Vietoris for the Nisnevich topology", and
- sends the "Tate motive" (i.e. the summand of $E(\mathbb{P}^1)$ obtained by splitting off the summand corresponding to $E(\mathrm{Spec}S)\to E(\mathbb{P}^1)$) to an invertible object
factors uniquely through the stable motivic ($\infty$-)category. (See Robalo, K-Theory and the bridge to noncommutative motives, corollary 2.39.)

These two theorems are saying that any functor that "behaves like a homology theory on smooth $S$-schemes" will factor uniquely through the (un)stable motivic ($\infty$-)category. Examples are $l$-adic étale cohomology, algebraic K-theory (if $S$ is regular Noetherian), motivic cohomology (as given by Bloch's higher Chow groups)... Conversely, the canonical functor from $\mathrm{Sm}_S$ to the (un)stable motivic ($\infty$-)category has these properties. So the (un)stable motivic homotopy theory is, in this precise sense, the home of the universal homology theory. In particular, all the properties we can prove for $\mathcal{H}(S)$ or $SH(S)$ reflect on every homology theory satisfying the above properties (purity being the obvious example).

Let me say a couple of words about the two aspects that "worry you".

$\mathbb{A}^1$-invariance needs to be imposed. That's not surprising: we need to do that also for topological spaces, when we quotient the maps by homotopy equivalence (or, more precisely, we need to replace the set of maps by a space of maps, where paths are given by homotopies: this more complicated procedure is also responsible for the usage of simplicial presheaves rather than just ordinary presheaves).

Having more kinds of spheres is actually quite common in homotopy theory.
A good test case for this is $C_2$-equivariant homotopy theory. See for example this answer of mine for a more detailed exploration of the analogy.

Surprisingly, possibly the most problematic of the three defining properties of $SH(S)$ is the $\mathbb{A}^1$-invariance. In fact there are several "homology theories" we'd like to study that do not satisfy it (e.g. crystalline cohomology). I know some people are trying to find a replacement for $SH(S)$ where these theories might live. As far as I know there are some definitions of such replacements, but I don't think they have been shown to have properties comparable to the very interesting structure you can find on $SH(S)$, so I don't know whether this will bear fruit or not in the future. – Denis Nardin

Comments:

Just a remark concerning your last point: crystalline cohomology was never supposed to work for non-proper stuff. Instead, you have to use Monsky-Washnitzer cohomology (for affine schemes) or, more generally, rigid cohomology. In this case, the affine line is contractible. If you're working in a rigid setting, you will see a $\mathbb{B}^1$ there instead, though.

Dear Denis, thanks for your detailed explanation. The universal property is pretty convincing, and I'm surprised by the occurrence of more kinds of spheres in 'ordinary' homotopy theory --- I should've known about that! The failure of certain invariants to be $\mathbb{A}^1$-invariant (which Harry also mentioned in the comments) made me rethink one more detail, which ironically I took for granted in the main question: are there technical motivations for letting $\mathbb{A}^1$ play the role of the interval, other than the obvious but not truly convincing 'the mental image looks like a line'?

@Patriot It is a choice we make, but not a completely arbitrary one. A lot of theories are $\mathbb{A}^1$-invariant (at least in nice situations, like over a field), and collapsing $\mathbb{A}^1$ allows us to run some proofs almost as in classical topology (e.g. purity can replace the tubular neighborhood theorem, we can talk about Thom spectra, and we can prove the contractibility of certain parameter spaces using straight-line homotopies...). So it is a very common property and it allows us to do cool things we want to do; it seems at least worth investigating ;). – Denis Nardin

Answer:

I just want to point out, with regards to your second question, that the fact that $\mathbb{P}^1$ is not equivalent to a simplicial complex homotopic to the sphere built out of affine spaces (or whichever model for an un-Tate-twisted sphere) is exactly what you would expect from Grothendieck et al.'s theory of motives. In fact, having a homology or cohomology theory without a notion of Tate twist would be very strong evidence that we are on the wrong track! More carefully, one might point out that we want a map to étale cohomology and a notion of Chern class, which requires the Tate twisting because Tate twists show up in the Chern class map in étale cohomology. – Will Sawin

Comments:

Indeed, the reason why we invert $\mathbb{G}_m$ and not just $S^1$ is precisely to capture this kind of phenomenon. That it results in a very rich theory is just icing on the cake.

Dear Will, thank you for your answer. user40276 also pointed out that the bigrading is to be expected due to Tate twists.
I based the motivation for writing the question mostly on my (admittedly limited) experience in homotopy theory in the classical algebro-topological setting, where 'Tate twist' is a foreign buzzword. This suggests it's time for me to dive into some more literature...

@Patriot if you wanted to develop a theory that was the same in all respects as the homotopy theory of spaces, your theory would just be the homotopy theory of spaces. – Will Sawin
CABI Agriculture and Bioscience Journal Videos Modeling future climate change impacts on sorghum (Sorghum bicolor) production with best management options in Amhara Region, Ethiopia Adem Mohammed1 & Abebe Misganaw2 CABI Agriculture and Bioscience volume 3, Article number: 22 (2022) Cite this article Sorghum is one of the most important cereal crops well adapted in arid and semi-arid areas of Ethiopia but yield is low as compared to its potential. The crop has been adversely affected by climate change and climate variability accompanied by low soil fertility, insects and weeds. Thus, assessment of impact of projected climate change is important for developing suitable management strategies. The present study was conducted with the objectives (1) to calibrate and evaluate the CERES-sorghum model in DSSAT (2) to assess impact of projected climate change on sorghum production in 2030s (2020–2049) and 2050s (2040–2069) under RCP4.5 and RCP8.5 scenarios and (3) to identify best crop management strategies that can sustain sorghum production. The CERES-sorghum model was calibrated and evaluated using field experimental data of anthesis, physiological maturity, grain yield and aboveground biomass yield. In the simulation, the initial weather and CO2 were modified by future climates under the two climatic change scenarios (RCP4.5 and RCP8.5). Historical daily weather data (1981–2010) of rainfall, maximum temperature, minimum temperature, and solar radiation were obtained from the nearest weather stations at Sirinka and Kombolcha while future climate date for 2030s and 2050s were downloaded from the ensemble of 17 CMIP5 GCM outputs run under RCP4.5 and RCP8.5 downscaled to the study sites using MarkSim. Different sowing dates, nitrogen rates, and supplemental irrigation were evaluated for their effectiveness to increase sorghum yield under the present and future climate conditions of the study area. The result of model calibration showed that the RMSE for anthesis, physiological maturity, grain yield, and above-ground biomass yield were 2 days, 2 days, 478 kg ha−1, and 912 kg ha−1, respectively with normalized nRMSE values of 2.74%, 1.6%, 13.42%, and 5.91%, respectively. During the model evaluation the R2 values were 78% for anthesis, 99% for physiological maturity, 98% for aboveground biomass yield, and 94% for grain yield. The d-statistics values were 0.87, 0.91, 0.67, and 0.98 while the nRMSE values were 2.6%, 2.7%, 23.4%, and 4.1% for the respective parameters. The result of statistical analysis for both model calibration and evaluation revealed that there existed strong fit between the simulated and observed values that indicated the model can be used for different application to improve sorghum productivity in the region. The result of impact analysis showed that sorghum grain yield may decrease by 2030s and 2050s under both RCPs scenarios. However, the result of management scenarios showed that sorghum yield may be substantially increased through use of optimum nitrogen fertilizer, application of supplemental irrigation and by using early sowing dates individually or in combination. In conclusion, projected climate change could adversely affect sorghum production in the semi-arid areas of Ethiopia in the present and future climate conditions but impact could be reduced by using suitable crop management strategies. In Ethiopia, agriculture is the dominant sector contributing nearly 50% of the Gross Domestic Product (GDP). 
About 85% of the total employment and livelihoods in the country depend on this sector. The sector is a major source of food for the population and it is the primary contributor to food security (CEEPA 2006). However, the agriculture sector in Ethiopia is highly vulnerable to climate change and climate variability particularly the arid and semi-arid environments. According to the report of Climate Resilient Green Economy (CRGE 2011) climate change has the potential to reverse the economic progress of Ethiopia and could aggravate social and economic problems. The agricultural sector in Ethiopia is highly dependent on rainfall. Irrigation potential of the country is high but irrigation activity is very limited which accounts for less than 1% of the total cultivated land. Crop production is dominated by small-scale subsistence farmers. The subsistence farming accounts for more than 90% of the total agricultural output (CSA 2011). Agricultural output in Ethiopia is highly affected by erratic and unpredictable rainfall with poor distribution (Tefera 2012). Thus, economy of the country is expected to be negatively affected by future climatic conditions. A study conducted by Eshetu et al. (2014) indicated that the GDP of Ethiopia may decrease by about 0.5%–2.5% per year in the near future. Sorghum (Sorghum bicolor (L) Moench) originated in eastern Africa (Lupien 1990). The greatest diversity of sorghum in both cultivated and wild species is found in the eastern part of the African continent (House 1985). In Africa, sorghum is ranked second in terms of production (Belton and Taylor 2003). The crop has the potential to tolerate the effects of water deficit in stressful environments (Haussmann et al. 2007). It is considered an excellent model for drought tolerance among higher plants (Saxena et al. 2002). The crop can also produce reasonable yield in poor soils owing to its nutrient uptake capacity (Dollin et al. 2007). Although sorghum is an important food crop in many of the semi-arid areas of Ethiopia, its yield is still below the expected level because of abiotic and biotic factors. Sorghum production in Ethiopia is highly affected by climate change, climate variability, low soil fertility, water scarcity, lack of improved crop varieties, pests (insect, diseases and weeds) etc. The production in most developing countries in Africa is also highly vulnerable to the impacts of climate change and climate variability (World Bank 2010). At present, climate change negatively affects the sustainability of agricultural systems in many areas and may continue to challenge communities in developing countries whose livelihoods directly depend on local food production (Wheeler and von Braun 2013). The impacts of climate change may be severe in arid and semi-arid areas where water resources are already low. These areas are very sensitive to climate variability and climate change particularly to high temperature and rainfall variability. Most of the small-scale agricultural systems in many areas are rainfed and may be liable to the direct effect of unpredictable temperature and rainfall variations (Kurukulasuriya et al. 2006). There is a high level of confidence regarding the increase in future temperatures and rainfall over Ethiopia (Hadgu et al. 2015). Thus, Ethiopia may experience further warming by the 2030s and the 2050s in all seasons (Hadgu et al. 2015). 
A previous study also indicated that annual rainfall may increases in Ethiopia, but there is less certainty regarding spatial and temporal patterns (Conway and Schipper 2011). At present, drought, loss of cattle, reduced harvests, land degradation, and water scarcity occur in Ethiopia (Hadgu et al. 2015). The impacts of future climate change on crop production in Ethiopia are predicted either at the national (Deressa and Hassan 2009) or larger scales such as east Africa (Bryan et al. 2009). There are limited studies at the subnational or local levels in Ethiopia (Alemayehu and Bewket 2016). Therefore, it is necessary to assess impacts of future climate change on crop production at local scale to design appropriate adaptation strategies. The solution to the food crisis in Sub-Saharan Africa (SSA) is improving crops productivity by improving adaptation strategies to both biotic and abiotic stresses (Taylor et al. 2006). This study attempted how future climate will likely affect sorghum production in the semi-arid region of Ethiopia. The use of adaptive strategies that can reduce impacts on crops is less common in the arid and semi-arid areas of Ethiopia and is often limited to very small groups of farmers. Climate-smart agricultural practices are proven techniques to reverse or reduce the impacts of climate change. Mitigation and adaptation technologies that include the introduction of new crop varieties, the use of water efficient technologies, soil fertility management practices, and the use of optimum inputs (improved seed and fertilizers) are very effective options to cope up the adverse effects of climate change. The negative effects of climatic change and climate variability on sorghum production can be minimized by using proven adaptation technologies (Sandeep et al. 2018). Currently promising adaptation strategies such as changes in sowing dates, application of irrigation and use of optimum fertilizers are used in different areas to minimize yield reduction in wheat crop (Pramod et al. 2017). A study conducted by Adem et al. (2016) also showed that supplemental irrigation, changes in sowing dates, and the use of improved crop cultivar significantly increased yield in chickpea in the semi-arid region of northeastern Ethiopia under the present and future climate conditions. Thus, the present study attempted how suitable adaptation strategies can enhance sorghum production and productivity under the present and future climate conditions of the study region. Agricultural systems require systematic approach because of its complexity. Computer science has made it possible to systematically analyze the combined impact of several factors on crops. It becomes possible to accurately predict the crop yield response to the combined effects of soil, plants, and climatic systems. Crop models can predict real crop systems by predicting their growth and development. The Decision Support System for Agrotechnology (DSSAT) technology has been widely used to study soil fertility, water and irrigation management, yield gap analysis, genotype by environment interaction, predicting impact of climate change and climate variability on crops and evaluation of adaptive measures (Bhupinde 2018). At present, the applications of crop modeling techniques has received significant attention and has provided solutions by reducing costs and improving our understanding. 
The DSSAT technology has been used to simulate crop biomass, yield, and soil nitrogen dynamics under different management practices and climatic conditions (Li et al. 2015). However, there is continuous need to calibrate and evaluate the model under wide ranges of environments and cropping practices (López et al. 2008). The Crop-Environment-Resource-Synthesis (CERES)-sorghum module is one of the crop models in DSSAT technology. The major components of the model are vegetative and reproductive development, carbon balance, water balance and nitrogen balance (Singh and Virmani 1996). The model can simulate sorghum growth and development on daily time step from sowing to maturity and ultimately predicts yield. The model also simulated physiological processes that describe the crop response to major weather factors, including temperature, precipitation and solar radiation and include the effect of soil characteristics on water availability for crop growth. Genotypic differences in growth, development and yield of crop cultivars are affected through genetic coefficients (cultivar-specific parameters) that are inputs to the model. Therefore, the present study was conducted with the objectives (1) to calibrate and evaluate the CERES-Sorghum model to simulate phenology, growth, and yield of sorghum (2) to predict impact of projected climate change on phenology and yield of sorghum and (3) to evaluate effectiveness of change in sowing date, nitrogen fertilization and supplemental irrigation as adaptation strategies for sustainable sorghum production in the study region. Description of the study sites The study was conducted at two sites (Sirinka and Harbu) located in the semi-arid areas of Amhara region, northeastern Ethiopia. The Sirinka site is situated at an altitude of 1850 m above sea level (masl) with latitude 11° 45′ 00″ N and longitude of 39° 36′ 36″ E while the Harbu site is located at an altitude of 1450 masl with latitude of 10° 55′ 00″ N and longitude of 39° 47′ 00″ E. The region receives annual total rainfall of 945 mm with mean annual maximum and minimum temperatures of 27.3 °C and 13.6 °C, respectively. Rainfall in the region is low, erratic and uneven in distribution (Adem et al. 2016). The soil type is characterized as Eutric Vertisol (Adem et al. 2016). The study region is dominated by rugged mountains with undulating hills and valley bottoms. Both sites received bimodal rainfall with a small rainfall season that extends from February to April/May (locally known as Belg) and the main rainfall season (locally known as Kiremt) extends from June to September. Terminal water deficit caused by dry spells is a major constraint for crop production. Major crops grown in the study sites are sorghum, maize, chickpea, haricot bean, field pea, lentil, teff (Eragrostis teff). Mixed farming (crops and livestock) is a major production system. Monoculture is dominant whereas crop rotation (cereals with pulse crops) and intercropping are practiced to some extent. The majority of field crops are grown under rainfed conditions during the main rainy season and some crops such as field pea, teff, and mung bean are grown during the short rainy season. Sorghum can be grown in the short as well as during the long rainy seasons based on the nature of the sorghum cultivars (short or long maturing types). 
There are four distinct seasons in Ethiopia namely Summer (June, July and August), Autumn (September, October and November), Winter (December, January and February) and Spring (March, April and May) as indicated in Fig. 1. Mean annual and seasonal total rainfall (A) and mean annual and mean seasonal maximum and minimum temperatures (B) at Kombolcha and Sirinka weather stations (1988–2018), Ethiopia Description of the DSSAT and CERES-sorghum model The DSSAT technology is the most widely used software across many countries. Currently, it incorporates more than 42 different crops including cereals, grains, grain legumes, and root crops (Hoogenboom 2003). The DSSAT is the first package with weather simulation generators. Its process-oriented and is designed to work independently of location, season, crop cultivar, and management system. It is capable of simulating the effects of weather, soil water, genotype, and soil and crop nitrogen dynamics on crop growth and yields (Jones et al. 2003). DSSAT and its crop simulation models have been used in a wide range of applications in many countries. DSSAT integrates the effects of soil, crop phenotypes, weather, and management options and analyzes the results in minutes. The CERES-sorghum model is one of the models in the DSSAT with major components of vegetative and reproductive development, carbon balance, water balance, and nitrogen balance (Singh and Virmani 1996). The model can simulate the growth, development, and yield using a daily time step from sowing to maturity of the crop. Differences in growth, development, and yield of crop cultivars are affected by genetic coefficients (cultivar-specific parameters) which are inputs to the crop model. The model can simulate physiological processes that describe the crop response to weather factors such as temperature, precipitation, and solar radiation including the effect of soil characteristics on water availability for crop growth. Model inputs Field experiments and data collection procedures For calibrating the crop model, field experiment was conducted at Sirinka in 2019 main crop season in a plot size of 10 m * 10 m replicated three times. In the analysis we considered each individual replicate as a pair data (observed-simulated) to calculate R2, RMSE, nRMSE and d statistis valuze for each parameters. Sorghum cultivar named Girana-1 was used as a test crop and was planted in a spacing of 0.75 m * 0.15 m. Recently recommended blended fertilizer (NPSB) with nutrient contents of 18.9% N, 37.7% P2O5, 6.95% S and 0.18% B was applied during the sowing time of the crop at a rate of 100 kg ha−1. Nitrogen fertilizer in the form of urea (46%N) was applied during the sowing time a rate of 25 kg ha−1 and additional 25 kg ha−1 was applied 35 days after the crop emergence. For observation of anthesis date, physiological maturity date, grain-filling period five plants were randomly selected from each plot and tagged. Days to anthesis was recorded as the number of days from the date of sowing to the date at which 50% of the plants in a plot start heading. Days to physiological maturity was recorded as the number of days from the date of sowing to the date at which 75% of the plants in a plot physiologically matured. Grain-filling period is the numbers of days from 50% flowering to 75% physiological maturity, For those measurements on weight bases a sub-sample (from all plant part) were taken to dry in an oven for 72 h at 60 °C to a constant weight and their weights were determined by using a sensitive balance. 
Leaf area at 50% anthesis was measured by multiplying leaf length with maximum leaf width and was adjusted by correction factor of 0.75 (i.e. 0.75 * leaf length * maximum leaf width) as suggested by Francis et al. (1969). Thus, the Leaf area index (LAI) was calculated by dividing the leaf area by the sampled ground area. The crop model was evaluated using anthesis date, phenological maturity date and grain yield collected from field trial conducted in 2013, 2014, 2015 and 2017 at Sirinka. Crop management data Recommended management practices for sorghum crop are required as input by the model. Thus, information on planting date, planting method, planting distribution, plant population, row spacing, planting depth, cultivar selection, irrigation amount and schedule, fertilizer type and amount, and tillage type were obtained from the nearest Agricultural Research Centre at Sirinka located in the study region. Soil data About two weeks before sowing of the crop, soil samples were collected from 1.6 m soil depth near the experimental site for chemical and physical analysis. A total of four distinct soil horizons were identified. Soil samples were collected based on soil horizon and were analyzed for soil texture, pH, organic carbon, total nitrogen, available phosphorous, exchangeable cations, electrical conductivity, bulk density, drained upper limit of soil water content, lower limit soil water content and saturated water content. The soil texture was determined by the modified Bouyoucos hydrometer method (Bouyoucos 1962) using sodium hexametaphosphate as dispersing agent. The soil pH was determined potentiometrically using a digital pH meter in a 1:2.5 soil water suspension (Van Reeuwijk 2002). Organic carbon was determined by wet digestion method whereas total nitrogen was determined through Kjeldahl digestion, distillation and titration procedures of the wet digestion method (Black 1965). Available phosphorus was determined colorimetrically using Olsen's method (Olsen 1954). The Cation exchange capacity was estimated titrimetrically by distillation of ammonium that was displaced by sodium from NaCl solution (Chapman 1965). The soil water dynamics were estimated by inputting soil texture, soil organic matter content and soil bulk density into a soil file creation utility program of the DSSAT software package. Weather data and RCP scenarios Daily data of maximum and minimum air temperatures (°C), daily rainfall (mm) and daily total solar radiation (M J M−2 day−1) for the period 1981–2020 was obtained from the nearest weather stations at Sirinka and Kombolcha. The Weather Man utility program of DSSAT 4.6 was used to convert the sunshine hours to solar radiation (M J M−2 Day−1). Future climate data for the 2030s (2020–2049) and 2050s (2040–2069) were obtained from the 17 CMIP5 GCM outputs run under RCP4.5 and RCP8.5 scenarios downloaded from International Center for Tropical Agriculture (CIAT) climate change portal (http://ccafs-climate.org/) and downscaled to the target site using MarkSim software (Jones and Thornton 2013). WorldClim V1.3 was used to interpolate the climate at the required point. This climate database may be considered representative of the current climatic conditions. It uses historical weather data from several databases. Thus, MarkSim uses the climate records for any given location. In this study, two climate change scenarios (RCP4.5, RCP8.5) were used to predict impact of projected climate change on sorghum production and to explore crop adaptation strategies. 
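The Weather Man utility mentioned above converts recorded sunshine hours into solar radiation for the weather input files. As an illustration of the general idea only (the exact procedure implemented in Weather Man may differ), the short Python sketch below uses the Angström-Prescott relation with the FAO-56 default coefficients; the extraterrestrial radiation value in the example is a placeholder that would normally be computed from latitude and day of year.

```python
def sunshine_to_radiation(n_hours, daylength_hours, ra_mj_m2, a_s=0.25, b_s=0.50):
    """Estimate daily solar radiation Rs (MJ m-2 day-1) from sunshine duration
    using the Angstrom-Prescott relation: Rs = (a_s + b_s * n/N) * Ra.

    n_hours          actual sunshine duration n (h)
    daylength_hours  maximum possible sunshine duration N (h)
    ra_mj_m2         extraterrestrial radiation Ra (MJ m-2 day-1)
    a_s, b_s         regression coefficients (FAO-56 defaults)
    """
    return (a_s + b_s * n_hours / daylength_hours) * ra_mj_m2

# Example with placeholder inputs: 8 h sunshine, 12 h day length, Ra = 35 MJ m-2 day-1
print(sunshine_to_radiation(8.0, 12.0, 35.0))  # about 20.4 MJ m-2 day-1
```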
The study assumed a CO2 fertilization effect on sorghum. Accordingly, CO2 concentrations of 380 ppm were used for the baseline period, 423 ppm and 432 ppm for the 2030s, and 499 ppm and 571 ppm for the 2050s under the RCP4.5 and RCP8.5 scenarios, respectively (IPCC 2013). RCPs are greenhouse gas concentration trajectories adopted by the IPCC for its fifth assessment (IPCC 2013). In the RCP4.5 scenario, greenhouse gas (GHG) concentrations rise with increasing speed until the radiative forcing reaches 4.5 W m−2 in the year 2100; this is a moderate scenario of concentration rise. In RCP8.5, GHG concentrations rise with increasing speed until the forcing reaches 8.5 W m−2 in the year 2100; this is a high scenario of concentration rise.

Model calibration and evaluation procedures

The CERES-Sorghum model in DSSAT was calibrated using field experimental data of the 2019 main cropping season conducted at the Sirinka site. Calibration is the adjustment of model parameters so that the predicted results closely match the results obtained from the field experiments. The model uses genetic coefficients that determine the phenology, growth, and yield characteristics of a given crop cultivar. The calibration was performed through a trial-and-error method by applying a small change (+5%) to each parameter, adjusting first the genetic coefficients that determine the phenology of the crop, followed by those for yield and yield components. The adjusted genetic coefficients were used in the subsequent evaluation of the crop model. In the calibration and evaluation phases, the observed dates of anthesis and physiological maturity and the observed yields were statistically compared to the simulated values using the root mean square error (RMSE) (Loague and Green 1991), the normalized root mean square error (nRMSE), the index of agreement (d) (Willmott et al. 1985), and the coefficient of determination (R2). The RMSE is the standard deviation of the residuals (prediction errors); the residuals measure how far the data points are from the regression line, so the RMSE indicates how concentrated the data are around the line of best fit. R2 is a statistical measure of how well the regression predictions approximate the real data points; an R2 of 1 indicates that the regression predictions perfectly fit the data. The index of agreement (d) developed by Willmott (1981) is used as a standardized measure of the degree of model prediction error and varies between 0 and 1, where 1 indicates a perfect match and 0 indicates no agreement at all (Willmott 1981). The nRMSE gives the relative difference (%) between simulated and observed data; lower values indicate a better fit of the model.

$$RMSE = \sqrt{\frac{\sum_{i=1}^{n}\left(P_{i} - O_{i}\right)^{2}}{n}}$$

where n is the number of observations, Pi is the predicted value for the ith measurement and Oi is the observed value for the ith measurement. A lower value indicates a better fit of the model.

$$nRMSE = \frac{RMSE}{\bar{O}} \times 100$$

where $\bar{O}$ is the mean of the observed values.

$$d = 1 - \left[\frac{\sum_{i=1}^{n}\left(P_{i} - O_{i}\right)^{2}}{\sum_{i=1}^{n}\left(\left|P_{i} - \bar{O}\right| + \left|O_{i} - \bar{O}\right|\right)^{2}}\right]$$

The d-statistic lies in the range 0 ≤ d ≤ 1; values close to unity indicate the best agreement between the predicted and observed data (Musongaleli et al. 2014), and d = 1 indicates a perfect match.
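For readers who want to reproduce these goodness-of-fit statistics outside of DSSAT, the following minimal Python sketch computes RMSE, nRMSE, Willmott's index of agreement (d), and R2 from paired observed and simulated values. It only illustrates the formulas given above and is not code taken from the study; the example yield arrays are hypothetical.

```python
import numpy as np

def goodness_of_fit(observed, simulated):
    """Compute RMSE, nRMSE (%), Willmott's d and R^2 for paired
    observed vs. simulated values (e.g. grain yield in kg/ha)."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    n = obs.size
    obs_mean = obs.mean()

    rmse = np.sqrt(np.sum((sim - obs) ** 2) / n)
    nrmse = rmse / obs_mean * 100.0  # relative to the observed mean
    d = 1.0 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - obs_mean) + np.abs(obs - obs_mean)) ** 2
    )
    # R^2 taken as the squared Pearson correlation between observed and simulated
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return rmse, nrmse, d, r2

# Hypothetical observed vs. simulated grain yields (kg/ha), for illustration only
obs_yield = [3350, 3120, 2980, 3560]
sim_yield = [3240, 3300, 2870, 3620]
print(goodness_of_fit(obs_yield, sim_yield))
```

Note that R2 is computed here as the squared correlation between observed and simulated values, which for a simple linear regression of one on the other gives the same value as the regression goodness of fit.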
Analysis of impact of projected climate change on sorghum

The CERES-Sorghum model, in combination with the seasonal analysis program in DSSAT, was used to simulate the phenology, growth, and yield of sorghum under the present and future climate conditions of the study area. The sorghum cultivar Girana-1 was used as the test crop. Simulations were carried out for the baseline period (1981–2010) and for the projected climates of the 2030s (2020–2049) and 2050s (2040–2069) under the RCP4.5 and RCP8.5 scenarios. All simulations were started on 2 July, as this is the average sorghum planting date practiced by most farmers in the area; the long rainy season usually starts between the end of June and the first week of July. The study therefore assumed that the soil profile was at the upper limit of soil water availability on that date and that the crop was grown under rainfed conditions in the model. It was also assumed that soil conditions, crop management practices, and crop cultivar characteristics remain similar to the present situation. Thus, the response of the sorghum cultivar to future climate was evaluated using typical soil and crop management practices (fertilizer application rates, row spacing, planting date, planting method, etc.). In the simulation, the crop was planted at 0.75 m × 0.15 m spacing using blended (NPSB) and urea fertilizers at rates of 100 kg ha−1 and 50 kg ha−1, respectively. The study also assumed no insect, disease, or weed problems during the simulation periods. The outputs from the crop model, such as days to anthesis, days to physiological maturity, grain yield, and seasonal crop transpiration, were computed. The changes in phenology and yield were compared as follows:

$$\mathrm{Change\ in\ anthesis\ or\ physiological\ maturity\ }(\%)=\frac{X_{predicted}-X_{base}}{X_{base}}\times 100$$

where X is the anthesis or physiological maturity date.

$$\mathrm{Change\ in\ grain\ yield\ }(\%)=\frac{Y_{predicted}-Y_{base}}{Y_{base}}\times 100$$

where Y is grain yield.

Analysis of management scenarios for sorghum

The effects of changes in sowing date, nitrogen rate, and supplemental irrigation were evaluated as sorghum adaptation strategies for their effectiveness in sustaining production in the study region. The sowing window for sorghum in the study region is between mid-June and mid-July. Accordingly, the sowing window was categorized into early, standard (normal), and late sowing dates, and three sowing dates (15 June, 30 June, and 15 July) were selected. Sowing on 15 June was considered early sowing, sowing on 30 June was the normal sowing date as practiced by most farmers, and sowing on 15 July was considered late sowing. The effect of nitrogen was evaluated at three levels (0, 46, and 92 kg N ha−1), applied in the form of urea fertilizer (46% N). For the irrigated treatments, 100 mm of water was applied as supplemental irrigation at ten-day intervals starting at the anthesis period of the crop to reduce the effect of terminal water deficit. Thus, two levels of irrigation treatment (rainfed and supplementally irrigated) were evaluated.
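To illustrate how the management scenario matrix and the percentage-change formulas above can be organized for analysis, the sketch below enumerates the 3 × 3 × 2 factor combinations (sowing date × nitrogen rate × irrigation) and applies the relative-change formula to simulated yields. The yield values and helper names are placeholders for illustration only; in the study these values come from the CERES-Sorghum seasonal runs.

```python
from itertools import product

# Management factors evaluated in the scenario analysis
sowing_dates = ["15-Jun", "30-Jun", "15-Jul"]   # early, normal, late
nitrogen_rates = [0, 46, 92]                     # kg N/ha as urea
irrigation = ["rainfed", "supplemental"]         # supplemental = +100 mm after anthesis

def percent_change(scenario_value, baseline_value):
    """Relative change (%) of a simulated variable against the baseline period."""
    return (scenario_value - baseline_value) / baseline_value * 100.0

baseline_yield = 2700.0  # kg/ha, placeholder baseline grain yield

# Placeholder simulated yields; in practice these are read from the DSSAT output files
for sow, n_rate, irr in product(sowing_dates, nitrogen_rates, irrigation):
    simulated_yield = 2500.0  # hypothetical value for this scenario
    print(sow, n_rate, irr, round(percent_change(simulated_yield, baseline_yield), 1))
```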
The simulation analysis was performed individually and in combination for all the factors indicated above (change in sowing date, nitrogen rates, and supplemental irrigation) to find the most promising adaptation strategies. Finally, simulated output data were analyzed using analysis of variance (ANOVA) techniques using a statistical analysis system (SAS 2009). Means were compared using the least significant difference (LSD) at 5% probability level. The simulation years were considered as replications as yield in one year under a given treatment was not affected by another year (prior year carryover of soil water was not simulated). Simulation years were unpredictable weather characteristics and therefore formal randomization of the simulation years was not needed. Results of model calibration evaluation The CERES-Sorghum model was calibrated using experimental data of anthesis, physiological maturity, grain yield and aboveground biomass yield collected at Sirinka during the 2019 main crop season. The CERES-sorghum model consists of eleven eco-physiological coefficients that are used to simulate phenology, growth, and yield of sorghum (Table 1). The calibrated genetic coefficients in the model are also indicated in Table 1. After the calibration, the value of the thermal time from the beginning of grain filling to physiological maturity (P5) was set to 490 degree days while the thermal time from seedling emergence to the end of the juvenile phase was set to 420 degree days (Table 1). The RMSE for anthesis, physiological maturity, grain yield, and above-ground biomass yield were 2 days, 2 days, 478 kg ha−1, and 912 kg ha−1, respectively while the nRMSE for the respective parameters were 2.74%, 1.6%, 13.42%, and 5.91% (Table 2). The results showed that there were an acceptable agreement between the simulated and measured anthesis, physiological maturity, and yield of sorghum which indicated that the cultivar specific parameters (genetic coefficients) within the crop model were reasonably adjusted. However, the performance of the model should be further evaluated with independent set of data for simulation of real situation in the study areas. Table 1 Calibrated genetic coefficients of Girana-1 cultivar within the model Table 2 Comparison between simulated and observed anthesis date, physiological maturity (days), grain yield and aboveground biomass yield of Girana-1 sorghum cultivar (number of pair data (n) = 3) using R2, RMSE and nRMSE during model calibration at Sirinka Thus, for evaluating the performance of the model for simulating phenology and yield of sorghum four years of experimental data (2013, 2014, 2015 and 2017) were obtained from Sirinka site conducted by Sirinka Agricultural Research Center located in the study region. The crop parameters used for the evaluation were anthesis date, physiological maturity date, grain yield and aboveground biomass yield. The result showed that the goodness of fit (R2) was 78% for anthesis, 99% for physiological maturity, 98% for aboveground biomass yield, and 94% for grain yield while the d statistics values were 0.87 for anthesis, 0.91 for physiological maturity, 0.67 for grain yield and 0.98 for aboveground biomass yield (Table 3). The nRMSE values were 2.6% for anthesis, 2.7% for physiological maturity, 23.4% for grain yield, and 4.1% for aboveground biomass yield (Table 3). 
Results of model evaluation showed that there were an excellent fit between the simulated and observed values indicating the performance of the model to simulate phenology (anthesis and flowering), growth, and yield under the semi-arid environments of northeastern Ethiopia. The results also indicated that the model can be used for different applications such as to study the effect of climate change on sorghum production and to select best crop adaptation strategies for sorghum production under the present and future climate conditions of the study region. Table 3 Comparison between simulated and observed anthesis date, physiological maturity date, grain and biomass yields using R2, d, RMSE and nRMSE during model evaluation (number of comparison = 4) Projected climate changes and its implication Projected climate changes in the study region Future climate projection in the study region showed that both mean annual maximum and mean annual minimum temperatures may increase by 2030s and 2050s under both RCP4.5 and RCP8.5 scenarios (Fig. 2). The projection result showed that mean annual maximum temperature may increase by 1.4 °C and 1.5 °C by 2030s under RCP4.5 and RCP8.5 scenarios, respectively as compared to the baseline period. Projection for 2050s period also showed that mean annual maximum temperature may increase by 1.9 °C and 2.5 °C for the respective RCP scenarios. Likewise, mean annual minimum temperature may increase by about 1.4 °C and 1.6 °C by 2030s whereas it is projected to increase by 2 °C and 2.5 °C by 2050s under the respective RCP scenarios (Fig. 2). The projection result revealed that mean annual total rainfall may increase by about 4% and 5% by 2030s under RCP4.5 and RCP8.5, respectively whereas it may increase by 5% and 8% by 2050s under the respective scenarios (Fig. 2). The result of the projected climate changes in this study is in line with that of Conway and Schipper (2011), Setegn et al. (2011), and Dereje et al. (2012) who reported an increase in future temperature in the coming decades in Ethiopia. It can be concluded that the variations in these climate parameters could negatively affect crop production in the semiarid environments of northeastern Ethiopia (Table 4). Projected mean annual total rainfall (mm), mean annual maximum and mean annual minimum temperature (°C) of the baseline period, 2030s and 2050s under RCP4.5 and RCP8.5 scenarios in the study region Table 4 Physico-chemical characteristics of soil at Sirinka in 2019 main season where PWP, permanent wilting point; FC, field capacity; SAT, saturation; RGF, root growth factor; SKS, saturated hydraulic conductivity; BD, bulk density and CEC, cation exchange capacity Projected climate change impact on phenology of sorghum Effect of future climate (rainfall and temperature) on anthesis and physiological maturity dates of sorghum at the two sites (Harbu and Sirinka) is depicted in Figs. 3 and 4. The Simulated values at both sites showed that anthesis and physiological maturity dates of the cultivar Girana-1 may significantly (P < 0.05) decrease by 2030s and 2050s under both RCP4.5 and RCP8.5 scenarios as compared to the simulated value for the baseline period (1981–2010). The reduction of anthesis date at Harbu site may be 3.4% and 3.9% by 2030s for RCP4.5 and RCP8.5 climate scenarios, respectively whereas the reduction by 2050s may be 4.8% and 7.2%, for the respective RCP scenarios (Fig. 3). 
The prediction results for 2030s and 2050s periods also showed that physiological maturity date may decrease under both RCP4.5 and RCP8.5 scenarios with reduction of 3.4% and 3.7% by 2030s and 4.5% and 5.8% reduction by 2050s for the respective RCP4.5 and RCP8.5 scenarios. In the same way, the reduction of anthesis date at Sirinka site may be 10.5% and 12.5% by 2030s whereas by 2050s the reduction may be 6% and 20% for the respective RCP scenarios (Fig. 4). Similarly, physiological maturity date at the Sirinka site may be reduced by 13% and 17.6% by 2030s whereas maturity date may decrease by19% and 21% by 2050s for the respective RCPs scenarios (Fig. 4). The decrease in maturity date was higher than the anthesis date at Sirinka site due to greater terminal water deficit at Sirinka site as compared to Harbu site. There is also variation in soil texture and related soil characteristics among the two sites that could contribute variation in soil moisture availability for the crop. These conditions could affect the response of crop to terminal water deficit. At both sites, the highest reduction in anthesis and physiological maturity dates were predicted by 2050s time period as compared to the prediction result for 2030s and the baseline periods. This could be due to the highest increase in temperature by 2050s that may cause water deficit on crop as high temperature can aggravate evaporation and evapotranspiration rates. Under very high temperature condition the crop may be forced to complete its life cycle before the occurrence of water deficit. Very high temperature may accelerate the growth and development stages of the sorghum crop and reduce the crop life cycle. Crop physiological process such as respiration and photosynthesis rates may increase as temperature increase and may result in shortening of growth and development stages of the crop. A previous study by Turner and Rao (2013) also showed that anthesis and maturity days of sorghum were significantly reduced when the temperature was increased by about 1%. A study by Baviskar et al. (2017) also showed that increased temperature resulted in quick accumulation of heat units from sowing to flowering making the crop flower and mature earlier. Change in anthesis and maturity dates (%) of Girana-1 cultivar at Harbu in 2030s and 2050s) under RCP4.5 and RCP8.5 scenarios as compared to simulated values for the baseline period (1981–2010) Change in anthesis and maturity dates (%) of Girana-1 sorghum cultivar at Sirinka in 2030s and 2050s under RCP4.5 and RCP8.5 scenarios as compared to simulated values for the baseline period (1981–2010) Projected climate change impact on sorghum grain yield The change in sorghum grain yield at Harbu and Sirinka sites under the projected climate conditions are depicted in Fig. 5. The simulated grain yield for the baseline period was 2699 kg ha−1 whereas the simulated yields by 2030s were 2462.9 kg ha−1 and 2519.3 kg ha−1 for RCP4.5 and RCP8.5 scenarios whereas the simulated grain yields by 2050s for the respective RCP scenarios were 2171 kg ha−1 and 2343 kg ha−1. The simulated values showed that grain yield may decrease at both sites by 2030s and 2050s time periods as compared to the baseline yield. Based on the prediction result at Harbu site, sorghum grain yield may decrease by 8.8% and 6.7% by 2030s and it may decrease by 19.6% and 13.2% by 2050 under RCP4.5 and RCP8.5 scenarios, respectively (Fig. 5). 
Simulated results for the Sirinka site also showed similar trend in that grain yield may decrease by 12.9% and 12.3% by 2030s and by 17.1% and 11.9% by 2050s for the respective RCP scenarios (Fig. 5). The relatively lower reduction in grain yield by 2050s as compared to grain yield reduction by 2030s could be associated to highest increase in rainfall by 2050s which may increase soil moisture availability for the crop by reducing problem of terminal water deficit. Change in grain yield (%) of Girana-1 sorghum cultivar at Sirinka and Harbu sites in 2030s and 2050s as compared to the baseline period (1981–2010) The reduction of sorghum grain yield under future climate conditions could be attributed more to the increase in future temperature which may affect the crop by accelerating the growth and developmental stages and ultimately reduce yield. The increase in future temperature may aggravate evaporation and evapotranspiration rates and may cause water shortage required for the normal growth and development of the crop. Results showed that total seasonal transpiration by the crop was reduced under future climate conditions due to increase in temperature (Fig. 6). The reduction in crop transpiration could lead to yield reduction. Water shortage due to high temperature can affects nutrients absorption by the crop, photosynthesis rate translocation of photosynthesis products from the source (leaves) to sink (grain) and also affect other physiological processes of the crop. The decrease in grain yield of sorghum may be also associated to the adverse effect of future climate on soil physical and chemical properties. For instance, climate can affect soil texture, structure, bulk density, porosity, nutrient retention capacity, etc. In addition, change in climate could affect soil fertility due to increase in soil salinity (Schofield and Kirkb 2003; De Paz et al. 2012) and reduced nutrients and water availability for crops. Climate change can also affect the chemical properties of soil such as soil pH (Reth et al. 2005), soil salinity, cation exchange capacity, nutrient cycle, nutrients acquisition and biodiversity. Studies showed that soil physical and chemical properties are highly correlated with soil biological properties which can affect the soil fertility (Haynes 2008). Most soil functions such as pH, cation exchange capacity, water and nutrient retention, and soil structure are dependent on soil organic matter. Thus, the variation in decomposition rate of soil organic matter could adversely affects soil fertility (Golovchenko et al. 2007). This fact can indicate that the increase in future temperature associated with the change in rainfall pattern may adversely affect sorghum yield. Thus, these conditions may require changes in crop management practices that can sustain soil fertility in the study area. The study by Seo et al. (2005) also showed that global warming is expected to affect many crops negatively but the increase in future rainfall will have a beneficial effect. In contrast to this result, Chipanshi et al. (2003) and Msongaleli (2015) reported that the grain yield of sorghum will increase under future climate changes. The study by Wortmann et al. (2009) also showed that sorghum production could increase in eastern Africa considering slight temperature increases. Such variations could be attributed to differences in the global climate models (GCMs) used in the simulation and/or the downscaling methods employed on the GCMs in the studies. 
Another possible reason could be regional variations among the study areas as climate change impact is highly location-specific. Mean simulated seasonal sorghum transpiration (mm) in the baseline period, 2030s and 2050s under RCP4.5 and RCP8.5 scenarios at Sirinka Crop management scenarios for sorghum production At Harbu site, both the main and the interaction of sowing dates and supplemental irrigation significantly (P < 0.05) affected sorghum grain yield in the baseline period, in 2030s, and 2050s under both RCP scenarios (Table 5). Simulation for the baseline period showed that the highest significant simulated grain yield (3672 kg ha−1) was from early sowing (15th June) under irrigated conditions whereas the lowest grain yield (1986 kg ha−1) was from late sowing (15th July) under non-irrigated condition. The normal sowing date (30 June) and the late sowing dates under irrigation conditions did not show any significant yield variation with early sowing treatment under non-irrigated condition. Simulation under the baseline climate showed that late sowing under rainfed condition significantly decreased grain yield. Simulation for 2030s under RCP4.5 scenario showed that early sowing, the normal sowing date and late sowing date all under irrigated condition did not show significant grain yield variations among them. Under this scenario, the lowest simulated grain yield (1994 kg ha−1) was from late sowing under non-irrigated condition and it was statistically similar to yield from the normal sowing date under non-irrigated condition but it was significantly lower than simulated yield from early sowing under non-irrigated condition (Table 5). Simulation for 2030s period under RCP8.5 also showed similar trend. Simulation for 2050s under RCP4.5 showed that the highest simulated grain yield (2799 kg ha−1) was from early sowing under irrigated condition but it was statistically similar to yield from the normal sowing date and the late sowing date treatments under irrigation condition. The lowest simulated grain yield (1983 kg ha−1) was from late sowing under non-irrigated condition and it was statistically similar to yield from the normal sowing date but statistically lower than yield simulated under early sowing and non-irrigated condition. Simulation for 2050s under RCP8.5 scenario showed similar trend. The overall results showed that change in sowing date has not significant effect on sorghum yield under this scenario that could be due to the highest increase in temperature. Rather the use of irrigation or a change in cultivar type may result in improved sorghum yield by 2050s under RCP8.5 scenario.. Table 5 Effects of sowing dates and supplemental irrigation on sorghum grain yield (kg ha−1) in the baseline, 2030s and 2050s under RCP4.5 and RCP8.5 scenarios at Harbu Most of the simulated results showed that supplemental irrigation was an important water management practice that can increase sorghum yield across different time periods and climate scenarios. In similar manner, early sowing can be an important practice to increase sorghum production under the semi-arid environments of the study area particularly under non-irrigated condition. In most of the simulation, early sowing (15 June) resulted in the highest yield under non-irrigated conditions (Table 5). Early sowing is preferable as it helps for the crop to match the peak water demand with the main rainy period whereas late planting will push the peak water demand period of the crop after the main rainy season passed. 
Results also showed limited synergistic effects of sowing date and supplemental irrigation on sorghum productivity: none of the sowing dates under irrigated conditions showed significant yield variation across time periods and climate scenarios. The semi-arid environment of northeastern Ethiopia is characterized by low and variable rainfall, and terminal water deficit is the key problem for crop production. Thus, sorghum yield in these environments can be maximized through early sowing of the crop and by using supplemental irrigation. At the Sirinka site, both the main effects and the interaction of sowing date and supplemental irrigation significantly (P < 0.05) affected simulated grain yield in the baseline period, the 2030s, and the 2050s under both RCP4.5 and RCP8.5 scenarios (Table 6). Simulation for the baseline period showed that the highest simulated grain yield (3616 kg ha−1) was from the early sowing date under irrigated conditions. Simulated yield from the early sowing date under rainfed conditions was statistically similar to the yield from the normal sowing date under irrigated conditions (Table 6). The lowest simulated grain yield (1270 kg ha−1) was from the late sowing date under non-irrigated conditions (Table 6), followed by the yield from the normal sowing date under non-irrigated conditions. Simulation under the baseline scenario showed a decreasing trend in yield under non-irrigated conditions as sowing was delayed (Table 6). Simulation for the 2030s under RCP4.5 showed that the highest simulated grain yield (3119 kg ha−1) was from the early sowing date under irrigated conditions, but it was statistically similar to the yields from the normal and late sowing dates under irrigated conditions. The lowest simulated grain yield (1976 kg ha−1) was from late sowing under non-irrigated conditions; it was statistically similar to the normal sowing date under non-irrigated conditions but significantly lower than the yield from the early sowing date under non-irrigated conditions (Table 6). Simulation for the 2030s under the RCP8.5 scenario showed a similar trend. Simulation for the 2050s under the RCP4.5 scenario likewise showed no significant yield variation among the sowing dates under irrigated conditions. The lowest simulated grain yield was from the late sowing date under non-irrigated conditions; it was significantly lower than the yield from the early sowing date under non-irrigated conditions but statistically similar to the yield from the normal sowing date under non-irrigated conditions. Simulation for the 2050s under RCP8.5 showed a similar trend, in that the sowing dates under irrigated conditions and the early sowing date under non-irrigated conditions did not show any yield variation (Table 6). The sowing dates under rainfed conditions also showed no significant grain yield variation, which could be attributed to the extreme increase in temperature under the RCP8.5 scenario by the 2050s; thus, a change of cultivar and supplemental irrigation could be suggested as management options for sorghum under this scenario. Overall, however, the simulated values showed that delaying the sowing date under non-irrigated conditions significantly reduced sorghum grain yield under the present and future climate conditions of the study area, whereas changing the planting date had no significant effect on grain yield under irrigated conditions.
The reduction in grain yield under late sowing in rainfed conditions could be attributed to terminal water deficit occurring during the critical growth stages of the crop, particularly at anthesis and grain filling. The semi-arid areas of Ethiopia are characterized by low and unevenly distributed rainfall, and frequent dry spells during the sorghum growing season are common in the area. Terminal water deficit may damage the sorghum crop and can lead to complete crop failure. The results of the present study at both sites showed that early sowing of sorghum can increase yield under rainfed conditions in the present as well as future climates of the study areas. Crops sown early can utilize the soil moisture required for their growth and development, and under early sowing the sorghum crop may complete its growth and development before terminal water deficit occurs. In contrast, late sowing under rainfed conditions significantly reduced grain yield, which could be associated with late-season water deficit. The effect of terminal water deficit on the crop can be minimized by using supplemental irrigation, which can substantially increase yield under all sowing dates. Supplemental irrigation is the addition of a limited amount of water to rainfed crops to improve and stabilize yield when rainfall cannot provide sufficient moisture for the crop. It is an effective adaptation strategy to climate change that can reduce the adverse effect of water deficit, which mostly occurs during the critical growth stages of the crop. Thus, it can be concluded that early sowing under rainfed conditions and the application of supplemental irrigation can be considered potential adaptation strategies to increase sorghum yield under the present and future climate conditions of the semi-arid environments of Ethiopia, where water deficit is a major constraint on crop production. The study by Cunha et al. (2015) also revealed that changing the sowing date is an adaptive strategy to climate change which can protect crops from terminal water deficit.

Table 6: Effect of sowing dates and supplemental irrigation on grain yield in the baseline period, 2030s and 2050s under RCP4.5 and RCP8.5 scenarios at Sirinka

Effect of nitrogen and supplemental irrigation on sorghum grain yield

At the Harbu site, both the main effects and the interaction of nitrogen and supplemental irrigation significantly (P < 0.05) affected grain yield in the baseline period, the 2030s and the 2050s under both RCP4.5 and RCP8.5 scenarios (Table 7). Simulation for the baseline period showed that the highest simulated grain yield (3768 kg ha−1) was from the application of 92 kg N ha−1 under supplemental irrigation (SI), but it was not significantly different from the simulated yield with 46 kg N ha−1 under supplemental irrigation (Table 7). The lowest simulated yield (2376 kg ha−1) was from the unfertilized, non-irrigated treatment (Table 7). Results also showed that simulated yields under fertilized, non-irrigated conditions were significantly lower than yields under fertilized, irrigated conditions (Table 7). The simulated grain yield from the non-fertilized, irrigated treatment was statistically similar to the grain yield from the fertilized, non-irrigated treatments in the baseline period as well as in the future climate periods (Table 7).
The simulation for the 2030s under RCP4.5 also indicated that the highest simulated yield (3145 kg ha−1) was from the application of 46 kg N ha−1 under irrigated conditions, but it was statistically similar to the simulated yield from the application of 92 kg N ha−1 under irrigated conditions. The lowest simulated grain yield (2279 kg ha−1) was from the unfertilized, non-irrigated treatment, but it was statistically similar to the simulated yields from the 46 kg N ha−1 and 92 kg N ha−1 applications under non-irrigated conditions. In general, application of nitrogen fertilizer under irrigated conditions significantly increased sorghum grain yield compared with nitrogen application alone, across all time periods and climate scenarios. This result indicates a strong synergistic effect of combined nitrogen and irrigation application on sorghum productivity. Availability of soil moisture is essential for meeting the water requirement of the crop; in addition, water is required for nutrient absorption, the translocation of photosynthetic products from leaves to grain, and the maintenance of internal temperature. All of these processes contribute to increased crop yield.

Table 7: Effect of nitrogen fertilization (kg ha−1) and supplemental irrigation on sorghum grain yield in the baseline, 2030s and 2050s under RCP4.5 and RCP8.5 scenarios at Harbu

At the Sirinka site, results revealed that both the main effects and the interaction of nitrogen and supplemental irrigation significantly (P < 0.05) affected grain yield in the baseline period, the 2030s and the 2050s under both RCP4.5 and RCP8.5 scenarios (Table 8). Simulation for the baseline period showed that the highest simulated grain yield (5265 kg ha−1) was from the application of 92 kg N ha−1 under irrigated conditions (Table 8). The mean simulated grain yield from the non-fertilized, non-irrigated treatment was significantly lower than the mean simulated grain yields from the fertilized, irrigated treatments. Simulation for the 2030s under the RCP4.5 scenario showed that the highest grain yield (3119 kg ha−1) was from the application of 46 kg N ha−1 under irrigated conditions, but it was statistically similar to the simulated yield from the application of 92 kg N ha−1 under irrigated conditions. In all the simulations, the lowest simulated grain yields were from the unfertilized, non-irrigated treatment across the climate periods and scenarios (RCPs). Results revealed that the yield response of sorghum to nitrogen fertilizer was much lower under non-irrigated conditions, even when the nitrogen application rate was increased, than the response to increased nitrogen rates under irrigated conditions. This clearly indicates that nutrient absorption by plants is greatly facilitated by moisture availability in the soil: plants can easily absorb nitrogen when soil moisture is available. Nitrogen is one of the essential nutrients required by crops for normal growth and development, and water deficit in the soil can limit nutrient availability to the crop and ultimately reduce yield. Previous studies by Gonzalez-Dugo et al. (2005) and Garwood and Williams (1967) showed that the uptake of nitrogen by crop roots requires soil water, as water is the main agent that transports solutes from the soil to the plant; water deficit in the soil therefore reduces nitrogen uptake by the crop.
Nitrogen deficiency associated with low soil moisture could significantly reduce the concentration of nitrogen in crop tissue and affect crop yield. A study by Sadras et al. (2016) also showed that the interaction of water and nitrogen can modulate the geochemical cycling of nitrogen and regulate the functional diversity of crops, crop yield, grain size, root growth, leaf stoichiometry, and photosynthesis. Sinclair and Rufty (2012) also reported that the availability of nitrogen and water can increase yield in many crops. Thus, it can be concluded that the use of optimum nitrogen fertilizer and supplemental irrigation can sustain sorghum production in the semi-arid areas of Ethiopia under the present as well as future climate conditions.

Table 8: Effect of nitrogen fertilization (kg ha−1) and supplemental irrigation on grain yield in the baseline, 2030s and 2050s under RCP4.5 and RCP8.5 scenarios at Sirinka

Agriculture is the main economic sector in Ethiopia and the major contributor to GDP and foreign exchange. At present, the productivity of major crops in Ethiopia is declining due to climate variability, climate change, low soil fertility, low-yielding crop varieties, lack of suitable crop management practices, diseases, insects and weeds. Sorghum production is mainly affected by increasing temperatures, changes in rainfall patterns and extreme weather events. The consequences of climate change are particularly severe in the semi-arid areas of Ethiopia, as the country is among the most vulnerable in Sub-Saharan Africa. Due to the frequent occurrence of drought and rainfall variability, the farming system in the country is exposed to seasonal shocks that lead to seasonal food insecurity. The early cessation of rainfall, in combination with the low water retention capacity of the soil, exposes crops to terminal drought. As a result, the crop production system in arid and semi-arid areas does not meet the food requirements of households throughout the year. Crop models have been used to study the effects of climate change on crops and to identify sustainable crop management practices; however, models must be calibrated and evaluated before they are used for different applications. Thus, a field experiment was conducted at the Sirinka and Harbu sites in northeastern Ethiopia. The CERES-Sorghum model in the DSSAT software was used in this study. The model was first calibrated and evaluated with phenology, growth and yield data obtained from field experiments conducted in the study region. Historical daily climate data (1981–2010) were obtained from the Ethiopian National Meteorological Agency, whereas future climate data for temperature, rainfall and solar radiation for the 2030s (2020–2049) and 2050s (2040–2069) were obtained from an ensemble of 17 GCM (CMIP5) model outputs. The seasonal analysis program in DSSAT, coupled with the CERES-Sorghum model, was used to simulate the impact of future climate and to identify suitable management scenarios for the sorghum crop. The effects of different sowing dates, nitrogen fertilizer rates and supplemental irrigation were evaluated, individually and in combination, for their effectiveness in increasing sorghum productivity under the present and future climate conditions of northeastern Ethiopia. The results of model calibration and evaluation showed that the simulated growth, development, and yield were in good agreement with the observed values.
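The agreement between simulated and observed values mentioned above is typically summarised with statistics such as the root mean square error (RMSE) and Willmott's index of agreement d (Willmott 1981, cited in the reference list). The snippet below is a minimal sketch of how these two statistics can be computed; the observed and simulated values are placeholders, not the study's calibration data.

```python
# Minimal sketch: RMSE and Willmott's index of agreement (d) between observed
# and simulated values. The numbers are hypothetical placeholders.
import numpy as np

observed  = np.array([2450.0, 2600.0, 2380.0, 2710.0])  # e.g. grain yield, kg/ha
simulated = np.array([2500.0, 2550.0, 2300.0, 2800.0])

# Root mean square error
rmse = np.sqrt(np.mean((simulated - observed) ** 2))

# Willmott's index of agreement: d = 1 - sum((P-O)^2) / sum((|P-Obar|+|O-Obar|)^2)
obs_mean = observed.mean()
d = 1.0 - (np.sum((simulated - observed) ** 2) /
           np.sum((np.abs(simulated - obs_mean) + np.abs(observed - obs_mean)) ** 2))

print(f"RMSE = {rmse:.1f} kg/ha, Willmott d = {d:.3f}")  # d close to 1 => good agreement
```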
The crop model successfully simulated the growth, development, and yield of the sorghum cultivar Girana-1. We conclude that, if properly calibrated, the model can be used for decision support to improve sorghum production under the present and future climate conditions of the study area. The assessment of future climate change impacts showed that future climate may have a profound negative effect on sorghum production under both RCP4.5 and RCP8.5 scenarios. Sorghum yield could be substantially increased through early sowing, application of optimum nitrogen fertilizer, and use of supplemental irrigation, individually and in combination. However, the adoption of early sowing and the use of supplemental irrigation could be limited by shortage of irrigation water, high fertilizer cost, dry spells and insect pests. The agricultural extension system in the region should focus on developing small-scale irrigation in the study areas. In addition, farmers may be advised to apply integrated nutrient management and integrated pest management practices to reduce challenges related to fertilizers and pests and to increase sorghum yield. Thus, future studies should focus on the identification of sound climate adaptation strategies for sorghum production in the semi-arid areas of Ethiopia and similar agroecologies.

References

Adem M, Tamado T, Singh P, Driba K, Adamu M. Modeling climate change impact on chickpea production and adaptation options in the semi-arid North-Eastern Ethiopia. J Agric Environ Int Develop. 2016;110(2):377–95. https://doi.org/10.12895/jaeid.
Alemayehu A, Becket W. Local climate variability and crop production in the Central Highlands of Ethiopia. Environ Develop. 2016;19:36–48.
Baviskar SB, Andrinjen AD, Walomna CK. Heat units and heat unit efficiency influenced by environment effect on yield and dry matter of Rabi sorghum. Int J Chem Stud. 2017;5(3):395–8.
Belton PS, Taylor JR. Sorghum and millets: protein sources for Africa. Trends Food Sci Tech. 2003;15(2):94–8.
Bhupinderdhir A. Crop productivity in changing climate. Sustain Agric Rev. 2018;27:213–41. https://doi.org/10.1007/978-3-319-75190-0.
Black CA. Methods of soil analysis. Part I. Madison, Wisconsin: American Society of Agronomy; 1965. 1572 p.
Bouyoucos GJ. Hydrometer method improved for making particle size analysis of soils. Agron J. 1962;54:464–5.
Bryan E, Deressa TT, Gbetibouo GA, Ringler C. Adaptation to climate change in Ethiopia and South Africa: options and constraints. Environ Sci Policy. 2009;12:413–26.
CEEPA. Climate Change and African Agriculture Policy Note No. 10. Pretoria: Centre for Environmental Economics and Policy in Africa (CEEPA), University of Pretoria; 2006.
Chapman HD. Cation exchange capacity. In: Black CA, editor. Methods of Soil Analysis. Madison: American Society of Agronomy; 1965. p. 891–901.
Chipanshi AC, Chanda R, Totoro O. Vulnerability assessment of the maize and sorghum crops to climate change in Botswana. Clim Change. 2003;61:339–60.
Conway D, Schipper ELF. Adaptation to climate change in Africa: challenges and opportunities identified from Ethiopia. Glob Environ Chang. 2011;21:227–37.
CRGE. Ethiopia's Climate Resilient Green Economy: Climate Resilience Strategy, Agriculture and Forestry. Federal Democratic Republic of Ethiopia; 2011.
CSA. Report on area and production of major crops (private peasant holdings, Meher season): agricultural sample survey. Addis Ababa: Central Statistical Agency (CSA); 2011.
Cunha DA, Coelho AB, Féres JG.
Irrigation as an adaptive strategy to climate change: an economic perspective on Brazilian agriculture. Environ Develop Econ. 2015;20(1):57–79. https://doi.org/10.1017/S1355770X14000102.
De Paz JM, Viscontia F, Molina MJ, Ingelmo F, Martinez D, Sanchezb J. Prediction of the effects of climate change on the soil salinity of an irrigated area under Mediterranean conditions. J Environ Management. 2012;95:53783.
Dereje A, Kindie T, Girma M, Birru Y, Wondimu B. Variability of rainfall and its current trend in the Amhara region, Ethiopia. Afr J Agric Res. 2012;7(10):1475–86. https://doi.org/10.5897/AJAR11.698.
Deressa TT, Hassan RM. The economic impact of climate change on crop production in Ethiopia: evidence from cross-section measures. J Afr Econ. 2009;18:529–54.
Dollin S, Shapter FM, Henry R, Giovanni GL, Izquierd B, Slade L. Domestication and crop improvement: genetic resources of sorghum and Saccharum. Ann Bot. 2007;100(5):975–89.
Eshetu Z, Simane B, Tebeje G, Negatu W, Amsalu A, Berhanu A, Bird N, Welham B, Trujillo NC. Climate finance in Ethiopia. London: Overseas Development Institute; Addis Ababa: Climate Science Centre; 2014.
Francis C, Rutger AJN, Palmer AFE. A rapid method for plant leaf area estimation in maize (Zea mays L.). Crop Sci. 1969;9:537–9.
Garwood EA, Williams TE. Growth, water use, and nutrient uptake from the subsoil by grass swards. J Agric Sci. 1967;93:13–24. https://doi.org/10.1017/S002185960008607X.
Golovchenko AV, Tikhonova EY, Zvyagintsev DG. Abundance, biomass, structure, and activity of the microbial complexes of minerotrophic and ombrotrophic peatlands. Microbiology. 2007;76:630–7.
Gonzalez-Dugo VJL, Durand F, Picon-Cochard C. Short-term response of the nitrogen nutrition status of tall fescue and Italian ryegrass swards under water deficit. Aust J Agric Res. 2005;56:1269–76. https://doi.org/10.1071/AR05064.
Hadgu G, Tesfaye K, Mamo G. Analysis of climate change in northern Ethiopia: implications for agricultural production. Theor Appl Climatol. 2015;121(3):733–47. https://doi.org/10.1007/s00704-014-1261-5.
Haussmann BI, Mahalakshmi V, Reddy BV, Seetharama N, Hash CT, Geiger HH. QTL mapping of stay-green in two sorghum recombinant inbred populations. Theor Appl Genet. 2002;106(1):133–42.
Haynes RJ. Soil organic matter quality and the size and activity of the microbial biomass: their significance to the quality of agricultural soils. In: Soil mineral–microbe–organic interactions. Berlin: Springer; 2008. p. 201–31.
Hoogenboom G. Crop growth and development. In: Bendi DK, Nieder R, editors. Handbook of Processes and Modeling in the Soil-Plant System. Binghamton, NY: The Haworth Press; 2003. p. 655–91.
House LR. A guide to sorghum breeding. 2nd ed. Patancheru, Andhra Pradesh, India; 1985.
SAS Institute. SAS/STAT 9.2 user's guide. Cary, NC, USA; 2009.
IPCC. Summary for Policymakers. Climate Change: The Physical Science Basis.
Jones PG, Thornton K. Generating downscaled weather data from a suite of climate models for agricultural modeling application. Agric Syst. 2013;114:1–5. https://doi.org/10.1016/j.agsy.2012.08.002.
Jones JW, Hoogenboom G, Porter CH, Boote KJ, Batchelor WD, Hunt LA, Wilkens PW, Singh U, Gijsman AJ, Ritchie JT. DSSAT cropping system model. Eur J Agron. 2003;18:235–65.
Kurukulasuriya P, Mendelsohn R, Hassan R, Benhin J, Deressa T, Diop M, Eid HM, Fosu KY, Gbetibouo G, Jain S, Mahamadou A, Mano R, Kabubo-Mariara J, El-Marsafawy S, Molua E, Ouda S, Ouedraogo M, Sene I, Maddison D, Seo SN, Dinar A. Will African agriculture survive climate change?
World Bank Econ Rev. 2006;20:367–88.
Li GH, Zhao B, Dong ST, Zhang JW, Liu P, Lu WP. Controlled-release urea combining with optimal irrigation improved grain yield, nitrogen uptake, and growth of maize. Agric Water Management. 2020;227:105834.
Loague KM, Green RE. Statistical and graphical methods for evaluating solute transport models: overview and application. J Contam Hydrol. 1991;7:51–73.
López-Cedrón XF, Boote KJ, Piñeiro J, et al. Improving the CERES-Maize model ability to simulate water deficit impact on maize production and yield components. Agron J. 2008;100:296–307.
Lupien JR. Codex standards for cereals, pulses, legumes, and products derived. Supplement 1 to Codex Alimentarius, vol. XVIII. Rome: FAO/WHO; 1990. p. 33.
Msongaleli BM. Impacts of climate variability and change on rainfed sorghum and maize: implications for food security policy in Tanzania. J Agric Sci. 2015;7(5):124–42. https://doi.org/10.5539/jas.v7n5p124.
Musongaleli B, Filbert R, Siza D, Tumbo NK. Sorghum yield response to changing climatic conditions in semi-arid central Tanzania: evaluating crop simulation model applicability. Agric Sci. 2014;5:822–33. https://doi.org/10.4236/as.2014.510087.
O'Donnell J, Rowe CM. Statistics for the evaluation and comparison of models. J Geophys Res. 1985;90(5):8995–9005.
Olsen R, Cole S, Watanabe F, Dean L. Estimation of available phosphorus in soils by extraction with sodium bicarbonate. United States Department of Agriculture Circ. 1954;939:9.
Pramod VP, Bapuji Rao B, Ramakrishna SSVS, Muneshwar Singh M, Patel NR, Sandeep VM, Rao VUM, Chowdary PS, Narsimha Rao N, Vijaya KP. Impact of projected climate on wheat yield in India and its adaptation strategies. J Agrometeorol. 2017;19(3):207–16.
Van Reeuwijk. Procedure for soil analysis. International Soil Reference and Information Centre (ISRIC), Technical Paper No. 9; 2002.
Reth S, Reichstein M, Falge E. The effect of soil water content, soil temperature, soil pH-value and root mass on soil CO2 efflux – a modified model. Plant Soil. 2005;268:21–33.
Sadras VO, Hayman D, Rodriguez M, Monjardino M, Bielich M, Unkovich B, Mudge EW. Interactions between water and nitrogen in Australian cropping systems: physiological, agronomic, economic, breeding and modeling perspectives. Crop Pasture Sci. 2016;67:1019–53.
Sandeep VM, Rao VUM, Bapuji RB, Bharathi G, Pramod VP, Chowdary P, Patel NR, Mukesh P, Vijaya KP. Impact of climate change on sorghum productivity in India and its adaptation strategies. J Agrometeorol. 2018;20(2):89–96.
Saxena NP, O'Toole JC. Field screening for drought tolerance in crop plants with emphasis on rice: proceedings of an international workshop on field screening for drought tolerance in rice, 11–14 Dec 2000, ICRISAT, Patancheru, India. Patancheru 502 324, Andhra Pradesh, India; 2002.
Schofield RV, Kirkby MJ. Application of salinization indicators and initial development of potential global soil salinization scenario under climatic change. Glob Biogeochem Cycles. 2003;17:1–13.
Seo S, Mendelsohn R, Munasinghe M. Climate change impacts Sivakumar 1992. Climate change and implications for agriculture in Niger. Clim Change. 2005;2005(20):297–312.
Setegn SG, Rayner D, Melesse AM, Dargahi B, Srinivasan R. Impact of climate change on the hydroclimatology of Lake Tana Basin, Ethiopia. Water Resour Res. 2011;47:W04511. https://doi.org/10.1029/2010WR009248.
Sinclair TR, Rufty TW. Nitrogen and water resources commonly limit crop yield increases, not necessarily plant genetics. Glob Food Sec. 2012;1:94–8.
https://doi.org/10.1016/j.gfs.2012.07.001.
Singh P, Virmani SM. Modeling growth and yield of chickpea (Cicer arietinum L.). Field Crops Res. 1996;46:41–59.
Stocker TF, Qin D, Plattner G-K, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. 2013.
Taylor JRN, Schober T, Bean S. Novel food and non-food uses for sorghum and millets. J Cereal Sci. 2006;44(3):252–71.
Tefera A. Ethiopia: grain and feed annual report. Global Agricultural Information Network, USDA Foreign Agricultural Service, report number ET 1201; 2012.
Turner NC, Rao KPC. Simulation analysis of factors affecting sorghum yield at selected sites in eastern and southern Africa, with emphasis on increasing temperatures. Agric Syst. 2013;121:53–62.
Wheeler T, von Braun J. Climate change impacts on global food security. Science. 2013;341:508–13.
Willmott CJ. On the validation of models. Phys Geogr. 1981;2:184–94.
Willmott CJ, Akleson GS, Davis RE, Feddema JJ, Klink KM, Legates DR, O'Donnell J, Rowe CM. Statistics for the evaluation and comparison of models. J Geophys Res. 1985;90(5):8995–9005.
World Bank. Economics of adaptation to climate change: Ethiopia. Washington, DC: The World Bank Group; 2010. p. 124.
Wortmann CS, Mamo M, Mburu C, Letayo E, Abebe G, Kayuki KC. Atlas of sorghum production in eastern and southern Africa. Lincoln, NE: The University of Nebraska-Lincoln; 2009.

Acknowledgements: We would like to acknowledge the Sirinka Agricultural Research Center for providing crop data for this study. We also thank the Kombolcha weather station for providing weather data.

Funding: No funding was available for this study.

Author information: Adem Mohammed, Department of Plant Science, College of Agriculture, Wollo University, P.O. Box 1145, Dessie, Ethiopia. Abebe Misganaw, Department of Plant Sciences, College of Agriculture and Natural Resource Management, Tulu Awulia University, P.O. Box 32, Tulu Awulia, Ethiopia.

Author contributions: AMo was involved in analyzing and interpreting the data regarding the crop modeling and calibration process. AMi was involved in the field work, analysis of the data, and writing of the manuscript. Both authors read and approved the final manuscript.

Correspondence to Adem Mohammed.

Ethics declarations: The manuscript was submitted in accordance with the journal's requirements and ethical considerations. The authors agree to the publication of the article in the journal.

Competing interests: The authors declare that they have no competing interests.

Availability of data and materials: The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

Cite this article: Mohammed, A., Misganaw, A. Modeling future climate change impacts on sorghum (Sorghum bicolor) production with best management options in Amhara Region, Ethiopia. CABI Agric Biosci 3, 22 (2022). https://doi.org/10.1186/s43170-022-00092-9

Keywords: Crop model; DSSAT; Sowing date
3.3.2 The Subdivision of a Simplex

Let $n \geq 0$ be a nonnegative integer and let $| \Delta ^{n} |$ be the topological $n$-simplex. For every nonempty subset $S \subseteq [n] = \{ 0 < 1 < \cdots < n \} $, let $| \Delta ^{S}|$ denote the corresponding face of $| \Delta ^{n} |$, given by the collection of tuples $(t_0, \ldots , t_ n) \in | \Delta ^{n} |$ satisfying $t_{i} =0$ for $i \notin S$. Let $b_{S}$ denote the barycenter of the simplex $| \Delta ^{S} |$: that is, the point $(t_0, \ldots , t_ n) \in | \Delta ^{S} | \subseteq | \Delta ^ n |$ given by $t_{i} = \begin{cases} \frac{1}{|S|} & \text{ if } i \in S \\ 0 & \text{ otherwise.} \end{cases}$ The collection of barycenters $\{ b_ S \} _{\emptyset \neq S \subseteq [n]}$ can be regarded as the vertices of a triangulation of $| \Delta ^{n} |$, which we indicate in the case $n = 2$ by the following diagram: \[ \xymatrix { & & \bullet \ar@ {-}[ddl] \ar@ {-}[ddr] \ar@ {-}[ddd] & & \\ & & & & \\ & \bullet \ar@ {-}[dr] & & \bullet \ar@ {-}[dl] & \\ & & \bullet & & \\ \bullet \ar@ {-}[urr] \ar@ {-}[rr] \ar@ {-}[uur] & & \bullet \ar@ {-}[u] & & \bullet \ar@ {-}[ll] \ar@ {-}[ull] \ar@ {-}[uul] } \] In this section, we show that this triangulation arises from the identification of $| \Delta ^{n} |$ with the geometric realization of another simplicial set (Proposition 3.3.2.3).

Notation 3.3.2.1. Let $Q$ be a partially ordered set. We let $\operatorname{Chain}(Q)$ denote the collection of all nonempty, finite, linearly ordered subsets of $Q$. We regard $\operatorname{Chain}(Q)$ as a partially ordered set, where the partial order is given by inclusion. In the special case where $Q = [n] = \{ 0 < 1 < \ldots < n \} $ for some nonnegative integer $n$, we denote the partially ordered set $\operatorname{Chain}(Q)$ by $\operatorname{Chain}[n]$.

Remark 3.3.2.2 (Functoriality). Let $f: Q \rightarrow Q'$ be a nondecreasing map between partially ordered sets. Then $f$ induces a map $\operatorname{Chain}(f): \operatorname{Chain}(Q) \rightarrow \operatorname{Chain}(Q')$, which carries each nonempty linearly ordered subset $S \subseteq Q$ to its image $f(S) \subseteq Q'$. By means of this construction, we can regard $Q \mapsto \operatorname{Chain}(Q)$ as a functor from the category of partially ordered sets to itself.

Proposition 3.3.2.3. Let $n \geq 0$ be an integer. Then there is a unique homeomorphism of topological spaces \[ f: | \operatorname{N}_{\bullet }( \operatorname{Chain}[n] ) | \rightarrow | \Delta ^{n} | \] satisfying the following two conditions:

$(1)$ For every nonempty subset $S \subseteq [n]$, the map $f$ carries $S$ (regarded as a vertex of $\operatorname{N}_{\bullet }( \operatorname{Chain}[n] )$) to the barycenter $b_{S} \in | \Delta ^{S} | \subseteq | \Delta ^{n} |$.

$(2)$ For every $m$-simplex $\sigma : \Delta ^{m} \rightarrow \operatorname{N}_{\bullet }( \operatorname{Chain}[n] )$, the composite map \[ | \Delta ^{m} | \xrightarrow { | \sigma | } | \operatorname{N}_{\bullet }( \operatorname{Chain}[n] ) | \xrightarrow {f} | \Delta ^{n} | \] is affine: that is, it extends to an $\operatorname{\mathbf{R}}$-linear map from $\operatorname{\mathbf{R}}^{m+1} \supseteq | \Delta ^{m} |$ to $\operatorname{\mathbf{R}}^{n+1} \supseteq | \Delta ^{n} |$.

Proof. Note that an affine map $| \Delta ^{m} | \rightarrow | \Delta ^{n} |$ is uniquely determined by its values on the vertices of the topological $m$-simplex $| \Delta ^{m} |$.
From this observation, it is easy to deduce that there is a unique continuous function $f: | \operatorname{N}_{\bullet }( \operatorname{Chain}[n] ) | \rightarrow | \Delta ^{n} |$ which satisfies conditions $(1)$ and $(2)$ of Proposition 3.3.2.3. We will complete the proof by showing that $f$ is a homeomorphism. Since the domain and codomain of $f$ are compact Hausdorff spaces, it will suffice to show that $f$ is a bijection. Unwinding the definitions, this can be restated as follows:

$(\ast )$ For every point $(t_0, t_1, \ldots , t_ n) \in | \Delta ^{n} |$, there exists a unique chain $S_0 \subsetneq S_1 \subsetneq \cdots \subsetneq S_ m$ of subsets of $[n]$ and positive real numbers $(s_0, s_1, \ldots , s_ m)$ satisfying the identities \[ \sum s_ i = 1 \quad \quad (t_0, t_1, \ldots , t_ n) = \sum s_ i b_{S_{i}}. \]

We will deduce $(\ast )$ from the following more general assertion:

$(\ast ')$ For every element $(t_0, t_1, \ldots , t_ n) \in \operatorname{\mathbf{R}}_{\geq 0}^{n+1}$, there exists a unique (possibly empty) chain $S_0 \subsetneq S_1 \subsetneq \cdots \subsetneq S_ m$ of subsets of $[n]$ and positive real numbers $(s_0, s_1, \ldots , s_ m)$ satisfying $(t_0, t_1, \ldots , t_ n) = \sum s_ i b_{S_{i}}.$

Note that, if $(t_0, t_1, \ldots , t_ n)$ and $(s_0, s_1, \ldots , s_ m)$ are as in $(\ast ')$, then we automatically have $\sum _{i=0}^{m} s_ i = \sum _{j=0}^{n} t_ j$. It follows that assertion $(\ast )$ is a special case of $(\ast ')$. To prove $(\ast ')$, let $K \subseteq [n]$ be the collection of those integers $j$ for which $t_ j \neq 0$. We proceed by induction on the cardinality $k = |K|$. If $k=0$ (that is, if $K$ is empty), there is nothing to prove. Otherwise, set $r = \min \{ k t_ i \} _{i \in K}$. We can then write \[ (t_0, t_1, \ldots , t_ n) = r b_{K} + (t'_0, t'_1, \ldots , t'_ n) \] for a unique sequence of nonnegative real numbers $(t'_0, \ldots , t'_ n )$. Applying our inductive hypothesis to the sequence $(t'_0, \ldots , t'_ n)$, we deduce that there is a unique chain of subsets $S_0 \subsetneq S_1 \subsetneq \cdots \subsetneq S_{m-1}$ of $[n]$ and positive real numbers $(s_0, s_1, \ldots , s_{m-1} )$ satisfying $(t'_0, t'_1, \ldots , t'_ n) = \sum s_ i b_{S_{i}}$. Note that each $S_{i}$ is contained in $K' = \{ j \in [n] : t'_ j \neq 0 \} $, and therefore properly contained in $K$. To complete the proof, we extend this sequence by setting $S_ m = K$ and $s_ m = r$. $\square$

Remark 3.3.2.4 (Functoriality). Let $\alpha : [m] \rightarrow [n]$ be a nondecreasing function between partially ordered sets, so that $\alpha $ induces a nondecreasing map $\operatorname{Chain}[\alpha ]: \operatorname{Chain}[m] \rightarrow \operatorname{Chain}[n]$ (Remark 3.3.2.2). If $\alpha $ is injective, then the diagram of topological spaces \[ \xymatrix@R =50pt@C=50pt{ | \operatorname{N}_{\bullet }( \operatorname{Chain}[m] ) | \ar [d]^{ | \operatorname{N}_{\bullet }( \operatorname{Chain}[\alpha ] ) |} \ar [r]^-{ f_ m }_-{\sim } & | \Delta ^{m} | \ar [d]^{ | \alpha | } \\ | \operatorname{N}_{\bullet }( \operatorname{Chain}[n] ) | \ar [r]^-{ f_ n }_-{\sim } & | \Delta ^{n} | } \] is commutative, where the horizontal maps are the homeomorphisms supplied by Proposition 3.3.2.3. Beware that if $\alpha $ is not injective, this diagram does not necessarily commute.
For example, the induced map $| \Delta ^{m} | \rightarrow | \Delta ^{n} |$ carries the barycenter of $| \Delta ^{m} |$ to the point \[ ( \frac{ | \alpha ^{-1}(0) |}{m+1}, \frac{| \alpha ^{-1}(1) |}{m+1}, \ldots , \frac{| \alpha ^{-1}(n) |}{m+1}) \in | \Delta ^{n} |, \] which need not be the barycenter of any face of $| \Delta ^{n} |$.

It will be convenient to repackage Proposition 3.3.2.3 (and Remark 3.3.2.4) as a statement about the singular simplicial set functor $\operatorname{Sing}_{\bullet }: \operatorname{Top}\rightarrow \operatorname{Set_{\Delta }}$ of Construction 1.1.7.1. We first introduce a bit of notation (which will play an essential role throughout §3.3).

Construction 3.3.2.5 (The $\operatorname{Ex}$ Functor). Let $X$ be a simplicial set. For every nonnegative integer $n$, we let $\operatorname{Ex}_{n}( X)$ denote the collection of all morphisms of simplicial sets $\operatorname{N}_{\bullet }( \operatorname{Chain}[n] ) \rightarrow X$. By virtue of Remark 3.3.2.2, the construction $( [n] \in \operatorname{{\bf \Delta }}^{\operatorname{op}} ) \mapsto (\operatorname{Ex}_{n}(X) \in \operatorname{Set})$ determines a simplicial set which we will denote by $\operatorname{Ex}(X)$. The construction $X \mapsto \operatorname{Ex}(X)$ determines a functor from the category of simplicial sets to itself, which we denote by $\operatorname{Ex}: \operatorname{Set_{\Delta }}\rightarrow \operatorname{Set_{\Delta }}$.

Remark 3.3.2.6. The construction $X \mapsto \operatorname{Ex}(X)$ can be regarded as a special case of Variant 1.1.7.6: it is the functor $\operatorname{Sing}_{\bullet }^{T}: \operatorname{Set_{\Delta }}\rightarrow \operatorname{Set_{\Delta }}$ associated to the cosimplicial object $T$ of $\operatorname{Set_{\Delta }}$ given by the construction $[n] \mapsto \operatorname{N}_{\bullet }( \operatorname{Chain}[n] )$.

Remark 3.3.2.7. The functor $X \mapsto \operatorname{Ex}(X)$ preserves filtered colimits of simplicial sets. To prove this, it suffices to observe that each of the simplicial sets $\operatorname{N}_{\bullet }( \operatorname{Chain}[n] )$ has only finitely many nondegenerate simplices (since the partially ordered set $\operatorname{Chain}[n]$ is finite).

Example 3.3.2.8. Let $\operatorname{\mathcal{C}}$ be a category and let $\operatorname{N}_{\bullet }(\operatorname{\mathcal{C}})$ denote the nerve of $\operatorname{\mathcal{C}}$. Then the $n$-simplices of the simplicial set $\operatorname{Ex}( \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) )$ can be identified with functors from the partially ordered set $\operatorname{Chain}[n]$ into $\operatorname{\mathcal{C}}$ (see Proposition 1.2.2.1).

Example 3.3.2.9. Let $X$ be a topological space and let $\operatorname{Sing}_{\bullet }(X)$ denote the singular simplicial set of $X$. For each nonnegative integer $n$, the $n$-simplices of $\operatorname{Sing}_{\bullet }(X)$ are given by continuous functions $| \Delta ^{n} | \rightarrow X$, and the $n$-simplices of $\operatorname{Ex}( \operatorname{Sing}_{\bullet }(X) )$ are given by continuous functions $| \operatorname{N}_{\bullet }( \operatorname{Chain}[n] ) | \rightarrow X$.
The homeomorphism $| \operatorname{N}_{\bullet }( \operatorname{Chain}[n] ) | \simeq | \Delta ^{n} |$ of Proposition 3.3.2.3 determines a bijection $\operatorname{Sing}_{n}(X) \xrightarrow {\sim } \operatorname{Ex}_{n}( \operatorname{Sing}_{\bullet }(X) )$, and Remark 3.3.2.4 guarantees that these bijections are compatible with the face operators on the simplicial sets $\operatorname{Sing}_{\bullet }(X)$ and $\operatorname{Ex}( \operatorname{Sing}_{\bullet }(X) )$. In other words, Proposition 3.3.2.3 supplies an isomorphism of semisimplicial sets $\varphi : \operatorname{Sing}_{\bullet }(X) \xrightarrow {\sim } \operatorname{Ex}( \operatorname{Sing}_{\bullet }(X) )$. Beware that $\varphi $ is generally not an isomorphism of simplicial sets: that is, it usually does not commute with the degeneracy operators on $\operatorname{Sing}_{\bullet }(X)$ and $\operatorname{Ex}( \operatorname{Sing}_{\bullet }(X) )$. Variant 3.3.2.10 ($\operatorname{Ex}$ for Semisimplicial Sets). Note that, for every nonnegative integer $n$, the simplicial set $\operatorname{N}_{\bullet }( \operatorname{Chain}[n] )$ is braced (Exercise 3.3.1.2). If $X$ is a semisimplicial set, we write $\operatorname{Ex}_{n}(X)$ for the collection of all morphisms of semisimplicial sets $\operatorname{N}_{\bullet }( \operatorname{Chain}[n] )^{\mathrm{nd}} \rightarrow X$; here $\operatorname{N}_{\bullet }( \operatorname{Chain}[n] )^{\mathrm{nd}}$ denotes the semisimplicial subset of $\operatorname{N}_{\bullet }(\operatorname{Chain}[n] )$ spanned by the nondegenerate simplices. The construction $[n] \mapsto \operatorname{Ex}_{n}(X)$ determines a semisimplicial set, which we denote by $\operatorname{Ex}(X)$. Note that, if $X$ is the underlying semisimplicial set of a simplicial set $Y$, then $\operatorname{Ex}(X)$ is the underlying semisimplicial set of the simplicial set $\operatorname{Ex}(Y)$ given by Construction 3.3.2.5 (this is a special case of Proposition 3.3.1.5). In other words, the construction $X \mapsto \operatorname{Ex}(X)$ determines a functor from the category of semisimplicial sets to itself which fits into a commutative diagram \[ \xymatrix { \{ \text{Simplicial sets} \} \ar [r]^-{\operatorname{Ex}} \ar [d] & \{ \text{Simplicial sets} \} \ar [d] \\ \{ \text{Semisimplicial sets} \} \ar [r]^-{\operatorname{Ex}} & \{ \text{Semisimplicial sets} \} . } \]
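As a concrete illustration (not part of the original text), the following short Python sketch enumerates the partially ordered set $\operatorname{Chain}[2]$ and counts its strictly increasing chains, i.e. the nondegenerate simplices of $\operatorname{N}_{\bullet }( \operatorname{Chain}[2] )$. The resulting counts (7 vertices, 12 edges, 6 triangles) match the triangulation of $| \Delta ^{2} |$ pictured at the beginning of this subsection.

```python
# Illustrative sketch: enumerate Chain[n] for n = 2 and count strictly
# increasing chains of its elements, i.e. the nondegenerate m-simplices of
# the nerve N_(Chain[2]).  Expected output for n = 2: 7 vertices, 12 edges,
# 6 triangles -- the barycentric subdivision of a triangle.
from itertools import combinations

def chain_poset(n):
    """Elements of Chain[n]: nonempty subsets of {0,...,n} (every subset is a
    chain, since [n] is linearly ordered), partially ordered by inclusion."""
    elems = list(range(n + 1))
    return [frozenset(c) for k in range(1, n + 2) for c in combinations(elems, k)]

def nondegenerate_simplices(poset, m):
    """Strictly increasing (m+1)-element chains S_0 < S_1 < ... < S_m,
    where < denotes proper inclusion of subsets."""
    return [chain for chain in combinations(poset, m + 1)
            if all(a < b for a, b in zip(chain, chain[1:]))]

P = chain_poset(2)
for m in range(3):
    print(m, len(nondegenerate_simplices(P, m)))
# prints: 0 7 / 1 12 / 2 6
```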
Series of water waves caused by the displacement of a large volume of a body of water

The 2004 Indian Ocean tsunami at Ao Nang, Krabi Province, Thailand

A tsunami (from Japanese: 津波, lit. 'harbour wave';[1] English pronunciation: /suːˈnɑːmi/ soo-NAH-mee[2] or /tsuːˈnɑːmi/[3]) or tidal wave,[4] also known as a seismic sea wave, is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Earthquakes, volcanic eruptions and other underwater explosions (including detonations, landslides, glacier calvings, meteorite impacts and other disturbances) above or below water all have the potential to generate a tsunami.[5] Unlike normal ocean waves, which are generated by wind, or tides, which are generated by the gravitational pull of the Moon and the Sun, a tsunami is generated by the displacement of water. Tsunami waves do not resemble normal undersea currents or sea waves because their wavelength is far longer.[6] Rather than appearing as a breaking wave, a tsunami may instead initially resemble a rapidly rising tide.[7] For this reason, it is often referred to as a "tidal wave", although this usage is not favoured by the scientific community because it might give the false impression of a causal relationship between tides and tsunamis.[8] Tsunamis generally consist of a series of waves, with periods ranging from minutes to hours, arriving in a so-called "internal wave train".[9] Wave heights of tens of metres can be generated by large events. Although the impact of tsunamis is limited to coastal areas, their destructive power can be enormous, and they can affect entire ocean basins. The 2004 Indian Ocean tsunami was among the deadliest natural disasters in human history, with at least 230,000 people killed or missing in 14 countries bordering the Indian Ocean. The Ancient Greek historian Thucydides suggested in his 5th century BC History of the Peloponnesian War that tsunamis were related to submarine earthquakes,[10][11] but the understanding of tsunamis remained slim until the 20th century and much remains unknown. Major areas of current research include determining why some large earthquakes do not generate tsunamis while other smaller ones do; accurately forecasting the passage of tsunamis across the oceans; and forecasting how tsunami waves interact with shorelines.

The term "tsunami" is a borrowing from the Japanese tsunami 津波, meaning "harbour wave". For the plural, one can either follow ordinary English practice and add an s, or use an invariable plural as in the Japanese.[12] Some English speakers alter the word's initial /ts/ to an /s/ by dropping the "t", since English does not natively permit /ts/ at the beginning of words, though the original Japanese pronunciation is /ts/.

Tsunami aftermath in Aceh, Indonesia, December 2004.

Tsunamis are sometimes referred to as tidal waves.[13] This once-popular term derives from the most common appearance of a tsunami, which is that of an extraordinarily high tidal bore.
Tsunamis and tides both produce waves of water that move inland, but in the case of a tsunami, the inland movement of water may be much greater, giving the impression of an incredibly high and forceful tide. In recent years, the term "tidal wave" has fallen out of favour, especially in the scientific community, because the causes of tsunamis have nothing to do with those of tides, which are produced by the gravitational pull of the moon and sun rather than the displacement of water. Although the meanings of "tidal" include "resembling"[14] or "having the form or character of"[15] the tides, use of the term tidal wave is discouraged by geologists and oceanographers. A 1969 episode of the TV crime show Hawaii Five-O entitled "Forty Feet High And It Kills!" used the terms "tsunami" and "tidal wave" interchangeably.[16]

Seismic sea wave

The term seismic sea wave also is used to refer to the phenomenon, because the waves most often are generated by seismic activity such as earthquakes.[17] Prior to the rise of the use of the term tsunami in English, scientists generally encouraged the use of the term seismic sea wave rather than tidal wave. However, like tsunami, seismic sea wave is not a completely accurate term, as forces other than earthquakes – including underwater landslides, volcanic eruptions, underwater explosions, land or ice slumping into the ocean, meteorite impacts, and the weather when the atmospheric pressure changes very rapidly – can generate such waves by displacing water.[18][19]

Lisbon earthquake and tsunami in November 1755.

While Japan may have the longest recorded history of tsunamis, the sheer destruction caused by the 2004 Indian Ocean earthquake and tsunami event marks it as the most devastating of its kind in modern times, killing around 230,000 people.[20] The Sumatran region is also accustomed to tsunamis, with earthquakes of varying magnitudes regularly occurring off the coast of the island.[21] Tsunamis are an often underestimated hazard in the Mediterranean Sea and parts of Europe. Of historical and current (with regard to risk assumptions) importance are the 1755 Lisbon earthquake and tsunami (which was caused by the Azores–Gibraltar Transform Fault), the 1783 Calabrian earthquakes, each causing several tens of thousands of deaths, and the 1908 Messina earthquake and tsunami. The latter tsunami claimed more than 123,000 lives in Sicily and Calabria and is among the deadliest natural disasters in modern Europe. The Storegga Slide in the Norwegian Sea and some examples of tsunamis affecting the British Isles refer predominantly to landslides and meteotsunamis and less to earthquake-induced waves. As early as 426 BC the Greek historian Thucydides inquired in his book History of the Peloponnesian War about the causes of tsunami, and was the first to argue that ocean earthquakes must be the cause.[10][11]

The cause, in my opinion, of this phenomenon must be sought in the earthquake. At the point where its shock has been the most violent the sea is driven back, and suddenly recoiling with redoubled force, causes the inundation.
Without an earthquake I do not see how such an accident could happen.[22]

The Roman historian Ammianus Marcellinus (Res Gestae 26.10.15–19) described the typical sequence of a tsunami, including an incipient earthquake, the sudden retreat of the sea and a following gigantic wave, after the 365 AD tsunami devastated Alexandria.[23][24]

The principal generation mechanism (or cause) of a tsunami is the displacement of a substantial volume of water or perturbation of the sea.[25] This displacement of water is usually attributed to earthquakes, landslides, volcanic eruptions or glacier calvings, or, more rarely, to meteorites and nuclear tests.[26][27]

Seismicity

Tsunamis can be generated when the sea floor abruptly deforms and vertically displaces the overlying water. Tectonic earthquakes are a particular kind of earthquake that are associated with the Earth's crustal deformation; when these earthquakes occur beneath the sea, the water above the deformed area is displaced from its equilibrium position.[28] More specifically, a tsunami can be generated when thrust faults associated with convergent or destructive plate boundaries move abruptly, resulting in water displacement, owing to the vertical component of movement involved. Movement on normal (extensional) faults can also cause displacement of the seabed, but only the largest of such events (typically related to flexure in the outer trench swell) cause enough displacement to give rise to a significant tsunami, such as the 1977 Sumba and 1933 Sanriku events.[29][30]

Drawing of tectonic plate boundary before earthquake
Overriding plate bulges under strain, causing tectonic uplift.
Plate slips, causing subsidence and releasing energy into water.
The energy released produces tsunami waves.

Tsunamis have a small amplitude (wave height) offshore, and a very long wavelength (often hundreds of kilometres long, whereas normal ocean waves have a wavelength of only 30 or 40 metres),[31] which is why they generally pass unnoticed at sea, forming only a slight swell usually about 300 millimetres (12 in) above the normal sea surface. They grow in height when they reach shallower water, in a wave shoaling process described below. A tsunami can occur in any tidal state and even at low tide can still inundate coastal areas. On April 1, 1946, the 8.6 Mw Aleutian Islands earthquake occurred with a maximum Mercalli intensity of VI (Strong). It generated a tsunami which inundated Hilo on the island of Hawaii with a 14-metre high (46 ft) surge. Between 165 and 173 people were killed. The area where the earthquake occurred is where the Pacific Ocean floor is subducting (or being pushed downwards) under Alaska. Examples of tsunamis originating at locations away from convergent boundaries include Storegga about 8,000 years ago, Grand Banks in 1929, and Papua New Guinea in 1998 (Tappin, 2001). The Grand Banks and Papua New Guinea tsunamis came from earthquakes which destabilised sediments, causing them to flow into the ocean and generate a tsunami. They dissipated before travelling transoceanic distances. The cause of the Storegga sediment failure is unknown. Possibilities include an overloading of the sediments, an earthquake or a release of gas hydrates (methane etc.). The 1960 Valdivia earthquake (Mw 9.5), 1964 Alaska earthquake (Mw 9.2), 2004 Indian Ocean earthquake (Mw 9.2), and 2011 Tōhoku earthquake (Mw 9.0) are recent examples of powerful megathrust earthquakes that generated tsunamis (known as teletsunamis) that can cross entire oceans.
Smaller (Mw 4.2) earthquakes in Japan can trigger tsunamis (called local and regional tsunamis) that can devastate stretches of coastline, but can do so in only a few minutes at a time. In the 1950s, it was discovered that larger tsunamis than had previously been believed possible could be caused by giant submarine landslides. These rapidly displace large water volumes, as energy transfers to the water at a rate faster than the water can absorb. Their existence was confirmed in 1958, when a giant landslide in Lituya Bay, Alaska, caused the highest wave ever recorded, with a run-up of 524 metres (over 1,700 feet).[32] The wave did not travel far, as it struck land almost immediately. Two people fishing in the bay were killed, but another boat managed to ride the wave. Another landslide-tsunami event occurred in 1963 when a massive landslide from Monte Toc entered the reservoir behind the Vajont Dam in Italy. The resulting wave surged over the 262 m (860 ft) high dam by 250 metres (820 ft) and destroyed several towns. Around 2,000 people died.[33][34] Scientists named these waves megatsunamis. Some geologists claim that large landslides from volcanic islands, e.g. Cumbre Vieja on La Palma in the Canary Islands, may be able to generate megatsunamis that can cross oceans, but this is disputed by many others. In general, landslides generate displacements mainly in the shallower parts of the coastline, and there is conjecture about the nature of large landslides that enter the water. This has been shown to subsequently affect water in enclosed bays and lakes, but a landslide large enough to cause a transoceanic tsunami has not occurred within recorded history. Susceptible locations are believed to be the Big Island of Hawaii, Fogo in the Cape Verde Islands, La Réunion in the Indian Ocean, and Cumbre Vieja on the island of La Palma in the Canary Islands, along with other volcanic ocean islands. This is because large masses of relatively unconsolidated volcanic material occur on the flanks and, in some cases, detachment planes are believed to be developing. However, there is growing controversy about how dangerous these slopes actually are.[35]

Meteorological

Some meteorological conditions, especially rapid changes in barometric pressure, as seen with the passing of a front, can displace bodies of water enough to cause trains of waves with wavelengths comparable to seismic tsunamis, but usually with lower energies. These are essentially dynamically equivalent to seismic tsunamis, the only differences being that meteotsunamis lack the transoceanic reach of significant seismic tsunamis and that the force that displaces the water is sustained over some length of time, such that meteotsunamis can't be modelled as having been caused instantaneously. In spite of their lower energies, on shorelines where they can be amplified by resonance, they are sometimes powerful enough to cause localised damage and potential for loss of life. They have been documented in many places, including the Great Lakes, the Aegean Sea, the English Channel, and the Balearic Islands, where they are common enough to have a local name, rissaga. In Sicily they are called marubbio and in Nagasaki Bay, they are called abiki.
Some examples of destructive meteotsunamis include 31 March 1979 at Nagasaki and 15 June 2006 at Menorca, the latter causing damage in the tens of millions of euros.[36] Meteotsunamis should not be confused with storm surges, which are local increases in sea level associated with the low barometric pressure of passing tropical cyclones, nor should they be confused with setup, the temporary local raising of sea level caused by strong on-shore winds. Storm surges and setup are also dangerous causes of coastal flooding in severe weather, but their dynamics are completely unrelated to tsunami waves.[36] They are unable to propagate beyond their sources, as waves do.

Man-made or triggered tsunamis

There have been studies of the potential induction of tsunami waves as a tectonic weapon, and at least one actual attempt to create them. In World War II, the New Zealand Military Forces initiated Project Seal, which attempted to create small tsunamis with explosives in the area of today's Shakespear Regional Park; the attempt failed.[37] There has been considerable speculation on the possibility of using nuclear weapons to cause tsunamis near an enemy coastline. Even during World War II, the idea was explored using conventional explosives. Nuclear testing in the Pacific Proving Ground by the United States seemed to generate poor results. Operation Crossroads fired two 20 kilotonnes of TNT (84 TJ) bombs, one in the air and one underwater, above and below the shallow (50 m (160 ft)) waters of the Bikini Atoll lagoon. Fired about 6 km (3.7 mi) from the nearest island, the waves there were no higher than 3–4 m (9.8–13.1 ft) upon reaching the shoreline. Other underwater tests, mainly Hardtack I/Wahoo (deep water) and Hardtack I/Umbrella (shallow water), confirmed the results. Analysis of the effects of shallow and deep underwater explosions indicates that the energy of the explosions doesn't easily generate the kind of deep, all-ocean waveforms which are tsunamis; most of the energy creates steam, causes vertical fountains above the water, and creates compressional waveforms.[38] Tsunamis are hallmarked by permanent large vertical displacements of very large volumes of water, which do not occur in explosions.

When the wave enters shallow water, it slows down and its amplitude (height) increases. The wave further slows and amplifies as it hits land. Only the largest waves crest. Tsunamis cause damage by two mechanisms: the smashing force of a wall of water travelling at high speed, and the destructive power of a large volume of water draining off the land and carrying a large amount of debris with it, even with waves that do not appear to be large. While everyday wind waves have a wavelength (from crest to crest) of about 100 metres (330 ft) and a height of roughly 2 metres (6.6 ft), a tsunami in the deep ocean has a much larger wavelength of up to 200 kilometres (120 mi). Such a wave travels at well over 800 kilometres per hour (500 mph), but owing to the enormous wavelength the wave oscillation at any given point takes 20 or 30 minutes to complete a cycle and has an amplitude of only about 1 metre (3.3 ft).[39] This makes tsunamis difficult to detect over deep water, where ships are unable to feel their passage. The velocity of a tsunami can be calculated as the square root of the water depth in metres multiplied by the acceleration due to gravity (approximated to 10 m/s²); that is, v = √(g × d).
For example, if the Pacific Ocean is considered to have a depth of 5000 metres, the velocity of a tsunami would be √(5000 × 10) = √50,000 ≈ 224 metres per second (735 feet per second), which equates to a speed of roughly 806 kilometres per hour or about 500 miles per hour. This is the formula used for calculating the velocity of shallow-water waves. Even the deep ocean is shallow in this sense, because a tsunami wave is so long (horizontally from crest to crest) by comparison. The reason for the Japanese name "harbour wave" is that sometimes a village's fishermen would sail out, and encounter no unusual waves while out at sea fishing, and come back to land to find their village devastated by a huge wave. As the tsunami approaches the coast and the waters become shallow, wave shoaling compresses the wave and its speed decreases below 80 kilometres per hour (50 mph). Its wavelength diminishes to less than 20 kilometres (12 mi) and its amplitude grows enormously – in accord with Green's law. Since the wave still has the same very long period, the tsunami may take minutes to reach full height. Except for the very largest tsunamis, the approaching wave does not break, but rather appears like a fast-moving tidal bore.[40] Open bays and coastlines adjacent to very deep water may shape the tsunami further into a step-like wave with a steep-breaking front. When the tsunami's wave peak reaches the shore, the resulting temporary rise in sea level is termed run up. Run up is measured in metres above a reference sea level.[40] A large tsunami may feature multiple waves arriving over a period of hours, with significant time between the wave crests. The first wave to reach the shore may not have the highest run-up.[41] About 80% of tsunamis occur in the Pacific Ocean, but they are possible wherever there are large bodies of water, including lakes. They are caused by earthquakes, landslides, volcanic explosions, glacier calvings, and bolides.

An illustration of the rhythmic "drawback" of surface water associated with a wave. It follows that a very large drawback may herald the arrival of a very large wave.

All waves have a positive and negative peak; that is, a ridge and a trough. In the case of a propagating wave like a tsunami, either may be the first to arrive. If the first part to arrive at the shore is the ridge, a massive breaking wave or sudden flooding will be the first effect noticed on land. However, if the first part to arrive is a trough, a drawback will occur as the shoreline recedes dramatically, exposing normally submerged areas. The drawback can exceed hundreds of metres, and people unaware of the danger sometimes remain near the shore to satisfy their curiosity or to collect fish from the exposed seabed. A typical wave period for a damaging tsunami is about twelve minutes. Thus, the sea recedes in the drawback phase, with areas well below sea level exposed after three minutes. For the next six minutes, the wave trough builds into a ridge which may flood the coast, and destruction ensues. During the next six minutes, the wave changes from a ridge to a trough, and the flood waters recede in a second drawback. Victims and debris may be swept into the ocean. The process repeats with succeeding waves.
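The shallow-water speed formula and the Green's-law amplification mentioned above can be illustrated with a minimal sketch (not from the article; the depths and the 1 m open-ocean amplitude are round-number examples, and g = 9.81 m/s² is used instead of the approximate 10 m/s²):

```python
# Illustrative sketch: shallow-water wave speed v = sqrt(g * d) and the
# Green's-law amplitude scaling H2 / H1 = (d1 / d2) ** 0.25.
import math

g = 9.81  # gravitational acceleration, m/s^2

for depth_m in (5000, 1000, 50, 10):
    v = math.sqrt(g * depth_m)                 # speed in m/s
    print(f"depth {depth_m:>5} m: {v:6.1f} m/s = {v * 3.6:6.0f} km/h")

# Green's law: amplitude growth as the wave moves from depth d1 to depth d2
d1, d2, h1 = 5000.0, 10.0, 1.0                 # metres; 1 m open-ocean amplitude
h2 = h1 * (d1 / d2) ** 0.25
print(f"Green's law: {h1} m at {d1} m depth -> about {h2:.1f} m at {d2} m depth")
```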
Scales of intensity and magnitude
As with earthquakes, several attempts have been made to set up scales of tsunami intensity or magnitude to allow comparison between different events.[42]

Intensity scales
The first scales used routinely to measure the intensity of tsunamis were the Sieberg-Ambraseys scale (1962), used in the Mediterranean Sea, and the Imamura-Iida intensity scale (1963), used in the Pacific Ocean. The latter scale was modified by Soloviev (1972), who calculated the tsunami intensity I according to the formula

I = 1/2 + log₂(H_av),

where H_av is the average wave height along the nearest coast. This scale, known as the Soloviev-Imamura tsunami intensity scale, is used in the global tsunami catalogues compiled by the NGDC/NOAA[43] and the Novosibirsk Tsunami Laboratory as the main parameter for the size of the tsunami. This formula yields:
I = 3 for H_av = 5.5 metres
I = 4 for H_av = 11 metres
I = 5 for H_av = 22.5 metres
In 2013, following the intensively studied tsunamis of 2004 and 2011, a new 12-point scale was proposed, the Integrated Tsunami Intensity Scale (ITIS-2012), intended to match as closely as possible the modified ESI2007 and EMS earthquake intensity scales.[44][45]

Magnitude scales
The first scale that genuinely calculated a magnitude for a tsunami, rather than an intensity at a particular location, was the ML scale proposed by Murty & Loomis based on the potential energy.[42] Difficulties in calculating the potential energy of the tsunami mean that this scale is rarely used. Abe introduced the tsunami magnitude scale M_t, calculated from

M_t = a log h + b log R + D,

where h is the maximum tsunami-wave amplitude (in m) measured by a tide gauge at a distance R from the epicentre, and a, b and D are constants chosen to make the M_t scale match the moment magnitude scale as closely as possible.[46]

Tsunami heights
[Diagram: several measures used to describe tsunami size, including height, inundation and run-up.]
Several terms are used to describe the different characteristics of a tsunami in terms of its height:[47][48][49][50]
Amplitude, Wave Height, or Tsunami Height: the height of the tsunami relative to the normal sea level. It is usually measured at sea level, and it differs from the crest-to-trough height commonly used to measure other types of wave.[51]
Run-up Height, or Inundation Height: the height reached by a tsunami on the ground above sea level. Maximum run-up height refers to the maximum height reached by water above sea level, which is sometimes reported as the maximum height reached by a tsunami.
Flow Depth: the height of the tsunami above ground, regardless of the height of the location or sea level.
(Maximum) Water Level: the maximum height above sea level as seen from a trace or water mark. It differs from maximum run-up height in that the water mark is not necessarily at the inundation line/limit.

Warnings and predictions
See also: Tsunami warning system
Drawbacks can serve as a brief warning.
People who observe drawback (many survivors report an accompanying sucking sound) can survive only if they immediately run for high ground or seek the upper floors of nearby buildings. In 2004, ten-year-old Tilly Smith of Surrey, England, was on Maikhao beach in Phuket, Thailand with her parents and sister, and having learned about tsunamis recently in school, told her family that a tsunami might be imminent. Her parents warned others minutes before the wave arrived, saving dozens of lives. She credited her geography teacher, Andrew Kearney.

In the 2004 Indian Ocean tsunami, drawback was not reported on the African coast or any other east-facing coasts that it reached. This was because the wave moved downwards on the eastern side of the fault line and upwards on the western side. The western pulse hit coastal Africa and other western areas.

A tsunami cannot be precisely predicted, even if the magnitude and location of an earthquake are known. Geologists, oceanographers, and seismologists analyse each earthquake and, based on many factors, may or may not issue a tsunami warning. However, there are some warning signs of an impending tsunami, and automated systems can provide warnings immediately after an earthquake, in time to save lives. One of the most successful systems uses bottom pressure sensors, attached to buoys, which constantly monitor the pressure of the overlying water column.

Regions with a high tsunami risk typically use tsunami warning systems to warn the population before the wave reaches land. On the west coast of the United States, which is prone to Pacific Ocean tsunamis, warning signs indicate evacuation routes. In Japan, the community is well educated about earthquakes and tsunamis, and along the Japanese shorelines the tsunami warning signs are reminders of the natural hazards, together with a network of warning sirens, typically at the top of cliffs and surrounding hills.[52]

The Pacific Tsunami Warning System is based in Honolulu, Hawaiʻi. It monitors Pacific Ocean seismic activity. A sufficiently large earthquake magnitude, together with other information, triggers a tsunami warning. While the subduction zones around the Pacific are seismically active, not all earthquakes generate a tsunami. Computers assist in analysing the tsunami risk of every earthquake that occurs in the Pacific Ocean and the adjoining land masses.

[Image gallery: tsunami hazard signs and memorials at Bamfield (British Columbia), Kamakura (Japan), Laupahoehoe (Hawaii), Kanyakumari beach, Iquique (Chile), and tsunami evacuation route signage along U.S. Route 101 in Washington.]

As a direct result of the Indian Ocean tsunami, a re-appraisal of the tsunami threat for all coastal areas is being undertaken by national governments and the United Nations Disaster Mitigation Committee. A tsunami warning system is being installed in the Indian Ocean.

[Image: one of the deep-water buoys used in the DART tsunami warning system.]
Computer models can predict tsunami arrival, usually within minutes of the arrival time. Bottom pressure sensors can relay information in real time. Based on these pressure readings and other seismic information, and on the seafloor's shape (bathymetry) and coastal topography, the models estimate the amplitude and surge height of the approaching tsunami. All Pacific Rim countries collaborate in the Tsunami Warning System and most regularly practise evacuation and other procedures.
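Operational forecast models like those just described solve the full shallow-water equations over measured bathymetry. Purely as a sketch of the underlying kinematic idea (the depth profile below is hypothetical, not real bathymetry), an arrival time can be roughed out by summing segment crossing times at the local long-wave speed √(g·d):

```python
import math

G = 9.81  # m/s^2

def travel_time_hours(depths_m, segment_km):
    """Crude arrival-time estimate over a 1-D depth profile: each segment of
    length segment_km is crossed at the local long-wave speed sqrt(g * depth)."""
    seconds = sum(segment_km * 1000 / math.sqrt(G * d) for d in depths_m)
    return seconds / 3600

# Hypothetical 4000 km path: deep ocean gradually shoaling towards a coastline.
profile = [5000] * 30 + [2000] * 5 + [500] * 3 + [100, 20]  # 40 segments of 100 km
print(f"estimated travel time: {travel_time_hours(profile, 100):.1f} hours")
```

Real warning-centre models also account for two-dimensional spreading, refraction and the source mechanism, which this toy calculation ignores.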
In Japan, such evacuation preparation is mandatory for government, local authorities, emergency services and the population.

Some zoologists hypothesise that some animal species have an ability to sense subsonic Rayleigh waves from an earthquake or a tsunami. If correct, monitoring their behaviour could provide advance warning of earthquakes and tsunamis. However, the evidence is controversial and is not widely accepted. There are unsubstantiated claims about the Lisbon quake that some animals escaped to higher ground, while many other animals in the same areas drowned. The phenomenon was also noted by media sources in Sri Lanka in the 2004 Indian Ocean earthquake.[53][54] It is possible that certain animals (e.g., elephants) may have heard the sounds of the tsunami as it approached the coast. The elephants' reaction was to move away from the approaching noise. By contrast, some humans went to the shore to investigate and many drowned as a result.

Along the United States west coast, in addition to sirens, warnings are sent on television and radio via the National Weather Service, using the Emergency Alert System.

Forecast of tsunami attack probability
Kunihiko Shimazaki (University of Tokyo), a leading member of the Earthquake Research Committee at the Headquarters for Earthquake Research Promotion in Japan, has proposed a system of public education about the probability of tsunami risk; the idea was announced by Shimazaki at the Japan National Press Club in May 2011. The forecast would cover environmental risk indicators, including expected tsunami height, danger areas prone to tsunamis, and overall probability of occurrence. It would integrate recent interdisciplinary scientific knowledge with information gathered from the aftermath of the 2011 Tōhoku earthquake and tsunami. Per the announcement, a plan was due to be put in place by 2014; however, reliable forecasting of earthquake and tsunami probability is still unavailable. Shimazaki acknowledged that, given the current literature on the topic, tsunami probability is at least as difficult to forecast as earthquake risk probability.

Mitigation
See also: Seawall
[Image: a seawall at Tsu, Japan.]
In some tsunami-prone countries, earthquake engineering measures have been taken to reduce the damage caused onshore. Japan, where tsunami science and response measures first began following a disaster in 1896, has produced ever-more elaborate countermeasures and response plans.[55] The country has built many tsunami walls of up to 12 metres (39 ft) high to protect populated coastal areas. Other localities have built floodgates of up to 15.5 metres (51 ft) high and channels to redirect the water from an incoming tsunami. However, their effectiveness has been questioned, as tsunamis often overtop the barriers. The Fukushima Daiichi nuclear disaster was directly triggered by the 2011 Tōhoku earthquake and tsunami, when waves exceeded the height of the plant's sea wall.[56] Iwate Prefecture, which is an area at high risk from tsunami, had tsunami barrier walls (the Taro sea wall) totalling 25 kilometres (16 mi) in length at coastal towns. The 2011 tsunami toppled more than 50% of the walls and caused catastrophic damage.[57]

The tsunami that struck Okushiri Island of Hokkaidō within two to five minutes of the earthquake on July 12, 1993, created waves as much as 30 metres (100 ft) tall—as high as a 10-storey building.
The port town of Aonae was completely surrounded by a tsunami wall, but the waves washed right over the wall and destroyed all the wood-framed structures in the area. The wall may have succeeded in slowing down and moderating the height of the tsunami, but it did not prevent major destruction and loss of life.[58] Disasters portal Tsunamis portal Deep-ocean Assessment and Reporting of Tsunamis Earthquake Early Warning (Japan) Higher Ground Project Index of wave articles Kaikoura Canyon landslide tsunami hazard List of natural disasters by death toll Lists of earthquakes Megatsunami Minoan eruption Rogue wave Seiche Sneaker wave Tauredunum event Tsunami Society Tsunami-proof building Tsunamis affecting New Zealand Tsunamis affecting the British Isles Tsunamis in lakes ^ "Tsunami Terminology". NOAA. Archived from the original on 2011-02-25. Retrieved 2010-07-15. ^ Wells, John C. (1990). Longman pronunciation dictionary. Harlow, England: Longman. p. 736. ISBN 978-0-582-05383-0. Entry: "tsunami" ^ "tsunami". MacMillan Dictionary. Retrieved 2018-11-23. ^ "Definition of Tidal Wave". ^ Barbara Ferreira (April 17, 2011). "When icebergs capsize, tsunamis may ensue". Nature. Retrieved 2011-04-27. ^ "NASA Finds Japan Tsunami Waves Merged, Doubling Power". Retrieved 3 November 2016. ^ "Tsunami 101". University of Washington. Retrieved 1 December 2018. ^ "What does "tsunami" mean?". Earth and Space Sciences, University of Washington. Retrieved 1 December 2018. ^ Fradin, Judith Bloom and Dennis Brindell (2008). Witness to Disaster: Tsunamis. Witness to Disaster. Washington, D.C.: National Geographic Society. pp. 42–43. Archived from the original on 2012-04-06. ^ a b Thucydides: "A History of the Peloponnesian War", 3.89.1–4 ^ a b Smid, T. C. (April 1970). 'Tsunamis' in Greek Literature. Greece & Rome. 17 (2nd ed.). pp. 100–104. ^ [a. Jap. tsunami, tunami, f. tsu harbour + nami waves.—Oxford English Dictionary] ^ "Definition of Tidal Wave". Retrieved 3 November 2016. ^ "Tidal", The American Heritage Stedman's Medical Dictionary. Houghton Mifflin Company. 11 November 2008.Dictionary.reference.com ^ -al. (n.d.). Dictionary.com Unabridged (v 1.1). Retrieved November 11, 2008, Dictionary.reference.com ^ "Forty Feet High And It Kills!" Hawaii Five-O. Writ. Robert C. Dennis and Edward J. Lakso. Dir. Michael O'Herlihy. CBS, 8 Oct. 1969. Television. ^ "Seismic Sea Wave – Tsunami Glossary". Retrieved 3 November 2016. ^ "tsunamis". Retrieved 3 November 2016. ^ postcode=3001, corporateName=Bureau of Meteorology; address=GPO Box 1289, Melbourne, Victoria, Australia;. "Joint Australian Tsunami Warning Centre". Retrieved 3 November 2016. ^ Indian Ocean tsunami anniversary: Memorial events held 26 December 2014, BBC News ^ The 10 most destructive tsunamis in history Archived 2013-12-04 at the Wayback Machine, Australian Geographic, March 16, 2011. ^ Thucydides: "A History of the Peloponnesian War", 3.89.5 ^ Kelly, Gavin (2004). "Ammianus and the Great Tsunami". The Journal of Roman Studies. 94 (141): 141–167. doi:10.2307/4135013. JSTOR 4135013. ^ Stanley, Jean-Daniel & Jorstad, Thomas F. (2005), "The 365 A.D. Tsunami Destruction of Alexandria, Egypt: Erosion, Deformation of Strata and Introduction of Allochthonous Material" ^ Haugen, K; Lovholt, F; Harbitz, C (2005). "Fundamental mechanisms for tsunami generation by submarine mass flows in idealised geometries". Marine and Petroleum Geology. 22 (1–2): 209–217. doi:10.1016/j.marpetgeo.2004.10.016. ^ Margaritondo, G (2005). 
"Explaining the physics of tsunamis to undergraduate and non-physics students". European Journal of Physics. 26 (3): 401–407. Bibcode:2005EJPh...26..401M. doi:10.1088/0143-0807/26/3/007. ^ Voit, S.S (1987). "Tsunamis". Annual Review of Fluid Mechanics. 19 (1): 217–236. Bibcode:1987AnRFM..19..217V. doi:10.1146/annurev.fl.19.010187.001245. ^ "How do earthquakes generate tsunamis?". University of Washington. Archived from the original on 2007-02-03. ^ Lynnes, C. S.; Lay, T. (1988), "Source Process of the Great 1977 Sumba Earthquake" (PDF), Geophysical Research Letters, American Geophysical Union, 93 (B11): 13, 407–13, 420, Bibcode:1988JGR....9313407L, doi:10.1029/JB093iB11p13407 ^ Kanamori H. (1971). "Seismological evidence for a lithospheric normal faulting – the Sanriku earthquake of 1933". Physics of the Earth and Planetary Interiors. 4 (4): 298–300. Bibcode:1971PEPI....4..289K. doi:10.1016/0031-9201(71)90013-6. ^ Facts and figures: how tsunamis form Archived 2013-11-05 at the Wayback Machine, Australian Geographic, March 18, 2011. ^ George Pararas-Carayannis (1999). "The Mega-Tsunami of July 9, 1958 in Lituya Bay, Alaska". Retrieved 2014-02-27. ^ Petley, Dave (Professor) (2008-12-11). "The Vaiont (Vajont) landslide of 1963". The Landslide Blog. Archived from the original on 2013-12-06. Retrieved 2014-02-26. ^ Duff, Mark (2013-10-10). "Italy Vajont anniversary: Night of the 'tsunami'". BBC News. Bbc.co.uk. Retrieved 2014-02-27. ^ Pararas-Carayannis, George (2002). "Evaluation of the threat of mega tsunami generation from postulated massive slope failures of the island volcanoes on La Palma, Canary Islands, and on the island of Hawaii". Science of Tsunami Hazards. 20 (5): 251–277. Retrieved 7 September 2014. ^ a b Monserrat, S.; Vilibíc, I.; Rabinovich, A. B. (2006). "Meteotsunamis: atmospherically induced destructive ocean waves in the tsunami frequency band" (PDF). Natural Hazards and Earth System Sciences. 6 (6): 1035–1051. doi:10.5194/nhess-6-1035-2006. Retrieved 23 November 2011. ^ "The Hauraki Gulf Marine Park, Part 2". Inset to The New Zealand Herald. 3 March 2010. p. 9. ^ Glasstone, Samuel; Dolan, Philip (1977). Shock effects of surface and subsurface bursts – The effects of nuclear weapons (third ed.). Washington, DC: U.S. Department of Defense; Energy Research and Development Administration. ^ Earthsci.org, Tsunamis ^ a b "Life of a Tsunami". Western Coastal & Marine Geology. United States Geographical Survey. 22 October 2008. Retrieved 2009-09-09. ^ Prof. Stephen A. Nelson (28 January 2009). "Tsunami". Tulane University. Retrieved 2009-09-09. ^ a b Gusiakov V. "Tsunami Quantification: how we measure the overall size of tsunami (Review of tsunami intensity and magnitude scales)" (PDF). Retrieved 2009-10-18. ^ Center, National Geophysical Data. "NGDC/WDS Global Historical Tsunami Database – NCEI". Retrieved 3 November 2016. ^ Lekkas E.; Andreadakis E.; Kostaki I. & Kapourani E. (2013). "A Proposal for a New Integrated Tsunami Intensity Scale (ITIS‐2012)". Bulletin of the Seismological Society of America. 103 (2B): 1493–1502. Bibcode:2013BuSSA.103.1493L. doi:10.1785/0120120099. ^ Katsetsiadou, K.N., Andreadakis, E. and Lekkas, E., 2016. Tsunami intensity mapping: applying the integrated Tsunami Intensity Scale (ITIS2012) on Ishinomaki Bay Coast after the mega-tsunami of Tohoku, March 11, 2011. Research in Geophysics, 5(1). ^ Abe K. (1995). Estimate of Tsunami Run-up Heights from Earthquake Magnitudes. Tsunami: progress in prediction, disaster prevention, and warning. 
ISBN 978-0-7923-3483-5. Retrieved 2009-10-18. ^ Tsunami Glossary ^ Tsunami Terms ^ 津波について ^ 津波の高さの定義 ^ Tsunami Amplitude ^ Chanson, H. (2010). Tsunami Warning Signs on the Enshu Coast of Japan. 78. Shore & Beach. pp. 52–54. ISSN 0037-4237. ^ Lambourne, Helen (2005-03-27). "Tsunami: Anatomy of a disaster". BBC. ^ Kenneally, Christine (2004-12-30). "Surviving the Tsunami: What Sri Lanka's animals knew that humans didn't". Slate Magazine. ^ "Journalist's Resource: Research for Reporting, from Harvard Shorenstein Center". Content.hks.harvard.edu. 2012-05-30. Retrieved 2012-06-12. ^ Phillip Lipscy, Kenji Kushida, and Trevor Incerti. 2013. "The Fukushima Disaster and Japan's Nuclear Plant Vulnerability in Comparative Perspective". Environmental Science and Technology 47 (May), 6082–6088. ^ Fukada, Takahiro (21 September 2011). "Iwate fisheries continue struggle to recover". The Japan Times. p. 3. Retrieved 2016-09-18. ^ George Pararas-Carayannis. "The Earthquake and Tsunami of July 12, 1993 in the Sea of Japan/East Sea". www.drgeorgepc.com. Retrieved 2016-09-18. IOC Tsunami Glossary by the Intergovernmental Oceanographic Commission (IOC) at the International Tsunami Information Centre (ITIC) of UNESCO Tsunami Terminology at NOAA In June 2011, the VOA Special English service of the Voice of America broadcast a 15-minute program on tsunamis as part of its weekly Science in the News series. The program included an interview with an NOAA official who oversees the agency's tsunami warning system. A transcript and MP3 of the program, intended for English learners, can be found at The Ever-Present Threat of Tsunamis. abelard.org. tsunamis: tsunamis travel fast but not at infinite speed. retrieved March 29, 2005. Dudley, Walter C. & Lee, Min (1988: 1st edition) Tsunami! ISBN 0-8248-1125-9 website Iwan, W.D., editor, 2006, Summary report of the Great Sumatra Earthquakes and Indian Ocean tsunamis of December 26, 2004 and March 28, 2005: Earthquake Engineering Research Institute, EERI Publication #2006-06, 11 chapters, 100-page summary, plus CD-ROM with complete text and supplementary photographs, EERI Report 2006-06. ISBN 1-932884-19-X website Kenneally, Christine (December 30, 2004). "Surviving the Tsunami." Slate. website Lambourne, Helen (March 27, 2005). "Tsunami: Anatomy of a disaster." BBC News. website Macey, Richard (January 1, 2005). "The Big Bang that Triggered A Tragedy," The Sydney Morning Herald, p 11—quoting Dr Mark Leonard, seismologist at Geoscience Australia. Interactive Map of Historical Tsunamis from NOAA's National Geophysical Data Center Tappin, D; 2001. Local tsunamis. Geoscientist. 11–8, 4–7. Girl, 10, used geography lesson to save lives, Telegraph.co.uk Philippines warned to prepare for Japan's tsunami, Noypi.ph Boris Levin, Mikhail Nosov: Physics of tsunamis. Springer, Dordrecht 2009, ISBN 978-1-4020-8855-1. Kontar, Y. A. et al.: Tsunami Events and Lessons Learned: Environmental and Societal Significance. Springer, 2014. ISBN 978-94-007-7268-7 (print); ISBN 978-94-007-7269-4 (eBook) Kristy F. Tiampo: Earthquakes: simulations, sources and tsunamis. Birkhäuser, Basel 2008, ISBN 978-3-7643-8756-3. Linda Maria Koldau: Tsunamis. Entstehung, Geschichte, Prävention, (Tsunami development, history and prevention) C.H. Beck, Munich 2013 (C.H. Beck Reihe Wissen 2770), ISBN 978-3-406-64656-0 (in German). Walter C. Dudley, Min Lee: Tsunami! University of Hawaii Press, 1988, 1998, Tsunami! University of Hawai'i Press 1999, ISBN 0-8248-1125-9, ISBN 978-0-8248-1969-9. Charles L. 
Mader: Numerical Modeling of Water Waves. CRC Press, 2004, ISBN 0-8493-2311-8.
World's Tallest Tsunami – geology.com
Tsunami Data and Information – National Geophysical Data Center
IOC Tsunami Glossary – International Tsunami Information Center (UNESCO)
Tsunami & Earthquake Research at the USGS – United States Geological Survey
Intergovernmental Oceanographic Commission
Tsunami – National Oceanic and Atmospheric Administration
Wave That Shook The World – Nova
Recent and Historical Tsunami Events and Relevant Data – Pacific Marine Environmental Laboratory
Raw Video: Tsunami Slams Northeast Japan – Associated Press
Tsunami alert page (in English) from Japan Meteorological Agency
Tsunami status page from USGS-run Pacific Tsunami Warning Center
Related to Tsunami

An earthquake is the shaking of the surface of the Earth, resulting from the sudden release of energy in the Earth's lithosphere that creates seismic waves. Earthquakes can range in size from those that are so weak that they cannot be felt to those violent enough to toss people around and destroy whole cities. The seismicity, or seismic activity, of an area is the frequency, type and size of earthquakes experienced over a period of time. The word tremor is also used for non-earthquake seismic rumbling.

A megatsunami is a very large wave created by a large, sudden displacement of material into a body of water.

Physical oceanography is the study of physical conditions and physical processes within the ocean, especially the motions and physical properties of ocean waters.

A seiche is a standing wave in an enclosed or partially enclosed body of water. Seiches and seiche-related phenomena have been observed on lakes, reservoirs, swimming pools, bays, harbours and seas. The key requirement for formation of a seiche is that the body of water be at least partially bounded, allowing the formation of the standing wave.

Cumbre Vieja
Cumbre Vieja is an active although dormant volcanic ridge on the volcanic ocean island of La Palma in the Canary Islands, Spain, that erupted twice in the 20th century – in 1949, and again in 1971.

2004 Indian Ocean earthquake and tsunami
The 2004 Indian Ocean earthquake and tsunami occurred at 00:58:53 UTC on 26 December, with an epicentre off the west coast of northern Sumatra. It was an undersea megathrust earthquake that registered a magnitude of 9.1–9.3 Mw, reaching a Mercalli intensity up to IX in certain areas. The earthquake was caused by a rupture along the fault between the Burma Plate and the Indian Plate.

1929 Grand Banks earthquake
The 1929 Grand Banks earthquake occurred on November 18. The shock had a moment magnitude of 7.2 and a maximum Rossi–Forel intensity of VI and was centered in the Atlantic Ocean off the south coast of Newfoundland in the Laurentian Slope Seismic Zone.

Tsunami warning system
A tsunami warning system (TWS) is used to detect tsunamis in advance and issue warnings to prevent loss of life and damage to property. It is made up of two equally important components: a network of sensors to detect tsunamis and a communications infrastructure to issue timely alarms to permit evacuation of the coastal areas. There are two distinct types of tsunami warning systems: international and regional. When operating, seismic alerts are used to instigate the watches and warnings; then, data from observed sea level height are used to verify the existence of a tsunami. Other systems have been proposed to augment the warning procedures; for example, it has been suggested that the duration and frequency content of t-wave energy is indicative of an earthquake's tsunami potential.
2006 Pangandaran earthquake and tsunami
The 2006 Pangandaran earthquake and tsunami occurred on July 17 at 15:19:27 local time along a subduction zone off the coast of west and central Java, a large and densely populated island in the Indonesian archipelago. The shock had a moment magnitude of 7.7 and a maximum perceived intensity of IV (Light) in Jakarta, the capital and largest city of Indonesia. There were no direct effects of the earthquake's shaking due to its low intensity, and the large loss of life from the event was due to the resulting tsunami, which inundated a 300 km (190 mi) portion of the Java coast that had been unaffected by the earlier 2004 Indian Ocean earthquake and tsunami off the coast of Sumatra. The July 2006 earthquake was also centered in the Indian Ocean, 180 kilometres (110 mi) from the coast of Java, and had a duration of more than three minutes.

Submarine landslide
Submarine landslides are marine landslides that transport sediment across the continental shelf and into the deep ocean. A submarine landslide is initiated when the downwards driving stress exceeds the resisting stress of the seafloor slope material, causing movements along one or more concave to planar rupture surfaces. Submarine landslides take place in a variety of different settings, including slopes as low as 1°, and can cause significant damage to both life and property. Recent advances have been made in understanding the nature and processes of submarine landslides through the use of sidescan sonar and other seafloor mapping technology.

Teletsunami
A teletsunami is a tsunami that originates from a distant source, defined as more than 1,000 km (620 mi) away or three hours' travel from the area of interest, sometimes travelling across an ocean. All teletsunamis have been generated by major earthquakes such as the 1755 Lisbon earthquake, 1960 Valdivia earthquake, 1964 Alaska earthquake, and 2004 Indian Ocean earthquake.

1965 Ceram Sea earthquake
The 1965 Ceram Sea earthquake occurred on January 24 at 00:11 UTC with a moment magnitude of 8.2, and its epicenter was located just off the southwestern coast of Sanana Island in eastern Indonesia. The event occurred at a depth of 28 kilometres under the Ceram Sea, and a tsunami was generated which caused damage in Sanana, Buru, and Mangole. During the tsunami three consecutive run-ups were reported on Seram Island, and a four-metre run-up was reported at Buru Island.

Tsunami Warning, Education, and Research Act of 2014
The Tsunami Warning, Education, and Research Act of 2014 is a bill that would authorize the National Oceanic and Atmospheric Administration (NOAA) to spend $27 million a year for three years on its ongoing tsunami warning and research programs.

List of tsunamis affecting New Zealand
Tsunamis affecting New Zealand are mainly due to the country being part of the geologically active Pacific Plate and associated with the Pacific Ring of Fire. Tsunamis affect New Zealand's coastline reasonably frequently and tend to be caused by earthquakes on the Pacific Plate both locally and as far away as South America, Japan, and Alaska. Some have been attributed to undersea landslides, volcanoes, and at least one meteor strike. New Zealand is affected by at least one tsunami with a wave height greater than one metre every ten years on average. The record of tsunamis is limited by the country's written history, which dates only from the early to mid-1800s, with Māori oral traditions and paleotsunami research covering the time before that.
Studies are also being carried out into possible tsunamis on the larger inland lakes, particularly from landslides.
Solutions to Riemann–Liouville fractional integrodifferential equations via fractional resolvents Shaochun Ji ORCID: orcid.org/0000-0002-5773-65321 & Dandan Yang2 This paper is concerned with the semilinear fractional integrodifferential system with Riemann–Liouville fractional derivative. Firstly, we introduce the suitable \(C_{1-\alpha }\)-solution to Riemann–Liouville fractional integrodifferential equations in the new frame of fractional resolvents. Some properties of fractional resolvents are given. Then we discuss the sufficient conditions for the existence of solutions without the Lipschitz assumptions to nonlinear item. Finally, an example on fractional partial differential equations is presented to illustrate our abstract results. Fractional differential equations have received much attention over the past two decades, as they are found to be important models in many physical, biological, and engineering problems. In fact, they can be regarded as alternative models to nonlinear differential equations and many physical phenomena with memory characteristics can be described by fractional differential equations; see, for instance, [1–7]. Recently, the theories of fractional differential equations with classical Caputo and Riemann–Liouville derivative have been developed and some basic properties are obtained including existence and controllability, see [8–27]. Among them, the differential equations with Caputo fractional derivative are studied extensively. By probability density functions, Wang and Zhou [13] gave a suitable concept of mild solutions to Caputo fractional evolution equations. Balachandran and Kiruthika [11], Balasubramaniam and Tamilalagan [23] proved the existence of solutions to Caputo fractional integrodifferential equations by using resolvent operators. Mallika and Baleanu et al. [26] studied the fractional neutral integrodifferential equation with nonlocal conditions by fixed point theorems and resolvent operators. On the other hand, in the papers of Heymans and Podlubny [28], Agarwal et al. [29], Baleanu et al. [30], it was shown that Riemann–Liouville fractional differential equations are useful in physics to model viscoelasticity and have different properties from the Caputo derivative. As the Riemann–Liouville fractional derivative has a singularity at zero, the mathematical analysis to Riemann–Liouville fractional differential equations is more complicated. In this paper we will consider the following semilinear integrodifferential system with a Riemann–Liouville fractional derivative: $$ \textstyle\begin{cases} D^{\alpha }x(t)= Ax(t) + f (t,x(t),\int _{0}^{t} h(t,s,x(s)) \,\mathrm{d}s ),\quad t\in J':=(0,b], \\ \lim_{t\rightarrow 0^{+}} \varGamma (\alpha ) t^{1-\alpha } x(t)=x_{0}, \end{cases} $$ where \(0<\alpha <1\), \(D^{\alpha }\) is the Riemann–Liouville fractional derivative of order α, \(A:D(A)\subseteq X\rightarrow X \) is the infinitesimal generator of an order-α fractional resolvent \(\{S_{\alpha }(t), t>0\}\) on a Banach space X, the operators \(h:\Delta \times X\rightarrow X\), \(f:J\times X\times X\rightarrow X\) are nonlinear functions, where \(\Delta =\{(t, s), 0 \leq s \leq t \leq b \}\), \(J:=[0,b]\). Some authors have discussed the solutions to fractional differential equations with Riemann–Liouville fractional derivative [18, 31–33]. For the mild solution, there are two different types of representation that have been given. The first one was constructed in terms of a probability density function. 
By Laplace transformation and probability density function, Liu and Li [31] gave an appropriate concept of solutions to a semilinear differential system when A generates a \(C_{0}\)-semigroup. The second one was presented in terms of fractional resolvents. In [32], based on \((\alpha ,k)\)-regularized operators, Lizama got the representation of solutions for linear fractional order differential equations. Using order-α resolvents, Li and Peng [21], Fan [33] discussed the solutions to fractional homogeneous and inhomogeneous linear differential system, respectively. As is well known, \(C_{0}\)-semigroup is a useful tool in the study of first order differential equations in Banach spaces. In a similar way, fractional resolvents play an important role in the theory of fractional integrodifferential equations. For a compact \(C_{0}\)-semigroup \(T(t)\), it is continuous in the sense of operator norm for \(t>0\). Then it is a natural question to ask whether the result is valid in the case of Riemann–Liouville fractional resolvents; see Lemma 2.5. This is one motivation of this paper. Recently some interesting results on Caputo fractional resolvents have been given in [11, 12, 26]. We note that the properties of resolvent operators for Caputo derivative and Riemann–Liouville derivative are different in essence, though neither of them has the semigroup property. For Caputo fractional resolvents \(T_{\alpha }(t)\), \(T_{\alpha }(0)x=x\) for every \(x\in X\), but it is not valid in the case of Riemann–Liouville fractional resolvents. So another motivation of this paper is to formulate the suitable solution to problem (1.1) by Riemann–Liouville fractional resolvents in a Banach space \(C_{1-\alpha }(J,X) \), which is constructed to solve the difficulty of fractional resolvents' unboundedness at \(t=0\). Then without the Lipschitz conditions, the existence of solutions to problem (1.1) is discussed. The paper is organized as follows. In Sect. 2, we recall some concepts and facts about the fractional resolvents. Section 3 is devoted to the sufficient conditions for solutions to problem (1.1). Finally, an example is presented to illustrate the application of our results. We denote by \(C(J,X)\) the space of X-valued continuous functions on J with the norm \(\|x\|=\sup \{\|x(t)\|, t\in J\}\), \(B(X)\) the space of all bounded linear operators from X to itself, \(L^{p}(J,X)\) the space of X-valued Bochner integrable functions with the norm \(\|f\|_{L^{p}}=(\int _{0}^{b} \|f(t)\|^{p} \,\mathrm{d}t)^{\frac{1}{p}}\). In order to define the solution to problem (1.1), we consider the space $$ C_{1-\alpha }(J, X):= \bigl\{ x(\cdot ) : \cdot ^{1-\alpha }x(\cdot ) \in C(J, X), 0< \alpha < 1 \bigr\} , $$ with the norm \(\|x\|_{C_{1-\alpha }} =\sup \{ \|t^{1-\alpha }x(t)\| _{X}: t\in J \}\), where \(t^{1-\alpha }x(t)|_{t=0}= \lim_{t\rightarrow 0^{+}} t^{1-\alpha }x(t)\). Obviously \(C_{1-\alpha }(J, X)\) is a Banach space. Now we recall some definitions and results on fractional derivative and fractional differential equations. Definition 2.1 ([3]) The Riemann–Liouville fractional integral of a function \(f\in L^{1}(J,X)\) of order \(\alpha \in \mathbb{R^{+}}\) is defined by $$ I^{\alpha }_{t} f(t)=\frac{1}{\varGamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}f(s) \, \mathrm{d}s,\quad t>0, $$ where \(\varGamma (\cdot )\) is the gamma function. 
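As a simple worked instance of Definition 2.1 (a standard computation, included here only for illustration), applying \(I^{\alpha }_{t}\) to a power function shifts the exponent by α and rescales by gamma factors:

$$ I^{\alpha }_{t}\, t^{\beta }=\frac{1}{\varGamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}s^{\beta } \,\mathrm{d}s =\frac{\varGamma (\beta +1)}{\varGamma (\alpha +\beta +1)}\, t^{\alpha +\beta },\quad \beta >-1, $$

which follows from the Beta function identity \(\int _{0}^{t}(t-s)^{\alpha -1}s^{\beta } \,\mathrm{d}s=t^{\alpha +\beta }B(\alpha ,\beta +1)\). In particular, \(I^{\alpha }_{t} 1=t^{\alpha }/\varGamma (\alpha +1)\).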
The Riemann–Liouville fractional order derivative of order \(\alpha \in \mathbb{R^{+}}\) of a function \(f\in L^{1}(J,X)\) is defined by $$ D^{\alpha }f(t)= \frac{1}{\varGamma (n-\alpha )}\frac{\mathrm{d}^{n}}{ \mathrm{d}t^{n}} \int _{0}^{t}(t-s)^{n-\alpha -1}f(s) \, \mathrm{d}s,\quad t>0, $$ where \(\alpha \in (n-1,n]\), \(n\in \mathbb{N}\). Especially for \(0<\alpha <1\), \(D^{\alpha }f(t)= \frac{1}{\varGamma (1- \alpha )}\frac{\mathrm{d}}{\mathrm{d}t}\int _{0}^{t}(t-s)^{-\alpha }f(s) \,\mathrm{d}s\), \(t>0\). Let the symbol ∗ be the convolution \((f*g)(t)=\int _{0}^{t} f(t-s)g(s) \,\mathrm{d}s\). For the sake of convenience, we take \(g_{\alpha }(t):=\frac{t ^{\alpha -1}}{\varGamma (\alpha )}\) for \(t>0\) and \(g_{\alpha }(t)=0\) for \(t\leq 0\). Then, for \(0<\alpha <1\), $$ I^{\alpha }_{t} f(t)=(g_{\alpha }*f) (t),\qquad D^{\alpha }f(t)=\frac{\mathrm{d}}{\mathrm{d}t} (g_{1-\alpha }*f) (t). $$ ([21]) Let \(0<\alpha <1\). A family \(\{S_{\alpha }(t),t>0\}\subseteq B(X)\) is called an order-α fractional resolvent if it satisfies the following assumptions: \(S_{\alpha }(\cdot ) x \in C (\mathbb{R}^{+}, X )\) and \(\lim_{t\rightarrow 0^{+}} \varGamma (\alpha ) t^{1-\alpha } S_{\alpha }(t) x=x\), \(x \in X\); \(S_{\alpha }(t) S_{\alpha }(s)=S_{\alpha }(s) S_{\alpha }(t)\), \(s, t>0\); \(S_{\alpha }(t) I_{s}^{\alpha } S_{\alpha }(s)-I_{t}^{\alpha } S _{\alpha }(t) S_{\alpha }(s)=g_{\alpha }(t) I_{s}^{\alpha } S_{\alpha }(s)-g_{\alpha }(s) I_{t}^{\alpha } S_{\alpha }(t)\), \(s, t>0\). The linear operator A defined by $$ A x=\varGamma (2 \alpha ) \lim_{t \rightarrow 0^{+}} \frac{t^{1-\alpha } S_{\alpha }(t) x-\frac{1}{\varGamma (\alpha )} x}{t^{\alpha }}, \quad x\in D(A), $$ is the infinitesimal generator of the fractional resolvent \(S_{\alpha }(t)\), where $$ D(A)= \biggl\{ x \in X : \lim_{t \rightarrow 0+} \frac{t^{1-\alpha } S _{\alpha }(t) x-\frac{1}{\varGamma (\alpha )} x}{t^{\alpha }} \text{ exists} \biggr\} . $$ Note that the fractional resolvent \(S_{\alpha }(t)\) is unbounded when t is sufficiently small, but \(t^{1-\alpha }S_{\alpha }(t)\) is bounded on \(J=[0,b]\). We denote \(M =\sup_{t\in J} \Vert t^{1-\alpha } S _{\alpha }(t) \Vert \). Let \(\{S_{\alpha }(t),t>0\}\)be an order-αfractional resolvent andAbe its infinitesimal generator. Then: \(S_{\alpha }(t) x \in D(A)\)and \(A S_{\alpha }(t) x=S_{\alpha }(t) A x\)for all \(x \in D(A)\), \(t>0\). For all \(x\in X\), \(t>0\), \(S_{\alpha }(t) x=\frac{t^{\alpha -1}}{ \varGamma (\alpha )} x+A I_{t}^{\alpha } S_{\alpha }(t) x\). For all \(x\in D(A)\), \(t>0\), \(S_{\alpha }(t) x=\frac{t^{\alpha -1}}{ \varGamma (\alpha )} x+I_{t}^{\alpha } S_{\alpha }(t) Ax\). Ais closed and densely defined. As fractional resolvents do not satisfy the property of semigroups, we need the following convergence results for resolvents in the uniform operator topology. Let \(\{t^{1-\alpha }S_{\alpha }(t),t>0\}\)be equicontinuous and compact. Then, for every \(t>0\), \(\lim_{h \rightarrow 0^{+}} \Vert (t+h)^{1-\alpha } S_{\alpha }(t+h)-\varGamma (\alpha ) h^{1-\alpha } S_{\alpha }(h)\cdot t^{1-\alpha } S_{\alpha }(t) \Vert =0\); \(\lim_{h \rightarrow 0^{+}} \Vert t^{1-\alpha } S_{\alpha }(t)- \varGamma (\alpha ) h^{1-\alpha } S_{\alpha }(h)\cdot (t-h)^{1-\alpha } S _{\alpha }(t-h) \Vert =0\). As \(t^{1-\alpha }S_{\alpha }(t)\) is compact for \(t>0\), we have the set $$ P_{t}=\bigl\{ t^{1-\alpha }S_{\alpha }(t)x: \Vert x \Vert \leq 1 \bigr\} , $$ is precompact in X for every \(t>0\). 
Then we can find a finite family \(\{t^{1-\alpha }S_{\alpha }(t)x_{i}:\|x_{i}\|\leq 1\}_{i=1}^{m} \subset P_{t}\) satisfying for every x, \(\|x\|\leq 1\), there exists \(x_{i}\), \(i=1,\ldots ,m\), such that $$ \bigl\Vert t^{1-\alpha } S_{\alpha }(t)x-t^{1-\alpha } S_{\alpha }(t)x_{i} \bigr\Vert < \frac{\varepsilon }{3(1+\varGamma (\alpha )M)}. $$ From Definition 2.3(a), there exists \(h_{1}>0\) such that $$ \bigl\Vert t^{1-\alpha } S_{\alpha }(t)x_{i} - \varGamma (\alpha ) h^{1-\alpha }S _{\alpha }(h)\cdot t^{1-\alpha } S_{\alpha }(t)x_{i} \bigr\Vert < \frac{\varepsilon }{3}, $$ for every \(0< h\leq h_{1}\) and \(1\leq i\leq m\). Moreover, as \(t^{1-\alpha }S_{\alpha }(t)\) is equicontinuous for \(t>0\), we can find \(h_{2}>0\) such that $$ \bigl\Vert (t+h)^{1-\alpha } S_{\alpha }(t+h)x - t^{1-\alpha }S_{\alpha }(t)x \bigr\Vert < \frac{\varepsilon }{3}, $$ for every \(0< h\leq h_{2}\) and \(\|x\|\leq 1\). Now for \(0< h\leq \min \{h_{1},h_{2}\}\) and \(\|x\|\leq 1\), it follows from (2.1)–(2.3) that $$\begin{aligned}& \bigl\Vert (t+h)^{1-\alpha } S_{\alpha }(t+h)x-\varGamma (\alpha ) h ^{1-\alpha } S_{\alpha }(h)\cdot t^{1-\alpha } S_{\alpha }(t)x \bigr\Vert \\ & \quad \leq \bigl\Vert (t+h)^{1-\alpha } S_{\alpha }(t+h)x- t^{1-\alpha } S _{\alpha }(t)x \bigr\Vert \\ & \qquad {} + \bigl\Vert t^{1-\alpha } S_{\alpha }(t)x - t^{1-\alpha } S_{\alpha }(t)x_{i} \bigr\Vert \\ & \qquad {} + \bigl\Vert t^{1-\alpha } S_{\alpha }(t)x_{i} - \varGamma (\alpha ) h ^{1-\alpha }S_{\alpha }(h)\cdot t^{1-\alpha } S_{\alpha }(t)x_{i} \bigr\Vert \\ & \qquad {} + \bigl\Vert \varGamma (\alpha ) h^{1-\alpha }S_{\alpha }(h)\cdot t^{1- \alpha } S_{\alpha }(t)x_{i} -\varGamma (\alpha ) h^{1-\alpha }S_{\alpha }(h)\cdot t^{1-\alpha } S_{\alpha }(t)x \bigr\Vert \\ & \quad < \frac{\varepsilon }{3}+ \frac{\varepsilon }{3(1+\varGamma (\alpha )M)}+ \frac{\varepsilon }{3} \\ & \qquad {}+\varGamma (\alpha )) \bigl\Vert h^{1-\alpha }S_{\alpha }(h) \bigl[t^{1- \alpha } S_{\alpha }(t)x_{i}-t^{1-\alpha } S_{\alpha }(t)x \bigr] \bigr\Vert \\ & \quad \leq \frac{\varepsilon }{3}+ \frac{\varepsilon }{3(1+ \varGamma (\alpha )M)}+ \frac{\varepsilon }{3}+\varGamma (\alpha )M \frac{ \varepsilon }{3(1+\varGamma (\alpha )M)} \\ & \quad \leq \varepsilon , \end{aligned}$$ which implies that, for every \(t>0\), $$ \lim_{h \rightarrow 0} \bigl\Vert (t+h)^{1-\alpha } S_{\alpha }(t+h)- \varGamma (\alpha ) h^{1-\alpha } S_{\alpha }(h) \cdot t^{1-\alpha } S _{\alpha }(t) \bigr\Vert =0. $$ (b) Let \(t>0\) and \(0< h<\min \{t,b\}\). 
Then, for \(\|x\|\leq 1\), we have $$\begin{aligned}& \bigl\Vert t^{1-\alpha } S_{\alpha }(t)x-\varGamma (\alpha ) h^{1-\alpha } S_{\alpha }(h)\cdot (t-h)^{1-\alpha } S_{\alpha }(t-h)x \bigr\Vert \\& \quad = \bigl\Vert t^{1-\alpha } S_{\alpha }(t)x- (t+h)^{1-\alpha } S_{\alpha }(t+h)x \bigr\Vert \\& \qquad {} + \bigl\Vert (t+h)^{1-\alpha } S_{\alpha }(t+h)x-\varGamma (\alpha ) h ^{1-\alpha } S_{\alpha }(h)\cdot t^{1-\alpha } S_{\alpha }(t)x \bigr\Vert \\& \qquad {} + \bigl\Vert \varGamma (\alpha ) h^{1-\alpha } S_{\alpha }(h)\cdot \bigl[t^{1-\alpha } S_{\alpha }(t)x- (t-h)^{1-\alpha } S_{\alpha }(t-h) x\bigr] \bigr\Vert \\& \quad \leq \bigl\Vert t^{1-\alpha } S_{\alpha }(t)x- (t+h)^{1-\alpha } S _{\alpha }(t+h)x \bigr\Vert \\& \qquad {} + \bigl\Vert (t+h)^{1-\alpha } S_{\alpha }(t+h)x-\varGamma (\alpha ) h ^{1-\alpha } S_{\alpha }(h)\cdot t^{1-\alpha } S_{\alpha }(t)x \bigr\Vert \\& \qquad {} +\varGamma (\alpha )M \bigl\Vert t^{1-\alpha } S_{\alpha }(t)x- (t-h)^{1-\alpha } S_{\alpha }(t-h) x \bigr\Vert , \end{aligned}$$ which implies the corresponding result by the conclusion of Lemma 2.5(a) and the equicontinuity of \(\{t^{1-\alpha }S_{\alpha }(t),t>0\}\). □ A function \(x\in C_{1-\alpha }(J,X)\) is called a solution to problem (1.1) if it satisfies $$ x(t) =\frac{t^{\alpha -1}}{\varGamma (\alpha )} x_{0}+A I_{t}^{\alpha } x(t)+I _{t}^{\alpha }f \biggl(t, x(t), \int _{0}^{t} h\bigl(t,s,x(s)\bigr) \, \mathrm{d}s \biggr), \quad t \in J'. $$ Let \(f\in L^{p}(J,X)\)with \(1\leq p<\infty \). Then $$ \lim_{h \rightarrow 0} \int _{0}^{b} \bigl\Vert f(t+h)-f(t) \bigr\Vert ^{p} \,\mathrm{d} t=0, $$ where \(f(t)=0\)for \(t\neq J\). In this section we shall discuss the concept of solution to problem (1.1) by fractional resolvent method and give its existence theorem without Lipschitz assumptions to nonlinear item f. Let r be a finite positive constant and set \(W_{r}=\{x \in C_{1-\alpha }(J,X):\|x\|_{C_{1-\alpha } }\leq r\}\). For brevity, we define the integral operator H by \((Hx)(t)= \int _{0}^{t} h(t,s,x(s)) \,\mathrm{d}s\), \(x\in C_{1-\alpha }(J,X)\). We give the following hypotheses on fractional integrodifferential system (1.1). \(\{t^{1-\alpha } S_{\alpha }(t), t>0 \}\) is equicontinuous and compact. The function \(h:\Delta \times X\rightarrow X\) satisfies the following: For a.e. \((t,s)\in \Delta \), the function \(h(t,s,\cdot ):X \rightarrow X\) is continuous and for all \(x\in X\), the function \(h(\cdot ,\cdot ,x):\Delta \rightarrow X\) is strongly measurable; There exists \(m\in \mathbb{R^{+}}\) such that \(\|h(t,s,x)\| \leq m\|x\|\). The function \(f:J\times X\times X \rightarrow X\) satisfies the following: \(f(t,\cdot ,\cdot )\) is continuous for a.e. \(t\in [0,b]\) and \(f(\cdot ,x,y):[0,b]\rightarrow X\) is measurable for all \(x, y\in X\); For a.e. \(t\in [0,b]\) and \(x,y\in X\), $$ \bigl\Vert f(t,x,y) \bigr\Vert \leq \theta (t)+\rho t^{1-\alpha }\bigl( \Vert x \Vert + \Vert y \Vert \bigr), $$ where \(\theta (t)\in L^{p}(J,X)\), \(p>\frac{1}{\alpha }\) and \(0<\rho <\frac{\alpha ^{2}}{Mb\alpha +Mb^{2}m}\). Let \(f\in L^{p}(J,X)\), \(p>\frac{1}{\alpha }\)and hypothesis (H1) be satisfied. Then the convolution $$ (S_{\alpha }*f) (t)= \int _{0}^{t} S_{\alpha }(t-s)f(s) \, \mathrm{d}s,\quad t\in J', $$ exists and defines a continuous function on \(J'\). From Proposition 1.3.4 in [35], we know that \(S_{\alpha }(t-\cdot )f(\cdot )\) is measurable on \((0,t)\). 
Moreover, we have $$\begin{aligned} \bigl\Vert (S_{\alpha } * f ) (t) \bigr\Vert =& \biggl\Vert \int _{0}^{t} \bigl((t-s)^{1-\alpha } S_{\alpha }(t-s) \bigr)\cdot (t-s)^{ \alpha -1} f(s) \,\mathrm{d} s \biggr\Vert \\ \leq & M \int _{0}^{t} \bigl\Vert (t-s)^{\alpha -1} f(s) \bigr\Vert \,\mathrm{d} s \\ \leq & M \Vert f \Vert _{L^{p}} \biggl( \int _{0}^{t} \bigl((t-s)^{\alpha -1} \bigr)^{ \frac{p}{p-1}} \,\mathrm{d} s \biggr)^{\frac{p-1}{p}} \\ \leq & M \Vert f \Vert _{L^{p}} \biggl(\biggl( \frac{p-1}{\alpha p-1}\biggr)t^{\frac{ \alpha p-1}{p-1}} \biggr)^{\frac{p-1}{p}} \\ \leq & M \Vert f \Vert _{L^{p}} b^{\alpha -\frac{1}{p}} \biggl( \frac{p-1}{ \alpha p-1} \biggr)^{1-\frac{1}{p}}, \end{aligned}$$ which shows that \(S_{\alpha } * f\) exists. Next we show that \(S_{\alpha } * f \in C(J',X)\). Let \(0<\varepsilon <t _{1}<t_{2} \leq b\), then we have $$\begin{aligned}& \bigl\Vert (S_{\alpha } * f ) (t_{2})- (S_{\alpha } * f ) (t _{1}) \bigr\Vert \\& \quad = \biggl\Vert \int _{0}^{t_{2}} S_{\alpha }(t_{2}-s)f(s) \,\mathrm{d}s- \int _{0}^{t_{1}} S_{\alpha }(t_{1}-s)f(s) \,\mathrm{d}s \biggr\Vert \\& \quad \leq \biggl\Vert \int _{0}^{t_{1}-\varepsilon } \bigl[(t_{2}-s)^{1-\alpha }S_{\alpha }(t_{2}-s)-(t_{1}-s)^{1-\alpha }S_{\alpha }(t_{1}-s) \bigr] \cdot (t_{2}-s)^{\alpha -1}f(s) \,\mathrm{d}s \biggr\Vert \\& \qquad {}+ \biggl\Vert \int _{t_{1}-\varepsilon }^{t_{1}} \bigl[(t_{2}-s)^{1-\alpha }S_{\alpha }(t_{2}-s)-(t_{1}-s)^{1-\alpha }S_{\alpha }(t_{1}-s) \bigr] \cdot (t_{2}-s)^{\alpha -1}f(s) \,\mathrm{d}s \biggr\Vert \\& \qquad {}+ \biggl\Vert \int _{0}^{t_{1}} (t_{1}-s)^{1-\alpha }S_{\alpha }(t_{1}-s) \cdot \bigl[(t_{2}-s)^{\alpha -1}- (t_{1}-s)^{\alpha -1} \bigr]f(s) \,\mathrm{d}s \biggr\Vert \\& \qquad {}+ \biggl\Vert \int _{t_{1}}^{t_{2}} (t_{2}-s)^{1-\alpha }S_{\alpha }(t _{2}-s)\cdot (t_{2}-s)^{\alpha -1}f(s) \, \mathrm{d}s \biggr\Vert \\& \quad \leq \sup_{s\in [0,t_{1}-\varepsilon ]} \bigl\Vert (t_{2}-s)^{1-\alpha }S _{\alpha }(t_{2}-s)-(t_{1}-s)^{1-\alpha }S_{\alpha }(t_{1}-s) \bigr\Vert \cdot \Vert f \Vert _{L^{p}} b^{\alpha -\frac{1}{p}} \biggl( \frac{p-1}{\alpha p-1} \biggr) ^{1-\frac{1}{p}} \\& \qquad {}+2M \Vert f \Vert _{L^{p}}\cdot \biggl(\frac{p-1}{\alpha p-1} \biggr)^{1- \frac{1}{p}} \bigl[(t_{2}-t_{1}+\varepsilon )^{\frac{\alpha p-1}{p-1}} - (t_{2}-t_{1})^{\frac{\alpha p-1}{p-1}} \bigr]^{1-\frac{1}{p}} \\& \qquad {}+ M \Vert f \Vert _{L^{p}} \biggl( \int _{0}^{t_{1}} \bigl[ (t_{2}-s ) ^{\alpha -1}- (t_{1}-s )^{\alpha -1} \bigr]^{ \frac{p}{p-1}} \,\mathrm{d} s \biggr)^{1-\frac{1}{p}} \\& \qquad {}+ M \Vert f \Vert _{L^{p}} \biggl(\frac{p-1}{\alpha p-1} \biggr)^{1- \frac{1}{p}} (t_{2}-t_{1} )^{\alpha -\frac{1}{p}}. \end{aligned}$$ Then due to the equicontinuity of \(\{t^{1-\alpha } S_{\alpha }(t),t>0 \}\), Lemma 2.7 and the arbitrariness of ε, we get $$ \bigl\Vert (S_{\alpha }*f) (t_{2})- (S_{\alpha }*f) (t_{1}) \bigr\Vert \rightarrow 0, \quad \text{as } t_{1}\rightarrow t_{2}, $$ which shows that \((S_{\alpha }*f)(t)\) is continuous on \((0,b]\). □ Suppose that conditions (H1)–(H3) are satisfied. Then \(x\in C_{1- \alpha }(J,X)\)is a solution to problem (1.1) if and only ifxsatisfies $$ x(t)=S_{\alpha }(t) x_{0}+ \int _{0}^{t} S_{\alpha }(t-s)f\bigl(s, x(s),Hx(s)\bigr) \,\mathrm{d} s,\quad t \in J'. $$ By Lemma 2.4(b), we know that, for \(t>0\), $$ g_{\alpha }(t)=S_{\alpha }(t)-(A g_{\alpha }* S_{\alpha }) (t). $$ Let \(x(\cdot )\) be a solution to problem (1.1). 
Then we have $$\begin{aligned} g_{\alpha }* x =& (S_{\alpha }-A g_{\alpha }* S_{\alpha }) * x \\ =& S_{\alpha }*x-S_{\alpha }*(A g_{\alpha }* x) \\ =& S_{\alpha }*(x-A g_{\alpha }* x ) \\ =& S_{\alpha }* \bigl(g_{\alpha }x_{0}+g_{\alpha }*f \bigl(\cdot ,x(\cdot ),Hx( \cdot ) \bigr) \bigr) \\ =& g_{\alpha }* \bigl(S_{\alpha }x_{0}+S_{\alpha }*f \bigl(\cdot ,x(\cdot ),Hx( \cdot ) \bigr) \bigr), \end{aligned}$$ which implies $$ x(t)=S_{\alpha }(t)x_{0}+ \int _{0}^{t} S_{\alpha }(t-s)f\bigl(s, x(s),Hx(s)\bigr) \,\mathrm{d} s. $$ Conversely, suppose \(x(\cdot )\) satisfies Eq. (3.2). From Lemma 3.1, we know that x is well defined on \(J'\). For the result of \(A I_{t}^{ \alpha } x(t)\), by Definition 2.3(c), we have $$\begin{aligned}& \biggl(s^{1-\alpha } S_{\alpha }(s)-\frac{1}{\varGamma (\alpha )} \biggr) I_{t}^{\alpha } x(t) \\& \quad = \bigl(s^{1-\alpha } S_{\alpha }(s)-s^{1-\alpha }g_{\alpha }(s) \bigr) \bigl(I_{t}^{\alpha } S_{\alpha }(t)x_{0}+g_{\alpha }* S_{\alpha } * f\bigl(\cdot ,x(\cdot ),Hx(\cdot )\bigr) (t) \bigr) \\& \quad = s^{1-\alpha }\bigl[S_{\alpha }(s)I_{t}^{\alpha } S_{\alpha }(t)x_{0} - g_{\alpha }(s)I_{t}^{\alpha } S_{\alpha }(t)x_{0} \bigr] \\& \qquad {} +s^{1-\alpha }\bigl[S_{\alpha }(s)\cdot \bigl(I_{t}^{\alpha }S_{\alpha } \bigr)*f\bigl( \cdot ,x(\cdot ),Hx(\cdot )\bigr) (t)- g_{\alpha }(s)\cdot \bigl(I_{t}^{\alpha }S _{\alpha }\bigr)*f\bigl(\cdot ,x( \cdot ),Hx(\cdot )\bigr) (t)\bigr] \\& \quad = s^{1-\alpha }\bigl[S_{\alpha }(t)I_{s}^{\alpha } S_{\alpha }(s)x_{0} - g_{\alpha }(t)I_{s}^{\alpha } S_{\alpha }(s)x_{0} \bigr] \\& \qquad {} +s^{1-\alpha }\bigl[I_{s}^{\alpha }S_{\alpha }(s)S_{\alpha }(t)- I_{s} ^{\alpha }S_{\alpha }(s) g_{\alpha }(t) \bigr]* f\bigl(\cdot ,x(\cdot ),Hx( \cdot )\bigr) (t) \\& \quad = s^{1-\alpha }I_{s}^{\alpha }S_{\alpha }(s) \bigl[S_{\alpha }(t)x _{0}- g_{\alpha }(t)x_{0}+S_{\alpha } * f\bigl(\cdot ,x(\cdot ),Hx(\cdot )\bigr) (t)-g_{\alpha }*f\bigl(\cdot ,x( \cdot ),Hx(\cdot )\bigr) (t) \bigr] \\& \quad = s^{1-\alpha }I_{s}^{\alpha }S_{\alpha }(s) \bigl[x(t)-g_{\alpha }(t)x _{0}- I_{t}^{\alpha }f \bigl(t,x(t),Hx(t)\bigr) \bigr]. \end{aligned}$$ $$\begin{aligned}& A I_{t}^{\alpha } x(t) \\& \quad = \lim_{s \rightarrow 0^{+}} \varGamma (2 \alpha ) \frac{ (s^{1- \alpha } S_{\alpha }(s) -\frac{1}{\varGamma (\alpha )} )I_{t}^{\alpha } x(t) }{s^{\alpha }} \\& \quad = \lim_{s \rightarrow 0^{+}} \varGamma (2 \alpha ) s^{1-2 \alpha } I _{s}^{\alpha }S_{\alpha }(s) \bigl[x(t)-g_{\alpha }(t)x_{0}- I_{t}^{ \alpha }f\bigl(t,x(t),Hx(t)\bigr) \bigr]. 
\end{aligned}$$ Noticing that $$\begin{aligned}& \bigl\Vert \varGamma (2 \alpha ) s^{1-2 \alpha } I_{s}^{\alpha } S_{ \alpha }(s) x-x \bigr\Vert \\& \quad = \biggl\Vert \frac{\varGamma (2 \alpha )}{ \varGamma (\alpha )} \int _{0}^{s} s^{1-2 \alpha }(s-\tau )^{\alpha -1} S _{\alpha }(\tau ) x \,\mathrm{d}\tau -x \biggr\Vert \\& \quad = \biggl\Vert \frac{\varGamma (2 \alpha )}{\varGamma (\alpha )} \int _{0} ^{1} s^{1-\alpha }(1-\tau )^{\alpha -1} S_{\alpha }(s \tau ) x \,\mathrm{d}\tau -x \biggr\Vert \\& \quad = \biggl\Vert \frac{\varGamma (2 \alpha )}{[\varGamma (\alpha )]^{2}} \int _{0}^{1} s^{1-\alpha } \varGamma (\alpha ) (1-\tau )^{\alpha -1} S_{ \alpha }(s \tau ) x \,\mathrm{d}\tau -x \biggr\Vert \\& \quad = \biggl\Vert \frac{\varGamma (2 \alpha )}{[\varGamma (\alpha )]^{2}} \int _{0}^{1}(1-\tau )^{\alpha -1} \tau ^{\alpha -1} \varGamma (\alpha ) (s \tau )^{1-\alpha } S_{\alpha }(s \tau ) x \,\mathrm{d}\tau \\& \qquad {} -\frac{\varGamma (2 \alpha )}{[\varGamma (\alpha )]^{2}} \int _{0}^{1}(1- \tau )^{\alpha -1} \tau ^{\alpha -1} x \,\mathrm{d}\tau \biggr\Vert \\& \quad \leq \frac{\varGamma (2 \alpha )}{[\varGamma (\alpha )]^{2}} \int _{0} ^{1}(1-\tau )^{\alpha -1} \tau ^{\alpha -1} \,\mathrm{d}\tau \cdot \sup_{\tau \in [0,1]} \bigl\Vert \varGamma (\alpha ) (s \tau )^{1-\alpha } S _{\alpha }(s \tau ) x-x \bigr\Vert \\& \quad \leq \sup_{\tau \in [0,1]} \bigl\Vert \varGamma (\alpha ) (s \tau )^{1- \alpha } S_{\alpha }(s \tau ) x-x \bigr\Vert . \end{aligned}$$ By Definition 2.3(a), we get $$ \bigl\Vert \varGamma (2 \alpha ) s^{1-2 \alpha } I_{s}^{\alpha } S_{ \alpha }(s) x-x \bigr\Vert \rightarrow 0,\quad \text{as } s\rightarrow 0^{+}. $$ Combining (3.3) and (3.4), we have $$ A I_{t}^{\alpha } x(t)=x(t)-g_{\alpha }(t)x_{0}- I_{t}^{\alpha }f\bigl(t,x(t),Hx(t)\bigr). $$ That is, $$ x(t)=g_{\alpha }(t)x_{0} + A I_{t}^{\alpha } x(t)+ I_{t}^{\alpha }f\bigl(t,x(t),Hx(t)\bigr), $$ which shows that x is a solution to problem (1.1). □ Suppose that assumptions (H1)–(H3) are satisfied. Let \(W_{r}=\{x \in C_{1-\alpha }(J,X):\|x\|_{C_{1-\alpha }}\leq r\}\). Then the mapping \(G:W_{r}\rightarrow C_{1-\alpha }(J,X) \)defined by $$ (Gx) (t)= \int _{0}^{t} S_{\alpha }(t-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s $$ is compact. In view of the relationship between \((C_{1-\alpha }(J,X), \|\cdot \|_{C_{1-\alpha }})\) and \((C(J,X), \|\cdot \|_{C})\), for the compactness of \(GW_{r}\) in \(C_{1-\alpha }(J,X) \), it is sufficient to prove that the set $$ B=\bigl\{ y\in C(J,X):y(t)=t^{1-\alpha }(Gx) (t), x\in W_{r}, t \in J\bigr\} $$ is precompact in \(C(J,X)\). Firstly, we show that \(B(t)=\{y(t):y\in B \}\subseteq X\) is precompact in X for every \(t\in J\). If \(t=0\), then \(B(0)=0\) is obviously satisfied. If \(t>0\), we can define a set \(B^{\varepsilon }(t)= \{y ^{\varepsilon }(t), x\in W_{r}, t\in J'\}\subseteq X \), where $$ y^{\varepsilon }(t)=\varepsilon ^{1-\alpha } S_{\alpha }(\varepsilon ) \cdot \varGamma (\alpha ) t^{1-\alpha } \int _{0}^{t-\varepsilon } S_{ \alpha }(t-s- \varepsilon )f\bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s. 
$$ For \(x\in W_{r}\), \(s\in [0,b]\), we have $$\begin{aligned} \bigl\Vert f\bigl(s,x(s),Hx(s)\bigr) \bigr\Vert \leq & \theta (s)+\rho s^{1-\alpha } \biggl( \bigl\Vert x(s) \bigr\Vert + \biggl\Vert \int _{0}^{s} h\bigl(s,\tau ,x(\tau )\bigr) \, \mathrm{d}\tau \biggr\Vert \biggr) \\ \leq & \theta (s)+\rho s^{1-\alpha } \biggl( \bigl\Vert x(s) \bigr\Vert + \int _{0}^{s} m \bigl\Vert x(\tau ) \bigr\Vert \,\mathrm{d}\tau \biggr) \\ \leq & \theta (s)+\rho s^{1-\alpha } \bigl\Vert x(s) \bigr\Vert + \rho s^{1-\alpha } \int _{0}^{s} m \tau ^{\alpha -1} \bigl\Vert \tau ^{1-\alpha }x(\tau ) \bigr\Vert \,\mathrm{d}\tau \\ \leq & \theta (s)+\rho r+ \rho s^{1-\alpha }m\frac{s^{\alpha }}{ \alpha }r \\ \leq & \theta (s)+\rho r+ \rho \frac{ms}{\alpha }r \\ \leq & \theta (s)+\rho r+ \rho \frac{mb}{\alpha }r. \end{aligned}$$ By (3.5), for \(x\in W_{r}\), \(t\in (0,b]\), we get $$\begin{aligned}& \biggl\Vert t^{1-\alpha } \int _{0}^{t-\varepsilon } S_{\alpha }(t-s-\varepsilon )f\bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \\& \quad \leq b^{1-\alpha } \int _{0}^{t-\varepsilon } \bigl\Vert (t-s-\varepsilon )^{1- \alpha } S_{\alpha }(t-s-\varepsilon )\cdot (t-s-\varepsilon )^{ \alpha -1}f\bigl(s,x(s),Hx(s)\bigr) \bigr\Vert \,\mathrm{d}s \\& \quad \leq M b^{1-\alpha } \int _{0}^{t-\varepsilon } \bigl\Vert (t-s-\varepsilon )^{\alpha -1}f\bigl(s,x(s),Hx(s)\bigr) \bigr\Vert \,\mathrm{d}s \\& \quad \leq M b^{1-\alpha } \int _{0}^{t-\varepsilon } (t-s-\varepsilon )^{ \alpha -1} \biggl( \theta (s)+\rho r+ \rho \frac{mb}{\alpha }r \biggr) \,\mathrm{d}s \\& \quad \leq M b^{1-\alpha } \int _{0}^{t-\varepsilon } (t-s-\varepsilon )^{ \alpha -1} \theta (s) \,\mathrm{d}s+ M b^{1-\alpha } \int _{0}^{t- \varepsilon } (t-s-\varepsilon )^{\alpha -1} \biggl(\rho r+ \rho \frac{mb}{ \alpha }r \biggr) \, \mathrm{d}s \\& \quad \leq M \biggl( b\frac{p-1}{\alpha p-1} \biggr)^{1-\frac{1}{p}} \Vert \theta \Vert _{L^{p}}+\frac{Mb}{\alpha } \biggl(\rho r+ \rho \frac{mb}{ \alpha }r \biggr) \\& \quad < \infty . \end{aligned}$$ Moreover, due to hypothesis (H1), for \(\varepsilon >0\), the operator \(\varepsilon ^{1-\alpha } S_{\alpha }(\varepsilon )\) is compact. So we know that \(B^{\varepsilon }(t) \) is precompact in X for each \(t\in J'\). Let \(t\in (0,b]\) and \(\delta \in (\varepsilon ,t )\). 
We have $$\begin{aligned}& \bigl\Vert y(t)-y^{\varepsilon }(t) \bigr\Vert \\& \quad \leq t^{1-\alpha } \biggl[ \biggl\Vert \int _{0}^{t-\varepsilon } (t-s)^{1- \alpha }S_{\alpha }(t-s) \cdot (t-s)^{\alpha -1} f\bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \\& \qquad {} - \varepsilon ^{1-\alpha } S_{\alpha }(\varepsilon )\varGamma (\alpha ) \int _{0}^{t-\varepsilon } (t-s-\varepsilon )^{1-\alpha }S_{\alpha }(t-s- \varepsilon )\cdot (t-s)^{\alpha -1} f\bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \\& \qquad {} + \biggl\Vert \varepsilon ^{1-\alpha } S_{\alpha }(\varepsilon ) \varGamma (\alpha ) \int _{0}^{t-\varepsilon } (t-s-\varepsilon )^{1-\alpha }S _{\alpha }(t-s-\varepsilon )\cdot \bigl((t-s)^{\alpha -1} -(t-s-\varepsilon )^{\alpha -1} \bigr) \\& \qquad {}\times f\bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \\& \qquad {} + \biggl\Vert \int _{t-\varepsilon }^{t} (t-s)^{1-\alpha }S_{\alpha }(t-s) \cdot (t-s)^{\alpha -1} f\bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \biggr] \\& \quad \leq b^{1-\alpha } \int _{0}^{t-\varepsilon } \bigl\Vert \bigl[ (t-s)^{1- \alpha }S_{\alpha }(t-s)- \varGamma (\alpha ) \varepsilon ^{1-\alpha } S _{\alpha }(\varepsilon ) (t-s-\varepsilon )^{1-\alpha } S_{\alpha }(t-s- \varepsilon ) \bigr] \\& \qquad {}\times (t-s)^{\alpha -1} f\bigl(s,x(s),Hx(s)\bigr) \bigr\Vert \,\mathrm{d}s \\& \qquad {} +b^{1-\alpha } \bigl\Vert \varepsilon ^{1-\alpha } S_{\alpha }(\varepsilon ) \bigr\Vert \varGamma (\alpha )\cdot M \int _{0}^{t-\varepsilon } \bigl\Vert \bigl[(t-s)^{ \alpha -1} -(t-s-\varepsilon )^{\alpha -1} \bigr]f \bigl(s,x(s),Hx(s)\bigr) \bigr\Vert \,\mathrm{d}s \\& \qquad {} + b^{1-\alpha }M \int _{t-\varepsilon }^{t} \bigl\Vert (t-s)^{\alpha -1} f\bigl(s,x(s),Hx(s)\bigr) \bigr\Vert \,\mathrm{d}s \\& \quad \leq b^{1-\alpha } \int _{0}^{t-\delta } \bigl\Vert (t-s)^{1-\alpha }S_{ \alpha }(t-s)- \varGamma (\alpha )\varepsilon ^{1-\alpha } S_{\alpha }( \varepsilon ) (t-s-\varepsilon )^{1-\alpha } S_{\alpha }(t-s-\varepsilon ) \bigr\Vert \\& \qquad {}\times (t-s)^{\alpha -1} \biggl(\theta (s)+\rho r+ \rho \frac{mb}{\alpha }r \biggr) \,\mathrm{d}s \\& \qquad {} + b^{1-\alpha } \int _{t-\delta }^{t-\varepsilon } \bigl\Vert (t-s)^{1-\alpha }S_{\alpha }(t-s)- \varGamma (\alpha )\varepsilon ^{1-\alpha } S_{\alpha }(\varepsilon ) (t-s-\varepsilon )^{1-\alpha } S_{\alpha }(t-s-\varepsilon ) \bigr\Vert \\& \qquad {}\times (t-s)^{\alpha -1} \biggl(\theta (s)+\rho r+ \rho \frac{mb}{\alpha }r \biggr) \,\mathrm{d}s \\& \qquad {} +b^{1-\alpha }M^{2} \varGamma (\alpha ) \biggl( \int _{0}^{t-\varepsilon } \bigl[(t-s)^{\alpha -1} -(t-s-\varepsilon )^{\alpha -1} \bigr]^{ \frac{p}{p-1}} \,\mathrm{d}s \biggr)^{1-\frac{1}{p}} \Vert f \Vert _{L^{p}} \\& \qquad {} +b^{1-\alpha }M \int _{t-\varepsilon }^{t} \bigl\Vert (t-s)^{\alpha -1} f\bigl(s,x(s),Hx(s)\bigr) \bigr\Vert \,\mathrm{d}s \\& \quad := I_{1}+I_{2}+I_{3}+I_{4}, \end{aligned}$$ $$\begin{aligned}& \begin{aligned} I_{1} &= b^{1-\alpha } \int _{0}^{t-\delta } \bigl\Vert (t-s)^{1-\alpha }S_{ \alpha }(t-s)- \varGamma (\alpha ) \varepsilon ^{1-\alpha } S_{\alpha }( \varepsilon ) (t-s-\varepsilon )^{1-\alpha } S_{\alpha }(t-s-\varepsilon ) \bigr\Vert \\ &\quad {}\times (t-s)^{\alpha -1} \biggl(\theta (s)+\rho r+ \rho \frac{mb}{\alpha }r \biggr) \,\mathrm{d}s, \end{aligned} \\& \begin{aligned} I_{2} &= b^{1-\alpha } \int _{t-\delta }^{t-\varepsilon } \bigl\Vert (t-s)^{1- \alpha }S_{\alpha }(t-s)- \varGamma (\alpha )\varepsilon ^{1-\alpha } S _{\alpha }(\varepsilon ) (t-s-\varepsilon )^{1-\alpha } S_{\alpha }(t-s- \varepsilon ) 
\bigr\Vert \\ &\quad {}\times (t-s)^{\alpha -1} \biggl(\theta (s)+\rho r+ \rho \frac{mb}{\alpha }r \biggr) \,\mathrm{d}s, \end{aligned} \\& I_{3} = b^{1-\alpha }M^{2} \varGamma (\alpha ) \biggl( \int _{0}^{t- \varepsilon } \bigl[(t-s)^{\alpha -1} -(t-s-\varepsilon )^{\alpha -1} \bigr]^{\frac{p}{p-1}} \,\mathrm{d}s \biggr)^{1-\frac{1}{p}} \Vert f \Vert _{L^{p}}, \\& I_{4} = b^{1-\alpha }M \int _{t-\varepsilon }^{t} \bigl\Vert (t-s)^{\alpha -1} f\bigl(s,x(s),Hx(s)\bigr) \bigr\Vert \,\mathrm{d}s. \end{aligned}$$ From Lemma 2.5, we know that \(I_{1}\rightarrow 0\), as \(\varepsilon \rightarrow 0^{+} \). By the arbitrariness of ε, δ and absolute continuity of integral, we get $$ I_{2}\rightarrow 0,\qquad I_{4}\rightarrow 0, $$ as \(\varepsilon ,\delta \rightarrow 0^{+} \). The conclusion of Lemma 2.7 shows that \(I_{3}\rightarrow 0 \), as \(\varepsilon \rightarrow 0^{+}\). Now for \(t\in J'\), we get $$ \lim_{\varepsilon \rightarrow 0^{+}} \bigl\Vert y(t)-y^{\varepsilon }(t) \bigr\Vert =0, $$ which implies that \(B(t)=\{y(t):y\in B \}\) is precompact in X as there is a family of precompact sets arbitrarily close to it. Next, we show the equicontinuity of B on J. Similar to the computational procedure of (3.6), we can get $$\begin{aligned}& \biggl\Vert \int _{0}^{t} S_{\alpha }(t-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \\& \quad \leq Mb^{\alpha - \frac{1}{p}} \biggl(\frac{p-1}{\alpha p-1} \biggr) ^{1-\frac{1}{p}} \Vert \theta \Vert _{L^{p}}+\frac{Mb^{\alpha }}{\alpha } \biggl(\rho r+ \rho \frac{mb}{\alpha }r \biggr) \\& \quad := F_{r}< \infty , \end{aligned}$$ for \(t\in J\), \(x\in W_{r}\). Let \(y\in B\), \(0\leq t_{1}< t_{2}\leq b\). Then we have $$\begin{aligned}& \bigl\Vert y(t_{2})-y(t_{1}) \bigr\Vert \\& \quad = \biggl\Vert t_{2}^{1-\alpha } \int _{0}^{t_{2}} S_{\alpha }(t_{2}-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s - t_{1}^{1-\alpha } \int _{0}^{t_{1}} S_{\alpha }(t_{1}-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \\& \quad \leq \biggl\Vert \bigl(t_{2}^{1-\alpha }-t_{1}^{1-\alpha } \bigr) \int _{0}^{t_{2}} S_{\alpha }(t_{2}-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \\& \qquad {} +t_{1}^{1-\alpha } \biggl\Vert \int _{0}^{t_{2}} S_{\alpha }(t_{2}-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s- \int _{0}^{t_{1}} S_{\alpha }(t_{1}-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \\& \quad \leq \bigl(t_{2}^{1-\alpha }-t_{1}^{1-\alpha } \bigr)F_{r} \\& \qquad {} +b^{1-\alpha } \biggl\Vert \int _{0}^{t_{2}} S_{\alpha }(t_{2}-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s- \int _{0}^{t_{1}} S_{\alpha }(t_{1}-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert . \end{aligned}$$ From (3.5), we know $$ \bigl\Vert f\bigl(s,x(s),Hx(s)\bigr) \bigr\Vert \leq \theta (s)+\rho r+ \rho \frac{mb}{\alpha }r,\quad \theta \in L^{p}(J,X). $$ Then due to Eq. (3.1) in Lemma 3.1, we have $$ \biggl\Vert \int _{0}^{t_{2}} S_{\alpha }(t_{2}-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s- \int _{0}^{t_{1}} S_{\alpha }(t_{1}-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \rightarrow 0, $$ as \(t_{1}\rightarrow t_{2} \), independent of \(x\in W_{r}\). Now we can obtain $$ \lim_{t_{1} \rightarrow t_{2}} \bigl\Vert y (t_{2} )-y (t _{1} ) \bigr\Vert =0, $$ which leads to the equicontinuity of B on J. Thus \(G:W_{r}\rightarrow C_{1-\alpha }(J,X) \) is a compact mapping by the Ascoli–Arzela theorem. This proof is completed. □ Now we can present our main existence result to problem (1.1). Assume that the hypotheses (H1)–(H3) are satisfied. 
Then the system (1.1) has at least one solution. We transform the existence of solutions into a fixed point problem. For this purpose, by considering Lemma 3.2, we introduce the solution operator \(\varPhi :C_{1-\alpha }(J,X) \rightarrow C_{1- \alpha }(J,X)\) by $$ \varPhi x(t)=S_{\alpha }(t) x_{0}+ \int _{0}^{t} S_{\alpha }(t-s)f\bigl(s, x(s),Hx(s)\bigr) \,\mathrm{d} s. $$ It is easy to see that the fixed point of Φ is just the solution to problem (1.1). Subsequently, we shall prove that Φ has a fixed point by Schauder's fixed point theorem. Step 1. We claim that \(\varPhi W_{r}\subseteq W_{r}\) in \(C_{1-\alpha }(J,X)\), where $$ r\geq \frac{\alpha ^{2}}{\alpha ^{2}-Mb(\alpha \rho +\rho bm ) } \biggl[ M \Vert x_{0} \Vert + M \biggl( b\frac{p-1}{\alpha p-1} \biggr)^{1-\frac{1}{p}} \Vert \theta \Vert _{L^{p}} \biggr] . $$ In fact, for \(x\in W_{r}\), \(t\in J\), from (3.7) we have $$\begin{aligned}& \bigl\Vert t^{1-\alpha }\varPhi x(t) \bigr\Vert \\& \quad \leq \bigl\Vert t^{1-\alpha }S_{\alpha }(t) x_{0} \bigr\Vert +b^{1-\alpha } \biggl\Vert \int _{0}^{t} S_{\alpha }(t-s)f \bigl(s,x(s),Hx(s)\bigr) \,\mathrm{d}s \biggr\Vert \\& \quad \leq M \Vert x_{0} \Vert + M \biggl( b\frac{p-1}{\alpha p-1} \biggr)^{1- \frac{1}{p}} \Vert \theta \Vert _{L^{p}}+ \frac{Mb}{\alpha } \biggl(\rho r+ \rho \frac{mb}{\alpha }r \biggr) \\& \quad \leq r. \end{aligned}$$ Step 2. We show that Φ is continuous on \(W_{r}\subseteq C_{1- \alpha }(J,X)\). For this purpose, we assume that \(x_{n}\rightarrow x\) in \(W_{r}\). From hypothesis (H2), (H3), for \(t\in J\), we have $$ (t-s)^{\alpha -1} \bigl(f \bigl(s, x_{n}(s), Hx_{n}(s) \bigr)-f\bigl(s, x(s),Hx(s)\bigr) \bigr) \rightarrow 0,\quad \text{a.e. } s \in [0, t], $$ and from (3.5), it follows that $$\begin{aligned}& (t-s)^{\alpha -1} \bigl\Vert f \bigl(s, x_{n}(s),Hx_{n}(s) \bigr)-f\bigl(s, x(s),Hx(s)\bigr) \bigr\Vert \\& \quad \leq 2(t-s)^{\alpha -1}\biggl(\theta (s)+\varrho r+\rho \frac{mb}{\alpha }r\biggr),\quad s \in [0, t]. \end{aligned}$$ Then, by the dominated convergence theorem, we get $$\begin{aligned}& t^{1-\alpha } \bigl\Vert (\varPhi x_{n}) (t)-(\varPhi x) (t) \bigr\Vert \\& \quad \leq t^{1-\alpha } \int _{0}^{t} \bigl\Vert (t-s)^{1-\alpha }S_{\alpha }(t-s) \bigr\Vert \cdot (t-s)^{\alpha -1} \bigl\Vert f\bigl(s, x_{n}(s),Hx_{n}(s)\bigr)- f\bigl(s, x(s),Hx(s)\bigr) \bigr\Vert \,\mathrm{d} s \\& \quad \leq Mb^{1-\alpha } \int _{0}^{t} (t-s)^{\alpha -1} \bigl\Vert f\bigl(s, x _{n}(s),Hx_{n}(s)\bigr)- f\bigl(s, x(s),Hx(s)\bigr) \bigr\Vert \,\mathrm{d} s \\& \quad \rightarrow 0 ,\quad n\rightarrow \infty , \end{aligned}$$ which implies the continuity of Φ on \(W_{r}\). Step 3. We show that the operator Φ is compact. Let $$ \varPhi =\varPhi _{1}+\varPhi _{2}, $$ where \(\varPhi _{1}(t)=S_{\alpha }(t) x_{0}\), \(\varPhi _{2}(t)=\int _{0}^{t} S _{\alpha }(t-s)f(s, x(s),Hx(s)) \,\mathrm{d} s\). From Lemma 3.3, we have concluded that \(\varPhi _{2}\) is compact in \(W_{r}\). For the compactness of \(\varPhi _{1}\), it is sufficient to check the set $$ V=\bigl\{ z\in C(J,X):z(t)=t^{1-\alpha } S_{\alpha }(t)x_{0},x_{0} \in X, t \in J \bigr\} , $$ is precompact in \(C(J,X)\). Obviously, \(V(0)=\{\frac{x_{0}}{\varGamma ( \alpha )}\}\), \(V(t)=\{t^{1-\alpha } S_{\alpha }(t)x_{0} \}\), \(t>0\), is precompact in X. Suppose that \(0\leq t_{1}< t_{2}\leq b \). 
If \(t_{1}=0\), in view of Definition 2.3(a), we get $$ \bigl\Vert z(t_{2})-z(0) \bigr\Vert = \biggl\Vert t_{2}^{1-\alpha } S_{\alpha }(t_{2})x_{0} -\frac{x _{0}}{\varGamma (\alpha )} \biggr\Vert \rightarrow 0, $$ as \(t_{2}\rightarrow 0\). If \(t_{1}>0\), $$ \bigl\Vert z(t_{2})-z(t_{1}) \bigr\Vert \leq \bigl\Vert t_{2}^{1-\alpha } S_{\alpha }(t_{2})x _{0} - t_{2}^{1-\alpha } S_{\alpha }(t_{2})x_{0} \bigr\Vert \rightarrow 0. $$ From hypothesis (H1), we know that \(\|z(t_{2})-z(t_{1}) \|\rightarrow 0\), as \(t_{1}\rightarrow t_{2}\). By the Ascoli–Arzela theorem, we see that V is precompact in \(C(J,X)\). Therefore, \(\varPhi =\varPhi _{1}+\varPhi _{2}\) is a compact operator in \(C_{1-\alpha }(J,X) \). Hence, from Schauder's fixed point theorem, there exists a fixed point x such that \(\varPhi x=x \), which is the solution to problem (1.1). This completes the proof. □ Consider the following integrodifferential evolution system with Riemann–Liouville fractional derivative: $$\begin{aligned} \textstyle\begin{cases} D^{\alpha }u(t,x)=\frac{\partial ^{2}}{\partial x^{2}}u(t,x)+ F (t,u(t,x), \int _{0}^{t} h_{1}(t,s,u(s,x))\,\mathrm{d}s ),\quad 0< t\leq 1, 0< x< 1, \\ u(t,0)=u(t,1)=0, \\ \lim_{t \rightarrow 0^{+}} \varGamma (\alpha ) t^{1-\alpha } u(t, x)=u _{0}(x). \end{cases}\displaystyle \end{aligned}$$ Take \(X=L^{2}(0,1)\) and the operator \(A:D(A)\subseteq X\rightarrow X\) defined by \(Az=z^{\prime \prime }\), with $$ D(A)=\bigl\{ z\in X:z,z' \mbox{ are absolutely continuous}, z''\in X,z(0)=z(1)=0 \bigr\} . $$ From Pazy [36], A is the infinitesimal generator of a compact analytic semigroup \(T(t)\), \(t\geq 0\). It is known that A has the eigenvalues \(\lambda _{n}=-n^{2}\pi ^{2}\), \(n\in \mathbb{N}\), and the corresponding eigenvectors \(e_{n}(x)=\sqrt{2}\sin (n\pi x)\) for \(n\geq 1\), \(e_{0}=1\), which form an orthogonal basis for \(L^{2}(0, 1)\). Then \(T(t)\) is given by $$ T(t)z= \sum_{n=1}^{\infty }e^{-n^{2}\pi ^{2}t}\langle z,e_{n}\rangle e_{n}. $$ If \(u_{0}(x)=\sum_{n=1}^{\infty } c_{n} \sin n \pi x\), then we have $$ T(t) u_{0}(x)=\sum_{n=1}^{\infty } e^{-n^{2} \pi ^{2} t} c_{n} \sin n \pi x. $$ Moreover, from [21], we know A is the infinitesimal generator of an order-α fractional resolvent \(S_{\alpha }(t)\) and $$ S_{\alpha }(t) u_{0}(x) =\sum_{n=1}^{\infty } t^{\alpha -1} E_{\alpha , \alpha } \bigl(-n^{2} \pi ^{2} t^{\alpha } \bigr) c_{n} \sin n \pi x. $$ Employing the method in [13, 31], by Laplace transformation and probability density functions, we can have $$ t^{1-\alpha }S_{\alpha }(t)u_{0}(x)= \alpha \int _{0}^{\infty } \theta \xi _{\alpha }( \theta ) T \bigl(t^{\alpha } \theta \bigr) u_{0}(x) \, \mathrm{d} \theta , $$ for any \(u_{0}\in X\), where $$\begin{aligned}& \xi _{\alpha }(\theta )=\frac{1}{\alpha } \theta ^{-1-\frac{1}{\alpha }} \varpi _{\alpha } \bigl(\theta ^{-\frac{1}{\alpha }} \bigr), \\& \varpi _{\alpha }(\theta )=\frac{1}{\pi } \sum _{n=1}^{\infty }(-1)^{n-1} \theta ^{-n \alpha -1} \frac{\varGamma (n \alpha +1)}{n !} \sin (n \pi \alpha ), \quad \theta \in (0, \infty ). \end{aligned}$$ Equation (4.2) shows $$ t^{1-\alpha }S_{\alpha }(t)=\alpha \int _{0}^{\infty } \theta \xi _{ \alpha }( \theta ) T \bigl(t^{\alpha } \theta \bigr) \,\mathrm{d} \theta . $$ From Lemma 2.9 of [13], it follows that the family of operators \(\{t^{1-\alpha }S_{\alpha }(t):t>0 \} \) is equicontinuous, compact and \(\|t^{1-\alpha }S_{\alpha }(t)\|\leq \frac{\alpha M'}{\varGamma (1+ \alpha )}:=M\), where \(M'=\sup \{\|T(t)\|:0\leq t\leq 1 \}\). Then hypothesis (H1) is satisfied. 
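Remark: for this example the bound on \(t^{1-\alpha }S_{\alpha }(t)\) can also be checked directly from the integral representation above. Here \(T(t)\) is a contraction semigroup, so \(\Vert T(s)\Vert \leq 1=M'\) for all \(s\geq 0\), and \(\xi _{\alpha }\) is a probability density on \((0,\infty )\) with \(\int _{0}^{\infty } \theta \xi _{\alpha }(\theta ) \,\mathrm{d}\theta =\frac{1}{\varGamma (1+\alpha )}\), so that $$ \bigl\Vert t^{1-\alpha }S_{\alpha }(t) \bigr\Vert \leq \alpha \int _{0}^{\infty } \theta \xi _{\alpha }(\theta ) \bigl\Vert T \bigl(t^{\alpha }\theta \bigr) \bigr\Vert \,\mathrm{d}\theta \leq \alpha M' \int _{0}^{\infty } \theta \xi _{\alpha }(\theta ) \,\mathrm{d}\theta = \frac{\alpha M'}{\varGamma (1+\alpha )}=M. $$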
Now, we define a continuous function \(f:[0,b]\times X\times X \rightarrow X\) by $$\begin{aligned}& f(t,u,h) (x)=F\bigl(t,u(t,x),h(t,x) \bigr),\quad 0< t\leq 1, 0< x< 1, \\& h(t,x)= \int _{0}^{t} h_{1}\bigl(t,s,u(s,x) \bigr)\,\mathrm{d}s. \end{aligned}$$ In particular, take $$\begin{aligned}& F \biggl(t,u(t,x), \int _{0}^{t} h_{1}\bigl(t,s,u(s,x) \bigr)\,\mathrm{d}s \biggr) \\& \quad =e^{-t}\cos \bigl(u(t,x)\bigr)+\rho t^{1-\alpha } \biggl(u(t,x)+ \int _{0}^{t} \cos (ts)u(s,x) \,\mathrm{d}s \biggr), \end{aligned}$$ where \(0<\rho <\frac{\alpha ^{2}}{M\alpha +M}\). So the functions f, h satisfy hypotheses (H2) and (H3). Let \(u(t)x=u(t,x)\), for \(t,x\in (0,1)\). Then the differential system (4.1) can be presented in the abstract form (1.1) and all the conditions of Theorem 3.4 are satisfied. Hence there exists a function \(u\in C_{1-\alpha }(J,L^{2}(0,1))\) which is a solution of (4.1).
Conclusions
Using fractional resolvents, this paper has introduced the notion of a solution to semilinear Riemann–Liouville fractional integrodifferential equations and discussed its existence. Two points in the study are worthy of attention. One is that the Riemann–Liouville fractional resolvent \(S_{\alpha }(t)\) is not bounded at \(t=0\), which is essentially different from the case of Caputo fractional resolvents; the other is that \(S_{\alpha }(t)\) does not have the semigroup property, which means that the compactness of \(S_{\alpha }(t)\) (or \(t^{1-\alpha } S_{\alpha }(t)\)) does not imply the equicontinuity of \(S_{\alpha }(t)\) (or \(t^{1-\alpha } S_{\alpha }(t)\)). As the existence of solutions is the basis of the qualitative study of differential equations, the controllability and stability of solutions can be investigated using a similar approach. On the other hand, some new general fractional derivatives have been introduced and studied, such as extended Riemann–Liouville fractional derivatives and fractional derivatives without singular (exponential) kernel [37–41]. In particular, the Hilfer fractional derivative is often used as a generalized Riemann–Liouville fractional derivative that includes both the Riemann–Liouville and Caputo derivatives; see [4, 9, 42]. To the best of our knowledge, most existence and controllability results for Hilfer fractional differential systems have been obtained under the framework that A generates a strongly continuous semigroup, with the solution expressed via the semigroup and probability density functions. It remains an open problem how to define fractional resolvents for Hilfer and other general fractional equations, and this is worth investigating in future work.
References
Bonilla, B., Rivero, M., Rodriguez-Germa, L., Trujillo, J.J.: Fractional differential equations as alternative models to nonlinear differential equations. Appl. Math. Comput. 187, 79–88 (2007) Miller, K.S., Ross, B.: An Introduction to the Fractional Calculus and Differential Equations. Wiley, New York (1993) Podlubny, I.: Fractional Differential Equations. Academic Press, San Diego (1999) Hilfer, R.: Applications of Fractional Calculus in Physics. World Scientific, Singapore (2000) Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006) Lakshmikantham, V., Leela, S., Vasundhara Devi, J.: Theory of Fractional Dynamic Systems. Cambridge Academic Publishers, Cambridge (2009) Yang, X.J.: General Fractional Derivatives: Theory, Methods and Applications.
CRC Press, New York (2019) El-Borai, M.M.: Some probability densities and fundamental solutions of fractional evolution equations. Chaos Solitons Fractals 14, 433–440 (2002) Du, J., Jiang, W., Pang, D., Niazi, A.U.K.: Exact controllability for Hilfer fractional differential inclusions involving nonlocal initial conditions. Complexity 2018, 9472847 (2018) Li, M., Chen, C., Li, F.B.: On fractional powers of generators of fractional resolvent families. J. Funct. Anal. 259, 2702–2726 (2010) Balachandran, K., Kiruthika, S.: Existence results for fractional integrodifferential equations with nonlocal condition via resolvent operators. Comput. Math. Appl. 62, 1350–1358 (2011) Hernandez, E., O'Regan, D., Balachandran, K.: Existence results for abstract fractional differential equations with nonlocal conditions via resolvent operators. Indag. Math. 24, 68–82 (2013) Wang, J., Zhou, Y.: A class of fractional evolution equations and optimal controls. Nonlinear Anal., Real World Appl. 12, 262–272 (2011) Wang, J., Fečkan, M., Zhou, Y.: Approximate controllability of Sobolev type fractional evolution systems with nonlocal conditions. Evol. Equ. Control Theory 6, 471–486 (2017) Wang, J., Zhou, Y., Wei, W., Xu, H.: Nonlocal problems for fractional integrodifferential equations via fractional operators and optimal controls. Comput. Math. Appl. 62, 1427–1441 (2011) Fan, Z.: Characterization of compactness for resolvents and its applications. Appl. Math. Comput. 232, 60–67 (2014) Ji, S.: Approximate controllability of semilinear nonlocal fractional differential systems via an approximating method. Appl. Math. Comput. 236, 43–53 (2014) Lian, T., Fan, Z., Li, G.: Approximate controllability of semilinear fractional differential systems of order \(1< q<2\) via resolvent operators. Filomat 31, 5769–5781 (2017) Agarwal, R.P., Santos, J.P., Cuevas, C.: Analytic resolvent operator and existence results for fractional integrodifferential equations. J. Abstr. Differ. Equ. Appl. 2(2), 26–47 (2012) Chen, L., Fan, Z., Li, G.: On a nonlocal problem for fractional differential equations via resolvent operators. Adv. Differ. Equ. 2014, 251 (2014) Li, K., Peng, J.: Fractional resolvents and fractional evolution equations. Appl. Math. Lett. 25, 808–812 (2012) Li, K., Peng, J., Jia, J.: Cauchy problems for fractional differential equations with Riemann–Liouville fractional derivatives. J. Funct. Anal. 263, 476–510 (2012) Balasubramaniam, P., Tamilalagan, P.: The solvability and optimal controls for impulsive fractional stochastic integro-differential equations via resolvent operators. J. Optim. Theory Appl. 174, 139–155 (2017) Bajlekova, E.G.: Fractional evolution equations in Banach spaces. Ph.D. thesis, Eindhoven University of Technology (2001) Debbouche, A., Baleanu, D.: Controllability of fractional evolution nonlocal impulsive quasilinear delay integro-differential systems. Comput. Math. Appl. 62, 1442–1450 (2011) Mallika, D., Baleanu, D., Suganya, S., Arjunan, M.M.: Existence results for fractional neutral integro-differential systems with nonlocal condition through resolvent operators. An. Ştiinţ. Univ. 'Ovidius' Constanţa, Ser. Mat. 27, 107–124 (2019) Tariboon, J., Ntouyas, S.K., Agarwal, P.: New concepts of fractional quantum calculus and applications to impulsive fractional q-difference equations. Adv. Differ. Equ. 2015, 18 (2015) Heymans, N., Podlubny, I.: Physical interpretation of initial conditions for fractional differential equations with Riemann–Liouville fractional derivatives. Rheol. 
Acta 45, 765–771 (2006) Agarwal, R.P., Belmekki, M., Benchohra, M.: A survey on semilinear differential equations and inclusions involving Riemann–Liouville fractional derivative. Adv. Differ. Equ. 2009, 981728 (2009) Baleanu, D., Agarwal, P., Parmar, R.K., et al.: Extension of the fractional derivative operator of the Riemann–Liouville. J. Nonlinear Sci. Appl. 10, 2914–2924 (2017) Liu, Z., Li, X.: Approximate controllability of fractional evolution systems with Riemann–Liouville fractional derivatives. SIAM J. Control Optim. 53, 1920–1933 (2015) Lizama, C.: An operator theoretical approach to a class of fractional order differential equations. Appl. Math. Lett. 24, 184–190 (2011) Fan, Z.: Existence and regularity of solutions for evolution equations with Riemann–Liouville fractional derivatives. Indag. Math. 25, 516–524 (2014) Zeidler, E.: Nonlinear Functional Analysis and Its Application II/A. Springer, New York (1990) Arendt, W., Batty, C., Hieber, M., Neubrander, F.: Vector-valued Laplace transforms and Cauchy problems. In: Amann, H., Bourguignon, J.P., Grove, K., Lions, P.L. (eds.) Monographs in Mathematics, 2nd edn. vol. 96. Birkhäuser, Basel (2011) Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, New York (1983) Agarwal, P., Choi, J., Paris, R.B.: Extended Riemann–Liouville fractional derivative operator and its applications. J. Nonlinear Sci. Appl. 8, 451–466 (2015) Agarwal, P., Nieto, J.J., Luo, M.J.: Extended Riemann–Liouville type fractional derivative operator with applications. Open Math. 15, 1667–1681 (2017) Yang, X.J., Srivastava, H.M., Tenreiro Machado, J.A.: A new fractional derivative without singular kernel: application to the modelling of the steady heat flow. Therm. Sci. 20, 753–756 (2016) Yang, X.J., Gao, F., Tenreiro Machado, J.A., Baleanu, D.: A new fractional derivative involving the normalized sinc function without singular kernel. Eur. Phys. J. Spec. Top. 226, 3567–3575 (2017) Yang, X.J., Gao, F., Ju, Y., Zhou, H.W.: Fundamental solutions of the general fractional-order diffusion equations. Math. Methods Appl. Sci. 41, 9312–9320 (2018) Ahmed, H.M., El-Borai, M.M.: Hilfer fractional stochastic integro-differential equations. Appl. Math. Comput. 331, 182–189 (2018) This work was completed when the first author visited Texas A&M University. The author is grateful to Professor Goong Chen and Department of Mathematics for their hospitality and providing good working conditions. The authors are grateful to the referees for their careful reading of the manuscript and their valuable comments. Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study. Research is supported by National Natural Science Foundation of China (No. 11601178), Qing Lan Project of Jiangsu Province Universities, Jiangsu Overseas Research & Training Program for University Young Faculty and Presidents. Faculty of Mathematics and Physics, Huaiyin Institute of Technology, Huaian, P.R. China Shaochun Ji School of Mathematical Science, Huaiyin Normal University, Huaian, P.R. China Dandan Yang Search for Shaochun Ji in: Search for Dandan Yang in: The work presented in this paper has been accomplished through contributions of all authors. All authors read and approved the final manuscript. Correspondence to Shaochun Ji. 
Ji, S., Yang, D.: Solutions to Riemann–Liouville fractional integrodifferential equations via fractional resolvents. Adv. Differ. Equ. 2019, 524 (2019). https://doi.org/10.1186/s13662-019-2463-z
Keywords: Fractional resolvents; Riemann–Liouville fractional derivative; Integrodifferential equations; Fixed point theorems
Recommendations for the analysis of individually randomised controlled trials with clustering in one arm – a case of continuous outcomes Laura Flight1, Annabel Allison2, Munyaradzi Dimairo1, Ellen Lee1, Laura Mandefield1 & Stephen J. Walters ORCID: orcid.org/0000-0001-9000-81261 In an individually randomised controlled trial where the treatment is delivered by a health professional it seems likely that the effectiveness of the treatment, independent of any treatment effect, could depend on the skill, training or even enthusiasm of the health professional delivering it. This may then lead to a potential clustering of the outcomes for patients treated by the same health professional, but similar clustering may not occur in the control arm. Using four case studies, we aim to provide practical guidance and recommendations for the analysis of trials with some element of clustering in one arm. Five approaches to the analysis of outcomes from an individually randomised controlled trial with clustering in one arm are identified in the literature. Some of these methods are applied to four case studies of completed randomised controlled trials with clustering in one arm with sample sizes ranging from 56 to 539. Results are obtained using the statistical packages R and Stata and summarised using a forest plot. The intra-cluster correlation coefficient (ICC) for each of the case studies was small (<0.05) indicating little dependence on the outcomes related to cluster allocations. All models fitted produced similar results, including the simplest approach of ignoring clustering for the case studies considered. A partially clustered approach, modelling the clustering in just one arm, most accurately represents the trial design and provides valid results. Modelling homogeneous variances between the clustered and unclustered arm is adequate in scenarios similar to the case studies considered. We recommend treating each participant in the unclustered arm as a single cluster. This approach is simple to implement in R and Stata and is recommended for the analysis of trials with clustering in one arm only. However, the case studies considered had small ICC values, limiting the generalisability of these results. Randomised controlled trials (RCTs) are commonly used to evaluate the efficacy of healthcare treatments where patients are randomised to receive care from the same source; for example a health professional such as a nurse, therapist, general practitioner (GP) or surgeon. There are two main types of RCTs: group/cluster randomised controlled trials (cRCTs) and individually randomised controlled trials (iRCTs). Cluster RCTs randomise groups or clusters (of individuals) to the treatment arms; for example GP practices, schools or communities whilst iRCTs randomise individual patients [1, 2]. In a cRCT, for example, where patients in each treatment arm receive one of two group based interventions, we might expect patients in the same group to experience similar outcomes purely as a result of their group allocation. It is important to try and account for this cluster or group effect when designing and analysing the data. RCTs where individuals are randomised are not immune to this clustering effect either. In an iRCT where the treatment is delivered by a health professional it seems likely that the effectiveness of the treatment, independent of any treatment effect, could depend on the skill, training or even enthusiasm of the health professional delivering it. 
This may then lead to a potential clustering of the outcomes for patients treated by the same health professional or who received treatment as a group. Alternatively a single therapist may deliver an intervention to a sample of patients on an individual basis while another therapist delivers the intervention to a different sample of patients. We might expect there to be clustering in the patients who received treatment from the same therapist. In both cRCTs and iRCTs with clustering we can measure the extent to which outcomes within the same cluster may depend on each other using the intra-cluster correlation coefficient (ICC) [2]. If the outcomes are clustered then the conventional statistical methods for analysing RCT outcome data, such as an independent two sample t-test to compare the mean outcomes between the treatment and control groups, may not be appropriate as the methods assume the observed outcomes on different patients are independent [3]. When there is clustering there is a lack of independence among the outcomes. When using conventional statistical methods this may lead to underestimation of the standard error for the treatment effect estimate, narrower confidence limits and hence larger values for the test-statistic (the ratio of the treatment estimate to its standard error) and smaller P-values. The extent to which the results are affected depends on the average cluster size in the trial and the magnitude of the ICC [4]. For example a high ICC (≥0.05) may not greatly impact the results if the average size of the clusters is small, and a low ICC (<0.05) may have a large impact on the results if the average cluster size is large. If we do not use appropriate methods to allow for this we can underestimate the standard error and over-estimate the significance of results. Furthermore, there is a reduction in the effective sample size and so the power of the study to detect a treatment effect decreases. Using the nomenclature of Baldwin [5], the clustering that arises in iRCTs can be split into two categories: fully clustered and partially clustered. A fully clustered trial is one with elements of clustering that span both arms of the trial. An example of a fully clustered trial is one comparing a homeopathic remedy with placebo for the treatment of chronic fatigue syndrome [6]. Patients were assigned to a homeopath and then within each homeopath the patients were randomly assigned to either the treatment or control. As patients on both treatments saw the same homeopath there is clustering by homeopath in each arm of the trial. A partially clustered design describes a trial where clustering occurs in just one of the arms of the trial. An example of a partially clustered design is a trial comparing acupuncture with usual care for the treatment of persistent non-specific low back pain [7]. Patients in the treatment arm were treated by one of the trial acupuncturists. Clustering occurs in one arm of the trial only, where a health professional-given treatment is being compared with usual care. There is clustering by health professional in the treatment arm but no equivalent clustering in the control arm (Fig. 1).
Schematic of a trial with clustering in only one arm (the treatment arm) where n1,…,n m is the number of patients in the m treatment clusters (clusters are not necessarily of equal size but this is often fixed in advance) and l is the number of subjects in the control arm This paper reviews and describes the statistical methods for analysing outcomes from an iRCT with some element of clustering in one arm. We focus on trials with continuous outcomes and assume the clustering occurs in the treatment arm only. We explore the performance of all the models including naïve approaches that were implemented in our case studies prior to the development of more sophisticated methods. We provide practical guidance and recommendations for the analysis of iRCTs with some element of clustering in only one arm. A comprehensive literature search was used to identify published work on clustering in iRCTs. A search of the database MEDLINE was conducted on 1st August 2014. The following search criteria were implemented: Two statisticians (LF and EL) hand searched the articles independently based on titles, abstracts and where necessary the full article, to identify relevant results. Relevant articles contained details of RCTs with clustering in one arm or methods used to analyse such trials. In addition to the database search, papers known by the authors to be relevant were included. Researchers known to be working in this area were contacted to identify unpublished or ongoing work. A consensus decision was then made between LF and EL as to relevant articles. This list was then reviewed and summarised, identifying the most relevant articles for this project - those describing methodology for handling clustering in one arm of iRCTs. Literature search results The MEDLINE search identified 353 articles. After the initial hand searching exercise 22 (19 from the MEDLINE search and three from other sources) were shortlisted and 17 were included in the list of relevant articles. These articles included methodological and application papers providing methods for the analysis of trials containing clustering in one arm and are referenced throughout. The following models were selected based on the findings of the literature search. The general notation is as follows; y denotes the continuous outcome, i is the patient indicator, j is the cluster indicator, t is the treatment indicator variable, β0 is the intercept and θ is treatment effect. The most straightforward and naïve option for the analysis of trials with clustering in one arm is to ignore clustering and use a simple linear regression model. This model assumes observations within the same treatment arm and cluster are independent. Here y i is a continuous outcome for patient i, t i is the treatment indicator variable (t=0 for control and t=1 for the treatment arm) for patient i, θ is the treatment effect, ε i are Normally distributed errors with mean zero and residual variance \(\sigma ^{2}_{\epsilon }\). This represents the patient level variation. $$\begin{array}{*{20}l} y_{i}&=\beta_{0}+\theta t_{i}+\epsilon_{i}, \end{array} $$ $$\begin{array}{*{20}l} \epsilon_{i} &\sim N\left(0,\sigma_{\epsilon}^{2}\right). \end{array} $$ Although this model is simple to implement and common in practice it may give incorrect results as the independence assumption of the linear regression model is violated [8]; standard errors of parameter estimates and the p-value are likely to be smaller than they should be [2]. 
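To make this concrete, a minimal R sketch of the naive analysis is shown below; the data frame dat and the variable names y (continuous outcome) and treat (0/1 treatment indicator) are hypothetical placeholders rather than part of the trial datasets analysed in this paper.

```r
# Naive analysis ignoring clustering (Eq. 1): all outcomes are assumed independent.
# 'dat' is a hypothetical data frame with a continuous outcome y and a 0/1 treatment indicator treat.
fit_naive <- lm(y ~ treat, data = dat)
summary(fit_naive)  # the coefficient for treat estimates theta; its standard error takes no account of clustering
confint(fit_naive)  # confidence interval may be too narrow if outcomes are in fact clustered
```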
This will depend on the level of clustering as measured by the ICC and the average cluster size.
Imposing clustering in the control arm
Rather than ignoring the clustering in the trial we can account for it in the model used for analysis. As there is clustering in just one arm of the trial, one option is to impose clusters on the control arm that in reality do not exist. This will allow the implementation of methods used in the analysis of cRCTs with clustering in both arms. There are different options for imposing clusters (j) in the control arm. Table 1 gives three different options where l is the number of participants and k is the number of arbitrary clusters in the control arm. The first option treats the control arm as one large artificial cluster [9]; the second option treats each individual within the control arm as a cluster of size one with j=l clusters in the control arm [5, 8, 10]. Both approaches may cause problems when estimating the ICC as, in theory, it is not possible to estimate between cluster variability in the control arm (Option 1, Table 1) or within cluster variability in the control arm (Option 2, Table 1). However, in practice, the exclusive person-to-person variability in the control arm is artificially partitioned into the between and within cluster components that occur within the treatment arm [5]. The third option overcomes the issue of estimating the ICC. We create artificial-random clusters in the control arm as in Option 3 (Table 1) [9]. Consideration may be given to the number of arbitrary clusters (k) to minimise bias in the estimation of the treatment effect. There is a paucity of literature guiding the optimal choice of artificial-cluster sizes; hence, for pragmatic reasons and simplicity, k could be chosen to ensure the cluster size is roughly equal across treatment arms.
Table 1 Different options for imposing clustering of controls
Cluster as a fixed effect
It is possible to account for clustering by including cluster as a fixed covariate [5]; treating cluster coefficients as nuisance parameters. In Eq. 3, \(y_{ij}\) is the outcome for patient i in cluster j, \(\beta_{j}\) is the cluster effect and \(c_{j}\) is the cluster indicator. $$\begin{array}{*{20}l} y_{ij}&=\beta_{0}+\theta t_{ij}+\sum\limits_{j=1}^{J}\beta_{j}c_{j}+\epsilon_{i}, \end{array} $$ While the fixed effect model may appear simple, fitting the fixed effects model is not straightforward as the model will be over-fitted; not all parameters in the model can be estimated since within each cluster each participant receives only the intervention or the control [11]. Consequently, by setting one cluster to be the reference category the between cluster treatment effect cannot be easily estimated. There is no cross classification for treatment arm. While options are available for fitting this model, we do not advocate this approach [5]. The fixed effects model does not truly reflect the study design. Therefore we will not consider the model further in this paper.
Cluster as a random effect
Using a random effects model mitigates some of the limitations of the fixed effects model. The inclusion of a random cluster effect adds just one parameter for estimation in the model, rather than J−1 parameters as in Eq. 3 [12]. This increases the degrees of freedom and allows exploration of the different sources of variability: between and within cluster. In this model we fit a random intercept for each cluster (\(u_{j}\)) and assume it is Normally distributed with zero mean and cluster effect variance \((\sigma _{u}^{2})\).
Here, ε ij is the patient level variation for the ith patient in the jth cluster. $$\begin{array}{*{20}l} y_{ij}&=\beta_{0}+\theta t_{ij}+u_{j}+\epsilon_{ij}, \end{array} $$ $$\begin{array}{*{20}l} u_{j} &\sim N\left(0,{\sigma_{u}^{2}}\right), \end{array} $$ $$\begin{array}{*{20}l} \epsilon_{ij} &\sim N\left(0, \sigma_{\epsilon}^{2}\right). \end{array} $$ Again, as with Eq. 3 the imposed clustering of the control arm must be selected (Table 1, Options 1 to 3). Modelling clustering in one arm Imposing clustering in the control arm is theoretically not an ideal solution [5]. Alternatively we can consider models that do not force any clustering on the 'unclustered' control arm, instead we model just the clustering in the treatment arm. Subjects in the control arm are assumed to be independent [5]. As such the ICC is allowed to vary between the intervention and the control arm. Here the ICC in the control arm is modelled to be zero and in the intervention arm is modelled using Eqs. 19 and 20 given later. This partially clustered approach [8, 10, 13], more accurately reflects the nature of the clustering in the trial design [5], so is seemingly preferable to the forcing clustering methods. Partially clustered model In this model we confine the random effect to the treatment arm only, and hence do not need to configure artificial-clusters as in Table 1. $$\begin{array}{*{20}l} y_{ij}&=\beta_{0}+\theta t_{ij}+t_{ij}u_{j}+\epsilon_{ij}, \end{array} $$ We define a random slope model, however when writing out the models for the two levels of t ij we can see this essentially amounts to a random intercept for each cluster in the treatment arm only (Eq. 11) and one intercept for the unclustered control arm (Eq. 12). $$\begin{array}{*{20}l} \text{For the treatment arm } (t_{ij}&=1):\\ &y_{ij}=\beta_{0}+\theta+u_{j}+\epsilon_{ij}. \end{array} $$ $$\begin{array}{*{20}l} \text{For the control arm } (t_{ij}&=0):\\ &y_{ij}=\beta_{0}+\epsilon_{ij}. \end{array} $$ Heteroskedastic individual level errors In the partially clustered model (Eq. 8) the individual level errors ε ij have the same variance in the control and the treatment arm - hence the model is homoscedastic. An extension of this allows for different individual level errors in the two treatment arms. In a trial with therapists delivering an intervention in the treatment arm and no intervention in the control arm we might expect participants in the treatment arm to vary in a different way to those participants in the control arm. The outcome might be more homogeneous in participants in the treatment arm as between therapist variation is small due to adherence strict protocols for treatment implementation. It is possible to extend the partially clustered approach to allow for heteroskedastic errors between the treatment arms [5, 8, 13]. The intervention arm varies differently to the control arm. Here $$\begin{array}{*{20}l} y_{ij}&=\beta_{0}+\theta t_{ij}+t_{ij}u_{j}+(1-t_{ij})r_{ij}+t_{ij}\epsilon_{ij}, \end{array} $$ $$\begin{array}{*{20}l} r_{ij} &\sim N\left(0,{\sigma_{r}^{2}}\right), \end{array} $$ For the treatment arm the cluster level error is u j and the individual level error is ε ij (Eq. 17) and in the control arm the individual level error is r ij (Eq. 18). $$\begin{array}{*{20}l} \text{For the treatment arm} \left(t_{ij}=1\right)&:\\ y_{ij}=&\beta_{0}+\theta+u_{j}+\epsilon_{ij}. \end{array} $$ $$\begin{array}{*{20}l} \text{For the control arm} \left(t_{ij}=0\right)&:\\ y_{ij}=&\beta_{0}+r_{ij}. 
\end{array} $$ This model can reveal whether individuals become more homogeneous in their attitudes and behaviours as a function of treatment arm membership [8]. A summary of the models that can be used in the analysis of iRCT with clustering in one arm is given in Fig. 2. Summary of models for the analysis of iRCTs with clustering in one arm only y denotes the continuous outcome, i is the patient indicator, j is the cluster indicator, t is the treatment indicator variable (t=1 for the treatment arm and t=0 for the control arm), θ is treatment effect, ε, u and r are error terms For the random effects, partially clustered and heteroskedastic models it is possible to estimate the ICC to measure the overall level of clustering in the trial across both arms [2]. For the random effects and partially clustered models we use $$\begin{array}{*{20}l} ICC=\frac{{\sigma_{u}^{2}}}{{\sigma_{u}^{2}}+\sigma_{\epsilon}^{2}}. \end{array} $$ The heteroskedastic model requires an additional term in the denominator as we have now allowed the residual variance to differ between the treatment and control arms. This formula was adapted from the work of Roberts (2010) [14] on nested therapist designs $$\begin{array}{*{20}l} ICC=\frac{{\sigma_{u}^{2}}}{{\sigma_{u}^{2}}+\sigma_{\epsilon}^{2}+{\sigma^{2}_{r}}}. \end{array} $$ We compared 10 models using four example case studies from iRCTs with clustering in one arm: specialist clinics for the treatment of venous leg ulcers [15], acupuncture for low back pain [7], cost-effectiveness of community postnatal support workers (CPSW) [16], and Putting Life in Years (PLINY) [17]. These studies were selected from our portfolio of studies as trial statisticians that had clustering in one arm only. The trials are summarised in Table 2. Table 2 Summary of the case studies Main analysis The clustering structure of each case study was first summarised by the number of clusters in the treatment arm and the mean, median, minimum (min), maximum (max) and inter-quartile range (IQR) of the cluster size. All analyses used complete cases for simplicity; patients with data missing for the primary outcome were removed. Box plots aided visualisation of the spread of data within and between each cluster for each case study. Patients with missing cluster allocation in the treatment arm were grouped as one cluster in both the summary table and the box plots. Model fitting To explore the practical aspects of the models proposed for analysing an iRCT with clustering in one arm we used two statistical packages – Stata and R. The results presented here are taken from the analysis in R [18] as Roberts has comprehensively presented results using Stata [14]. Scripts for both packages are provided (see Additional file 1). All models were fitted using a restricted maximum likelihood procedure (REML) and the following specifications of the clustering in the control arm were used: Treating controls as clusters of size one, Treating controls as one large cluster, Creating artificial-clusters. Although in theory we do not model the clustering in the control arm for both the partially clustered and heteroskedastic models, for the sake of running a model in R or Stata it is necessary to impose clustering. All three approaches are explored. The artificial-clusters in the control arm were created by randomly assigning control patients to a cluster based on the average cluster size in the intervention arm. 
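A minimal R sketch of this workflow is given below. It is an illustration only: the data frame dat and the variable names (y for the outcome, treat for the 0/1 treatment indicator, id for the patient identifier and cluster for the therapist or clinic identifier in the treatment arm) are hypothetical placeholders, and the lme4/nlme syntax shows one plausible way of expressing Eqs. 4, 8 and 13 with the three control-arm codings, rather than reproducing the exact scripts supplied in Additional file 1.

```r
library(lme4)  # lmer() for the random effects and partially clustered models (REML by default)
# nlme is accessed with nlme:: to avoid masking functions shared with lme4

treated  <- dat$treat == 1
n_ctrl   <- sum(!treated)
avg_size <- mean(table(dat$cluster[treated]))          # average cluster size in the treatment arm

## Control-arm codings (Table 1)
dat$clus_size1 <- ifelse(treated, paste0("T", dat$cluster), paste0("C", dat$id))  # each control its own cluster
dat$clus_one   <- ifelse(treated, paste0("T", dat$cluster), "C0")                 # all controls in one large cluster
k <- max(1, round(n_ctrl / avg_size))                  # number of artificial control clusters
dat$clus_artif <- dat$clus_size1
dat$clus_artif[!treated] <- paste0("A", sample(rep(1:k, length.out = n_ctrl)))    # random artificial clusters

## Random effects model (Eq. 4): random intercept for every cluster, here with controls as clusters of size one
fit_re  <- lmer(y ~ treat + (1 | clus_size1), data = dat, REML = TRUE)

## Partially clustered model (Eq. 8): the random effect is multiplied by treat, so it applies in the treatment arm only
fit_pc  <- lmer(y ~ treat + (0 + treat | clus_size1), data = dat, REML = TRUE)

## Heteroskedastic model (Eq. 13): residual variance allowed to differ between the arms via varIdent()
fit_het <- nlme::lme(y ~ treat, random = ~ 0 + treat | clus_size1,
                     weights = nlme::varIdent(form = ~ 1 | treat),
                     data = dat, method = "REML")

## ICC for the partially clustered model (Eq. 19)
vc  <- as.data.frame(lme4::VarCorr(fit_pc))            # cluster-level variance and residual variance
icc <- vc$vcov[1] / (vc$vcov[1] + vc$vcov[vc$grp == "Residual"])
```

Coding every control participant as their own cluster (clus_size1 above) is what allows the partially clustered and heteroskedastic models to be fitted with standard mixed model syntax even though, in reality, no clustering exists in that arm; the (0 + treat | cluster) term ensures the cluster-level random effect contributes only when treat = 1.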
When analysing clustered data with small to medium number of clusters, a correction to the degrees of freedom is recommended to protect against inflation of type I error [19]. A number of methods which include Satterthwaite [20] and Kenward-Roger [21] approximations have been proposed to correct degrees of freedom. The debate about which procedure to adopt and under what circumstances is beyond the scope of this paper. In this study, the results were however similar regardless of whether a correction to the degrees of freedoms was made or not. In this regard, the results are presented using REML approximation without any correction to the degrees of freedom. However, Stata's mixed command allows the correction of degrees of freedom using a number of methods including Satterthwaite and Kenward-Roger approximations [22]. Using R, the model ignoring clustering was fitted using the lm() command [18] in the stats package. The lme4 package was used to fit the random effects and the partially clustered model however it was not possible to use the same package for the heteroskedastic model as this package does not allow heteroskedastic errors [23]. Instead the nlme and lme() function were used [24]. Bespoke functions were written in R to calculate the ICC for the appropriate models as per Eqs. 19 and 20. The lme4 package does not produce p-values for model estimates and so does not need an estimate for degrees of freedom. This omission is due to the authors not supporting current approaches for doing so [23]. The nlme package uses approximations to the distributions of the maximum likelihood estimates to produce p-values. This method requires an estimate of degrees of freedom which is outlined in detail by Pinheiro and Bates [25]. The results from each model were compared visually using a forest plot and summarised in a table. Summary of case studies Table 3 provides a summary of the four case studies considered. The CPSW case study has the largest amount of missing outcome data (13.5%), all patients with no outcome data were removed from the model fitting. Figure 3 shows there is slight variation in the median general health perception domain of the SF-36 with clear differences in the spread of the data depending on the support worker. This indicates small potential for clustering of outcomes in the treatment arm. Box plot of the case studies. Patients with missing outcome data have been removed Table 3 Summary of the clustering in the case studies where IQR is the inter-quartile range. The summary of the cluster sizes is based on patients with a valid primary endpoint (number analysed) The Acupuncture study had an average cluster size of 21 in the intervention arm and 80 patients in the control arm. As with each of the case studies the controls were randomly assigned to artificial-clusters. Here four clusters of size 20 were used. Therapist 7 saw two patients, much fewer than the other therapists. In Fig. 3 the median pain score at 12 months varies slightly between the therapists (not accounting for Therapist 7) and there is little variability when compared to the control arm. Again there is potential for clustering of outcomes in the treatment arm. The outcome of interest in the Ulcer case study is recorded for all patients in this case study. Figure 3 shows great variability in the median leg ulcer free weeks between clinics and in comparison to the controls, indicating potential clustering of the outcome in the treatment arm. 
The PLINY case study was a pilot trial and as such had an evaluable sample size of 56. There were five patients in the treatment arm with no cluster allocation. As the other clusters in the treatment arm were of size six, we grouped the five patients without cluster allocation into their own cluster. In Fig. 3, four of these patients had missing outcome data. All clusters contain only a few patients (a maximum of 6), a reflection of the small sample size for this study. The small number of patients in each cluster makes it difficult to assess any variability between facilitators in Fig. 3 and the control arm. There is some suggestion of variability in the median score in the mental health domain of the SF-36, indicating potential for clustering in the outcome dependent on the facilitator. The results from fitting the models to the CPSW case study are given in Table 4. The estimate of the treatment difference and its standard error for the model ignoring clustering and the random effects, partially clustered and heteroskedastic models are all similar, including for the various imposed clustering options in the random effects model. The residual variance is comparable for all these models and the random variation (where applicable) is small (<0.0001). Table 4 Summary of results for the CPSW case study (n=539) For the random effects model in the remaining case studies there is some dependence on how the clustering in the control arm is specified. For example, in the Acupuncture case study the estimate of the treatment difference ranges from 5.49 to 5.59 and its standard error from 3.75 to 5.02 (Table 5). This is evident in the forest plot in Fig. 4. A similar result is found for the Ulcer case study (Table 6), with the standard error greatest when the controls are treated as one large cluster. The choice of imposed clustering method also affects the residual error and the random error. For the PLINY case study (Table 7) the standard error is largest when the controls are treated as artificial-clusters. This suggests the specification of clustering in the control arm can influence the results when using a random effects model. A possible explanation is the small number of patients per cluster. In the case studies the within cluster variance is estimated with large uncertainty. As expected, the partially clustered and heteroskedastic models appear not to depend on the specification of controls, giving identical results regardless of the approach adopted. Forest plot of models fitted using R for each of the case studies where RE is random effects, PC is partial clustering, Het. is heteroskedastic model. The vertical, black dashed line represents the target treatment difference. We are not using the primary outcome from the Ulcer case study and so this line is not marked. The vertical, red dotted line marks a zero treatment difference Table 5 Summary of results for the Acupuncture case study (n=215) Table 6 Summary of results for the Ulcer case study (n=233) Table 7 Summary of results for the PLINY case study (n=56) In the case studies considered, the estimates of the ICC are small with the largest value recorded for the Ulcer case study of 0.04 (Table 6), estimated using the partially clustered and random effects model (one large cluster). These small values may provide an explanation as to why the simple model ignoring clustering provides similar estimates to the more complex models in all four cases.
The results were replicated using Stata and the results were almost identical between the two packages. In this paper, five different approaches to the analysis of iRCTs with clustering in the treatment arm have been discussed. Some of these approaches have been applied to four case studies in different settings to demonstrate their implementation and evaluate their use in practice. The four case studies considered have small estimates for the ICC. All had an ICC less than 0.05 and three studies had an ICC less than 0.02. This indicates there was little clustering of outcomes. For example in the CPSW case studies the General Health Status score of a patient seen by Support Worker 1 would likely be similar had they been treated by Support Worker 2. As a result of the small cluster sizes and ICCs, we found little difference in the estimates of the treatment coefficients and their standard errors between four of the models. The ICCs observed in our case studies are not uncommon for trials of this nature. For example, in surgical trials with a quality of life endpoint (EQ-5D) identified by Cook et al. there were no trials with an ICC greater than 0.04 in either intervention arm [26]. Generally, we would expect the impact of the ICC to be larger when the therapist effect or cluster based intervention is delivered over a long period of time (so the ICC is high) or when the average size of the cluster is large (the ICC may be small in this case). The simplest model, ignoring clustering, performed comparably well with the more complex models for all case studies. However, it is important to consider that this model does not truly reflect the design of the study as there was no allowance for clustering. We would not expect the simple model to perform well in circumstances where the ICC is higher or the cluster size is large. We do not recommend this model for use in practice, however, applying this model to our case studies illustrated that there was little difference in the results using the correct, more complex methods and so the results previously found are still valid. Although in theory we might anticipate differences in the outcome of patients dependent on the cluster they belong to, in reality the ICC is often low as we observed in our case studies. One explanation could be, in clinical research practice, the training given to therapists as part of the protocol in some way standardises the treatment given. If the ICC was high there might be concern regarding the success of the intervention as there would be a strong reliance on the cluster and the therapist or healthcare professional delivering the intervention. We encourage the reporting of the ICC from clinical trials to aid the planning of future studies. When using a random effects model to analyse clustering in just one arm of a study it is necessary to specify how the control arm is treated. In our results we found that for three of the case studies the choice of clustering for controls influenced the treatment coefficient estimates and their corresponding standard errors. Although this model performed well in our case, it does not truly represent the nature of clustering in the trial as we have forced clusters in the control arm that were not present in the actual trial. Therefore we do not recommend this approach. The partially clustered and heteroskedastic models more accurately reflect the clustering in these trials, however, are of greater statistical complexity. 
We are often required to specify the proposed analysis in advance of the trial commencing. This is before any data from the trial have been collected, so we do not know the ICC in the study. Therefore, balancing the complexity of the model fitting procedure against the gain in accuracy of the results, we recommend using the partially clustered model as a minimum, as this provides an accurate analysis of the study regardless of the observed ICC. We recommend that, to allow fitting of the partially clustered models, participants in the unclustered arm are treated as clusters of size one, as this provides a simple and intuitive solution for practical implementation. If there is a strong belief that there are different variances between the treatment and control arms, the heteroskedastic approach may be appropriate. We hope to identify, in a simulation study, a threshold value for the expected difference in individual-level variability between the treatment and control arm whereby the heteroskedastic model will be more appropriate. Practically, fitting the partially clustered and heteroskedastic models in R and Stata required little additional work, and the code to implement these models is provided.

This study employed a formal search of relevant literature to capture most of the related work conducted. However, this was not an exhaustive review of all work in this area. We have used four case studies that have arisen from our work as applied medical statisticians in clinical trials research. The results and inferences made are applicable to data with similar properties to these studies. For example, our results focus only on continuous endpoints and, as discussed, relate to trials with small ICCs and relatively small clusters. All the case studies assumed each patient belonged to one cluster only; in the Acupuncture study patients only saw one therapist. We have not considered the effects of multiple membership [27]. Our analysis of these case studies was on complete cases only; we have ignored any data collected on patients for whom the outcome of interest was not recorded. The cluster allocation for participants in some of the case studies was also missing and we were not able to find this information. We therefore had to group these participants into one cluster. These data limitations may result in a large loss of information and potentially introduce bias, so alternative approaches should be explored.

While small cluster sizes, small ICCs and incomplete data are issues in many real-world data sets, to increase the generalisability of these results to trials with different characteristics to the case studies we hope to conduct a simulation study. This study will explore how the findings might change for varying cluster sizes, varying ICC, varying sample sizes and differential variance in the control and intervention arms.

We believe that while the control arm is 'unclustered', there is low-level, natural clustering that occurs in practice in all trials - even trials with no formal clustering in either arm. For instance, patients in the unclustered control arm may be treated within the same hospital by healthcare professionals with similar skill levels, or even by the same healthcare professional, creating potential dependencies in their outcomes. Baldwin et al. state that it is not plausible to have a non-zero ICC for the unclustered controls [5].
In their work they treat each individual in the cluster arm as their own cluster and conclude that the ICC for unclustered participants will have a negligible impact on estimation. Here we have considered clustering the controls as one large group cluster and using artificial clusters. We acknowledge that imposing these types of clustering in the control arm, when they do not exist in reality, could impact the estimation of the ICC. By imposing clustering on the control arm of the study we may impact the estimation of the ICC, as we are either over- or underestimating the low-level natural clustering that is occurring. We will explore the impact of this imposed clustering on estimation in a simulation study.

In the analysis of cRCTs, two popular methods of analysis are random effects models and marginal models [2]. In this work we have chosen to use random effects models, as this allowed the fitting of the more complex partially clustered and heteroskedastic models.

In iRCTs where the treatment is group based or delivered by a health professional there is potential for a clustering of outcomes in the treatment arm only. As with any clustering, this needs special attention in the design and analysis of the study. This paper has summarised the literature, identifying five potential approaches for the analysis of trials where there is clustering in one arm only. Some of these methods have been applied, using the statistical packages R and Stata and exploring alternative methods to model the clustering in the control arm, to four case studies where clustering was present in one arm. Ignoring the clustering performed well for our case studies as a consequence of the low ICC in these studies. However, we do not recommend this approach in practice. Modelling homogeneous variances between the clustered and unclustered arm is adequate in scenarios similar to the case studies considered. We recommend treating each participant in the unclustered arm as a single cluster to facilitate modelling in a statistical package. This approach is simple to implement in R and Stata and is recommended for the analysis of trials with clustering in one arm only. The generalisability of our results is limited to trials similar to the case studies. Simulation work is required, for example, to determine scenarios where accounting for different levels of variability between treatment arms is necessary.

CPSW: Community Postnatal Support Worker
cRCT: Cluster randomised controlled trial
FE: Fixed effects
GP: General practitioner
Het: Heteroskedastic
ICC: Intra-cluster correlation coefficient
IQR: Inter-quartile range
iRCT: Individually randomised controlled trial
PC: Partially clustered
PLINY: Putting Life in Years
RCT: Randomised controlled trial
RE: Random effects

Walters SJ. Therapist effects in randomised controlled trials: what to do about them. J Clin Nurs. 2010; 19:1102–1112. Campbell MJ, Walters SJ. How to Design, Analyse and Report Cluster Randomised Trials in Medicine and Health Related Research. West Sussex: Statistics in Practice, Wiley; 2014. Campbell MJ, Machin D, Walters SJ. Medical Statistics. A Textbook for the Health Sciences, 4th Edition. Chester: Wiley; 2007. Baldwin SA, Stice E, Rohde P. Statistical analysis of group-administered intervention data: Reanalysis of two randomized trials. Psychother Res. 2008; 18(4):365–76. Baldwin SA, Bauer DJ. Evaluating models for partially clustered designs. Psychol Methods. 2011; 16:149–65. Weatherley-Jones E, Nicholl JP, Thomas KJ, Parry GJ, McKendrick MW, Green ST, Stanley PJ, Lynch SPJ.
A randomised,controlled, triple-blind trial of the efficacy of homeopathic treatment for chronic fatigue syndrome. J Psychosom Res. 2004; 56:189–97. Thomas KJ, MacPherson H, Thorpe L, Brazier J, Fitter M, Campbell MJ, Roman M, Walters SJ, Nicholl J. Randomised controlled trial of a short course of traditional acupuncture compared with usual care for persistent non-specific low back pain. BMJ. 2006; 333:623. Bauer DJ, Sterba SK, Hallfors DD. Evaluating group-based interventions when control participants are ungrouped. Multivar Behav Res. 2008; 43:210–36. Bland M. Grouping in individually randomised trials. In: 4th Annual Conference on Randomised Controlled Trials in the Social Sciences, September 2009. York, UK: 2009. https://www-users.york.ac.uk/~mb55/talks/. Accessed 31 Oct 2016. Heo M, Litwin AH, Blackstock O, Kim N, Arnsten JH. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Stat Methods Med Res. 2014; 0:1–15. Galbraith S, Daniel JA, Vissel B. A study of clustered data and approaches to its analysis. J Neurosci. 2010; 30(32):10601–10608. Faes MC, Reelick MF, Perry M, Rikkert MGO, Borm GF. Studies with group treatments required special power calculations, allocation methods, and statistical analyses. J Clin Epidemiol. 2012; 65(2):138–46. Moerbeek M, Wong WK. Sample size formulae for trials comparing group and individual treatments in a multilevel model. Stat Med. 2008; 27:2850–864. Roberts C. Session 3: Design and analysis of trials of therapist treatments. In: BABCP 2010, 38th Annual Conference and Workshops: 2010. Available online at http://slideplayer.com/slide/6195114/. Accessed 31 Oct 2016. Morrell CJ, Walters SJ, Dixon S, Collins KA, Brereton LML, Peters J, Brooker CGD. Cost effectiveness of community leg ulcer clinics: randomised controlled trial. BMJ. 1998; 316:1487–1497. Morrell CJ, Spiby H, Stewart P, Walters S, Morgan A. Costs and effectiveness of community postnatal support workers: randomised controlled trial. BMJ. 2000; 321:593–8. Gail M, Hind D, Gossage-Worrall R, Walters S, Duncan R, Newbould L, Rex S, Jones C, Bowling A, Cattan M, Cairns A, Cooper C, Edwards R, Goyder E. 'Putting Life in Years' (PLINY) telephone friendship groups research study: pilot randomised controlled trial. Trials. 2014; 15(1):141. R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2013. R Foundation for Statistical Computing. http://www.R-project.org/. Kahan BC, Forbes G, Ali Y, Jairath V, Bremner S, Harhay MO, Hooper R, Wright N, Eldridge SM, Leyrat C. Increased risk of type i errors in cluster randomised trials with small or medium numbers of clusters: a review, reanalysis, and simulation study. Trials. 2016; 17(1):438. Satterthwaite FE. An approximate distribution of estimates of variance components. Biom Bull. 1946; 2(6):110–4. Kenward MG, Roger JH. Small sample inference for fixed effects from restricted maximum likelihood. Biometrics. 1997; 53(3):983–997. StataCorp. Stata: Release 14. statistical software. 2015. http://www.stata.com/support/faqs/resources/citing-software-documentation-faqs/. Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015; 67(1):1–48. doi:10.18637/jss.v067.i01. Pinheiro J, Bates D, DebRoy S, Sarkar D, R Core Team. nlme: Linear and Nonlinear Mixed Effects Models. 2015. R package version 3.1-121. http://CRAN.R-project.org/package=nlme. 
Accessed 31 Oct 2016. Pinheiro J, Bates D. Mixed-effects models in S and S-PLUS.Springer Science & Business Media. 2006. Cook JA, Bruckner T, MacLennan GS, Seiler CM. Clustering in surgical trials-database of intracluster correlations. Trials. 2012; 13(1):1. Browne WJ, Goldstein H, Rasbash J. Multiple membership multiple classification (MMMC) models. Stat Model. 2001; 1(2):103–24. Ware JE, Sherbourne CD. The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Med Care. 1992; 30:473–83. This is independent research arising from grants: Research Methods Fellowship (RMFI-2013-04-011 Goodacre) and Doctoral Research Fellowship (DRF-2012-05-182) which are funded by the National Institute for Health Research fully supporting LF and MD respectively. AA, MD, LM and SW were funded by the University of Sheffield. Data used here are from published trials so will be made available on request by contacting the corresponding author. However, availability is conditional on an agreement with individual Chief Investigators. SW and MD conceived the project and designed the work. EL and LF carried out the literature search. EL and MD devised the notation for the statistical models. LF and AA summarised the datasets and AA and LM wrote the R code for the fitting of statistical models. All R code was checked by LM for accuracy. EL wrote the Stata code for the fitting of the statistical models. LF drafted the manuscript. All authors contributed to the writing of the manuscript, its revisions and approved the final version. This is secondary analysis of trial data which has been published. Individual ethics approvals for the trials used were obtained as noted in related publications which have been referenced. The views expressed in this publication are those of the authors and not necessarily those of the NHS, National Institute for Health Research, the Department of Health or the University of Sheffield. ScHARR, University of Sheffield, 30 Regent Street, Sheffield, S1 4DA, UK Laura Flight, Munyaradzi Dimairo, Ellen Lee, Laura Mandefield & Stephen J. Walters MRC Biostatistics Unit, Cambridge Institute of Public Health, Forvie Site, Robinson Way, Cambridge Biomedical Campus, Cambridge, CB2 0SR, UK Annabel Allison Laura Flight Munyaradzi Dimairo Laura Mandefield Stephen J. Walters Correspondence to Stephen J. Walters. Additional file 1 Generic code. R and Stata code for models and ICC calculations. This file contains the R and Stata code required to implement the models used throughout this article. (DOCX 22 kb) Flight, L., Allison, A., Dimairo, M. et al. Recommendations for the analysis of individually randomised controlled trials with clustering in one arm – a case of continuous outcomes. BMC Med Res Methodol 16, 165 (2016). https://doi.org/10.1186/s12874-016-0249-5 Statistical models Therapist effects Individually clustered randomised controlled trials Data analysis, statistics and modelling
Visibility-based microcells for dynamic load balancing in MMO games SUMILA, ALEXEI 29 September 2011 (has links) Massively multiplayer games allow hundreds of players to play and interact with each other simultaneously. Due to the increasing need to provide a greater degree of interaction to more players, load balancing is critical on the servers that host the game. A common approach is to divide the world into microcells (small regions of the game terrain) and to allocate the microcells dynamically across multiple servers. We describe a visibility-based technique that guides the creation of microcells and their dynamic allocation. This technique is designed to reduce the amount of cross-server communication, in the hope of providing better load balancing than other load-balancing strategies. We hypothesize that reduction in expensive cross-server traffic will reduce the overall load on the system. We employ horizon counts map to create visibility based microcells, in order to emphasize primary occluders in the terrain. In our testing we consider traffic over a given quality of service threshold as the primary metric for minimization. As result of our testing we find that dynamic load balancing produces significant improvement in the frequency of quality of service failures. We find that our visibility-based micro cells do not outperform basic rectangular microcells discussed in earlier research. We also find that cross-server traffic makes up a much smaller portion of overall message load than we had anticipated, reducing the potential overall benefit from cross server message optimisation. / Thesis (Master, Computing) -- Queen's University, 2011-09-28 14:15:32.173 Microcells

Stochastic scheduling in networks Dacre, Marcus James January 1999 (has links)

Improved Prediction-based Dynamic Load Balancing Systems for HLA-Based Distributed Simulations Alkharboush, Raed January 2015 (has links) Due to the dependency of High-Level Architecture (HLA)-Based simulations on the resources of distributed environments, simulations can face load imbalances and can suffer from low performance in terms of execution time. HLA is a framework that simplifies the implementation of distributed simulations; it also has been built with dedicated resources in mind. As technology is nowadays shifting towards shared environments, the following two weaknesses have become apparent in HLA: managing federates and reacting towards load imbalances on shared resources. Moreover, a number of dynamic load management systems have been designed in order to provide a solution to enable a balanced simulation environment on shared resources. These solutions use specific techniques depending on simulation characteristics or load aspects to perform the balancing task. Load prediction is one of such techniques that improves load redistribution heuristics by preventing load imbalances. In this thesis, a number of enhancements for a prediction technique are presented, and their efficiency are compared.
The proposed enhancements solve the observed problems with Holt's implementations on dynamic load balancing systems for HLA-Based distributed simulations and provide better forecasting. As a result, these enhancements provide better predictions for the load oscillations of the shared resources. Furthermore, a number of federate migration decision-making approaches are introduced to add more intelligence into the process of migrating federates. The approaches aim to solve a dependency problem in the prediction-based load balancing system on the prediction model, thus making similar systems adapt to any future system improvements. A resource aware distributed LSI algorithm for scalable information retrieval Liu, Yang January 2011 (has links) Latent Semantic Indexing (LSI) is one of the popular techniques in the information retrieval fields. Different from the traditional information retrieval techniques, LSI is not based on the keyword matching simply. It uses statistics and algebraic computations. Based on Singular Value Decomposition (SVD), the higher dimensional matrix is converted to a lower dimensional approximate matrix, of which the noises could be filtered. And also the issues of synonymy and polysemy in the traditional techniques can be overcome based on the investigations of the terms related with the documents. However, it is notable that LSI suffers a scalability issue due to the computing complexity of SVD. This thesis presents a resource aware distributed LSI algorithm MR-LSI which can solve the scalability issue using Hadoop framework based on the distributed computing model MapReduce. It also solves the overhead issue caused by the involved clustering algorithm. The evaluations indicate that MR-LSI can gain significant enhancement compared to the other strategies on processing large scale of documents. One remarkable advantage of Hadoop is that it supports heterogeneous computing environments so that the issue of unbalanced load among nodes is highlighted. Therefore, a load balancing algorithm based on genetic algorithm for balancing load in static environment is proposed. The results show that it can improve the performance of a cluster according to heterogeneity levels. Considering dynamic Hadoop environments, a dynamic load balancing strategy with varying window size has been proposed. The algorithm works depending on data selecting decision and modeling Hadoop parameters and working mechanisms. Employing improved genetic algorithm for achieving optimized scheduler, the algorithm enhances the performance of a cluster with certain heterogeneity levels. Mapreduce ; Simulation ; Load balancing On the Design and Implementation of Load Balancing for CDPthread-based Systems Chou, Yu-chieh 02 September 2009 (has links) In this thesis, we first propose a modified version of the CDPthread to eliminate the restriction on the number of execution engines supported¡Xby dynamically instead of statically allocating the execution engines to a process. Then, we describe a method to balance the workload among nodes under the control of the modified CDPthread to improve its performance. The proposed method keeps track of the workload of each node and decides to which node the next job is to be assigned. More precisely, the number of jobs assigned to each node is proportional to, but not limited to, the number of cores in each node. 
Our experimental results show that with a small loss of performance compared to the original CDPthread, which uses a static method for allocating the execution engines to a process, the modified CDPthread with load balancing outperforms the modified CDPthread without load balancing by about 25 to 45 percent in terms of the computation time. Moreover, the modified CDPthread can now handle as many threads as necessary. Distributed shared memory CDPthread Generation and properties of random graphs and analysis of randomized algorithms Gao, Pu January 2010 (has links) We study a new method of generating random $d$-regular graphs by repeatedly applying an operation called pegging. The pegging algorithm, which applies the pegging operation in each step, is a method of generating large random regular graphs beginning with small ones. We prove that the limiting joint distribution of the numbers of short cycles in the resulting graph is independent Poisson. We use the coupling method to bound the total variation distance between the joint distribution of short cycle counts and its limit and thereby show that $O(\epsilon^{-1})$ is an upper bound of the $\eps$-mixing time. The coupling involves two different, though quite similar, Markov chains that are not time-homogeneous. We also show that the $\epsilon$-mixing time is not $o(\epsilon^{-1})$. This demonstrates that the upper bound is essentially tight. We study also the connectivity of random $d$-regular graphs generated by the pegging algorithm. We show that these graphs are asymptotically almost surely $d$-connected for any even constant $d\ge 4$. The problem of orientation of random hypergraphs is motivated by the classical load balancing problem. Let $h>w>0$ be two fixed integers. Let $\orH$ be a hypergraph whose hyperedges are uniformly of size $h$. To {\em $w$-orient} a hyperedge, we assign exactly $w$ of its vertices positive signs with respect to this hyperedge, and the rest negative. A $(w,k)$-orientation of $\orH$ consists of a $w$-orientation of all hyperedges of $\orH$, such that each vertex receives at most $k$ positive signs from its incident hyperedges. When $k$ is large enough, we determine the threshold of the existence of a $(w,k)$-orientation of a random hypergraph. The $(w,k)$-orientation of hypergraphs is strongly related to a general version of the off-line load balancing problem. The other topic we discuss is computing the probability of induced subgraphs in a random regular graph. Let $0<s<n$ and $H$ be a graph on $s$ vertices. For any $S\subset [n]$ with $|S|=s$, we compute the probability that the subgraph of $\mathcal{G}_{n,d}$ induced by $S$ is $H$. The result holds for any $d=o(n^{1/3})$ and is further extended to $\mathcal{G}_{n,{\bf d}}$, the probability space of random graphs with given degree sequence $\bf d$. This result provides a basic tool for studying properties, for instance the existence or the counts, of certain types of induced subgraphs. random graph Combinatorics and Optimization On Load Balancing and Routing in Peer-to-peer Systems Giakkoupis, George 15 July 2009 (has links) A peer-to-peer (P2P) system is a networked system characterized by the lack of centralized control, in which all or most communication is symmetric. Also, a P2P system is supposed to handle frequent arrivals and departures of nodes, and is expected to scale to very large network sizes. These requirements make the design of P2P systems particularly challenging. 
We investigate two central issues pertaining to the design of P2P systems: load balancing and routing. In the first part of this thesis, we study the problem of load balancing in the context of Distributed Hash Tables (DHTs). Briefly, a DHT is a giant hash table that is maintained in a P2P fashion: Keys are mapped to a hash space I --- typically the interval [0,1), which is partitioned into blocks among the nodes, and each node stores the keys that are mapped to its block. Based on the position of their blocks in I, the nodes also set up connections among themselves, forming a routing network, which facilitates efficient key location. Typically, in a DHT it is desirable that the nodes' blocks are roughly of equal size, since this usually implies a balanced distribution of the load of storing keys among nodes, and it also simplifies the design of the routing network. We propose and analyze a simple distributed scheme for partitioning I, inspired by the multiple random choices paradigm. This scheme guarantees that, with high probability, the ratio between the largest and smallest blocks remains bounded by a small constant. It is also message efficient, and the arrival or departure of a node perturbs the current partition of I minimally. A unique feature of this scheme is that it tolerates adversarial arrivals and departures of nodes. In the second part of the thesis, we investigate the complexity of a natural decentralized routing protocol, in a broad family of randomized networks. The network family and routing protocol in question are inspired by a framework proposed by Kleinberg to model small-world phenomena in social networks, and they capture many designs that have been proposed for P2P systems. For this model we establish a general lower bound on the expected message complexity of routing, in terms of the average node degree. This lower bound almost matches the corresponding known upper bound. Using Grid Network between VISIR Laboratories Mehraban, Mehrdad January 2015 (has links) VISIR "Virtual Instrument Systems in Reality" is a remote laboratory that empowers students and researchers to design, implement, and measure on electronic circuits remotely. Users are able to connect to this system regardless of their location and use traditional lab resources online via JavaScript and HTML5 enabled web browser. The VISIR project is deployed to seven universities around the globe including Blekinge Institute of Technology (BTH) in Sweden. In this thesis work, the main aim is to introduce load-balancing Scenarios in VISIR network in order to enable and improve load balancing, stability, and scalability in this system. The participant universities will be connected in a grid topology, and they exchange capabilities, features, and data repository in order to share workload and resources. For this purpose, the behavior of VISIR network nodes were studied and simplified as simple servers. According to the VISIR characteristics, infrastructure, and requirement a set of design paradigms and guidelines were defined for selecting suitable load balancing mechanism to be used in VISIR system. Four different load-balancing methods described, were selected for comparison in an experimental setup. Moreover, an experimental test bed with utilizing virtual Linux machines was modeled, and chosen scenarios were implemented and tested under different circumstances i.e. various number of servers and clients. VISIR network and socket programming
How many distinct translates of a (non-admissible) set $\mathcal{H}$ can consist entirely of primes?

In a recent post, Terence Tao talks about the prime tuples conjecture, and in particular, he asks: "Suppose one is given a ${k_0}$-tuple ${{\mathcal H} = (h_1,\ldots,h_{k_0})}$ of ${k_0}$ distinct integers for some ${k_0 \geq 1}$, arranged in increasing order. When is it possible to find infinitely many translates ${n + {\mathcal H} =(n+h_1,\ldots,n+h_{k_0})}$ of ${{\mathcal H}}$ which consists entirely of primes?" To study this, the concept of an admissible set is introduced: a $k_0$-tuple ${\mathcal H}$ is admissible "if it avoids at least one residue class ${\hbox{ mod } p}$ for each prime ${p}$." It is pointed out that, since for non-admissible sets ${\mathcal H}$ there exists a prime $p$ such that ${\mathcal H}$ meets every residue class $\hbox{ mod } p$, then every translate of ${\mathcal H}$ must contain a multiple of $p$, and so there can only be a finite number of translates of ${\mathcal H}$ consisting entirely of primes: in particular, each translate consisting of only primes must contain $p$ itself.

This seems incredibly restrictive. Given a non-admissible $k_0$-tuple ${\mathcal H}$, just how many translates are there consisting only of primes, and how does this depend on $k_0$? Can there even be more than 1?

As an example, the non-admissible $3$-tuple $(0,2,4)$ has only a single translate consisting of primes -- $(3,5,7)$ -- since every third odd number greater than 3 is divisible by 3, and hence not prime. There are plenty of prime triplets, i.e. $3$-tuples of the form $(p, p+2, p+6)$ or $(p,p+4,p+6)$, but both $(0,2,6)$ and $(0,4,6)$ are admissible -- and similarly for prime quadruplets, quintuplets, and sextuplets.

I'm thinking about writing up some Mathematica code to go prime-diving, but I wanted to see if there's some simple theory here first that would save time.

Edit: The wiki page on prime k-tuples says that "Some inadmissible k-tuples have more than one all-prime solution" and gives the smallest example, but doesn't explain how it arrived at this or explains any of the theory behind it, let alone gives estimates on how many there are. This of course just makes the curiosity even worse.

number-theory prime-numbers

AndrewG

If you have a tuple which is inadmissible because the primes {p, q, ...} have all of their residue classes filled, a necessary condition to having an all-prime translate is that the translate contain {p, q, ...}. This may happen several times. In the Wikipedia example the obstruction is 5 and it can occur in the first or second position of the tuple. – Charles Jun 6 '13 at 20:25

An obvious upper bound is $k_0$ itself. There is some prime $p$ such that $\mathcal{H}$ contains all the residue classes mod $p$ (since the tuple is inadmissible), and $p\in n+\mathcal{H}.$ Another upper bound is $\pi(p)$ since the prime can't appear after $k$ others unless there are $k$ others before it. Of course this is not sharp: you need not only $k$ primes before $p$, but for them to be arranged in the same pattern.

Charles

@AndrewG: If you use a common difference of $p-1$ then $p$ can occur at most once, in the first position. To occur in the second position there would need to be a prime $p-(p-1)$. – Charles Jun 6 '13 at 20:47

Derp, of course, I don't know what I was thinking. But in order to actually achieve $k_0$, wouldn't we need a prime arithmetic progression?
$p$ would have to start out on the right-hand side and then, as we translate, move to the left: this would turn the distance between the last two elements into the distance between the 2nd and 3rd last, and so on, so that they all have a common difference. Can prime arithmetic progressions even be inadmissible? I admit I am very ignorant of these ideas and of number theory in general. – AndrewG Jun 6 '13 at 21:14

@AndrewG: I don't see why you would need an arithmetic progression. The important thing is that all residue classes are filled, not in what order. – Charles Jun 7 '13 at 1:07

It seems very likely that the answer is "arbitrarily many", at least if we are allowed to make $k_0$ arbitrarily large. I was thinking about this question myself the other day, and I happened upon the exact same tuple mentioned on Wikipedia. My reasoning was as follows. Fix a prime $p$ such that every residue class modulo $p$ is represented in our (hypothetical) tuple $\mathcal H$. Then every translate of $\mathcal H$ consisting of primes must include $p$ itself. Suppose (for example) that $(p, p + h_1, \dots, p + h_{k_0})$ and $(p - h_1, p, p - h_1 + h_2, \dots, p - h_1 + h_{k_0})$ consist entirely of primes. Then $p$ lies in the middle of a three-term arithmetic progression of primes with common difference $h_1$. Once we've found such a prime (say $p = 5$, with $h_1 = 2$), we just need to find primes $p + h_2, \dots, p + h_{k_0}$, covering all remaining residue classes modulo $p$, such that each $p + h_i - h_1$ is prime. At least in the given case, this is easy to do: $(3, 5, 11, 17, 29)$ and $(5, 7, 13, 19, 31)$ consist entirely of primes.

The discussion above generalizes pretty easily to give more than two translates. The natural thing to try is to find some $p$ lying in the middle of a five-term arithmetic progression (I chose 17, using the AP 5, 11, 17, 23, 29), and find three-term AP's with the same common difference covering all remaining residue classes modulo $p$. An example is \begin{align} \mathcal H = (0, 2, 6, 12, 26, 42, 56, 92, 96, 146, 222, 252, 582, 642, 1086, 1176, 2702), \end{align} which has three translates consisting of primes: \begin{align} & (5, 7, 11, 17, 31, 47, 61, 97, 101, 151, 227, 257, 587, 647, 1091, 1181, 2707), \\ & (11, 13, 17, 23, 37, 53, 67, 103, 107, 157, 233, 263, 593, 653, 1097, 1187, 2713), \\ & (17, 19, 23, 29, 43, 59, 73, 109, 113, 163, 239, 269, 599, 659, 1103, 1193, 2719). \end{align}

More generally, if you want $n$ prime translates, you could start with a $(2n-1)$-term AP of primes (which exists by Green-Tao), let $p$ be the $n$th of them, and try to fill in the remaining residue classes modulo $p$ with primes in $n$-term APs, as above. I don't know the current state of research well enough to say for sure whether these $n$-term APs with prescribed common difference and residue classes modulo $p$ will necessarily exist, but it seems likely that they will.

As an aside, it isn't necessary to begin with such a long arithmetic progression. For example, if $n = 4$, you could begin with a two-dimensional AP of primes such as \begin{matrix} 5 & 17 & 29 \\ 47 & 59 & 71 \\ 89 & 101 & 113, \end{matrix} take $p = 59$, and construct your four translates so that each one contains one of the four 2x2 corners of the grid above.

Ravi Fernando
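A quick computational check of the example above (not part of the original post) can be written in a few lines of R: it confirms that the 17-tuple covers every residue class mod 17 and that the translates by 5, 11 and 17 consist entirely of primes.

```r
# Small sketch to verify the example; is_prime() is a hypothetical helper.
is_prime <- function(n) {
  if (n < 2) return(FALSE)
  if (n < 4) return(TRUE)
  all(n %% 2:floor(sqrt(n)) != 0)
}

H <- c(0, 2, 6, 12, 26, 42, 56, 92, 96, 146, 222, 252, 582, 642, 1086, 1176, 2702)
length(unique(H %% 17)) == 17                                   # TRUE: every residue class mod 17 is hit
sapply(c(5, 11, 17), function(n) all(sapply(n + H, is_prime)))  # TRUE TRUE TRUE
```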
Technical advance Spie charts for quantifying treatment effectiveness and safety in multiple outcome network meta-analysis: a proof-of-concept study Caitlin H. Daly1,2, Lawrence Mbuagbaw1,3, Lehana Thabane1,3, Sharon E. Straus4,5 & Jemila S. Hamid6 BMC Medical Research Methodology volume 20, Article number: 266 (2020) Cite this article Network meta-analysis (NMA) simultaneously synthesises direct and indirect evidence on the relative efficacy and safety of at least three treatments. A decision maker may use the coherent results of an NMA to determine which treatment is best for a given outcome. However, this evidence must be balanced across multiple outcomes. This study aims to provide a framework that permits the objective integration of the comparative effectiveness and safety of treatments across multiple outcomes. In the proposed framework, measures of each treatment's performance are plotted on its own pie chart, superimposed on another pie chart representing the performance of a hypothetical treatment that is the best across all outcomes. This creates a spie chart for each treatment, where the coverage area represents the probability a treatment ranks best overall. The angles of each sector may be adjusted to reflect the importance of each outcome to a decision maker. The framework is illustrated using two published NMA datasets comparing dietary oils and fats and psoriasis treatments. Outcome measures are plotted in terms of the surface under the cumulative ranking curve. The use of the spie chart was contrasted with that of the radar plot. In the NMA comparing the effects of dietary oils and fats on four lipid biomarkers, the ease of incorporating the lipids' relative importance on spie charts was demonstrated using coefficients from a published risk prediction model on coronary heart disease. Radar plots produced two sets of areas based on the ordering of the lipids on the axes, while the spie chart only produced one set. In the NMA comparing psoriasis treatments, the areas inside spie charts containing both efficacy and safety outcomes masked critical information on the treatments' comparative safety. Plotting the areas inside spie charts of the efficacy outcomes against measures of the safety outcome facilitated simultaneous comparisons of the treatments' benefits and harms. The spie chart is more optimal than a radar plot for integrating the comparative effectiveness or safety of a treatment across multiple outcomes. Formal validation in the decision-making context, along with statistical comparisons with other recent approaches are required. Health technology assessments and clinical guidelines are increasingly being supported by evidence synthesised through network meta-analysis (NMA) [1, 2]. The main output from an NMA is a coherent set of relative treatment effects, based on pooled direct and indirect evidence typically contributed by randomised controlled trials (RCTs) [3, 4]. The estimated treatment effects relative to a common comparator may then be used to inform a ranked list of treatments, from which knowledge users may be able to deduce which treatment is best for a given clinical problem. Interpreting NMA results is challenging, particularly as the number of treatments and outcomes increase. Several pieces of literature have aimed to ease the interpretative burden of NMA. For example, three graphical tools were developed to display key features of an NMA (i.e. 
relative effects and their uncertainty, probabilities of ranking best, and between-study heterogeneity) for a single outcome [5]. The rank heat plot has been proposed as a visual tool for presenting NMA results across multiple outcomes [6]. However, knowledge users could also benefit from the quantification of the overall integrated results across multiple outcomes to facilitate interpretation in a more objective way. This is particularly important in situations where the comparative rankings of treatments on each outcome contradict each other. Radar plots are often used as a visualisation tool to communicate multivariate data [7]. Recently, they have been used to visually compare the surface under the cumulative ranking curves (SUCRAs) in an NMA evaluating multiple interventions for relapsing multiple sclerosis [8]. Another NMA on dual bronchodilation therapy for chronic obstructive pulmonary disease has compared the area within radar charts of SUCRA values to deliver a single ranking of their efficacy-safety profile [9]. However, in this NMA, the quantification of the area weighed each outcome equally, which may not reflect a knowledge user's preferences. The use of radar plots for the purpose of comparing the overall performance of treatments is also limited by the fact that the area depends on the ordering of the outcomes on the plot. For this reason, the spie chart has been suggested as a better alternative [10]. A spie chart is a combination of two pie charts, where one is superimposed on another, allowing comparisons between two groups on multiple attributes [11]. For example, in the context of NMA, this could be the comparison of a treatment against a hypothetical treatment that is uniformly the best across multiple outcomes. The former's area will be a fraction of the latter's, thereby facilitating the comparison of multiple treatments in a manner similar to comparing areas on a radar chart. To address the aforementioned gaps and limitations, the objective of this paper is to lay the groundwork for conceptualising a treatment's likelihood of being the best overall in terms of its coverage area inside a spie chart. This circular plot may be divided into segments representing a treatment's level of efficacy or safety for each outcome. We provide a methodological framework and assess the feasibility of adapting the area on a spie chart to numerically integrate the efficacy and safety of treatments estimated by NMAs of multiple outcomes. Since radar plots have not been formally investigated and generalised for NMA, we also present the area on a radar plot and compare it to that of spie charts. We illustrate how the spie chart may be used to overcome the limitations of the radar plot, as well as its flexibility for further adaptations. Measuring the coverage area inside a spie chart Consider for example a situation where the performance of a treatment has been measured in terms of J = 8 outcomes valued between 0 and 1. Simulated values are plotted on a spie chart in Fig. 1. In general, the resulting shape on any spie chart is a series of J sectors, each with their own radius equal to the value of the J outcome measures. The area covered by these sectors may be calculated as the sum of the areas of the individual sectors. Example spie chart informed by the values of 8 outcomes. To calculate the area of sector j = 2, the required parameters are denoted: θj = 2 is a known angle, yj = 2 is the radius of sector j = 2, equal to the value of outcome 2 In Fig. 
1, the shaded area, A, is the sum of the area of the 8 sectors, Aj, j = 1, ..., 8 $$ {A}_j=\frac{1}{2}{\theta}_j{y}_j^2, $$ where yj and θj are the radius and angle of sector j, respectively. In Fig. 1, all angles are equal, i.e., \( {\theta}_j=\frac{2\pi }{8}=\frac{\pi }{4} \) radians, and the shaded area on the spie chart is then: $$ A=\frac{\pi }{8}\sum \limits_{j=1}^8{y}_j^2, $$ which is an average of the squared values of the 8 outcomes, scaled by a factor of π. In general, the shaded area within a spie chart informed by J ≥ 1 outcomes for an intervention is $$ A=\frac{1}{2}\sum \limits_{j=1}^J{\theta}_j{y}_j^2. $$ Choice of outcome measure To enable a fair comparison of the areas defined by the treatments' performances across multiple outcomes, the outcomes should be plotted on the same or comparative scales. This is not the case in most NMA studies involving multiple outcomes. As such, ranking probabilities and their summaries (e.g. the Surface Under the Cumulative RAnking curve (SUCRA) or P-score) may provide valid measures for this purpose [12, 13]. These measures transfer the comparative relative effects to a scale between 0 and 1. Alternatively, the posterior mean or median ranks may be used. However, note that the probability of a treatment ranking best should be avoided because treatments with high uncertainty around their estimated effects are likely to be ranked best [14], and this ranking probability has the potential to be biased [15]. SUCRA values, which are calculated in a Bayesian framework, provide a less sensitive and less biased alternative to rank treatments. The posterior mean rank is a scaled transformation of SUCRA, and the P-score is its frequentist equivalent [13]. These measures account for the uncertainty of a treatment's relative effect, and are thus preferable [12]. Another option may be to use the absolute probabilities of response or risk for each treatment, as was done in a multicriteria decision analysis of statins [16]. Note that NMA pools relative effects such as log-odds ratios. To obtain estimates of the absolute probabilities for all treatments, an estimate of the absolute effect (e.g., log-odds) of a treatment in a contemporary population of interest must be assumed. This may be any treatment in the network [2]. The assumed absolute effect of this treatment would then be applied to the relative effects (e.g., log-odds ratio) vs. the chosen treatment, to obtain estimates of absolute effects (e.g., log-odds) of all other treatments, which may be subsequently converted into probabilities [2, 17]. If an NMA pools evidence on important outcomes measured on a continuous scale, response rates may be estimated [18] or standardised mean differences may be converted to log-odds ratios [19], provided that the underlying assumptions of these conversions are reasonable for the data. Note that plotting absolute probabilities of response or risk would limit the generalisability of the area to the population informing the assumed absolute effect of the chosen reference treatment. In this paper, to simplify the presentation of our novel methodological framework, we use SUCRA values as a measure of the comparative effectiveness and safety of the treatments. We would like to highlight that this choice is made without loss of generality and the method is valid for any other measure. Standardised area inside a spie chart To facilitate interpretation of the coverage area inside a spie chart, we standardise it by the maximum possible area. 
Its interpretation would then be comparable to the interpretation of SUCRA [12]. As such, the minimum possible standardised area of 0 corresponds to a treatment that always ranked the worst (i.e., SUCRA = 0 across all outcomes). The maximum possible standardised area of 1 corresponds to a treatment that always ranked best for each outcome (i.e., SUCRA = 1 across all outcomes). First, consider the maximum possible area of each sector in a spie chart defined by SUCRA, $$ {A_j}^{\mathrm{max}}=\frac{1}{2}{\theta}_j{(1)}^2=\frac{\theta_j}{2}. $$ If there are J outcomes and the angles of each sector are equal, i.e., \( {\theta}_j=\frac{2\pi }{J},j=1,...,J \), then the maximum possible area on a spie chart is $$ {A}^{\mathrm{max}}=\frac{1}{2}\sum \limits_{j=1}^J\left(\frac{2\pi }{J}\right)=\frac{J}{2}\left(\frac{2\pi }{J}\right)=\pi . $$ In fact, regardless of the size of the individual sector's angles θj, as long as the outcome measure yj can range from 0 to 1, the spie chart consists of a unit circle. Consequently, Amax = π, for all 0 ≤ θj ≤ 2π. Therefore, in general, the standardised area on a spie chart consisting of outcome measures ranging between 0 and 1 is $$ {A}^{std}=\frac{1}{2\pi}\sum \limits_{j=1}^J{\theta}_j{y}_j^2 $$ where yj and θj are the outcome measure (e.g., SUCRA value) and angle of the sector corresponding to outcome j, respectively. Note that 0 ≤ θj ≤ 2π, where θj = 0 implies outcome j does not contribute the area, and θj = 2π implies outcome j is the sole contributor to the area. In the case of equal angles, the standardised area on a spie chart for a given treatment is a weighted average of the squared outcome measures, $$ {A}^{std}=\frac{1}{J}\sum \limits_{j=1}^J{y}_j^2, $$ provided that the outcomes are measured on a scale between 0 and 1. Incorporating stakeholder preferences An advantage of the spie chart's circular design is the ability to incorporate preferences of the knowledge user. Some outcomes may be more important than others, and this can be incorporated in the plots by adjusting the contribution each outcome has to the overall area. By adjusting the angles of the sectors in a spie chart, we can adjust the proportion of the chart each sector covers. Noting that the sum of the angles in a spie chart must be 2π, given a set of weights, wj, j = 1, ..., J for a set of J outcomes, the corresponding angles may be calculated as $$ {\theta}_j=\frac{2\pi {w}_j}{\sum \limits_{j=1}^J{w}_j}. $$ There are various ways to derive the contribution of the outcomes in terms of weights, which may be informed by preferences supported by evidence in the literature or through weights elicited from knowledge users themselves. For example, risk prediction or prognostic models may be used to inform the weights of outcomes when the goal is to reduce the risk of an unfavourable event or disease such as cardiovascular disease (CVD). If all outcomes are included in a regression model, and measured in the same units, the magnitude of the unstandardised coefficients, βj, j = 1, ..., J, capture the influence each outcome has on the overall risk, adjusted for any additional factors included in the model: $$ {w}_j=\frac{\mid {\beta}_j\mid }{\sum \limits_{j=1}^J\mid {\beta}_j\mid }. $$ If the outcomes are measured on different scales, then standardised coefficients may be considered. There are more optimal approaches to deriving the relative importance of predictors (e.g., outcomes) when individual patient data (IPD) are available to create multiple regression models [20]. 
In fact, the use of standardised coefficients for this purpose has been criticised because the dependencies between predictors are not fully taken into account [21]. Nevertheless, researchers undertaking NMA often have limited resources in terms of time and access to IPD, and thus have to make secondary use of aggregate or summary level data. If there are important dependencies between the outcomes, this should be accounted for at the synthesis stage. There are several approaches available for this and guidance is provided by López-López and colleagues [22] and multi-parameter evidence synthesis methods should also be considered [2]. Nevertheless, if there is a need to avoid double counting the contribution of related outcomes, and we know the correlations between them, the contribution of each outcome to the overall area can be adjusted. The weights of each outcome may be informed by a J × J correlation matrix, denoted as $$ {\displaystyle \begin{array}{ccccc}1& {\rho}_{12}& {\rho}_{13}& \cdots & {\rho}_{1J}\\ {}{\rho}_{21}& 1& {\rho}_{23}& \cdots & {\rho}_{2J}\\ {}{\rho}_{31}& {\rho}_{32}& 1& \cdots & {\rho}_{3J}\\ {}\vdots & \vdots & \vdots & \ddots & \vdots \\ {}{\rho}_{J1}& {\rho}_{J2}& {\rho}_{J3}& \cdots & 1\end{array}} $$ However, since correlation can be negative, the squares of the pairwise correlations, i.e., the coefficients of determination, i.e., \( {R}_{ij}^2={\rho}_{ij}^2 \) should be used instead. The weight of each outcome can then be proportional or equal to the marginal sums of the squared correlation matrix: $$ {w}_j=\sum \limits_{i=1}^J{\rho}_{ij}^2,i=1,...,J. $$ Finally, there are several methods for eliciting preferences from decision makers, such as direct rating, where the decision makers rate outcomes on a scale from 1 to 100 and weights are derived by normalising these scores [23]. Regardless of the method to inform the weights, the application of the proposed framework remains the same. Selecting outcomes to inform the area The number of outcomes that may be plotted on a spie chart ranges from one to infinity. Nevertheless, a knowledge user would not benefit from either extreme. The purpose of the spie chart is to facilitate the combination of multiple outcomes, accounting for the desired contribution of the overall summary. As such, a minimum of two outcomes is sensible for this purpose. Plotting an overwhelming number of outcomes will not be visually appealing, although the area inside a spie chart is intended to overcome the visual interpretative burden. Increasing the number of outcomes will limit the contribution of important outcomes, to a degree that depends on the weights. Researchers presenting results of an NMA should be wary of this, though they do not need to restrict themselves to a maximum number of outcomes. Outcomes which are critical to the decision-making process should be plotted on the spie chart. For example, any outcomes for which lack of evidence would exclude a treatment from consideration should be plotted. Evidence for any plotted outcome should be available for every treatment under consideration. It is important that every treatment is compared based on the same set of outcomes. If evidence on a critical outcome is not available for a treatment within a decision set, then imputation methods may be considered at the NMA stage [24]. Efficacy and safety outcomes should be plotted on two separate spie charts for each treatment, as it is important for a knowledge user to recognise that a very effective treatment may not be safe. 
Plotting these on the same spie chart and summarising the area inside as a single numerical value may mask important information on harms. A knowledge user should be able to simultaneously compare both the benefits and the harms of a treatment. This is possible by plotting the area inside an efficacy spie chart against the area inside a safety spie chart on a scatter plot [25]. Points towards the upper right quadrant of a scatter plot (e.g., towards (1,1)) would represent the most efficacious and safe treatment.

Summary of steps

To summarise, our proposed method can be organised into the following steps:

1) Determine the final list of treatments to be compared and for which outcomes. All treatments should have evidence on the outcomes plotted on the spie charts.

2) Determine the outcome measure. Outcomes should be plotted in the same units and non-negative measures are recommended. If this is a ranking measure (e.g., probability of ranking best, P-score, or SUCRA), then separately calculate the ranking probabilities based on the subset of treatments determined in Step 1.

3) Determine the weights wj, j = 1, ..., J, of each outcome and the corresponding angles \( {\theta}_j=\frac{2\pi {w}_j}{\sum \limits_{j=1}^J{w}_j} \).

4) Plot the efficacy and safety spie charts for each treatment. One option is to make use of the ggplot2 package in the free and open source R statistical software [26, 27], using the R code provided in Additional File 1.

5) Calculate and compare the area inside the spie charts. The standardised area inside a spie chart consisting of outcomes measured on a scale between 0 and 1 is $$ {A}^{std}=\frac{1}{2\pi}\sum \limits_{j=1}^J{\theta}_j{y}_j^2, $$ where yj is the measure of outcome j, j = 1, ..., J, plotted on the spie chart. Any calculator or software may be used to apply this formula.

6) Plot the area inside the efficacy spie charts against the area inside the safety spie charts on a scatter plot.

In this section, our proposed framework is illustrated using results from two published reviews [28, 29]. At the same time, we empirically compare the use of the spie chart and the radar plot for quantifying a treatment's overall performance. The formula for the standardised area inside a radar plot has been derived in Additional File 2. The two reviews used in this section were selected as all interventions have complete outcome information, and they highlight conceptual issues that drove the development of this framework. The first example illustrates one way of weighting outcomes of unequal importance to reflect the preferences of decision makers. Since there are four outcomes in this example, there are different ways to order the outcomes on a radar plot, and we show how this impacts the area inside a radar plot. In the second example, there are three outcomes, and thus one unique ordering of the outcomes, which allows us to fairly compare the areas inside the radar plot and spie chart. The second example also underlines the importance of considering efficacy and safety outcomes separately. All analyses were performed using R [27]. We emphasize here that any observations made in these examples are for illustrative purposes only and should not impact clinical practice.

Lipids study

The effects of thirteen dietary oils and fats on total cholesterol (TC), low-density lipoprotein cholesterol (LDL-c), high-density lipoprotein cholesterol (HDL-c), and triglycerides (TG) were investigated by Schwingshackl and colleagues [28]. Blood tests measuring these lipoproteins are carried out to assess a person's risk for cardiovascular disease (CVD) [30].
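As a sketch of the plotting step (step 4 above), the short ggplot2 snippet below draws a single spie chart from a vector of SUCRA values and sector angles. It is a simplified stand-in for the code provided in Additional File 1, and the SUCRA values used here are hypothetical placeholders.

```r
# Hedged ggplot2 sketch of one spie chart: sector angles from the weights,
# radial extent equal to the (hypothetical) SUCRA value of each outcome.
library(ggplot2)

sucra  <- c(TC = 0.78, LDLc = 0.64, HDLc = 0.52, TG = 0.31)  # made-up values
weight <- c(0.40, 0.35, 0.17, 0.08)                          # e.g. from |beta|
theta  <- 2 * pi * weight / sum(weight)
bounds <- c(0, cumsum(theta))

sectors <- data.frame(outcome = names(sucra),
                      xmin = head(bounds, -1), xmax = tail(bounds, -1),
                      ymax = as.numeric(sucra))

ggplot(sectors) +
  geom_rect(aes(xmin = xmin, xmax = xmax, ymin = 0, ymax = ymax,
                fill = outcome), colour = "white") +
  scale_x_continuous(limits = c(0, 2 * pi)) +  # full circle in radians
  ylim(0, 1) +                                 # unit circle, since SUCRA <= 1
  coord_polar(theta = "x") +
  theme_void()
```

The standardised area of such a chart is then sum(theta * sucra^2) / (2 * pi), as in the earlier sketch.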
The NMAs in this review pooled data from RCTs on thirteen treatments for the four outcomes of interest, and the SUCRA values are listed in Table 1. There is no treatment that clearly ranks the best across all outcomes.

Table 1 SUCRA values and rankings produced based on all trials included in [28]

Note that lower values of TC, LDL-c, and triglycerides are preferred, while higher values of HDL-c are preferred. The SUCRA values were computed in this NMA so that higher values of SUCRA reflect the preferred direction (i.e., improvement) of the outcomes. This is important when plotting outcomes on a spie chart, so that larger areas reflect treatments that are better at improving outcomes.

Spie chart

To compare the rankings of areas inside a spie chart, we first calculated the standardised areas, assuming equal weights, i.e., equal angles (Table 2). The area corresponding to the spie chart for Safflower oil is displayed in Fig. 2a. The percentages of the unit circle covered by the shaded areas for each treatment are small (Table 2), indicating that there is no treatment which is certainly the best across all outcomes. Knowing this, a stakeholder may then direct their attention to differences, if any, between treatments for more important outcomes.

Table 2 Standardised areas inside spie charts of SUCRA values from multiple outcomes

Two possible spie charts of the SUCRA values corresponding to Safflower oil in [28]. The plot in a weighs each outcome equally, since they have the same angles. The plot in b weighs the outcomes based on a coronary heart disease risk prediction model for women aged 50 - < 65 years [31]

For illustrative purposes, we make use of a multivariate regression model for coronary heart disease (CHD) built by Castelli et al. [31]. This model was informed by data from the Framingham Study, and the reported regression coefficients for TC, LDL-c, HDL-c, and TG for women aged 50 to less than 65 years old are 2.51, 2.19, − 1.05, and 0.48, respectively. These coefficients are adjusted for systolic blood pressure, glucose, and cigarette smoking status. We can then calculate weights for each outcome, based on the absolute values of these coefficients. For TC, $$ {w}_1=\frac{\mid 2.51\mid }{\mid 2.51\mid +\mid 2.19\mid +\mid -1.05\mid +\mid 0.48\mid }=\mathrm{0.40.} $$ The weights for LDL-c, HDL-c, and TG are 0.35, 0.17, and 0.08, respectively. The angle corresponding to TC may then be calculated as $$ {\theta}_1=\frac{2\pi (0.40)}{1}=0.8\pi . $$ The angles for LDL-c, HDL-c, and TG are 0.7π, 0.34π, and 0.16π, respectively. The resulting area corresponding to the spie chart for Safflower oil is displayed in Fig. 2b. The standardised areas and ranks for each treatment, tailored to women aged 50 to less than 65 years, are provided in Table 2. Based on this weighting scheme, the best treatment for reducing a 50 - < 65-year-old woman's risk for CHD by improving lipid levels is Safflower oil.

Radar plot

In this example, there are four outcomes and thus four radii defining the radar plot. When using a radar plot, one must decide the order of the outcomes around the plot. The placement of the first outcome does not matter, but the ordering of the remaining J − 1 outcomes will impact the area enclosed in the radar plot [10]. There are \( \frac{1}{2}\left(J-1\right)! \) options to order outcomes around a circle.
The \( \frac{1}{2}\left(4-1\right)!=3 \) possible orderings of outcomes in this example are displayed in Supplementary Figure 2 in Additional File 3 for a single intervention, Safflower oil. Summary of Findings tables in Cochrane Reviews may provide some guidance on how to order the outcomes, since the outcomes are listed in decreasing order of importance [32]. Of course, this importance ordering will vary across different stakeholders. For example, one Cochrane Review examining the effectiveness of a Mediterranean-style diet in preventing CVD has listed the decreasing order of importance of the four lipids as TC, LDL-c, HDL-c, TG [33]. Another Cochrane Review examining the effectiveness of polyunsaturated fatty acids in preventing CVD orders the importance of the lipids as TC, TG, HDL-c, LDL-c [34]. Nevertheless, these separate orderings will produce the same area, assuming the angles between the outcomes are equal, \( {\theta}_j=\frac{2\pi }{4}=\frac{\pi }{2},j=1,2,3,4 \). For example, the areas enclosed in the radar plots for Safflower oil (Supplementary Figure 2A&B, Additional File 3), based on the formula provided in Additional File 2, are $$ {\displaystyle \begin{array}{rl}{A}_{\mathrm{Safflower},\mathrm{Rees}}^{std}& =\frac{1}{4}\left({y}_{TC}{y}_{LDL-c}+{y}_{LDL-c}{y}_{HDL-c}+{y}_{HDL-c}{y}_{TG}+{y}_{TG}{y}_{TC}\right)\\ {}& =\frac{1}{4}\left({y}_{TC}{y}_{TG}+{y}_{TG}{y}_{HDL-c}+{y}_{HDL-c}{y}_{LDL-c}+{y}_{LDL-c}{y}_{TC}\right)\\ {}& ={A}_{\mathrm{Safflower},\mathrm{Abdelhamid}}^{std}\end{array}} $$ There is only one other ordering that will produce a unique area: TC, HDL-c, LDL-c, TG. This is because of the triangles formed by TC & HDL-c and LDL-c & TG; these pairs of outcomes were not adjacent in the plots generated by Rees' and Abdelhamid's orderings (Supplementary Figure 2C, Additional File 3). The standardised areas produced by these two distinct orderings, assuming equal angles, are provided in Table 3. The rankings of some of the treatments change, and although the differences between standardised areas may seem trivial in this example, this is an important feature of radar plots to highlight, as the differences could be exacerbated in other applications. For example, the areas for one treatment may be quite different if the outcomes are arranged in such a way that those reflecting higher scores alternate with those that have lower scores vs. an ordering where all outcomes with high scores are placed together.

Table 3 Standardised areas inside radar plots of SUCRA values from multiple outcomes

Psoriasis example

The effectiveness and safety of seven biologic therapies plus placebo for treating psoriasis were investigated by Jabbar-Lopez and colleagues to support the development of a guideline [29]. Randomised trials informed the NMAs, which synthesised evidence on the following outcomes measuring efficacy: clear/nearly clear skin (defined as minimum residual activity, Psoriasis Area and Severity Index (PASI) > 90, or 0 or 1 on physician's global assessment), mean change in dermatology life quality index (DLQI), and PASI 75 (defined as PASI > 75). The first two outcomes were deemed "critical" outcomes by the guideline development group, while the latter outcome, PASI 75, was deemed "important". An additional outcome measuring safety, referred to in the review as tolerability or withdrawal due to adverse events, was also deemed an "important" outcome. For illustrative purposes, the published SUCRA values corresponding to each treatment under investigation are displayed in Table 4.
As was the case in the lipids example, there is no treatment that is universally the best according to SUCRA across all outcomes. Ixekizumab has the largest SUCRA value in terms of the critical "Clear/nearly clear" outcome, but it is not the best in terms of the other critical outcome, mean change in DLQI. It also ranks second worst in terms of tolerability, highlighting the importance of considering efficacy and safety outcomes separately.

Radar plot vs. spie chart

For illustrative purposes, we first combine the evidence on the three efficacy outcomes (clear/nearly clear, DLQI, PASI 75), considering them to be of equal importance (although the guideline committee suggested otherwise) [29], on both the spie chart and the radar plot. Since there are only three outcomes, there is only one way to arrange the order of the outcomes, and thus one unique area. As such, this example provides an opportunity to fairly compare the area on the radar plot with that on the spie chart. The standardised areas on the radar plot and spie chart are provided in Table 5. The standardised areas for each treatment on both plots are quite similar, and the corresponding ranks are the same. Nevertheless, the efficacy outcomes contributed equally to the standardised area, which is unlikely to reflect a knowledge user's preferences. There are some dependencies between the outcomes. For example, treatments that clear or nearly clear psoriasis for a large proportion of patients are also likely to have a higher proportion of patients that achieve PASI 75. These dependencies should be accounted for using methods such as the ones suggested earlier in the Methods section.

Table 5 Standardised areas inside radar plots and spie charts of SUCRA values in [29]

Scatter plot of efficacy vs. safety

The purpose of this illustration is to show the consequences of naively plotting all efficacy and safety outcomes on a spie chart and summarising them with a single numerical value. As such, the standardised areas on a spie chart containing all efficacy and safety outcomes were calculated, assuming they were of equal importance. Of course, in practice, this is unlikely to be true. A knowledge user may want the contribution of the safety outcome to be the same as the contribution of the collection of efficacy outcomes. This is possible by dividing the spie chart in half, where the safety outcome is plotted on one half of the chart, and the three efficacy outcomes contribute equally to the other half. Nevertheless, the numerical summary of the coverage area will not allow a knowledge user to simultaneously compare the benefits and harms of the treatments, and so a scatter plot comparing the two is more desirable. The results show that Ixekizumab has the second largest standardised area (Fig. 3a), agreeing with the ranks based solely on efficacy (Table 5), but the message that it is one of the least tolerable treatments is lost in this result (Table 4). The standardised area on the spie chart containing the efficacy outcomes only is plotted against the reported SUCRA values for the safety outcome in Fig. 3b. In this scatter plot, treatments in the top right corner are preferred. The benefit-risk trade-off is clearer for Ixekizumab, and Secukinumab seems to have the best benefit-risk ratio.

Comparison of two approaches for balancing efficacy and safety outcomes in [29]. In a, the areas inside a spie chart containing both efficacy and safety outcomes are plotted on a number line, where larger values (areas) are preferred.
In b, the areas inside a spie chart containing efficacy outcomes only are plotted against the SUCRA values for the single safety outcome on a scatter plot, where values in the top-right corner are preferred.

We have developed and presented a framework for obtaining the overall performance of treatments in NMA, summarised across all outcomes. Similar to SUCRA, the standardised area on a spie chart is expressed as a proportion of the maximum possible area, which a treatment would have if it always ranked best [12]. This paper lays the groundwork for integrating evidence across multiple outcomes, including some direction on how to incorporate key considerations for decision makers (e.g., outcome preferences). Table 6 presents a summary of graphical tools available for presenting multiple outcomes, where rank-o-grams [12, 36], standard scatter plots comparing two outcomes [35], the rank heat plot [6], radar plot, and spie chart are compared.

Table 6 Properties of graphical tools available to summarise multiple outcomes in NMA

Radar plots have been used in the past to compare outcomes in health research. More recently, they have been used to summarise the performance of treatments in an NMA context [8, 9]. Despite this, there are several limitations of radar plots that reviewers should consider, and spie charts may be a more suitable alternative. A radar plot may be sufficient when evidence on three outcomes needs to be combined and these three outcomes are of equal importance. If there are any additional outcomes, subjectivity can arise through the ordering of outcomes on the plot. Nevertheless, this may be mitigated by specifying outcome preferences a priori, which can be informed by preferences in Cochrane Summary of Findings tables or through a survey of stakeholders' preferences. Spie charts, however, are a more generalisable option and have more convenient mathematical properties than radar plots. For example, the standardised area on a spie chart informed by a single outcome is simply the square of the value that was inputted. In addition, adjusting the contribution of several outcomes on a spie chart is mathematically straightforward. Weighting schemes should be specified a priori to minimise subjectivity. This is also important when using coefficients from a risk prediction model to inform the weights, as the chosen risk prediction model should have been validated and should cover the population of interest. Some risk prediction models may even present coefficients tailored to subgroups, as shown in the lipids example, permitting subgroup-specific ranks. Nevertheless, the practice of using coefficients to inform the relative importance of predictors has been criticised [21, 37]. More rigorous methods require individual patient data [20], which NMA researchers may not have access to. Formally eliciting the relative importance of outcomes from decision makers may offer a better alternative in the NMA context [23]. In the future, it would be useful to design a weighting scheme that accounts for both the dependencies between the outcomes and the preferences of knowledge users. This framework was illustrated using SUCRA values; however, other outcome measures could be used. Nevertheless, the cited examples of systematic reviews presenting evidence across outcomes through radar plots have done so using SUCRA values [8, 9]. Another recent review averaged SUCRA values on LDL-c, HDL-c, and TG to give an overall indication of the effectiveness of diets on the lipid profile [38].
SUCRA is an attractive measure to compare treatments across multiple outcomes as it summarises both the strength and uncertainty of the relative treatment effects on the same scale [13]. The standardised area inside a spie chart informed by SUCRA values clearly conveys the degree of uncertainty in the evidence across outcomes. This is because the outcome values are squared in the calculation of the area, and smaller SUCRA values, which indicate less plausibility or certainty in a treatment ranking best, are penalised. The standardised area for a particular treatment will only be close to 1 if there is strong certainty supporting a treatment being more effective than all other treatments across all outcomes. While a treatment may be very effective, it could also be unsafe, and so it is important to consider efficacy and safety outcomes separately and not summarise them with one measure. Efficacy and safety outcomes should be combined separately, and they may be simultaneously compared in scatter plots such as the one plotted in Fig. 3b for the psoriasis example. Nevertheless, we pause to reflect on whether safety outcomes should be combined at all. A treatment's harmful effects in terms of one outcome could be diluted by its apparent safety in terms of several other outcomes. It might be better to pool evidence on efficacy outcomes together as a single measure and then compare it against critical safety outcomes one by one. Additional aspects of the evidence also need to be considered, such as the internal and external biases of the RCTs informing the networks. This goes beyond assessing whether the evidence supporting a treatment ranking best is at high risk of bias. The decision maker must grasp how the biased trials affect the network estimates, and this depends on the geometry of the networks and the size of the trials. Sensitivity analyses which remove the trials at high risk of bias, threshold analyses, or CINeMA may provide some insight into this [39,40,41,42,43]. Methods for integrating such assessments into the spie chart should be explored in future work. For example, if CINeMA is used to evaluate the confidence in the NMA results [41,42,43], then an overall confidence rating for each outcome may be represented through colours or symbols on the spie chart for a given treatment. There may be instances where there is no evidence on a treatment for a particular outcome. This treatment could still be included in the overall evaluation through spie charts, where a value of 0 is assumed. This would penalise the treatment's performance for missing outcome data. However, if a treatment cannot be considered without information on a critical outcome, then perhaps it should be excluded from the evaluation of the overall performances. Note that SUCRA depends on the number of treatments informing it. As such, the number of treatments should be equal across all outcomes to allow fair comparison. If a treatment is excluded from the decision set, then it should not be included in the calculation of the ranking probabilities, and thus SUCRA. We have established the foundation of a framework that objectively summarises the comparative effectiveness of a treatment across multiple outcomes. This eliminates any subjectivity that may be introduced by a decision maker balancing contradictory rankings of treatments across different outcomes. The proposed framework is not meant to be a standalone presentation of the NMA results.
Rather, it is intended to supplement the more detailed results that must be considered when evaluating the evidence. Forest plots or pairwise relative effect estimates should also be inspected to confirm whether there are any significant differences between treatments, a feature which may be masked by ranking statistics. Future research should investigate ways to adapt this framework when outcomes are missing for some treatments. The general approach should also be compared with existing numerical approaches for integrating ranks across outcomes [44, 45]. Moving forward, we recommend the spie chart over the radar plot for summarising effectiveness across multiple outcomes.

All data generated or analysed during this study are included in this published article.

CHD: Coronary heart disease
CVD: Cardiovascular disease
DLQI: Dermatology life quality index
HDL-c: High-density lipoprotein cholesterol
IPD: Individual patient data
LDL-c: Low-density lipoprotein cholesterol
NMA: Network meta-analysis
PASI: Psoriasis area severity index
RCT: Randomised controlled trial
SUCRA: Surface under the cumulative ranking curve
TC: Total cholesterol
TG: Triglycerides

Petropoulou M, Nikolakopoulou A, Veroniki A, Rios P, Vafaei A, Zarin W, et al. Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015. J Clin Epidemiol. 2017;82:20–8. Dias S, Ades AE, Welton NJ, Jansen JP, Sutton AJ. Network meta-analysis for decision making. Hoboken: Wiley; 2018. Caldwell DM, Ades AE, Higgins JPT. Simultaneous comparison of multiple treatments: combining direct and indirect evidence. BMJ. 2005;331:897–900. Lu G, Ades A. Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med. 2004;23:3105–24. Tan SH, Cooper NJ, Bujkiewicz S, Welton NJ, Caldwell DM, Sutton AJ. Novel presentational approaches were developed for reporting network meta-analysis. J Clin Epidemiol. 2014;67:672–80. Veroniki AA, Straus SE, Fyraridis A, Tricco AC. The rank-heat plot is a novel way to present the results from a network meta-analysis including multiple outcomes. J Clin Epidemiol. 2016;76:193–9. Saary MJ. Radar plots: a useful way for presenting multivariate health care data. J Clin Epidemiol. 2008;61:311–7. McCool R, Wilson K, Arber M, Fleetwood K, Toupin S, Thom H, et al. Systematic review and network meta-analysis comparing ocrelizumab with other treatments for relapsing multiple sclerosis. Mult Scler Relat Disord. 2019;29:55–61. Rogliani P, Matera MG, Ritondo BL, De Guido I, Puxeddu E, Cazzola M, et al. Efficacy and cardiovascular safety profile of dual bronchodilation therapy in chronic obstructive pulmonary disease: a bidimensional comparative analysis across fixed-dose combinations. Pulm Pharmacol Ther. 2019;59:101841. Stafoggia M, Lallo A, Fusco D, Barone AP, D'Ovidio M, Sorge C, et al. Spie charts, target plots, and radar plots for displaying comparative outcomes of health care. J Clin Epidemiol. 2011;64:770–8. Feitelson D. Comparing partitions with spie charts: School of Computer Science and Engineering: The Hebrew University of Jerusalem; 2003. p. 1–7. Salanti G, Ades A, Ioannidis J. Graphical methods and numerical summaries for presenting results from multiple-treatment meta-analysis: an overview and tutorial. J Clin Epidemiol. 2011;64:163–71. Rücker G, Schwarzer G. Ranking treatments in frequentist network meta-analysis works without resampling methods. BMC Med Res Methodol. 2015;15:58. Jansen J, Trikalinos T, Cappelleri J, et al.
Indirect treatment comparison/network meta-analysis study questionnaire to assess relevance and credibility to inform health care decision making: an ISPOR-AMCP-NPC good practice task force report. Value Health. 2014;17:157–73. Kibret T, Richer D, Beyene J. Bias in identification of the best treatment in a Bayesian network meta-analysis for binary outcome: a simulation study. Clin Epidemiol. 2014;6:451–60. Naci H, van Valkenhoef G, Higgins JPT, Fleurence R, Ades AE. Combining network meta-analysis with multicriteria decision analysis to choose among multiple drugs. Circ Cardiovasc Qual Outcomes. 2014;7:787–92. Dias S, Welton NJ, Sutton AJ, Ades AE. NICE DSU Technical support document 5: evidence synthesis in the baseline natural history model; 2011. Furukawa TA, Cipriani A, Barbui C, Brambilla P, Watanabe N. Imputing response rates from means and standard deviations in meta-analyses. Int Clin Psychopharmacol. 2005;20:49–52. Chinn S. A simple method for converting an odds ratio to effect size for use in meta-analysis. Stat Med. 2000;19:3127–31. Lebreton JM, Ployhart RE, Ladd RT. A Monte Carlo comparison of relative importance methodologies. Organ Res Methods. 2004;7:258–82. Johnson JW. A heuristic method for estimating the relative weight of predictor variables in multiple regression. Multivar Behav Res. 2000;35:1–19. López-López JA, Page MJ, Lipsey MW, Higgins JPT. Dealing with effect size multiplicity in systematic reviews and meta-analyses. Res Synth Methods. 2018. Riabacke M, Danielson M, Ekenberg L. State-of-the-art prescriptive criteria weight elicitation. Adv Decis Sci. 2012;2012:276584. Riley RD, Jackson D, Salanti G, Burke DL, Price M, Kirkham J, et al. Multivariate and network meta-analysis of multiple outcomes and multiple treatments: rationale, concepts, and examples. BMJ. 2017;358:j3932. Bellanti F. From data to models: reducing uncertainty in benefit risk assessment: application to chronic iron overload in children: Leiden University, Faculty of Science; 2015. Wickham H. ggplot2: Elegant graphics for data analysis. New York: Springer-Verlag; 2016. R Core Team. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2020. Available from http://www.R-project.org/. Schwingshackl L, Bogensberger B, Bencic A, Knuppel S, Boeing H, Hoffmann G. Effects of oils and solid fats on blood lipids: a systematic review and network meta-analysis. J Lipid Res. 2018;59:1771–82. Jabbar-Lopez ZK, Yiu ZZN, Ward V, Exton LS, Mohd Mustapa MF, Samarasekera E, et al. Quantitative evaluation of biologic therapy options for psoriasis: a systematic review and network meta-analysis. J Invest Dermatol. 2017;137:1646–54. Pagana KD, Pagana TJ. Mosby's Canadian manual of diagnostic and laboratory tests. 1st ed. Toronto: Mosby; 2013. Castelli WP, Anderson K, Wilson PW, Levy D. Lipids and risk of coronary heart disease. The Framingham Study. Ann Epidemiol. 1992;2:23–8. Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA. Cochrane Handbook for Systematic Reviews of Interventions version 6.0. Cochrane; 2019. Available from www.training.cochrane.org/handbook. Rees K, Takeda A, Martin N, Ellis L, Wijesekara D, Vepa A, et al. Mediterranean-style diet for the primary and secondary prevention of cardiovascular disease. Cochrane Database Syst Rev. 2019. Abdelhamid AS, Brown TJ, Brainard JS, Biswas P, Thorpe GC, Moore HJ, et al. Omega-3 fatty acids for the primary and secondary prevention of cardiovascular disease. Cochrane Database Syst Rev. 2018.
Chaimani A, Higgins JPT, Mavridis D, Spyridonos P, Salanti G. Graphical tools for network meta-analysis in STATA. PLoS One. 2013;8:e76654. Ades AE, Mavranezouli I, Dias S, Welton NJ, Whittington C, Kendall T. Network meta-analysis with competing risk outcomes. Value Health. 2010;13:976–83. Bring J. How to standardize regression coefficients. Am Stat. 1994;48:209–13. Neuenschwander M, Hoffmann G, Schwingshackl L, Schlesinger S. Impact of different dietary approaches on blood lipid control in patients with type 2 diabetes mellitus: a systematic review and network meta-analysis. Eur J Epidemiol. 2019;34:837–52. Phillippo D, Dias S, Ades A, Didelez V, Welton N. Sensitivity of treatment recommendations to bias in network meta-analysis. J R Stat Soc Ser A Stat Soc. 2017;181. Phillippo DM, Dias S, Welton NJ, Caldwell DM, Taske N, Ades AE. Threshold analysis as an alternative to GRADE for assessing confidence in guideline recommendations based on network meta-analyses. Ann Intern Med. 2019;170:538–46. Salanti G, Giovane CD, Chaimani A, Caldwell DM, Higgins JPT. Evaluating the quality of evidence from a network meta-analysis. PLoS One. 2014;9:e99682. Nikolakopoulou A, Higgins JPT, Papakonstantinou T, Chaimani A, Del Giovane C, Egger M, et al. CINeMA: an approach for assessing confidence in the results of a network meta-analysis. PLoS Med. 2020;17:e1003082. Papakonstantinou T, Nikolakopoulou A, Higgins JPT, Egger M, Salanti G. CINeMA: software for semiautomated assessment of the confidence in the results of network meta-analysis. Campbell Syst Rev. 2020;16:e1080. Rücker G, Schwarzer G. Resolve conflicting rankings of outcomes in network meta-analysis: partial ordering of treatments. Res Synth Methods. 2017;8:526–36. Mavridis D, Porcher R, Nikolakopoulou A, Salanti G, Ravaud P. Extensions of the probabilistic ranking metrics of competing treatments in network meta-analysis to reflect clinically important relative differences on many outcomes. Biom J. 2020;62:375–85.

This work was supported in part by an Ontario Graduate Scholarship granted to CHD and an NSERC Discovery Grant held by JSH.

Department of Health Research Methods, Evidence, and Impact, McMaster University, McMaster University Medical Centre, 1280 Main Street West, 2C Area, Hamilton, Ontario, L8S 4K1, Canada Caitlin H. Daly, Lawrence Mbuagbaw & Lehana Thabane Population Health Sciences, Bristol Medical School, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS, UK Caitlin H. Daly Biostatistics Unit, Father Sean O'Sullivan Research Centre, St Joseph's Healthcare Hamilton, 50 Charlton Avenue East, Hamilton, Ontario, L8N 4A6, Canada Lawrence Mbuagbaw & Lehana Thabane Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael's Hospital, 209 Victoria Street, Toronto, Ontario, M5B 1TB, Canada Sharon E. Straus Department of Medicine, Faculty of Medicine, University of Toronto, C. David Naylor Building, 6 Queen's Park Crescent West, Third Floor, Toronto, Ontario, M5S 3H2, Canada Department of Mathematics and Statistics, STEM Complex, University of Ottawa, room 336, 150 Louis-Pasteur Private, Ottawa, Ontario, K1N 6N5, Canada Jemila S. Hamid

CHD conceptualised and designed the study, acquired and analysed the data, interpreted the results of the empirical illustrations, and drafted and revised the manuscript. LM, LT, and SES contributed to the study design and revised the manuscript. JSH conceptualised and designed the study and revised the manuscript.
All authors read and approved the final manuscript. Correspondence to Jemila S. Hamid.

Additional file 1: R code for spie charts. R script containing a function to generate and calculate the area inside a spie chart.

Additional file 2: Measuring the area inside a radar plot. Describes the derivation of the standardised area inside a radar plot, as well as the difficulty in incorporating stakeholder preferences through the angles between the axes of a radar plot.

Additional file 3: Supplementary Figure 2. Three possible radar plots of the SUCRA values corresponding to Safflower oil in [28]. The plots in panels A and B have the same area, since they are the same shape flipped about the vertical axis. The plot in panel C has a different area due to the different triangles formed by TC & HDL-c and TC & LDL-c.

Daly, C.H., Mbuagbaw, L., Thabane, L. et al. Spie charts for quantifying treatment effectiveness and safety in multiple outcome network meta-analysis: a proof-of-concept study. BMC Med Res Methodol 20, 266 (2020). https://doi.org/10.1186/s12874-020-01128-2

Keywords: SUCRA, Multiple outcomes
Journal of the Brazilian Computer Society

Microalgae classification using semi-supervised and active learning based on Gaussian mixture models

Paulo Drews-Jr.1, Rafael G. Colares2, Pablo Machado1, Matheus de Faria1, Amália Detoni3 & Virgínia Tavano3

Journal of the Brazilian Computer Society volume 19, pages 411–422 (2013)

Microalgae are unicellular organisms that have different shapes, sizes and structures. Classifying these microalgae manually can be an expensive task, because thousands of microalgae can be found in even a small sample of water. This paper presents an approach for an automatic/semi-automatic classification of microalgae based on semi-supervised and active learning algorithms, using Gaussian mixture models. The results show that the approach has an excellent cost-benefit relation, classifying more than 90 % of microalgae in a well distributed way, overcoming the supervised algorithm SVM.

Microalgae are unicellular organisms that can be found in a variety of sizes, structures and forms. These characteristics allow us to classify microalgae into different phytoplankton taxonomic groups. Microalgae classification is relevant to biology and oceanography, because the description of microalgae species at a certain time and place is important to the understanding of how energy is transferred from the food chain base to higher trophic levels [5]. Furthermore, it reflects changes in fish stocks and the carbon cycle of a given environment. The classification of microalgae and the characterization of the predominant taxonomic groups have a diversity of applications, such as understanding the structure of a phytoplankton community. A recent census of marine life [4] gathered research from more than 80 nations, and lasted one decade, in order to obtain a global map of predicted benthic biomass on the seafloor, phytoplankton included. Microalgae are classified in groups based on different characteristics, with huge morphological variations such as round, oval, cylindrical, and fusiform cells, as well as projections like thorns, cilia, etc. In addition to the taxonomic classification, phytoplankton organisms can be classified according to their sizes: picoplankton (\(0.2\)–\(2~\upmu \hbox {m}\)), nanoplankton (\(2\)–\(20~\upmu \hbox {m}\)), and microplankton (\(20\)–\(200~\upmu \hbox {m}\)). Studies of the specific composition, size structure and biomass of phytoplankton communities are being developed through the classic method of optical microscopy [13], in which an observer has to manually manipulate a small water sample, requiring more than a day for a complete analysis. The use of particle analyzers has been an important tool to obtain information about the aquatic environment. It intends to efficiently obtain data about the density, composition and morphometry of phytoplanktonic organisms. Typically, this automatic equipment is composed of an optical system capable of distinguishing microalgae from other particles in the sample and capturing images, along with software that assists in the classification and visualization of the cells. An automatic, or even semi-automatic, approach to classifying microalgae would greatly benefit research on this topic. This work presents an approach to an automatic/semi-automatic classification of microalgae based on machine learning algorithms. The proposed approach combines two types of learning: semi-supervised and active.
The first assumes that only a small part of the data has a known ranking a priori, and tries to use information from non-ranked data to improve the classification. The second, active, searches the non-ranked data for the instance that provides the most information gain, and then asks the user for the rank of that instance. In this work, both learning types were combined to improve microalgae classification. The process is initialized with semi-supervised learning, and then is improved using active learning. In order to acquire the microalgae data, a FlowCAM particle analyzer [15] was used. It is capable of obtaining information concerning microorganisms in water samples. Four experts analyzed and ranked the obtained data in order to validate the proposed approach. Most studies found in the literature try to classify plankton, which, although not exactly the focus of this work, shares some similarities with our goal. Blaschko et al. [1] presented a comparison of supervised approaches to learning and classifying plankton. Those approaches are used to classify larger organisms than the targets of this work, thus presenting a greater number of relevant features, facilitating the learning process. Furthermore, those approaches used extensive supervised data, which makes them very costly and not extensible. Finally, Blaschko et al. [1] also used the FlowCAM and the best results obtained were around \(70~\%\). Another work of interest was proposed by Xu et al. [21], which uses a restricted set of supervised data classified with an SVM classifier, using non-ranked data to improve the learning. Although the presented approach is adequate for this work, it does not use experts as an information source. They obtain the information through a density method technique, which is sensitive to the microalgae size. Due to the small size of the microalgae used in this study, the amount of information is reduced, which makes this approach unfeasible. The work of Sosik and Olson [19] used the FlowCytobot equipment to extract features from the phytoplankton, in a process similar to the FlowCAM. The results obtained in the automatic classification were around 68 and \(99~\%\), depending on the type of organism classified. The least significant results were obtained for smaller plankton, which are the focus of this work. Another work, from Hu and Davis [14], uses co-occurrence matrix techniques and SVM to classify plankton. Using both supervised learning techniques, they obtained around \(72~\%\) accuracy. The problem of classification of microalgae was addressed in the work of Drews, Jr., et al. [9], where Gaussian mixture models were used together with semi-supervised and active learning. The present work is an extension of that previous work, where the methodology is detailed. Furthermore, we present and discuss more thorough experimental data acquired using the FlowCAM. As explained in Sect. 1, this work uses an approach based on the combination of two learning types, semi-supervised and active, with the objective of classifying microalgae. The first step of the work was to obtain the data of the microalgae using the FlowCAM. Given a water sample, this equipment is capable of finding and analyzing microalgae in order to identify up to 26 different features to compose the databases used in this work. This work used only seven of these features: ESD diameter, ESD volume, width, length, aspect ratio, transparency, and CH1 peak. We selected the best of these features using the approach proposed by Peng et al.
[16]; the method is an optimal first-order approximation of the mutual information criterion. The selected features are in accordance with the FlowCAM software manual [11], which defines these seven features as good features in general cases. Four experts analyzed and classified these datasets in order to generate a ground truth to validate the proposed approach. Fig. 1 shows the FlowCAM interface.

FlowCAM interface [11]—The interface is divided into two windows. On the left, the Visual Spreadsheet is shown, where tables, graphics and histograms illustrate some statistics about the dataset. On the right, the View Sample window shows the microalgae images. The classification mechanism provided by FlowCAM is too simple, being restricted to selecting limit values for features

The first step of the proposed method is the development of a semi-supervised algorithm to classify the microalgae. In this step, the algorithm receives as input just a small sample of ranked data, wherein at least one instance of each class needs to be provided. This allows the algorithm to identify and cluster microalgae with similar characteristics, creating a model of the microalgae classes. This model allows new, non-ranked instances to be observed and classified through their characteristics, updating the model simultaneously. When the semi-supervised algorithm finishes, the active algorithm analyzes the instances that were not ranked and searches among them for the one that provides the largest information gain for the model. In order to identify which instance this is, three methods were used: least confident sampling, margin sampling and entropy-based sampling. Then, the chosen instance is presented to the user, who will indicate the class to which it belongs. This class is incorporated into the model, which is then updated and tries to classify the other non-ranked instances. This process is repeated as long as the user finds it to be favorable or until the information gain is too small. Figure 2 illustrates the described process. In the following sections, we describe the semi-supervised and active learning algorithms.

Proposed approach

Semi-supervised learning

Due to the nature of the data used in this work, where the instances have similar characteristics when they belong to the same class, it is costly to rank a large set of instances. Thus, it favors an approach that uses clustering to classify microalgae. Furthermore, as the number of classes, i.e., species of microalgae in a sample, is known and small, and the classes are relatively well separated, the use of the Gaussian mixture model (GMM) with expectation-maximization (EM) becomes a natural choice [7].

Gaussian Mixture Models

The Gaussian mixture model (GMM) is a probability density function (PDF) given by a linear combination of Gaussian PDFs. More specifically, the function is a mixture of Gaussian PDFs if it has the following form: $$\begin{aligned} p(x|K,\theta )=\sum _{k=1}^{K}\pi _k \,\mathcal{N }(x|\mu _k,{\varvec{\Sigma }}_k), \end{aligned}$$ where \(K\) is the number of Gaussian PDFs and \(\pi _k\) is the weight of each one in the mixture. This weight can be interpreted as the a priori probability that the random variable value was generated by the Gaussian \(k\).
Considering \(0\le \pi _k\le 1\) and \(\sum _{k=1}^K\pi _k=1\), the GMM can be defined by the parameter list \(\theta \), which represents the parameters of each Gaussian and their respective weights, i.e., \(\theta =\{\pi _1,\mu _1,{\varvec{\Sigma }}_1,\ldots ,\pi _K,\mu _K, {\varvec{\Sigma }}_K\}\), where \(\mu \) and \({\varvec{\Sigma }}\) are the mean and the covariance matrix, respectively. The problem of estimating the Gaussian mixture lies in determining \(\theta \), given that only \(K\) and the data are known and the other parameters are unknown (\(\pi _k\) and \(\theta _k=(\mu _k,{\varvec{\Sigma }}_k)\)). Let \(Y=\{y_1,\ldots ,y_n,\ldots ,y_N\}\), with \(y_n \in \mathbb{R }^M\), be the independent sample set, where \(M\) is the size of the data sample space and \(N\) is the number of samples. In this work, \(y_n\) represents the dataset instances, the microalgae. It is possible to estimate the probability \(p (y_n|K,\theta )\) directly for each \(K\). However, the logarithm of the probability is normally used for ease of numerical handling. Thus, we have: $$\begin{aligned} {\hat{\theta }}=\underset{\theta ,K}{\mathrm{argmax }}\,\log p (y|K,\theta ). \end{aligned}$$ Solving Eq. 2 is not an easy task [8, 10]. The number of variables to be estimated can grow exponentially with the size of \(K\) and \(\theta \), thus making the computation very costly. We used the EM algorithm to solve this problem.

EM algorithm

The EM algorithm is used to determine the class of each sample [7]. The algorithm aims to solve problems in which we do not know all the information needed for the solution. The algorithm is composed of two steps:

E-step: In this step, the missing data are estimated using the observed data and the current estimates of the model parameters.

M-step: The likelihood function is maximized, considering that the missing data are known.

The EM algorithm seeks to classify the \(y_n\) data into classes, or Gaussians, and, later, to re-estimate the parameters of each class. Using Bayes' rule, the probability that a point \(y_n\) belongs to class \(k\) is computed. Considering \(\theta ^{(i)}\) to be the \(\theta \) value at iteration \(i\) of the algorithm, known in this step, the E-step probability is given by: $$\begin{aligned} p(k| y_n, \theta ^{(i)}) = \frac{\pi _k \cdot \mathcal{N }(y_n|\theta _k^{(i)})}{\sum _{l=1}^{K} \pi _l \cdot \mathcal{N }(y_n|\theta _l^{(i)})}. \end{aligned}$$ Calculating these probabilities makes it possible to estimate \(\theta \) and \(\pi \). The equations below show how each value is estimated in the maximization step (M-step). First, a normalizing parameter \({\overline{N}}_k\) is estimated, followed by the posterior estimation of new values for \({\overline{\pi }}_k,{\overline{\mu }}_k,{\overline{{\varvec{\Sigma }}_k}}\). From this, the M-step update equations can be defined: $$\begin{aligned} \overline{N}_k&= \sum _{n=1}^N p(k| y_n, \theta ^{(i)}), \end{aligned}$$ $$\begin{aligned} \overline{\pi }_k&= \frac{\overline{N}_k}{N}, \end{aligned}$$ $$\begin{aligned} \overline{\mu }_k&= \frac{1}{\overline{N}_k} \sum _{n=1}^N y_n p(k| y_n, \theta ^{(i)}) , \end{aligned}$$ $$\begin{aligned} \overline{{\varvec{\Sigma }}}_k&= \frac{1}{\overline{N}_k} \sum _{n=1}^N (y_n-\overline{\mu }_k)\cdot (y_n-\overline{\mu }_k)^\mathrm{T} \, p(k| y_n, \theta ^{(i)}). \end{aligned}$$ The initialization of the algorithm, i.e., \(\theta ^{(0)}\), is critical for good performance.
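A compact R sketch of these E- and M-step updates is given below. It also clamps the responsibilities of ranked (labelled) instances to their class, anticipating the semi-supervised variant described next; it is a simplified illustration rather than the authors' implementation, and it assumes every class has at least one ranked instance.

```r
# Simplified semi-supervised EM for a Gaussian mixture (not the authors' code).
# Y: numeric N x M feature matrix; K: number of classes; labels: integer vector
# in 1..K with NA for non-ranked instances (at least one instance per class).
library(mvtnorm)  # for dmvnorm()

em_gmm <- function(Y, K, labels, n_iter = 50) {
  N <- nrow(Y); M <- ncol(Y)
  # Initialisation: ranked instances keep their class, the rest are random
  assign0 <- ifelse(is.na(labels), sample(1:K, N, replace = TRUE), labels)
  pi_k  <- as.numeric(table(factor(assign0, levels = 1:K))) / N
  mu    <- t(sapply(1:K, function(k) colMeans(Y[assign0 == k, , drop = FALSE])))
  Sigma <- replicate(K, cov(Y) + diag(1e-6, M), simplify = FALSE)

  for (it in 1:n_iter) {
    # E-step: responsibilities p(k | y_n, theta), as in Eq. 3
    resp <- sapply(1:K, function(k) pi_k[k] * dmvnorm(Y, mu[k, ], Sigma[[k]]))
    resp <- resp / rowSums(resp)
    lab  <- which(!is.na(labels))        # clamp ranked instances: 100 % / 0 %
    resp[lab, ] <- 0
    resp[cbind(lab, labels[lab])] <- 1
    # M-step: weighted updates of pi, mu and Sigma, as in Eqs. 4-7
    Nk   <- colSums(resp)
    pi_k <- Nk / N
    for (k in 1:K) {
      mu[k, ]    <- colSums(resp[, k] * Y) / Nk[k]
      D          <- sweep(Y, 2, mu[k, ])
      Sigma[[k]] <- crossprod(D * sqrt(resp[, k])) / Nk[k] + diag(1e-6, M)
    }
  }
  list(pi = pi_k, mu = mu, Sigma = Sigma, resp = resp)
}
```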
In this work, the initialization is done based on the available ranked data, generating an initial model using random initialization. Thereafter, this model is updated using the information from non-ranked data. This approach has the major advantage of ensuring that data labeled as distinct classes remain that way. The approach of Zhu and Goldberg [22] was used to estimate the GMM from ranked and non-ranked data. The ranked data are treated distinctly in the E-step. This way, the ranked data have their probability set to \(100~\%\) for their class and \(0~\%\) for the other classes.

After executing the semi-supervised learning algorithm, it is possible to divide the dataset into two groups: ranked instances and non-ranked instances. Considering \(X=\{x_1,\ldots ,x_n,\ldots ,x_N\}\) as the set of non-ranked instances and \(k\) the possible classes, the active algorithm must find an \(x_i \in X\) that maximizes the amount of information added to the system when it is classified as \(k_j\). In order to define which instance \(x_i\) is going to be presented to the user, three metrics were used to calculate the information contained therein. The three metrics, based on the work of Settles [18] and Friedman et al. [12], are described below:

Least-confident sampling: involves choosing the instance with the lowest probability of belonging to the class with the highest probability. The instance \(x\) to be chosen is the one that: $$\begin{aligned} x=\underset{i}{\mathrm{argmin }}\; p(z_i={\hat{k}}|x_i) \end{aligned}$$ where \({\hat{k}}=\hbox {argmax}_{\,k}\; p(z_i=k|x_i)\) is the class with the highest probability.

Margin sampling: involves choosing the instance with the smallest margin between the most probable class and the second most probable class. The instance \(x\) to be chosen is the one that: $$\begin{aligned} x = \underset{i}{\mathrm{argmin }} \; [p(z_i = {\hat{k}}_1 | x_i) - p(z_i = {\hat{k}}_2 | x_i)] \end{aligned}$$ where \({\hat{k}}_1\) and \({\hat{k}}_2\) are the most likely classes.

Entropy-based sampling: involves choosing the instance with the highest entropy of the class probabilities. The instance \(x\) to be chosen is the one that: $$\begin{aligned} x=\underset{i}{\mathrm{argmax }}-\sum _k p(z_i \!=\! k | x_i)\log \; p(z_i=k|x_i)\qquad \end{aligned}$$

After defining which instance is the most informative, the user must inform the system of its rank. This classification is used by the EM algorithm in order to find the best model for the data, ranked or non-ranked, with this new information. Such a model is initialized with the best representation obtained up to the present moment.

The results were obtained using two different datasets acquired with the FlowCAM equipment. The Oceanographic Institute of FURG collected the data during an oceanographic expedition on the Atlantic Ocean at different places and depths. In order to validate the results, four different experts classified these datasets. Doubtful data were eliminated; typically, they were very small microalgae, around \(1~\upmu \hbox {m}\), or very large microalgae, which resulted from acquisition problems of the FlowCAM or were microalgae colonies. Figure 3 illustrates some data excluded during the process.

Some examples of microalgae acquired by FlowCAM that were excluded due to acquisition problems or the presence of microalgae colonies. The presence of colonies is due to a failure in the segmentation process of FlowCAM.
These failures are common due to the acquisition process of the FlowCAM device

The first dataset was classified into four different classes: flagellates (Fig. 4a), mesopores (Fig. 4d), pennate diatoms (Fig. 4c) and others (Fig. 4b). An important characteristic, usually found in this kind of data, is the unbalance of the classes. The flagellates and others classes represent more than 90 % of the data. Furthermore, as shown in Fig. 4a and b, these are small-sized data with few characteristics, which makes the classification problem difficult to solve.

Examples of the four classes of microalgae acquired by FlowCAM in the first dataset. This dataset was classified into four different classes: a flagellates, c pennate diatoms, d mesopores, and b others. This figure shows some important characteristics of this data, such as the unbalance and the reduced information about each microalga

The second dataset was classified into four different classes: pennate diatoms (Fig. 5a), flagellates (Fig. 5b), gymnodinium (Fig. 5c) and prorocentrales (Fig. 5d), respectively. Both datasets have two similar species of microalgae and two different ones. This is due to the different places and, mainly, depths where the samples were acquired. The characteristics of the data are similar: both datasets are unbalanced and contain small-sized data.

Examples of the four classes of microalgae acquired by FlowCAM in the second dataset. This dataset was classified into four different classes: a pennate diatoms, b flagellates, c gymnodinium and d prorocentrales. As in the previous figure, it shows some important characteristics of this data, such as the unbalance and the reduced information about each microalga

In order to validate the proposed approach, we used some evaluation metrics. As there are multiple classes, the metrics need to deal with this kind of information. We used the F-score metric [17], defined by Eq. 11, which is the harmonic mean between the recall \((r)\) and the precision \((p),\) defined by Eq. 12. $$\begin{aligned} F_\beta =(1+\beta ^2)\frac{pr}{(\beta ^2 p)+r}, \end{aligned}$$ where \(\beta \) is a constant factor. In the present work, \(\beta \) was equal to 1, yielding the F1-score metric. $$\begin{aligned} r_k=\frac{Tp_k}{Tp_k+Fn_k},\quad p_k=\frac{Tp_k}{Tp_k+Fp_k}, \end{aligned}$$ where \(Tp_k\) is the number of correctly classified microalgae for class \(k\); \(Fp_k\) is the number of false positives, the number of microalgae that were wrongly classified as class \(k\); \(Fn_k\) is the number of false negatives, the number of microalgae that are from class \(k\), but were assigned to another class; \(k\) is the microalgae class. These metrics are defined for each class. The F1-score values are defined on the interval \((0,1)\); values near one represent a better classification, while small values, near zero, represent a low classification quality. However, to evaluate the performance across all classes, the micro-average and macro-average metrics were used [20]. These metrics evaluate the average performance of the classifier, based on the precision and recall metrics. The macro-average metric gives an average where every class is treated with the same importance, while the micro-average metric gives an average where every microalga is treated with the same importance. It is important to evaluate these two metrics due to the fact that the micro-average is more influenced by the classifier performance on classes with large samples, while the macro-average is more influenced by classes with fewer samples.
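The metrics above can be computed directly from the predicted and true labels; the minimal R sketch below uses one common convention for the macro-averaged F1 (the mean of the per-class F1 scores), which may differ slightly from the exact averaging used by the authors.

```r
# Hedged sketch of the evaluation metrics: per-class precision/recall, the
# macro-averaged F1 (maxF1, here the mean of per-class F1 scores) and the
# micro-averaged F1 (minF1), which equals the accuracy in the multi-class case.
class_metrics <- function(truth, pred) {
  classes <- levels(truth)
  tp <- sapply(classes, function(k) sum(pred == k & truth == k))
  fp <- sapply(classes, function(k) sum(pred == k & truth != k))
  fn <- sapply(classes, function(k) sum(pred != k & truth == k))
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)
  f1        <- 2 * precision * recall / (precision + recall)
  list(maxF1 = mean(f1, na.rm = TRUE),     # macro-average
       minF1 = sum(tp) / length(truth))    # micro-average = accuracy
}
```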
Using both averaging schemes, the F1-score was evaluated. It is called maxF1 when obtained using the macro-average and minF1 when obtained using the micro-average. In the case of multiple classes, the minF1 has the same value as the metric known as accuracy (Ac), which is defined by Eq. 13. Thus, this work uses these two metrics: accuracy, or minF1, and maxF1. $$\begin{aligned} \hbox {Ac}=\frac{\sum _{k}Tp_k}{N}, \end{aligned}$$ where \(N\) is the total number of samples in the database and \(\sum _k\) denotes the sum over all classes.

Results were obtained in order to validate the approach using these two completely classified datasets. The first dataset is composed of 1,526 microalgae divided into four classes, as previously described, with 1,003 (flagellates), 500 (others), 14 (pennate diatoms) and 9 samples (mesopores). The second dataset is composed of \(923\) microalgae. It is also divided into four classes, as previously described, with \(112\) (pennate diatoms), \(669\) (flagellates), \(65\) (gymnodinium) and \(77\) samples (prorocentrales). From these datasets, smaller classified bases were randomly generated, with approximately 1, 3, 5, 10, 20 and \(50~\%\) of the original dataset, where each class should have at least one sample. In order to obtain quantitative results, ten different instances were generated for each percentage. Forty-eight samples were actively selected and classified.

Evaluation of the active learning

Firstly, the active learning capabilities were evaluated using three different metrics, least confident sampling, margin sampling and entropy-based sampling, compared with a random selection. The results obtained in the first dataset are shown in Fig. 6. It shows the results for 1, 10 and \(50~\%\) of initial supervision using both evaluation metrics: accuracy and maxF1.

Comparison of active learning metrics against a random selection using the first dataset, with results showing the mean and standard deviation for the datasets with ten different bases. The vertical axis represents the accuracy and the horizontal axis represents the number of active samples informed to the system. a Accuracy for 1 % of initial semi-supervision, b MaxF1 for 1 % of initial semi-supervision, c accuracy for 10 % of initial semi-supervision, d MaxF1 for 10 % of initial semi-supervision, e accuracy for 50 % of initial semi-supervision, f MaxF1 for 50 % of initial semi-supervision (color figure online)

Figure 6a and b show the results for \(1~\%\) of initial semi-supervision, in which the random selection presents only a small rise in accuracy and maxF1 with the addition of new samples. On the other hand, the other metrics had a significant improvement, especially in accuracy, which means a better classification independently of the classes. In Fig. 6c and d, the results for \(10~\%\) of initial supervision are shown. It can be noted that the accuracy starts at a higher value than with \(1~\%\) of semi-supervision and increases more smoothly for all metrics and for the random selection. For the results obtained with \(50~\%\), the variance is even smaller, as shown in Fig. 6e and f. Moreover, in this case, the active learning presents a small improvement for accuracy and maxF1. The random selection can be seen as a semi-supervised addition of samples; thus, it can be noted that the active learning represents a significant gain, especially when there is little initial information. Considering the second dataset, the results are similar to the ones obtained with the first dataset.
One important difference between the datasets is the mean accuracy and maxF1. The second dataset has different classes of microalgae, and the intraclass variability is larger than in the first dataset. Thus, it is harder to classify. Figure 7 shows the results for 1, 3 and \(5~\%\) of initial supervision. It is interesting to see in Fig. 7a and b that the random selection of samples to classify in the active learning presents better results than the statistical techniques. This improvement happens after five samples and remains until the end of the active learning. The main reason for these results is the large intraclass variance in this dataset. Thus, the system is not able to classify with a small number of supervised samples. In this case, the statistical selection falls into "local minima". Here, the random selection chooses samples that improve the results, while the statistical methods choose samples that obtain only a small improvement in relation to the random one.

Comparison of active learning metrics against a random selection using the second dataset, with results showing the average and standard deviation for the datasets with ten different instances. The vertical axis represents the accuracy and the horizontal axis represents the number of active samples informed to the system. a Accuracy for 1 % of initial semi-supervision, b MaxF1 for 1 % of initial semi-supervision, c accuracy for 3 % of initial semi-supervision, d MaxF1 for 3 % of initial semi-supervision, e accuracy for 5 % of initial semi-supervision, f MaxF1 for 5 % of initial semi-supervision (color figure online)

In Fig. 7c and d, the results obtained by all selection methods are similar, with the maxF1 metric of the random selection being worse than the others. Considering \(5~\%\) of initial supervision, the statistical methods are better than random selection, as shown in Fig. 7e and f. These results hold for 10, 20 and \(50~\%\). The entropy-based sampling obtains a small advantage over the other metrics in all cases. Figure 8 shows the accuracy results for each of the ten generated bases from both datasets, considering a semi-supervision of 1, 3 and \(10~\%\). The vertical axis represents the accuracy and the horizontal axis represents the number of active samples informed to the system. The accuracy results for the semi-supervised learning can be seen at zero on the horizontal axis. As expected, the results show that with a small semi-supervision, the accuracy is governed by the chosen samples, and as the number of active samples increases, the variance decreases. The first two columns in this figure show the results using entropy for both datasets, and the last column shows the results using random selection for the second dataset.

Evaluation of the different instances of the semi-supervised data. In order to obtain statistical results, we generated ten different instances for each supervision percentage. The accuracy for all ten instances is shown using the proposed method with entropy, in the first two columns, and random selection, in the last column. The visualization is improved using different colors. The vertical axis represents the accuracy and the horizontal axis represents the number of active samples informed to the system. It is important to call attention to the vertical axis, where the intervals are different between the results from the first and the second datasets. a Results for 1 % of semi-supervision in the first dataset. b Results for 1 % of semi-supervision in the second dataset.
c Results for 1% of semi-supervision in the second dataset. d Results for 3% of semi-supervision in the first dataset. e Results for 3% of semi-supervision in the second dataset. f Results for 3% of semi-supervision in the second dataset. g Results for 10% of semi-supervision in the first dataset. h Results for 10% of semi-supervision in the second dataset. i Results for 10% of semi-supervision in the second dataset (color figure online)

Figure 8a and b show the difference in accuracy obtained by the proposed methodology for both datasets using 1% of supervision. The second dataset is harder to classify, so the results include bases with approximately 65% accuracy. These two figures also show an interesting characteristic of 1% supervision: some samples are able to improve the accuracy by more than 5%, as with the base in black in Fig. 8a and in blue in Fig. 8b. The same phenomenon appears, on a larger scale, in Fig. 8c, where random selection is used. It mainly occurs in bases whose initial accuracy is low, i.e., bases composed of unrepresentative instances; in such cases a single representative sample can markedly improve the ability of the system to classify the remaining unlabelled data correctly. This explains the results in Fig. 7a and b. Figure 8d and e show the results for each base with 3% of supervision. In this case, the first dataset reaches better accuracy than the second. The behaviour of the results is similar, with almost all bases showing a small increase in accuracy with active learning. Both datasets also contain one base with low initial accuracy. This base, as previously explained, is sensitive to random selection, which produces steps in the accuracy, as shown in Fig. 8f. The other bases are less sensitive to random selection, for which the increase in accuracy is almost zero, whereas entropy-based sampling is able to select good samples that increase the accuracy. In Fig. 8d there are two extreme cases: in the first, in cyan, 92% accuracy is obtained with little supervision, while in the second, in red, only 86% accuracy is reached. All instances improve when new samples are actively selected; this is clearest for the red instance, which goes from 86% to almost 90%. The same behaviour occurs in both datasets. The results obtained using 10% of supervision with entropy selection are shown in Fig. 8g and h. As in the previous results, the first dataset shows better accuracy than the second. The newly selected samples improve the classification only slightly, because with such a large group of supervised samples each new sample adds little information. Figure 8i shows that random selection also yields a very small improvement in accuracy: only a few informative samples remain to be selected and the random method has little chance of picking them. Despite this, entropy-based sampling still selects informative samples, as shown by the increase in accuracy of almost all bases in Fig. 8g and h. In order to evaluate whether the obtained performance was adequate, the results were compared with the support vector machine (SVM) [3] algorithm, which is considered the state of the art in supervised learning and classification.
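As a minimal sketch of how such a supervised baseline could be set up, the snippet below uses scikit-learn's SVC; this is an assumption for illustration, since the authors used the libSVM implementation directly with a radial basis kernel and default parameters, as described next:

```python
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

def svm_baseline(X_train, y_train, X_test, y_test):
    """Train an RBF-kernel SVM on the supervised subset only and report
    accuracy (minF1) and macro-averaged F1 (maxF1) on the held-out samples."""
    clf = SVC(kernel="rbf")              # radial basis kernel, default parameters
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    return (accuracy_score(y_test, y_pred),
            f1_score(y_test, y_pred, average="macro"))
```

In the comparison reported below, this supervised baseline is trained on the same labelled fraction that initializes the semi-supervised GMM/EM classifier.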
The libSVM implementation [2] was used with a radial basis kernel function, which gave the best results; all other parameters were kept at their defaults. Figure 9 shows the results with active learning treated as an addition of supervision. The results obtained using only semi-supervised learning are shown in black. The results after applying active learning are shown in red, at the correspondingly higher supervision percentage. The blue line links each initialization of the active learning to its result after forty-eight actively labelled instances, expressed in percentage terms.

Comparison of semi-supervised learning, in black, against active learning using entropy, in red. Since semi-supervised learning is used as the initial step of the active learning, the semi-supervised result is linked to the active learning result by a blue line. The results obtained with the SVM, trained on the same supervised data used in the semi-supervised approach, are shown in green. The mean and standard deviation are estimated and illustrated in the figure. Two metrics are evaluated: accuracy and maxF1. a Accuracy comparison in the first dataset. b MaxF1 comparison in the first dataset. c Accuracy comparison in the second dataset. d MaxF1 comparison in the second dataset (color figure online)

Figure 9a and b show the semi-supervised learning, active learning using entropy, and SVM results for the accuracy and maxF1 metrics on the first dataset. The SVM shows only a small accuracy improvement as supervision increases, although it outperforms the semi-supervised algorithm alone; only at 50% of semi-supervision does the presented approach reach better accuracy, while the active learning attains results similar to those of the SVM. On the other hand, the proposed approach gives clearly superior maxF1, owing to the imbalance between the classes. The SVM classified the flagellates class, which has 1,003 samples, very well, but did not assign any sample to the mesopores class, which has only 9 samples. This is reflected differently in the accuracy and maxF1 metrics: accuracy only counts the number of correctly classified samples, whereas maxF1 accounts for the samples classified in each class. Moreover, researchers are interested in classifying samples of all classes, especially those with few microalgae. The gain obtained by active learning decreases as the amount of semi-supervision increases; this effect is seen in both metrics, accuracy and maxF1. For the second dataset the results are similar, with small differences, as shown in Fig. 9c and d. Owing to the large intraclass variance, the SVM obtains better accuracy only up to 5% of supervision; beyond that, the semi-supervised algorithm obtains better results. The maxF1 metric exposes the main problem of the SVM results: the method has difficulty correctly classifying unbalanced datasets, and imbalance is a natural characteristic of this kind of data. The accuracy obtained in the second dataset is lower than in the first. However, the accuracy for bases with 50% of supervision is greater in the second dataset than in the first, reaching 95% after active learning. Unlike the first dataset, the maxF1 continues to increase with active learning even beyond 10% of initial supervision, as shown in Fig. 9d.
This can be seen from the inclination of the blue line at 50% between the semi-supervised learning and the active learning results.

This work proposed an approach for the classification of microalgae using a combination of semi-supervised and active learning algorithms. In the proposed approach, the semi-supervised classification is performed using Gaussian mixture models together with the expectation-maximization algorithm, and this classification is then improved by active learning. Two metrics, accuracy and maxF1, were used to validate the proposed approach, which presented favorable results for both, achieving around 92% accuracy. The approach was compared with a state-of-the-art supervised learning algorithm, the SVM, giving similar accuracy and much better maxF1. Three information evaluation metrics were presented for the active learning, which produced similar results with a small advantage for entropy-based sampling. The results show that the use of active learning improves the accuracy and the maxF1 with few samples. These results are relevant because, according to Culverhouse et al. [6], the hit rate achieved by human experts remains between 67 and 83%. As a future direction, we intend to investigate other methods capable of dealing with a larger number of classes and more data, in order to generate a database of classified microalgae. Another improvement is to develop an adaptive model that automatically determines the number of classes. Finally, the features obtained by the FlowCAM are limited and, as shown in this work, suffer from problems in the segmentation of microalgae; we will therefore study image processing approaches to improve the segmentation and increase the number of relevant features of the samples.

This size depends on the environment. Typically, there are around ten different classes.

Blaschko MB, Holness G, Mattar MA, Lisin D, Utgoff PE, Hanson AR, Schultz H, Riseman EM, Sieracki ME, Balch WM, Tupper B (2005) Automatic in situ identification of plankton. In: IEEE workshops on application of computer vision (WACV), Breckenridge, CO, USA, pp 79–86
Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2(3):1–27
Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297
Costello MJ, Coll M, Danovaro R, Halpin P, Ojaveer H, Miloslavich P (2010) A census of marine biodiversity knowledge, resources, and future challenges. PLoS One 5(8):e12110
Cullen JJ, Franks PJS, Karl DM, Longhurst A (2002) Physical influences on marine ecosystem dynamics. In: The sea, vol 12, chap 8, pp 297–336
Culverhouse P, Williams R, Reguera B, Herry V, González-Gil S (2003) Do experts make mistakes? A comparison of human and machine identification of dinoflagellates. Mar Ecol Prog Ser 247:17–25
Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B 39(1):1–38
Drews-Jr P, Núñez P, Rocha R, Campos M, Dias J (2010) Novelty detection and 3D shape retrieval using superquadrics and multi-scale sampling for autonomous mobile robot. In: Proceedings of the IEEE international conference on robotics and automation-ICRA, Anchorage, Alaska, USA, pp 3635–3640
Drews-Jr P, Colares RG, Machado P, de Faria M, Detoni A, Tavano V (2012) Aprendizado ativo e semi-supervisionado na classificacao de microalgas (in Portuguese).
In: IX Encontro Nacional de Inteligência Artificial-ENIA, Curitiba, Brazil Drews-Jr P, Silva S, Marcolino L, Núñez P (2013) Fast and adaptive 3D change detection algorithm for autonomous robots based on Gaussian mixture models. In: Proceedings of the IEEE international conference on robotics and automation-ICRA, Karlsruhe, Germany, pp 4670–4675 Fluid Imaging Technologies Inc (2011) FlowCAM manual, 3rd edn. 65 Forest Falls Drive, Yarmouth, Maine, USA Friedman A, Steinberg D, Pizarro O, Williams SB (2011) Active learning using a variational Dirichlet process model for pre-clustering and classification of underwater stereo imagery. In: IEEE/RSJ international conference on intelligent robots and system-IROS, IEEE, pp 1533–1539 Hamilton P, Proulx M, Earle C (2001) Enumerating phytoplankton with an upright compound microscope using a modified settling chamber. Hydrobiologia 444(1):171–175 Hu Q, Davis C (2006) Accurate automatic quantification of taxa-specific plankton abundance using dual classification with correction. Mar Ecol Prog Ser 306:51–61 Jakobsen H, Carstensen J (2011) FlowCAM: sizing cells and understanding the impact of size distributions on biovolume of planktonic community structure. Aquat Microb Ecol 65:75–87 Peng H, Long F, Ding C (2005) Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans Pattern Anal Mach Intell 27(8): 1226–1238. doi:10.1109/TPAMI.2005.159 van Rijsbergen CJ (1979) Information retrieval. Butterworth-Heinemann, Glasgow Settles B (2009) Active learning literature survey. Technical Report 1648, Computer Sciences. University of Wisconsin-Madison Sosik HM, Olson RJ (2007) Automated taxonomic classification of phytoplankton sampled with imaging in-flow cytometry. Limnol Oceanogr Methods 5:204–216 Veloso A, Meira W Jr, Cristo M, Gonçalves M, Zaki M (2006) Multi-evidence, multi-criteria, lazy associative document classification. In: ACM international conference on Information and knowledge management, pp 218–227 Xu L, Jiang T, Xie J, Zheng S (2010) Red tide algae classification using SVM-SNP and semi-supervised FCM. In: 2nd International conference on education technology and computer-ICETC, pp 389–392 Zhu X, Goldberg AB (2009) Introduction to semi-supervised learning. Synth Lect Artif Intell Mach Learn 3(1):1–130 Centro de Ciências Computacionais, Universidade Federal do Rio Grande (FURG), Rio Grande, RS, Brazil Paulo Drews- Jr., Pablo Machado & Matheus de Faria Departamento de Ciência da Computação, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG, Brazil Rafael G. Colares Instituto de Oceanografia, Universidade Federal do Rio Grande (FURG), Rio Grande, RS, Brazil Amália Detoni & Virgínia Tavano Paulo Drews- Jr. Pablo Machado Matheus de Faria Amália Detoni Virgínia Tavano Correspondence to Paulo Drews- Jr.. Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Drews-, P., Colares, R.G., Machado, P. et al. Microalgae classification using semi-supervised and active learning based on Gaussian mixture models. J Braz Comput Soc 19, 411–422 (2013). https://doi.org/10.1007/s13173-013-0121-y Received: 12 November 2012 Issue Date: November 2013 Microalgae classification
Earth Perspectives Transdisciplinarity Enabled A comparative analysis of reference evapotranspiration from the surface of rainfed grass in Yaounde, calculated by six empirical methods against the penman-monteith formula Valentine Asong Tellen1,2 Earth Perspectives volume 4, Article number: 4 (2017) Cite this article Six reference evapotranspiration (ETo) methods including: Papadakis (1966), Turc (1961), Blaney and Criddle (1950), Blaney and Criddle modified by Shih et al. (1977), Penman modified by Frere and Popov (1979) and Stephens and Stewart (1963) modified by Jansen and Haise, were compared with the FAO-56 Penman-Monteith formula using rain-fed grass data within the period of 15 years (1967 to 1982) in Yaoundé, extracted from the records of climatological observations from meteorological stations published by the National Meteorological Center of Cameroon. The methods compared daily ETo using linear regression and statistical indices of a quantitative approach to model performance evaluation. The average FAO-56 PM ETo was 3.16 mm/day, but Penman modified by Frere and Popov (1979) overestimated ETo by 25% (12.72 mm/day) and Papadakis (1966) underestimated ETo by 8% (0.28 mm/day). In general, the Stephens and Stewart (1963) modified by Jansen and Haise method produced best statistics result (R2 = 0.96, RMSE = 0.072, MBE = -1.260, RMSEs = 0.980 and RMSEu = 0.693) and generated ETo results of 2.76 mm/day (2% underestimate), closest to that of FAO-56 PM method. The results of statistical comparisons delivered a confident statistical justification for the ranking of the compared methods based on performance indices. Water deficit is a key regulating element of plant growth and development under the panoply of environmental conditions around the world. Irrigation offers an important feature of agricultural (cropping design) scheduling in areas where rainfall is insufficient to meet crop water requirements. Nevertheless, under rainfed agriculture, it is crucial to determine the optimum time for sowing to benefit from the existing soil moisture and precipitation, while under irrigated agriculture, it is imperative to establish the required amount of water and time of application. Generally, the determination of irrigation water demand employs the procedures for estimating evapotranspiration (E), which (apart from precipitation) is an important component of the hydrological budget [2]. The latter reckoned that weather stations must be sited on standardized vegetation surfaces (grass or alfalfa) in order to calculate crop reference or potential evapotranspiration (ETo). The high installation and maintenance cost constraints limits many nations in Sub-Saharan Africa from having vegetation reference sites or installed ETo-networks. Consequently, the systematic usage of incongruous climatic data for calculating ETo from sites that do not tally with standardization definitely results to systematic aggregate errors in irrigation scheduling and confounding conclusions. Over the years, empirically based equations were established and used under various climatic regimes to estimate ETo from climatological data. To test the accurateness of these methods in a different environment employs the use of costly equipments (such as lysimeters) and profesionals or standard reference surface conditions (protocols). Also, many of these equations have limited global validity. 
Intending to reduce the alterations associated with tree canopy characteristics, FAO and working groups of the International Commission on Irrigation and Drainage extensively tested different empirical model equations, and then recommended the standardized Penman-Monteith reference evapotranspiration as the potential evapotranspiration (ETo) for short grass or a tall reference crop (alfalfa), whose characteristics have been well defined [4, 18, 24]. Fortunately, there exist unconventional, empirical and less weather-data-demanding equations for accurate determination of ETo [18]. Since effective water resource planning and management, irrigation scheduling and controlled agricultural productivity rely on the accuracy of estimated values of ETo, numerous studies have been carried out in different parts of the world to ascertain appropriate models suitable for application in those areas. Hari et al. [10] estimated ET using four different methods for the Bapatla region in India and reported no significant differences and a similar trend between the FAO-56 Penman-Monteith and Blaney and Criddle methods. Similarly, Zarei et al. [27] compared several methods to estimate reference ET in the south east of Fars province, Iran and reported a strong similarity between the Pan Evaporation and Penman-Monteith methods. The latter reported a significant difference between Penman-Monteith and the Jensen-Haise and Thornthwaite methods, while the Penman-Monteith results did not differ significantly from those of Pan Evaporation, Hargreaves-Samani modified 2 and Blaney and Criddle methods. Oluwaseun et al. [14] evaluated four different methods for IITA stations in Ibadan, Onne and Kano states in Nigeria and reported that the Blaney-Morin-Nigeria method was the best for estimating ET at these stations, as it showed a strong correlation with the Penman-Monteith method, required a smaller volume of data, and was easy to use. Alexandris et al. [18] compared ET results from the surface of rain-fed grass in central Serbia, using six empirical methods against the Penman-Monteith method, and reported that Turc's and Makkink methods produced underestimates while the Priestley-Taylor, Hargreaves-Samani, and Copais methods performed well for the region and generated results closest to the Penman-Monteith method. Similar studies were carried out in Abeokuta, Ijebu-Ode and Itoikin in Nigeria [1] and in Bulgaria [19]; these and many other studies emphasized the necessity of comparison, sensitivity testing and calibration of methods in a local context. The objective of this paper is to compare and evaluate the performance of six widely used methods of ETo computation, namely: Papadakis [15], Turc [23], Blaney and Criddle [6], Blaney and Criddle modified by Shih et al. [21], Penman modified by Frere and Popov [9] and Stephens and Stewart [22] modified by Jansen and Haise [11]. The aim is to determine which method, apart from FAO56-PM [3], can best be applied in Yaounde for the estimation of ETo, is easiest to use in terms of the parameters required, and can accurately and consistently capture evapotranspiration losses in the region. This will guide irrigation engineers, hydrologists, agronomists and meteorologists in the calculation of reference and crop evapotranspiration as well as in estimating crop water demands for rain-fed and irrigated agriculture.
Description of the equations

FAO-56 Penman-Monteith [3]

This is a combination of the Penman [16, 17] method, which is based on the Bowen ratio principle (comprising radiation, wind and humidity factors), and the Monteith [12, 13] method, which considers the resistance factors (including surface resistance and aerodynamic resistance). The equation was used by Allen et al. [3] on an hourly basis with the resistance term held at a constant value of 70 s/m throughout the day and night, and the FAO-56 Penman-Monteith equation was recommended as the sole standard method for determining reference evapotranspiration in all climates, especially when data are available. The equation has the form:

$$ ETo=\frac{0.408\varDelta \left({R}_n-G\right)+\gamma \left(\frac{900}{T+273}\right){U}_2\left({e}_s-{e}_a\right)}{\varDelta +\gamma \left(1+0.34{U}_2\right)} $$

Where: ETo is the reference evapotranspiration (mm/day), Rn is the net radiation at the crop surface (MJ/m2/day), G is the soil heat flux density (MJ/m2/day), T is the mean daily air temperature at 2 m height (°C), U2 is the wind speed at 2 m height (m/s), es is the saturation vapour pressure (kPa), ea is the actual vapour pressure (kPa), (es − ea) is the saturation vapour pressure deficit (kPa), \( \mathrm{\triangle} \) is the slope of the vapour pressure curve (kPa/°C) and γ is the psychrometric constant (kPa/°C).

Turc [23]

Turc's method is based on the assumption that ETo (mm/month) is a function of the average relative humidity. The method calculates ETo using relative humidity, average temperature, and solar radiation. Since the relative humidity was >50%, the humidity correction term was ignored and the following formula was used:

$$ ETo= K\left( R_s+50\right)\frac{t}{t+15} $$

Where: K is a monthly crop coefficient, Rs is the solar radiation (MJ/m2/day), t is the average monthly air temperature, and ETo is the reference evapotranspiration (mm/day).

Papadakis [15]

The Papadakis method uses the saturation vapour pressures corresponding to monthly temperatures to estimate ETo (mm/month). The equation is presented as follows:

$$ ETo=0.5625\left({e}_{aTmax}-{e}_d\right) $$

Where eaTmax is the saturation vapour pressure corresponding to the average maximum temperature (kPa), and ed is the saturation vapour pressure corresponding to the dew-point temperature (kPa).

Stephens and Stewart [22] modified by Jansen and Haise [11]

The Stephens and Stewart [22] method is a radiation method adjusted for mean monthly temperature. It has essentially the same form as the original Jensen and Haise [11] method. The equation is of the form:

$$ ETo=0.01476\left({T}_m+4.0905\right){R}_s/\lambda $$

Where: ETo is the monthly potential ET in mm, Tm is the mean monthly temperature (°C), Rs is the monthly solar radiation in cal/cm2, and λ is the latent heat of vaporization of water, 59.559 − 0.055Tm, in cal/cm2·mm.

Blaney and Criddle [6]

This method is very simple and is based on temperature, percent daylight hours and a crop coefficient. The equation is of the form:

$$ ETo=25.4 P\left(1.8{T}_m+32\right)/100 $$

Where: P is the monthly percentage of annual daylight hours, and Tm is the mean monthly temperature (°C).

Blaney and Criddle modified by Shih et al. [21]

Shih et al. [21] modified the Blaney-Criddle [6] method by substituting the measured solar radiation for the percentage of daylight hours. The equation is expressed as:

$$ ETo=25.4{R}_s\left(1.8{T}_m+32\right)/ T{R}_s $$

Where: Rs is the monthly incoming solar radiation in cal/cm2, TRs is the annual sum of mean monthly solar radiation in cal/cm2, and Tm is the mean monthly temperature (°C).
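The simpler temperature- and radiation-based formulas above map directly onto short functions. The sketch below is illustrative only: the function names and calling convention are ours, the unit handling follows the text as written, and it is not code from the study itself.

```python
def eto_turc(K, Rs, t_mean):
    """Turc (1961) ETo as defined above (case of relative humidity > 50%).
    K: monthly crop coefficient; Rs: solar radiation; t_mean: average monthly
    air temperature (deg C)."""
    return K * (Rs + 50.0) * t_mean / (t_mean + 15.0)

def eto_blaney_criddle(P, t_mean):
    """Blaney and Criddle (1950): P is the monthly percentage of annual
    daylight hours, t_mean the mean monthly temperature (deg C)."""
    return 25.4 * P * (1.8 * t_mean + 32.0) / 100.0

def eto_papadakis(e_tmax_kpa, e_dew_kpa):
    """Papadakis (1966): saturation vapour pressures (kPa) at the average
    maximum temperature and at the dew-point temperature."""
    return 0.5625 * (e_tmax_kpa - e_dew_kpa)
```

Applying one such function per method, month by month, to the climatology in Table 1 would reproduce the monthly ETo series that are later compared against FAO-56 PM.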
Penman modified by Frere and Popov [9]

This is a mass-transfer method commonly used for estimating free water surface evaporation because of its simplicity and reasonable accuracy [26]. The equation is of the form:

$$ ETo=\frac{\left({R}_n C+ A\right)}{C+1} $$

Where Rn is the net solar radiation in cal/cm2/day, C is an empirically determined constant involving some function of windiness, and A is the aerodynamic term.

The study area is Yaoundé, situated at Latitude 3° 85′ N, Longitude 11° 32′ E, with an altitude of about 750 m asl (Fig. 1). Fifteen years of climatic data for Yaoundé (within the period 1967 to 1982) were extracted from the records of climatological observations published by the National Meteorological Centre of Cameroon. The variables used in the equations included average monthly temperatures, total precipitation, vapour pressure, duration of sunshine, wind speed, relative humidity and solar radiation.

Location of (a) Centre region, and (b) Yaoundé (Mfoundi Division) in the Centre Region of Cameroon

Quantitative approaches to the evaluation of model performance were applied. Fox [8] recommends, in essence, that at least four different types of measure be calculated and reported. The average difference between model predictions and the benchmark was measured by the root mean square error (RMSE); its values range from zero to infinity and lower values are better. The relative bias was described by the mean bias error (MBE). Furthermore, RMSEs and RMSEu, the systematic and unsystematic components respectively, were computed and reported in addition to RMSE. The systematic RMSE is the estimated distance between the linear regression best-fit line and the 1:1 line, while the unsystematic RMSE is the estimated distance between the data points and the linear regression best-fit line [25]. Berengenal and Gavilan [5] explained that the unsystematic component represents the "noise" level in the model under test and measures the scatter about the regression line (this can be regarded as a measure of the potential accuracy), while the systematic component is assumed to represent the space available for local adjustment. A good model is considered to have a very low RMSEu and an RMSEs close to the RMSE. The correlation measure (R2), the intercepts (b) and slopes (a) of the least-squares regression analysis are also reported. All computations were done with the aid of Microsoft Office Excel 2010 and the SPSS version 17 statistical package. Computational forms of some indices are given below:

$$ MBE={N}^{-1}{\sum}_{i=1}^N\left({P}_i-{O}_i\right) $$

$$ RMSE={\left[{N}^{-1}{\sum}_{i=1}^N{\left({P}_i-{O}_i\right)}^2\right]}^{0.5} $$

$$ \mathrm{RMSEu}={\left[{N}^{-1}{\sum}_{i=1}^N{\left({P}_i-{\widehat{P}}_i\right)}^2\right]}^{0.5} $$

$$ \mathrm{RMSEs}={\left[{N}^{-1}{\sum}_{i=1}^N{\left({\widehat{P}}_i-{O}_i\right)}^2\right]}^{0.5} $$

Where Oi stands for the observed values (estimated by FAO 56-PM), Pi stands for the values predicted by the compared methods, and \( {\widehat{P}}_i= a{O}_i+ b \). The FAO 56-PM method was a reasonable benchmark because, according to Allen et al. [3], it overcomes the weaknesses of the previous FAO Penman method and provides values that are more consistent with actual crop water use data worldwide.
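The agreement statistics just defined are straightforward to compute. The sketch below is a hypothetical illustration with placeholder numbers (the study's own calculations were done in Excel and SPSS); it follows the convention \( {\widehat{P}}_i = a{O}_i + b \) used above:

```python
import numpy as np

def agreement_stats(observed, predicted):
    """MBE, RMSE and its systematic/unsystematic parts, plus the least-squares
    slope a, intercept b and R^2 of predicted vs observed values."""
    O, P = np.asarray(observed, float), np.asarray(predicted, float)
    N = len(O)
    a, b = np.polyfit(O, P, 1)                       # P_hat = a*O + b
    P_hat = a * O + b
    mbe = np.sum(P - O) / N
    rmse = np.sqrt(np.sum((P - O) ** 2) / N)
    rmse_u = np.sqrt(np.sum((P - P_hat) ** 2) / N)   # unsystematic (scatter about fit)
    rmse_s = np.sqrt(np.sum((P_hat - O) ** 2) / N)   # systematic (fit vs 1:1 line)
    r2 = np.corrcoef(O, P)[0, 1] ** 2
    return {"MBE": mbe, "RMSE": rmse, "RMSEs": rmse_s, "RMSEu": rmse_u,
            "a": a, "b": b, "R2": r2}

# Example with 12 placeholder monthly ETo values (mm/day): observed = FAO-56 PM,
# predicted = one empirical method; these numbers are not from the study.
obs = np.array([3.2, 3.4, 3.3, 3.1, 3.0, 2.8, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3])
pred = np.array([3.0, 3.2, 3.1, 2.9, 2.8, 2.6, 2.5, 2.6, 2.8, 2.9, 3.0, 3.1])
print(agreement_stats(obs, pred))
```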
Recommendations have also been developed for using the method when climatic data are limited, thereby largely eliminating the need for any other reference evapotranspiration methods and creating a consistent and transparent basis for a globally valid standard for crop water requirement calculations.

Results and Discussion

A summary of the average monthly climatic data for Yaoundé from 1967 to 1982 is presented in Table 1. All parameters vary over the years. The results indicate that the average monthly temperature ranged from 22.5 to 25.2 °C, with the lowest values in July and August (the coldest months), while February recorded the highest value, making it the hottest month of the year. Average monthly precipitation ranged from 19.0 to 290.5 mm. Wind speed was low throughout the year, ranging from 1.0 to 2.4 m/s. Insolation was highest in February.

Table 1 Summary of some average daily climatic parameters per month for Yaoundé between the years 1967 – 1982

The average ETo obtained from 1967 to 1982 by the FAO 56 PM (benchmark) method was 3.16 mm/day, but that of the Papadakis model was 0.28 mm/day (Table 2), indicating underestimation. The Penman modified by Frere and Popov model yielded considerable overestimates (12.72 mm/day). The progressive average monthly ETo values, showing the under/over-estimation of all the tested methods, are presented in Fig. 2.

Table 2 Monthly climatology of ETo values (mm/day) calculated by six empirically based methods and FAO-56 PM during 1967 – 1982

Trends of monthly climatology of ETo values (mm/day) calculated by six empirically based methods and FAO-56 PM during 1967 - 1982. (Positive sign (+) = greater, negative sign (-) = lower, relative to the reference method)

Comparisons for each empirical equation were made between monthly reference evapotranspiration values and monthly values calculated using the FAO 56-Penman-Monteith method, which was selected as the benchmark because of its global acceptance and widespread use [2]. Drawing on the latter, the relationships (regression equations) between the monthly ETo estimates of each method and the FAO 56-PM ETo, together with the cross-correlation coefficient (R2), are shown in Fig. 3, using the linear regression formula Y = bx + a instead of regression through the origin (Y = bx).

Comparison of FAO-56 PM method versus (a) Stephens and Stewart (1963) modified by Jansen and Haise (1963), (b) Turc (1961), (c) Blaney and Criddle modified by Shih et al. (1977), (d) Blaney and Criddle (1950), (e) Penman modified by Frere and Popov (1979), and (f) Papadakis (1966) methods using regression analysis, during 1967 – 1982. (N = 12)

The Stephens and Stewart [22] modified by Jansen and Haise, and Turc [23] models correlated very well (R2 = 0.96 and R2 = 0.95 respectively), while the Blaney and Criddle [6] and Papadakis [15] equations showed weaker correlation with the FAO-56 PM model (R2 = 0.357 and R2 = 0.859 respectively) throughout the years under local conditions. This result agrees with Adeboye et al. [1], who reported a strong correlation (R2 = 0.97) of the Jansen and Haise model with the FAO-56 PM model. With regard to the regression equations, the Stephens and Stewart [22] modified by Jansen and Haise [11] equation resulted in a slope (b = 1.39) close to unity and an intercept close to zero (a = -1.62), which presumably gives the best predicted (P) values (Table 3). This result also conforms with the findings of Adeboye et al. [1].
The second-best values were obtained by Blaney and Criddle modified by Shih et al. [21] (b = 2.68, a = -3.25). The Papadakis method systematically produced the greatest underestimates (-8%), while the Stephens and Stewart [22] modified by Jansen and Haise [11] model produced the smallest underestimate (-2%), giving the best estimates among all the tested methods. This is probably due to the limited data input for these two methods. The results are in agreement with Adeboye et al. [1] and Alexandris et al. [2], who reported underestimation of ETo by the Jansen and Haise and Turc methods respectively. Conversely, the Penman modified by Frere and Popov [9] method systematically overestimated by as much as 25%, giving the worst estimates among all the tested methods. This is probably due to the effects of advective conditions (dry/high wind speed) in the study area. This result conforms to the assertion that the Frere and Popov [9] method overestimates in areas under advective conditions [20]. The latter stated that the Frere and Popov method treats the effect of wind on E or ETo as linear, while FAO [7] explained that this effect is non-linear and that the effect of wind on E is higher than on ETo under advective conditions.

Table 3 Statistical summary of monthly climatology of ETo values (mm/day) calculated by six empirically based methods during 1967 - 1982

Generally, all the statistical measures agree with the results obtained from the regression analysis. All relevant statistics and rankings of the methods are enumerated in Table 3, where the ranking of the methods against the FAO-56 PM model is made taking into account all the above-mentioned statistical indices. In any complex evaluation system a weighting coefficient for each statistical index should, strictly, be determined separately; considering the similar tendency and behavior of the indices, a simple grading of the computed indices was made instead. With an MBE of -1.260 mm/day, RMSE of 0.072, RMSEs of 0.980 and RMSEu of 0.693, the Stephens and Stewart [22] modified by Jansen and Haise [11] model gave by far the best result and is therefore recommended for estimating ETo when input data parameters are limited at Yaoundé stations. The overall results indicate that some of the simpler empirical equations compare reasonably well with the FAO-56 PM method, while several other methods produced ETo estimates that differ from those obtained by the FAO-56 PM method. The Stephens and Stewart [22] modified by Jansen and Haise [11] method ranked first among the methods evaluated. The Blaney and Criddle modified by Shih et al. [21] method ranked second with regard to the statistical analysis on a monthly data basis. The third and fourth places were occupied by the Papadakis [15] and Turc [23] methods respectively, and the fifth by Blaney and Criddle [6]. Finally, the Penman modified by Frere and Popov [9] equation ranked last among the methods because of its consistently large overestimates, as also reported by Alexandris et al. [2]. These results provide a reference tool that offers concrete guidance on which method to select, based on available data, for accurate and consistent estimates of monthly ETo relative to the FAO-56 PM method under arid-humid conditions.
ETo: Reference Evapotranspiration FAO: Food and Agricultural Organisation Penman-Monteith RMSE: Root Mean Square Error RMSEs: Systematic Root Mean Square Error RMSEu: Unsystematic Root Mean Square Error Adeboye OB, Osunbitan JA, Adekaluk O, Okanded A (2009) Evaluation of FAO- 56 penman-monteith and temperature based models in estimating reference evapotranspiration using complete and limited data, application to Nigeria. Agric Eng Int CIGR J 11:1–25 Alexandris R, Stricevic R, Petkovic S (2008) Comparative analysis of reference evapotranspiration from the surface of rainfed grass in central Serbia, calculated by six empirica methods against the Penman-Monteith formula. Eur Water 21(22):17–18 Allen RG, Pereire LS, Rase D and Smith M (1998) Crop Evapotranspiration. Guidelines for computing crop water requirements. FAO Irrigation and Drainage paper number 56, FAO Rome Allen RG, Walter IA, Elliot RL, Howell TA, Itenfisu D, Jensen ME and Snyder R (2005) The ASCE standardized reference evapotranspiration equation. ASCE and American Society of Civil Engineers Berengenal J, Gavilan P (2005) Reference ET estimation in a highly advective semi-arid environment. J Irrg Drain Eng 131(2):147–63 Blaney HF and Criddle WD (1950) Determining water requirements in irrigated areas from climatological and irrigation data. Department of Agriculture, Washington. Soil conservation service technical paper 96 FAO (1977) The state of food and agriculture. Some factors affecting progress in food and agriculture in developing countries. The state of natural resources and the human environment for food and agriculture. World Review. FAO Agriculture Series no. 8. ISBN 92-5-100607-5. http://www.fao.org/docrep/017/ap657e/ap657e.pdf Fox DG (1981) Judging air quality model performance: a summary of the AMS workshop on dispersion model performance. Bull Am Meteorology 125:67–82 Frère M and Popov GF (1979) Agometeorological crop monitoring and forecasting. FAO plant production and protection paper No 17. FAO, Rome. pp 64 Hari N, Ganesh BR and Murthy VRK (2016). Estimation of Evaporation with Different Methods for Bapatla Region. International Journal of Emerging Trends in Science and Technology. Vol. 03, Issue 07, Pp 4406-4414. ISSN 2348-9480. DOI: http://ijetst.in/article/v3-i7/19%20ijetst.pdf. Jensen ME and Haise HR (1963) Estimating evapotranspiration from solar radiation. Journal of Irrigation and Drainage of the ASCE, New York. V. 89, p.15-41 Montieth JL (1965) Evaporation and Environment. In: Michael AM (ed) Irrigation theory and practice second edition, 2008. Cochin: Vikas Publishing House Pvt. Ltd, New Delhi, p 499 Montieth JL (1981) Evaporation and surface temperature. Q JR Meteorol Soc 107:1–27 Oluwaseun AI, Philip GO, Ayorinde AO (2014) Evaluation of Four ETo Models for IITA Stations in Ibadan, Onne and Kano, Nigeria. J Environ Earth Sci 4(5):89–97 Papadakis J (1966) Crop Ecologic Survey in Relation to Agricultural Development in Western Pakistan. FAO, Roma, Italia, p 51 Penman H L (1948) Natural evaporation from open water, bare soil, and grass. Proceedings Royal Society of London, A193, (pp. 120-146). Penman HL (1963) Vegetation and hydrology. In: Farhani H J, Howell TA, Shuttleworth WJ and Bausch WC. Evapotranspiration: Progress in measurement and modeling in agriculture. American Society of Agricultural and Biological Engineers. Vol. 50(5), 2007, p. 1627 Pereire LS, Perrier A, Allen RG and Alves I (1996) Evapotranspiration: Review of concepts and future trends. J. Irrig. And Drain. Eng., ASCE 25. 
Popova Z, Kercheva M, Pereira LS (2006) Validation of the FAO Methodology for Computing ETo With Limited Data. Application to South Bulgaria. J Irrig Drain Eng 55(2):201–215 Reddy S J (1993) Agroclimatic agrometeorological techniques, as applicable to dry-land agriculture in developing countries. Jeevan Charitable Trust, Plot No. 6, ICRISAT Colony, Phase I, Tokatta, Secunderabad, A.P., India. Pg 204 Shih SF, Myhre DL, Mishoe JW and Kidder G (1977) Water management for sugarcane production in Florida Everglades. Proc. International Soc. of Sugar Cane Technologists, 16th Congress, San Paulo, Brazil. 2: 995- 1010 Stephens JC, Stewart EH (1963) "A comparison of procedures for computing evaporation and evapotranspiration.", Publication 62, International Association of Scientific Hydrology. International Union of Geodesy and Geophysics, California, USA, pp 123–133 Turc L (1961) Estimation of irrigation water requirements, potential evapotranspiration: A simple climatic formula evolved up to date. Ann Agron 12:13–14 Walter IA, Allen RG, Elliott R, Itenfisu D, Brown P and Jensen ME (2005) The ASCE Standardize Reference Evapotranspiration Equation. Final Report (ASCE-EWRI).Pr. Eds: Allen, R.G.; Walter, I.A.; Elliott, R.; Howell, T. and Jensen, M. Environmental and Water Resource Institute (2005). Task Committee on Standardization of Reference Evaporation of the Environmental and Water Resource Institute. Willmott CJ (1981) On the validation of models. Phys Geog 2(2):184–194 Xu CY, Singh VP (2002) Cross Comparison of Empirical Equations for Calculating Potential Evapotranspiration with Data from Switzerland. Kluwer Academic Publishers. Printed in the Netherlands. Water Resour Manag 16:197–219 Zarei AR, Zare S, Parsamehr AH (2015) Comparison of several methods to estimate reference evapotranspiration. West African J Applied Ecol 23(2):17–25 I acknowledge the heads of the soil science laboratory at the University of Dschang, Late Dr Omoko, and Professor Benard Yerima for their moral and technical support. I also express sincere gratitude to Dr. Asongwe G. for his academic support for this work. I, Valentine Asong Tellen, holder of ORCID number 0000-0001-8513-788X hereby declare that this research article is written by the author whose name has been appropriately indicated. Self-funded. The dataset supporting the conclusions of this article is included within the article. The author hereby certifies that he has participated sufficiently in the work to take public responsibility for the content. In addition, the author certifies that this material or similar material has not been and will not be submitted to or published in any other publication. VAT was responsible for the following: Conception and design of study, Acquisition of data, Data analysis and/or interpretation, Drafting/writing of Manuscript, Critical revision and approved the final version of the manuscript. The author has no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers' bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript. 
All borrowed ideas have been well acknowledged by means of references and quotations and this manuscript has not been presented in any form for publication in any other journal. Department of Development Studies, Environment and Agricultural Development Program, Pan African Institute for Development - West Africa (PAID-WA), P.O.Box 133, Buea, Cameroon Valentine Asong Tellen Faculty of Agronomy and Agricultural Science, Department of Soil Science, University of Dschang, P.O. Box, 122, Dschang, 237, Cameroon Correspondence to Valentine Asong Tellen. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Tellen, V.A. A comparative analysis of reference evapotranspiration from the surface of rainfed grass in Yaounde, calculated by six empirical methods against the penman-monteith formula. Earth Perspectives 4, 4 (2017). https://doi.org/10.1186/s40322-017-0039-1 Received: 22 March 2017 Empirical methods comparison Penman-Monteith formula Rain-fed grass Water Resources and Management
npj materials degradation Article | Open | Published: 12 April 2019 Surface degradation mechanisms in a eutectic high entropy alloy at microstructural length-scales and correlation with phase-specific work function Vahid Hasannaeimi1 na1, Aditya V. Ayyagari1,2 na1, Saideep Muskeri1, Riyadh Salloom1 & Sundeep Mukherjee1 npj Materials Degradationvolume 3, Article number: 16 (2019) | Download Citation High entropy alloys represent a new paradigm of structural alloy design consisting of (near) equal proportions of constituent elements resulting in a number of attractive properties. In particular, eutectic high entropy alloys offer a remarkable combination of high strength and good ductility from the synergistic contribution of each phase in the eutectic, thereby circumventing the strength-ductility trade-off in conventional structural materials. In the present study, wear and corrosion behavior were evaluated for the AlCoCrFeNi2.1 eutectic high entropy alloy consisting of BCC (B2), and FCC (L12) lamellae. A transition from adhesive to oxidative wear was observed in reciprocating wear analysis. The L12 phase with lower hardness preferentially deformed during the wear test. The ratio of hardness to modulus was almost two times higher for the B2 phase as compared to L12. The overall corrosion resistance of the eutectic high entropy alloy was comparable to 304 stainless steel in 3.5 wt% NaCl solution. However, detailed microscopy revealed preferential dissolution of the B2 phase. Phase-specific scanning kelvin probe analysis showed relatively higher electropositivity for the B2 phase as compared with L12, supporting the selective corrosion and higher coefficient of friction of B2. Multi-principal element alloys represent a new paradigm in the design of advanced materials consisting of (near) equal proportions of constituent elements. These are often termed as high entropy alloys (HEAs) because high configurational entropy suppresses the formation of intermetallic compounds and leads to simple microstructures and attractive engineering properties. In particular, complex-phase HEAs are finding increasing research interest as they bring together the superior properties of single-phase HEAs and precipitation strengthening effects of super-alloys. In some of these alloys, the secondary phase is in the form of uniform dispersoids in a single-phase matrix,1,2 while for others it is in the form of lamellae as in the case of eutectic HEAs (E-HEAs) such as AlCoCrFeNi2.1,3 Nb25Sc25Ti25Zr25,4 and CoCrFeNiNbx.5 These alloys demonstrate a unique combination of excellent tensile/compressive strength, good ductility, high hardness, and good oxidation resistance. Slight changes in elemental constituents or thermal processing results in significant enhancement of mechanical properties in eutectic HEAs including tensile and compressive strength as well as elongation. Due to the synergistic contribution of the constituent phases, these alloys are promising in terms of circumventing the strength-ductility trade-off seen in conventional structural materials. Among the reported eutectic HEAs, AlCoCrFeNi2.1 E-HEA in particular has been found to be highly tunable, where a wide range of microstructures and excellent mechanical properties have been obtained by thermo-mechanical processing (such as severe cold rolling6 or friction stir processing7). 
Synergistic contribution from ductile face centered cubic (L12) and ordered body centered cubic (B2) phases in this alloy results in a combination of high strength and excellent ductility during tensile deformation. However, there is very limited understanding of the surface degradation behavior for E-HEAs. Use of these alloys as structural engineering materials requires good understanding of their surface degradation mechanisms including corrosion, erosion, and wear behavior.8 High wear resistance of single-phase HEAs has been attributed to their relatively higher hardness from solid solution strengthening.9,10 In addition, high corrosion resistance of single-phase HEAs stems from the high degree of microstructural homogeneity devoid of local corrosion initiators such as galvanic sites, intermetallic compounds, grain boundary precipitates, and sensitization elements.11,12,13 However, there are limited reports and very little understanding of surface degradation mechanisms in complex-phase and eutectic HEAs. This is not only important from the point of view of fundamental scientific understanding but also in determining the application worthiness of these alloys. Surface degradation studies of complex HEAs reveal unique microstructural dependence in terms of nano-galvanic coupling,14 anodic pit initiation,15 and grain boundary corrosion16 to name a few. Here, we report on the surface degradation behavior and mechanisms in AlCoCrFeNi2.1 E-HEA. The wear behavior and mechanisms were studied by bulk sliding reciprocating tests while corrosion resistance was evaluated by cyclic polarization and electrochemical impedance spectroscopy (EIS). Phase-specific response in the eutectic was quantified using SKP microscopy to relate the electronegativity to the electrochemical and friction behavior at the microstructural length-scales. Nano-indentation was used to explain the hardness contribution from each phase to material degradation behavior in terms of wear volume loss while small-scale scratch test was utilized to quantify phase-specific friction behavior. The microstructure of the as-cast alloy showed typical lamellar morphology as shown in the back-scattered SEM image in Fig. 1a, with an average grain size of ~75 ± 25 µm. Figure 1b shows the electron backscatter diffraction (EBSD) phase map of the L12/B2 lamellar structures. Fine parallel B2 lamellae (green color) are seen distributed within the L12 phase (violet color). The average thickness of L12 phase was observed to be ~0.6 μm while it was ~0.25 μm for the B2 phase. The volume fraction of L12 was ~68% while B2 was 32%. TEM analysis was performed to confirm the presence of ordered L12/B2 phases as shown in Fig. 1c. Distinct contrast can be seen in the lamellae regions, bright and dark for L12 and B2, respectively. The inset in Fig. 1c shows the selected area diffraction patterns (SAEDs) obtained from the two phases. The presence of super-lattice spots in both SAEDs confirms the L12 and B2 ordered structures. Energy dispersive X-ray spectroscopy (EDS) mapping of the alloy showed different compositions for L12 and B2 as summarized in Table 1. a Back-scattered electron micrograph of the as-cast AlCoCrFeNi2.1 alloy and b EBSD phase map showing the lamellar structure with green and violet colors for B2 and L12 phases, respectively; c TEM bright field image showing the phase contrast for the lamellar structure, the insets show the SADPs obtained using the regions marked with white circles. The zone axis (Z.A.) 
for B2 and L12 phases are [001] and [011], respectively Table 1 The chemical composition of L12 and B2 phases in as-cast AlCoCrFeNi2.1 eutectic high entropy alloy Wear resistance of the AlCoCrFeNi2.1 E-HEA was quantified by sliding reciprocating wear tests. SEM images of wear tracks obtained after 2 min up to 120 min of sliding duration are shown in Fig. 2 in secondary electron and back-scattered electron modes. The wear track obtained after 2 min sliding predominantly showed patches of re-deposited material, an indication of adhesive wear. However, no significant stiction of wear debris or wear particles were observed on the ball. This may be attributed to the continual dislodging of particles from the counterface during sliding-reciprocation action. Scanning electron microscope images of wear tracks ranging from 2 to 120 min duration in secondary electron (SE) and back-scattered electron (BSE) image modes showing wear deformation, as well as change in wear mechanism. The white scale bar corresponds to 400 µm The back-scattered electron images in Fig. 2 clearly show increasing darker phase with increasing sliding duration, indicating a gradual transition in wear mechanism. A high magnification image of the wear track after 120 min sliding duration is shown in Fig. 3a while the EDS map of the surface close to the wear track edge (red box in Fig. 3a) is shown in Fig. 3b. The wear track was characterized by high concentration of oxygen (up to 24 at.%), which indicates oxidation of the surface from temperature rise during sliding. The extent of surface oxidation increased almost linearly with increasing sliding duration as deduced from Fig. 2 and corresponding EDS spectra. To evaluate the nature of the oxide scale on wear track, Raman spectroscopy was done for the sample and wear debris following the test. The Raman spectrum in Fig. 3c shows several overlapping peaks in the 700–1400 cm−1 range. These correspond to complex oxides of transition metals present in the alloy including Ni, Cr, and Co. The strong peak at 1035 cm−1 closely matched with the position for Cobalt oxide, indicating that CoOx formation was tribologically favorable.17,18 Complex Cobalt–Chromium oxides have been reported to be typically present in tribofilms and passivation layers of alloys containing the respective elements providing surface protection from aggressive degradation.19,20 A 3D reconstruction of the wear track obtained using white light interferometry is shown in Fig. 3d. a Scanning electron microscope image of wear track after 120 min of sliding b EDS map showing oxides formed on the surface c Raman spectrum of the oxide scale showing presence of complex cobalt oxides d 3D interferometry image showing wear track depth and material loss during sliding; e wear volume loss as function of sliding duration The wear track depth was ~80 µm, indicating relatively higher wear volume loss compared to other recently studied complex HEAs.21 The wear volume loss quantified using Gwyddion software is shown in Fig. 3e as a function of sliding duration. There was a clear change in slope of wear volume after about 40 min from 2.05 × 10–12 m3 per minute of material loss to lower rate of 0.25 × 10−12 m3 per minute. This may be an indicator of two distinct mechanisms. The steep initial wear rate may be due to the dominant adhesive wear with aggravated material loss due to the absence of protective tribolayers. 
Subsequently, the wear volume loss continued to increase but the rate of material loss was an order of magnitude lower owing to the formation of a protective oxide tribolayer (oxidative mechanism). The friction coefficient during the initial wear regime was higher, at 0.65–0.70, and gradually decreased to 0.45–0.50 with progressive sliding. The initially higher friction coefficient may be attributed to the high degree of metal-metal contact and the inherently electropositive nature of the constituent elements, which promotes strong adhesion.22 However, the subsequently formed oxide and tribolayer may have effectively lowered the friction values at longer sliding distances. Besides the wear track and macroscopic oxide formation, the material was observed to undergo complex microscopic deformation, as seen in the post-test microstructures. The surface deformation mechanisms during wear were studied to gain fundamental understanding of the tribological system and the material response to fatigue-type loading, given the nature of repetitive sliding under a heavy normal load. Figure 4a shows surface deformation adjacent to the wear track after 120 min of sliding. Extensive slip deformation was seen in the bright L12 phase while no significant deformation was seen in the darker B2 phase. A high magnification image of the inset area (Fig. 4b) shows extensive slip lines in the L12 phase that are discontinued at the L12-B2 interface. The image also shows material dislodged from the counterface and deposited onto the edge of the wear track during sliding. Both phases deformed uniformly inside the wear track, and no inter-phase debonding or void formation is observed in Fig. 4c. It was recently reported that the L12 phase can accommodate several arrays of parallel mobile dislocations and deform by planar slip.23 This is due to the splitting of \(\frac{1}{2}\left\langle {110} \right\rangle\) dislocations into \(\frac{1}{6}\left\langle {112} \right\rangle\) Shockley partials and the formation of stacking faults between them. This restricts slip to the {111} plane and effectively minimizes cross-slip, a characteristic feature of slip in low stacking fault energy materials. The B2 phase in the eutectic has been shown to fail in a brittle manner while the L12 phase shows ductility and necking, leading to dual-mode fracture in this alloy.23 Our wear tests indicate that the L12 and B2 phases may deform simultaneously, with the B2 phase accommodating a medium density of dislocations. This was attributed to the 3D back-stress acting on the L12 phase, which can maintain synchronous deformation in heterogeneous systems. This back stress is further enhanced by the semi-coherent boundaries between the B2 and L12 phases and by the lower fraction of B2 lamellae. The cumulative effect of these conditions resulted in the activation of dislocations in the brittle B2 phase, facilitated by the high density of dislocation pile-ups at the phase boundaries, and this modified the brittle behavior of the B2 phase so that it could accommodate deformation. a Low magnification SEM image showing deformation at the edge of the wear track b zoomed-in view corresponding to the square box showing extensive slip deformation for the bright L12 phase while no significant deformation was seen for the dark B2 phase c wear surface inside the track showing no preferred deformation of either of the two phases To understand the wear mechanism at the microstructural length-scale, phase-specific scratch behavior was evaluated with in situ observation of the material removal mechanism.
The variation in coefficient of friction (COF) across the scratch line is shown in Fig. 5a. The average COF for L12 was found to be ~0.82, while it was ~0.87 for the B2 phase. This is consistent with the bulk wear results which showed greater extent of deformation for L12 due to activation of multiple slip systems. As seen in the inset of Fig. 5a, almost no material was observed adhering to the cube-corner indenter. Furthermore, the material pile-up was different along the scratch line, as shown in Fig. 5b, with much larger amount of pile-up for L12 compared to B2. a Coefficient of friction variation as a function of displacement measured using pico-indentation on the E-HEA; the inset shows SEM image of the cube-corner indenter during the in situ experiment; b SEM image of the scratch line showing different amounts of material pile-up Phase-specific hardness and modulus measurements by nano-indentation are summarized in Table 2, with the load-displacement curves shown in Fig. 6a. The B2 phase showed lower maximum displacement (~225 nm) as compared to L12 phase that had a peak displacement of 275 nm at 10 mN load. The location of the indent on the B2 and L12 phases are shown in Fig. 6b, c, respectively. At least ten indentations were performed on each phase. The B2 phase showed a hardness of 4.9 ± 0.71 GPa, while the L12 phase showed a hardness of 4.03 ± 0.39 GPa. The ratio of hardness to modulus was significantly higher for the B2 phase as compared to the L12 phase. Table. 2 Hardness and modulus values from nano-indentation a Load-displacement curves obtained from nano-indentation; SEM image of an indent: b on the B2 phase and c on the L12 phase Although hardness is the dominant parameter which affects friction and wear behavior according to Archard's relation,24 recent studies have shown that modulus also significantly affects wear resistance.24 The ratio of hardness to modulus represents deformation relative to yielding.25,26 The ratio of H/E* calculated from nano-indentation are shown in Table 2. The B2 phase showed a higher ratio (~0.027) compared to L12 phase (~0.018). A high H/E* ratio material shows greater elastic strain prior to plastic deformation compared to a lower H/E* material.27 Thus, high H/E* ratio materials are expected to exhibit better wear resistance. As shown in Fig. 4b, the darker B2 phase with higher H/E* value was almost un-deformed during reciprocating wear test while the brighter L12 phase with lower H/E* value was plastically deformed showing extensive slip lines. A larger plastic strain was induced within the region with smaller H/E* ratio which led to higher contact area during wear and more material loss. The corrosion resistance of the alloys was evaluated using accelerated electrochemical tests. The variation of open circuit potential (OCP) with immersion time in 0.6 M (3.5 wt.%) NaCl is shown in Fig. 7a. The alloy surface gradually developed a protective passive layer as evidenced by the continuous increase in voltage towards nobler potentials. The OCP stabilized after ~6 ks. Following steady-state condition, the sample was perturbed with a small AC voltage (5 mV) and electrochemical impedance spectrum (EIS) was recorded. The EIS data are represented in the form of a Nyquist plot as shown in Fig. 7b. The shape of the Nyquist plot is an incomplete arc, the radius of which is directly proportional to the surface polarization resistance of the alloy. 
The curve was best-fitted with a modified Randles circuit to quantify the electrochemical parameters, namely charge transfer resistance (Rct), solution resistance (Rs) and a constant phase element (CPE) to account for the heterogeneities of the surface.28 The impedance of a CPE is defined as follows: $$Z_{{\rm{CPE}}} = \left[ {Q\left( {j\omega } \right)^n} \right]^{ - 1}$$ where j2 = −1, ω is the angular frequency, Q is the magnitude of the CPE, and n is the exponent of the CPE related to the roughness of the surface.29 The value of n is a measure of the deviation from pure capacitive behavior (n = 1). The values of the electrochemical parameters associated with the corrosion behavior of E-HEA and 304 stainless steel in 3.5 wt.% NaCl and the simulated values for the equivalent circuit elements are summarized in Table 3. Due to the high resistance of the surface charge transfer, only a portion of capacitive loop was formed in the range of frequency in Nyquist plot. There is no indication of a second loop from adsorptive film on the specimen surface.30 Although the E-HEA showed less noble behavior during potential-time experiment, it exhibited a higher charge transfer resistance than 304 stainless steel. Cyclic polarization tests were carried out after the EIS test (Fig. 7c), in which a corrosion current density (icorr) of 60 nA/cm2 was achieved in NaCl, similar to values typically seen for austenitic stainless steel (types 304 and 316 L).16,31 The eutectic HEA showed a passivation range (Epit − Ecorr) of ~400 mV, which is lower than the range for 304 stainless steel (~700 mV). However, the area under the positive hysteresis loop in cyclic polarization plot was lower for E-HEA as compared with stainless steel, which could be an indication of less pitting susceptibility of E-HEA. The re-passivation of the E-HEA during the potential reverse cycle occurred at ~ −100 mV vs. SCE. a Open circuit potential obtained by holding the sample in 3.5 wt.% NaCl solution, the potential stabilized after 6 ks; b EIS test showing Nyquist plot with Randles circuit fit; c cyclic polarization plot obtained at a scan rate of 0.25 mV/s Table 3 Electrochemical parameters and equivalent circuit element values for EIS data for AlCoCrFeNi2.1 alloy in 3.5 wt.% NaCl solution SEM images of the corroded surfaces are shown in Fig. 8. Initial material removal comprised of uniform dissolution of B2 phase relative to L12 phase. This may be due to lower concentration of corrosion resistant elements such as Cr and Co in B2 phase relative to L12 phase. Figure 8a reveals that corrosion propagation was selective to grain orientation. Three regions are delineated with dotted lines along the grain boundaries. In Fig. 8a, the area delineated with blue line was observed to undergo mild corrosion whereas patches similar to the ones sandwiched between blue and red regions did not undergo any corrosion and remained intact. The separation between the corroded and pristine regions was along the grain boundaries. The region beyond the red boundary showed preferential corrosion of the B2 phase. The preferential dissolution of B2 relative to L12 was observed at higher magnifications (Fig. 8b). However, the unaffected grains did not show any signature of selective dissolution of B2. The second mechanism observed in samples that were subjected to higher anodic currents was pitting, which was always preceded by localized corrosion of a grain wherein B2 phase was rapidly depleted. 
The extent of corrosion on the susceptible grains was again clearly defined by the grain boundaries, indicating that certain orientations in the matrix were more prone to corrosion. The confluence of material-depleted triple junctions of the B2 phase developed into a pit, as seen in Fig. 8c. a Low magnification SEM image of E-HEA after polarization test in 3.5 wt.% NaCl solution; b high magnification SEM image corresponding to the black box in the previous image showing preferential dissolution of B2 phase; c pitting in the corroded sample with pit initiation in the region with high B2 phase fraction To further understand the localized electrochemical activity of the eutectic microstructure, SKP analysis was performed to map the relative work function. As shown in Fig. 9, the relative work function varied over the surface when the probe scanned the surface at a constant distance of 50 μm. The regions with higher work function corresponded to L12, while the B2 phase showed lower work function (Fig. 9a). The parallel arrangement of the B2 lamellae within the L12 phase is better observed in the 2D SKP potential map (Fig. 9b). Correlations between corrosion potential and work function investigated for crystalline32 and amorphous33 materials show that metallic alloys with higher work function typically show better corrosion resistance due to lower electropositivity.34 In other words, corrosion is affected by the surface electron behavior or electron activity, which may be characterized by the work function.35 Hence, the corrosion potential is affected by the local work function.36 In the case of the eutectic HEA, the L12 phase was found to be more resistant to corrosion due to its higher work function, indicating that electron release may be more difficult for L12. The difference in work function of the two phases facilitates galvanic corrosion in addition to grain-orientation-dependent uniform corrosion. As shown in Fig. 8b, the B2 phase was selectively removed owing to its lower corrosion resistance, while the L12 phase was only slightly corroded. Hence, in addition to the difference in chemistry of the two phases in the eutectic, the formation of galvanic coupling may be explained by the work-function variations over the surface. Relative work-function topography obtained from the scanning Kelvin probe microscope of the phases shown in a isometric view to delineate the potential changes and b top view to delineate the width of the bands corresponding to the phases. The high work-function regions correspond to L12 whereas lower work-function regions are related to B2 lamellae. The difference in work function and electropositivity of B2/L12 supports the scratch behavior of the alloy as well. Higher adhesion for metal-on-metal wear has been reported for more electropositive alloys.34 Buckley's hypothesis37 suggests that chemically active metals (i.e., electron donors) are prone to strong adhesion as opposed to inert metals.38 Therefore, it can be concluded that the more electropositive nature of the B2 phase might have resulted in stronger adhesion to the cube-corner indenter and higher localized stresses (Fig. 5a). The surface degradation behavior of a two-phase E-HEA, AlCoCrFeNi2.1, was investigated. The wear volume loss was observed to increase linearly with increasing sliding duration (Fig. 3). The wear mechanism was initially abrasive and gradually transitioned to oxidative wear with prolonged sliding. The corrosion current density was comparable to that of 304 stainless steel (Fig. 7).
Corrosion was predominantly galvanic, with the B2 having more anodic character than the L12 phase. The corrosion behavior of the alloys was observed to have a close correlation with the relative work-function values of the two phases: L12 phase with higher work function showed higher corrosion resistance, while B2 phase with relatively lower work function was more susceptible to dissolution (Fig. 9). The difference in electronegativity from variation in work function explained the phase-specific friction and scratch behavior, as well. For overall comparison, the properties of each phase are summarized in Table 4. Table 4 A comparison of wear and corrosion properties of B2 and L12 The AlCoCrFeNi2.1 E-HEA was cast by melting weighted proportions of pure elements in a vacuum arc-melter. The alloy was homogenized by melting at least five times followed by removal of surface defects and imperfections. The alloy was cut, mounted, and polished to a mirror finish for microscopy and nano-indentation studies. The density of the alloy was calculated by determining volume using AccuPyc gas pycnometer and mass using Sartorius sensitive balance. FEI Quanta scanning electron microscope (SEM) with built-in EDS was used for microstructure evaluation. At least five area scans were performed at different regions to determine the composition of each phase. EBSD analysis was conducted using FEI Nova Nano SEM230. Transmission electron microscopy (TEM) imaging was performed using Philips EM-420 operating at 120 kV. TEM foil was prepared by FEI Nova NanoLab 200 focused ion beam (FIB) system. The average grain size was calculated using a linear mean intercept method from the SEM images of several grains using ImageJ digital micrograph analysis software. The wear and friction response of the alloy was evaluated using RTEC Universal Tribometer with a sliding reciprocating stage. Wear tests were carried out at a load of 10 N, stroke length of 1 mm, and reciprocating frequency of 5 Hz. Sliding duration was progressively increased from 2 to 120 min. Wear volume loss was quantified using white light interferometry. Nano-indentation was done using TI-Premier Triboindenter (Bruker) using a diamond Berkovich tip at room temperature. Maximum load used was 10,000 μN with loading time of 2 s, holding time of 5 s, and unloading time of 2 s. Raman spectroscopy (Nicolet Almega XR) with a near infrared laser (NIR, λ = 780 nm) was performed over the wear tracks to evaluate the nature of wear debris. To evaluate the phase-specific friction, a Pico-indenter (PI 88 Hysitron Inc.) was used inside a FEI Nova NanoLab 200 FIB microscope. A diamond cube-corner tip was used to perform the scratch across the two phases with a load of 3000 µN. At least five tests were performed in various regions to determine the repeatability for the COF. Corrosion resistance was evaluated using Gamry Ref3000 potentiostat. A three electrode system was used with saturated calomel electrode (SCE) as reference, Pt wire as counter, and the sample as working electrode. Corrosion behavior was evaluated in 3.5 wt.% (0.6 M) NaCl solution. OCP was determined by immersing the sample into solution with no external bias. Following a stable OCP, EIS was carried out from 10−2 to 106 Hz at a rate of 10 points per decade. The EIS measurements were carried out at 5 mV rms bias for external perturbation. 
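To make the EIS interpretation above easier to follow, the minimal Python sketch below evaluates the impedance of the modified Randles circuit used to fit the Nyquist data: the solution resistance Rs in series with the charge-transfer resistance Rct in parallel with a constant phase element whose impedance is [Q(jω)^n]^(−1), as defined earlier. The numerical parameter values are hypothetical placeholders chosen only to illustrate the calculation; they are not the fitted values reported in Table 3.

import numpy as np

def randles_cpe_impedance(freq_hz, Rs, Rct, Q, n):
    # Impedance of the modified Randles circuit used to fit the EIS data:
    # solution resistance Rs in series with the parallel combination of the
    # charge-transfer resistance Rct and a constant phase element whose
    # impedance is Z_CPE = [Q (j*omega)^n]^(-1).
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    y_cpe = Q * (1j * omega) ** n            # CPE admittance
    return Rs + 1.0 / (1.0 / Rct + y_cpe)    # Rs + (Rct || CPE)

# Frequency range used in the measurements (1e-2 to 1e6 Hz, 10 points/decade).
freqs = np.logspace(-2, 6, 81)

# Hypothetical parameter values for illustration only; the fitted values for
# the E-HEA and 304 stainless steel are those reported in Table 3.
Z = randles_cpe_impedance(freqs, Rs=10.0, Rct=1.0e6, Q=2.0e-5, n=0.9)

# Nyquist-plot coordinates: real part vs. negative imaginary part.
print(Z.real[:3], -Z.imag[:3])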
Following EIS measurement, cyclic polarization test was carried out by scanning the sample at a rate of 0.25 mV/s from −200 mV with respect to OCP up to stable pitting, and then the potential sweep direction was reversed. The pitting morphology was evaluated using scanning electron microscopy. Phase-specific electrochemical behavior was evaluated by a Princeton Applied Research SKP to determine the relative work function over an area of 100 µm2. A tungsten wire was used as a reference probe during SKP measurements. A high resolution topography map of the specimen was recorded to enable the use of constant height mode for SKP analysis. Work function was determined at a working distance of 50 µm with lab air (RH 55%) forming the capacitor between the W probe and working electrode. All data generated or analysed during this study are included in this published article. Further information is available on request from the authors. Ethics declarations The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Ayyagari, A. V., Gwalani, B., Muskeri, S., Mukherjee, S. & Banerjee, R. Surface degradation mechanisms in precipitation-hardened high-entropy alloys. NPJ Materials Degradation. 2, 1–10 (2018). Wang, R., Zhang, K., Davies, C. & Wu, X. Evolution of microstructure, mechanical and corrosion properties of AlCoCrFeNi high-entropy alloy prepared by direct laser fabrication. J. Alloys Compd. 694, 971–981 (2017). Lu, Y. et al. A promising new class of high-temperature alloys: eutectic high entropy alloys. Sci. Rep. 4, 1–5 (2014). Rogal, L., Morgiel, J., Świątek, Z. & Czerwiński, F. Microstructure and mechanical properties of the new Nb25Sc25Ti25Zr25 eutectic high entropy alloy. Mater. Sci. Eng. A 651, 590–597 (2016). He, F. et al. Designing eutectic high entropy alloys of CoCrFeNiNbx. J. Alloys Compd. 656, 284–289 (2016). Wani, I. S. et al. Ultrafine-grained AlCoCrFeNi2.1 eutectic high-entropy alloy. Mater. Res. Lett. 4, 174–179 (2016). Nene, S. S. et al. Enhanced strength and ductility in a friction stir processing engineered dual phase high entropy alloy. Sci. Rep. 7, 1–7 (2017). Qiu, Y., Thomas, S., Gibson, M. A., Fraser, H. L. & Birbilis, N. Corrosion of high entropy alloys. NPJ Materials Degradation 1, 1–18 (2017). Wu, J. et al. Adhesive wear behavior of AlxCoCrCuFeNi high-entropy alloys as a function of aluminum content. Wear 261, 513–519 (2006). Tong, C. et al. Mechanical performance of the AlxCoCrCuFeNi high-entropy alloy system with multiprincipal elements. Metall. Mater. Trans. A 36A, 1263–1271 (2005). Chou, Y., Yeh, J. & Shih, H. The effect of molybdenum on the corrosion behaviour of the high-entropy alloys Co1.5 CrFeNi1.5Ti0.5Mox in aqueous environments. Corros. Sci. 52, 2571–2581 (2010). Qiu, X., Zhang, Y., He, L. & Liu, C. Microstructure and corrosion resistance of AlCrFeCuCo high entropy alloy. J Alloys Compounds. 549, 195–199 (2013). Ayyagari, A., Hasannaeimi, V., Grewal, H. S. & Mukherjee, S. Corrosion, erosion and wear behavior of complex concentrated alloys: a review. Metals 8, 603–643 (2018). Lin, C.-M. et al. Evolution of microstructure, hardness, and corrosion properties of high-entropy Al0.5CoCrFeNi alloy. Intermetallics 19, 288–294 (2011). Lee, C., Chang, C., Chen, Y. Y., Yeh, J. W. & Shih, H. C. Effect of the aluminium content of AlxCrFe1.5MnNi0.5 high-entropy alloys on the corrosion behaviour in aqueous environments. Corros. Sci. 50, 2053–2060 (2008). 
Hsu, Y.-J., Chiang, W.-C. & Wu, J.-K. Corrosion behavior of FeCoNiCrCux high-entropy alloys in 3.5% sodium chloride solution. Mater. Chem. Phys. 92, 112–117 (2005). Li, L., Kim, D. Y. & Swain, G. M. Transient formation of chromate in trivalent chromium process (TCP) coatings on AA2024 as probed by Raman spectroscopy. J. Electrochem. Soc. 159, 326–333 (2012). Li, Y. et al. Identification of cobalt oxides with Raman scattering and fourier transform infrared spectroscopy. J. Phys. Chem. C 120, 4511–4516 (2016). Zhang, A., Han, J., Su, B., Li, P. & Meng, J. Microstructure, mechanical properties and tribological performance of CoCrFeNi high entropy alloy matrix self-lubricating composite. Mater. Des. 114, 253–263 (2017). Liu, Y. et al. Oxidation behavior of high-entropy alloys AlxCoCrFeNi (x = 0.15, 0.4) in supercritical water and comparison with HR3C steel. Trans. Nonferrous Met. Soc. China 25, 1341–1351 (2015). Ayyagari, A. et al. Reciprocating sliding wear behavior of high entropy alloys in dry and marine environments. Mater. Chem. Phys. 210, 162–169 (2018). Stachowiak, W. G. and Batchelor, A. W. Engineering Tribology 4th edn (Elsevier, Amsterdam, 2014). Gao, X. et al. Microstructural origins of high strength and high ductility in an AlCoCrFeNi2.1 eutectic high-entropy alloy. Acta Mater. 141, 59–66 (2017). Archard, J. Contact and rubbing of flat surfaces. J. Appl. Phys. 24, 981–988 (1953). Leyland, A. & Matthews, A. On the significance of the H/E ratio in wear control: a nanocomposite coating approach to optimised tribological behavior. Wear 246, 1–11 (2000). Pintaude, G. Introduction of the Ratio of the Hardness to the Reduced Elastic Modulus for Abrasion. Tribology-Fundamentals and Advancements, Gegner, J. (Ed.) InTech (2013). https://doi.org/10.5772/55470. Guo, J., Wang, H., Meng, F., Liu, X. & Huang, F. Tuning the H/E* ratio and E* of AlN coatings by copper addition. Surf. Coat. Technol. 228, 68–75 (2013). Zhang, L. et al. Study on the anodic film formation process of AZ91D magnesium alloy". Electrochim. Acta 52, 5325–5333 (2007). Fattah-alhosseini, A. & Vafaeian, S. Influence of grain refinement on the electrochemical behavior of AISI 430 ferritic stainless steel in an alkaline solution. Appl. Surf. Sci. 360, 921–928 (2016). Lee, C. P., Chen, Y. Y., Hsu, C. Y., Yeh, J. W. & Shih, H. C. Enhancing pitting corrosion resistance of AlxCrFe1.5MnNi0.5 high-entropy alloys by anodic treatment in sulfuric acid. Thin Solid Films 517, 1301–1305 (2008). Chen, Y., Duval, T., Hung, U., Yeh, J. & Shih, H. Microstructure and electrochemical properties of high entropy alloys—a comparison with type-304 stainless steel. Corros. Sci. 47, 2257–2279 (2005). Li, W. & Li, D. Variations of work function and corrosion behaviors of deformed copper surfaces. Appl. Surf. Sci. 240, 388–395 (2005). Li, W., Su, H. & Yue, J. Effects of crystallization on corrosion resistance and electron work function of Zr65Al7. 5Cu17. 5Ni10 amorphous alloys. Philos. Mag. Lett. 93, 130–137 (2013). Ayyagari, A., Hasannaeimi, V., Arora, H. & Mukherjee, S. Electrochemical and friction characteristics of metallic glass composites at the microstructural length-scales. Sci. Rep. 8, 906 (2018). Rohwerder, M. & Turcu, F. High-resolution Kelvin probe microscopy in corrosion science: scanning Kelvin probe force microscopy (SKPFM) versus classical scanning Kelvin probe (SKP). Electrochim. Acta 53, 290–299 (2007). Yee, S., Oriani, R. & Stratmann, M. Application of a kelvin microprobe to the corrosion of metals in humid atmospheres. J. Electrochem. 
Soc. 138, 55–61 (1991). Buckley, D. H. Surface Effects in Adhesion, Friction, Wear, and Lubrication (Elsevier, Cleveland, 1981). Michaelson, H. B. Relation between an atomic electronegativity scale and the work function. IBM J. Res. Dev. 22, 72–80 (1978). These authors contributed equally: Vahid Hasannaeimi, Aditya V. Ayyagari. Department of Materials Science and Engineering, University of North Texas, Denton, TX, 76203, USA: Vahid Hasannaeimi, Aditya V. Ayyagari, Saideep Muskeri, Riyadh Salloom & Sundeep Mukherjee. Center for Nanoscale Materials, Argonne National Laboratory, Lemont, IL, 60439, USA: Aditya V. Ayyagari. V.H. performed bulk corrosion experiments, SKP experiments, microscopy of corroded samples, and data analysis; A.V.A. performed accelerated corrosion and bulk wear experiments, microscopy of wear samples, and data analysis; S. Muskeri conducted nano-indentation and pico-scratch experiments; R.S. cast and processed the materials and performed SEM and TEM characterization; S. Mukherjee supervised the work. All the authors discussed and contributed to the writing of the manuscript. Correspondence to Sundeep Mukherjee.
The origin of rare alkali metals in geothermal fluids of southern Tibet, China: A silicon isotope perspective Wei Wang1, Hai-Zhen Wei ORCID: orcid.org/0000-0002-6658-792X2, Shao-Yong Jiang ORCID: orcid.org/0000-0003-3921-739X1,2, Hong-Bing Tan3, Christopher J. Eastoe4, Anthony E. Williams-Jones5, Simon V. Hohl2 & He-Pin Wu2 Scientific Reports volume 9, Article number: 7918 (2019) Geothermal waters from the Semi, Dagejia and Kawu hot springs in the Shiquanhe-Yarlung Zangbo geothermal field of southern Tibet (China) are highly enriched in rare alkali metals (RAM). However, the enrichment mechanism is still hotly debated. Here, we report the first silicon isotope data of these geothermal waters to unravel the origin of the extreme RAM enrichments. Sinter precipitation in the spring vents and water-rock interaction in the deep reservoir controlled both the silicon budget and silicon isotope fractionation. The rates of water-rock interaction and sinter precipitation at the three spring sites decrease in the sequences Semi > Kawu > Dagejia and Dagejia > Kawu > Semi, respectively. Silicon isotope fractionation during sinter precipitation (i.e. Δ30Siprecipitate-solution < −0.1‰) is less than that due to water-rock interaction (i.e. Δ30Sisolution-rocks at least as high as −0.47‰), which makes it possible to use the δ30Si signatures of springs to evaluate the intensity of water-rock interaction. Based on the available evidence, a conceptual model of RAM enrichment is proposed: (i) persistent magmatic activity in southern Tibet provided the initial enrichment of the RAM in host rocks and a heat source for the deep reservoirs of geothermal systems; (ii) the high Cl− content and long residence time (thousands of years) promote the leaching of RAM from the silicate host rocks. The Mediterranean-Himalayan geothermal belt in southern Tibet is one of the most active geothermal regions in the world. In this region, precipitates from hydrothermal springs display extreme enrichments in rare alkali metals (RAM) and boron, exemplified by rare geyserite Cs-deposits at the Dagejia, Semi and Gulu sites1. Geothermal waters at these sites are also characterized by abnormally high boron content (up to 450 ppm) and low δ11B values (−16.6 to −10.9‰)2, which is in stark contrast with the Yellowstone (USA) geothermal system, where the geothermal water contains 0.46 to 29.08 mg/L of boron with δ11B of −9.3 to +4.4‰3. The data for the Tibetan sites reflect the contribution of residual magma degassing, which is confirmed by the He isotope signature4. Owing to their highly incompatible nature, the RAM and boron are concentrated in the magma during partial melting and/or fractional crystallization, and consequently their abundances typically increase from primitive ultramafic rocks to highly evolved felsic rocks, and ultimately to magmatic volatiles. Significantly, the RAM in the magmatic volatile phase are enriched by a factor of 10^1 to 10^3 relative to the bulk melt composition5. Previous studies have proposed that collisional orogeny promoted partial melting of the upper crust of the Qinghai-Tibet Plateau and that fractional crystallization of the magma and the segregation of magmatic volatiles led to the formation of RAM-enriched pegmatite1 and other RAM-enriched intrusive phases that were leached by hydrothermal fluid6.
However, whether the RAM enrichment in the geothermal springs is from residual magma degassing or from leaching of RAM-enriched host rocks is still hotly debated due to the lack of convincing evidence for either hypothesis. Recent studies have suggested that silicon isotopes may serve as a sensitive tracer for bio-physicochemical and ore-forming processes in terrestrial and oceanic environments and the Earth's interior7,8,9. There have been very few studies of silicon isotopes behavior, however, in geothermal systems10,11,12,13,14, and these have focused mainly on the diagenetic and biogenic effects on silicon fluxes and isotopic compositions. Here, we present a systematic silicon isotope study of three geothermal spring sites near the Yarlung Zangbo Suture Zone, with the main aim of constraining the factors responsible for the exceptional enrichment of RAM in the geothermal water, through developing an understanding of the relationship between silicon isotope fractionation, reservoir temperature and water chemistry. The subsidiary aims are to determine whether siliceous sinter formation (and/or water-rock interaction) control silicon isotope fractionation in such systems and to investigate the potential for using δ30Si as an indicator of intensity of geothermal water-rock interaction. We conclude with a conceptual model for the enrichment of the RAM and boron in the geothermal waters. Geologically, the Tibetan plateau is located at the junction between the Eurasian and Indian plates, and consists of four east-west-trending continental terranes, from north to south: the South Kunlun Mountains-Bayan Har Mountain Terrane, the Qiangtang-Sanjiang Terrane, the Gandise-Nyainqentangha Mountains Terrane (Lhasa Terrane) and the Himalaya Terrane (Fig. 1a,b). The terranes are separated by three suture zones: the Jinshajiang River Suture Zone, the Bangongcuo-Dongqiao-Nuojiang Suture Zone and the Indus Yarlung Zangbo Suture Zone. The most intense geothermal activity occurs in the Lhasa Terrane near the Indus Yarlung Zangbo Suture Zone, on the southern margin of the Gangdise Magmatic Arc6. This part of the Terrane is dominated by the Cretaceous-Tertiary Gangdise Batholith and the Paleogene Linzizong volcanic succession which are accompanied by minor Triassic-Cretaceous volcano-sedimentary rocks15. The geothermal spring sites considered in this work are on the southern side of Gangdise Magmatic Arc, which is of Cretaceous-Tertiary age2. The temperatures of hot springs and the reservoir distributed along the Indus Yarlung Zangbo Suture Zone range from 35 to 89 °C and from 47 to 227 °C2, respectively. (a) A geological map of southern Tibet, China, showing the locations of the Dagejia, Semi, Kawu spring sites (modified after Tan et al.45). (b) The location of major tectonic structures in the region. (c–e) Geological maps of the areas surrounding the Dagejia, Kawu and Semi geothermal sites (modified after Zheng et al.1). The Dagejia site is located closest to the Gangdise arc, in an area underlain by unconsolidated Quaternary sediments (loess) and siliciclastic sedimentary rocks (conglomerate and sandstone) ranging from Tertiary to Jurassic in age, and Cretaceous magmatic rocks (mainly biotite monzonitic granite, hornblende monzonitic granite) (Fig. 1c). Modern siliceous sinter is widespread. The Kawu site is located to the south of Yarlung Zangbo Suture, in an area dominated by unconsolidated Jurassic-Tertiary sedimentary rocks. 
Outcrops are sparse in the vicinity of the site, exposing a single lithotype, namely Cretaceous gneissic monzonitic granite (Fig. 1d)16. The Semi site is located intermediately to the north of the Indus Yarlung Zangbo Suture Zone and, in contrast to the other geothermal systems, the area is underlain by a diverse assemblage of rock-types that includes ultramafic rocks, tourmaline granite, and late Triassic slate, quartzite and phyllite. The siliceous sinter is better developed than around the Kawu site, although it is thinner (≤5 m in thickness) and less extensive than that at the Dagejia site (Fig. 1e)1. The temperatures of the geothermal waters at the three sites were between 70 and 90 °C and the temperature of nearby stream water was 15 °C. Most of the waters have near-neutral pH (7.0 to 8.7); the highest pH was measured at the Semi site. The geothermal waters were classified into two types based on their chemical compositions (Table S1), namely a Na-HCO3− type with lower concentrations of SiO2, B, and RAM (at Dagejia) and a Na-Cl− type with higher concentrations of SiO2, B, and RAM (at Kawu and Semi). The highest concentrations of SiO2 (200 ppm), Cl (900 ppm), B (400 ppm), Li (30 ppm) and Cs (50 ppm) were measured in samples from the Semi spring; these are up to several to hundreds of times greater than in the adjacent stream waters. Modern siliceous sinter was only observed and sampled at the Dagejia site. The chemical compositions of the spring-vent sinter at the Dagejia site are given in Table S2. According to the XRD spectra, the siliceous sinter is of two types, Opal-A and Opal-CT. The contents of rare alkali metals (e.g., Cs) vary from 2000 ppm to 10000 ppm, hundreds of times higher than in the average upper crust. The δ30Si values of the geothermal water range from −0.67‰ to +0.25‰ (−0.37‰ to −0.21‰ at Kawu, +0.13‰ to +0.25‰ at Dagejia and −0.67‰ to −0.44‰ at Semi) and are generally more negative than those of geothermal water elsewhere (see Table S3, Fig. 2). The stream water displays a wider range of δ30Si values, i.e., from −0.79‰ to +0.54‰; the lowest value (−0.79‰) is for a sample from the Dagejia stream and the highest (+0.54‰) is for the Yarlung Zangbo river. The silicon isotope values of modern sinter at the Dagejia site are relatively enriched in the light 28Si isotope (from −1.80‰ to +0.09‰). The average silicon isotope value of Opal-A is −0.24 ± 0.24‰ (n = 2), and that of Opal-CT is −1.75 ± 0.10‰ (n = 2) (Table S3). Silicon isotope values in geothermal springs worldwide10,11,12,13,29,30,32 and from this study (southern Tibet). Water-rock interaction in the deep reservoir The temperature of a subsurface thermal water reservoir is a key parameter in evaluating geothermal resources17. A variety of chemical geothermometers have been proposed, including the quartz thermometer18, the chalcedony thermometer19,20, the Na-K thermometer21,22 and the Na-K-Ca thermometer23. Because of the rapid convection of geothermal water to the surface, the chemical geothermometers report the highest temperatures in the geothermal reservoirs. Reservoir temperatures estimated from these geothermometers are presented in Table S4. Sharp decreases in temperature and pressure near the discharge points result in over-saturation of the aqueous solution and the precipitation of silica. Consequently, the temperatures estimated by the quartz and chalcedony geothermometers, from 50 to 160 °C, indicate the discharge temperatures rather than the reservoir temperatures.
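For readers unfamiliar with these solute geothermometers, the short Python sketch below shows how such closed-form calibrations convert water chemistry into temperature estimates. The coefficients are the commonly quoted Fournier (1977) silica and Giggenbach (1988) Na-K calibrations, given here only for illustration; they are not necessarily the exact calibrations (refs 18–23) used to produce Table S4, and the input concentrations are invented examples rather than measured values from Table S1.

import math

def t_quartz_C(sio2_mg_per_kg):
    # Quartz geothermometer (conductive cooling, no steam loss).
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15

def t_chalcedony_C(sio2_mg_per_kg):
    # Chalcedony geothermometer.
    return 1032.0 / (4.69 - math.log10(sio2_mg_per_kg)) - 273.15

def t_na_k_C(na_mg_per_kg, k_mg_per_kg):
    # Na-K cation geothermometer (concentrations in mg/kg).
    return 1390.0 / (1.75 + math.log10(na_mg_per_kg / k_mg_per_kg)) - 273.15

# Illustrative input values only (not a measured sample from Table S1):
print(t_quartz_C(150.0), t_chalcedony_C(150.0), t_na_k_C(500.0, 50.0))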
Near-surface precipitation of calc-sinter (i.e. tufa) would similarly adversely affect application of the Na-K-Ca thermometer in determining reservoir temperatures. The degree of chemical equilibrium between groundwater solutes and reservoir rock is generally evaluated using the Na/1000-(K/100)-Mg^1/2 ternary diagram22, from which the applicability of cation geothermometers can be assessed. In Fig. 3, except for sample DGJ-17, which plots in the full equilibrium region A, the remaining geothermal water samples plot within the partial equilibrium region B. All the stream water samples plot compositionally near the Mg apex (region C) as immature waters. Partially and completely equilibrated waters can produce reliable reservoir temperatures from cation geothermometers24. The Semi spring has the highest reservoir temperature (~260 °C), followed by the Kawu spring (~250 °C) and the Dagejia spring (~220 °C) (Table S4). Distribution of aqueous samples on the Na/1000-K/100-Mg^1/2 ternary diagram. Water-rock reactions in deep reservoirs are controlled by numerous factors including mineral composition, temperature, pressure, and fracture spacing. In general, the dissolution rate of a mineral is given by the equation: $$\frac{d{c}_{i}}{dt}=\frac{{A}_{\theta }}{V}{\nu }_{\theta }{k}_{\theta }$$ assuming heterogeneous kinetics25. In this equation, kθ is the dissolution rate constant of species i from mineral θ, Aθ is the surface area of the mineral, V is the volume of solution surrounding the mineral, νθ is the stoichiometric content of substance i in mineral θ and Ci is the concentration of species i. The temperature dependence of kθ follows the Arrhenius equation: $${k}_{\theta }=A{e}^{-{E}_{a}/RT}\cdot {({a}_{{H}^{+}})}^{{n}_{\theta }}$$ where aH+ is the activity of hydrogen ions; nθ has values usually in the range of 0–1; Ea is the apparent activation energy of the dissolution reaction; R is the gas constant (8.314 J·mol−1·K−1) and T is the absolute temperature (K). From the T and pH at each spring site (Tables S1, S4), the rate constants of silicate mineral dissolution were derived by observing that the majority of Ea values fall within the range of −40 to −80 kJ/mol (a mean of ca. −60 kJ/mol) and nθ was taken to be −0.28, the value for feldspar at pH > 6 (ref. 25). As shown in Table 1, water-rock interaction was fastest at the Semi site; the rates of water-rock interaction (kθ) in the Semi reservoir are 1.3 and 7.6 times greater than those in the Kawu and Dagejia reservoirs, respectively. Strong correlations of the concentrations of soluble silica with kθ (an R2 value of 0.83) and of δ30Si with kθ (an R2 value of 0.94) are observed (Fig. 4a). The first correlation reflects effective leaching of silicate minerals due to elevated rates of water-rock interaction (kθ) in the reservoirs, which is also reflected in the strong positive correlations of B, Li, Rb and Cs concentrations with the reservoir temperature (Fig. 4c). The second correlation implies a direct response of δ30Si in the reservoirs to the kinetics of water-rock interaction, which is discussed below. Table 1 Rate constants of water-rock interaction in the spring reservoirs. (a) Correlations of soluble silicon concentration and δ30Si with the dissolution rate constant kθ. (b) Correlations of B and RAM concentrations with Cl contents in spring water. (c) Correlations of B and RAM concentrations with reservoir temperature. (d) δ30Sialtered rock and δ30Sifluid vs.
f (fraction of silicon remaining in the altered rocks) for different αrock-fluid values during water-rock interaction. (e) Correlations of Δ30Siprecipitate-solution vs. 1/T among different sinter minerals from this study and from previous studies13,40. (f) The variation of δ30Sisinter and δ30Sifluid with f (fraction of silicon remaining in the solution) during sinter precipitation for different αsinter-solution values (the solid circle and half-solid circle represent the spring samples and the sinter samples, respectively). As shown in Fig. S1, the reported δ18O and δD values of geothermal water from the Kawu, Dagejia, and Semi sites16,26,27, together with data for other geothermal waters in the region1,18, plot to the right of both the Global Meteoric Water Line (GMWL, δD = 8 × δ18O + 10) and a local meteoric water line for 5950 m above sea level (masl) (δD = 7.8 × δ18O + 8.7)26, indicating shifts in δ18O as a result of high-temperature water-rock interaction. The δ18O shift for the Semi site is larger than those for the Kawu and Dagejia sites, consistent with the Semi site having the highest reservoir temperature. All oxygen and hydrogen isotopic values of the geothermal waters plotted in Fig. S1 differ isotopically from those of 1990s rainwater from the Yarlung-Zangpo Gorge at 2450–4675 masl27 and from ice formed since AD 1864 on nearby Mt. Noijin Kangsang at 5950 masl26, consistent with deep circulation of ancient recharge28. At Semi, the annual precipitation and evaporation are 318.5 mm and 2553.0 mm respectively, and at Dagejia, 192.6 mm and 2269.1 mm respectively1. The intense evaporation exerts less influence on the hydrochemical properties of up-welling spring water than on stream water. At the Semi and Dagejia sites, the Cl/Br molar ratios exceed 1200, indicating an evaporite origin for Cl, whereas at the Kawu site, the Cl/Br ratios are lower (358–378, Table S1), possibly indicating a magmatic source. The Na/Cl molar ratios are close to unity at the Semi site, consistent with dissolution of halite as the dominant source of NaCl, which is supported by the presence of evaporites within Late Tertiary sandstone and conglomerate. In addition, the concentrations of B, Cs and the other RAM increase with the Cl content (Fig. 4b), suggesting that elevated Cl− concentrations may promote the leaching of RAM and boron from silicate minerals. On the other hand, this effect cannot be separated from those of temperature and pH in the present data set. The absence of a correlation of Cl/Br ratios with the enrichment of RAM and boron, however, indicates that the enrichment is independent of the source of chlorine. It is noteworthy that the RAM and boron concentrations at the Kawu site are markedly lower than expected from water-rock interaction at inferred values of parameters such as T and pH (Fig. 4b,c). This could signal the influence of other factors on the kinetics of water-rock interaction, such as mineral composition, fracture spacing (i.e., fracture width) and dislocation density of the host rocks, the viscosity of the fluid, and the hydrostatic pressure. Silicon isotope fractionation during water-rock interaction in deep reservoirs In Fig. 2, the δ30Si values of the Dagejia geothermal water (+0.13 to +0.28‰) fall within the range of other geothermal waters (−0.4 to +0.7‰), e.g., Jiaodong, China29, Tengchong, China30,31, Yellowstone, USA10, mid-ocean ridge hydrothermal water32, and Iceland11,12,13.
In the Na-Cl type geothermal water from our study area, negative δ30Si values (−0.37 to −0.21‰ for Kawu, −0.67 to −0.44‰ for Semi) are accompanied by high SiO2, Cl, B, and RAM concentrations (Fig. 4a). The δ30Si values at the Semi site are the lowest reported to date for any geothermal water (Fig. 2). A possible explanation for this is that water-rock interaction reached an unprecedentedly high intensity and led to extreme kinetic fractionation of Si isotopes, as discussed below. Silicon isotope fractionation associated with water-rock interaction is likely to be governed by kinetic effects. An alternative interpretation treats the isotope fractionation accompanying the leaching of silicon from the reservoir rock as a process approximating Rayleigh fractionation. As the 30Si-O bond in silicate minerals is stronger than the 28Si-O bond, as shown by Wu et al.33, lighter Si isotopes are released preferentially into the aqueous phase during water-rock interaction34, resulting in an increase of δ30Si in the altered rock as water-rock interaction intensifies. If the silicon isotope fractionation remains constant, δ30Si values in the coexisting water will decrease as the geothermal system evolves. In order to compare observed δ30Si values in discharging water with those in the reservoir host rock, it is necessary to estimate the δ30Si of the unaltered reservoir rock, which is inaccessible to sampling. This estimate was made using two approaches. In the first, we calculated δ30Si for a mixture of rock types based on the areal distribution of these rock-types around the springs, and in the second, we assumed that the δ30Si of the reservoir rock corresponded to that of the dominant rock-type in the region. As described above, the exposed rocks at each spring site are different. Granite is the major rock-type at the Kawu site, granite and siliciclastic sedimentary rocks dominate near the Dagejia site, and a more diverse assemblage (granite, ultramafic rocks, sedimentary rocks and metamorphic rocks) is present in the vicinity of the Semi site. The silicon isotopic compositions of the host rocks in the geothermal reservoirs were estimated using the balance equation: $${\delta }^{30}S{i}_{SR}={\delta }^{30}S{i}_{GR}\times w+{\delta }^{30}S{i}_{SER}\times x+{\delta }^{30}S{i}_{UR}\times y+{\delta }^{30}S{i}_{MR}\times z$$ where w, x, y and z correspond to the molar proportions of granite, siliciclastic sedimentary rock, ultramafic rock and metamorphic rock, respectively, and w + x + y + z = 1. The subscripted δ30Si values are published values for these rock-types. Based on the reported δ30Si values for granite (−0.5‰ to +0.3‰, average −0.2‰), siliciclastic sedimentary rock (−0.5‰ to +0.5‰, average −0.1‰), ultramafic rock (−0.7‰ to +0.4‰, with 90% in the range −0.4‰ to −0.2‰, average −0.3‰), and metamorphic rock (slate, −0.6‰ to +0.2‰, average −0.3‰; phyllite, −0.4‰ to 0.0‰, average −0.2‰)35, the weighted δ30SiSR for the three sites has a value of −0.21‰ at a 90% statistical probability, derived by iterative calculation for the study area (Fig. S2). The second approach makes use of the observation that the lithology in the Indus Yarlung Zangbo Suture Zone in the Lhasa Terrane is dominated by Mesozoic and Cenozoic granitic rocks36 and subvolcanic felsic porphyries37.
The δ30Si values of granitic rocks vary in the range −0.5‰ to +0.3‰ (the average and median values are −0.2‰), whereas those of porphyries vary in the range −0.3‰ to +0.4‰ (the average and median values are −0.1‰ and −0.2‰, respectively)35. This approach yielded a δ30Si value for the reservoir rocks of −0.20‰, which is almost identical to that estimated from the silicon isotopic compositions of the rocks exposed in the vicinity of the three hot spring sites. Considering the very high flow rates of the Semi, Dagejia and Kawu hot springs (9.5 × 10^4 m3·d−1, 7.9 × 10^6 m3·d−1 and 6.3 × 10^5 m3·d−1)1, the silicon isotope fractionation due to diffusion of hydrated silicon species should be negligible, although a fractionation factor 30/28α of 0.998, estimated from the empirical inverse power relation proposed by Richter et al.38 (Eq. 4), might apply in static situations. $${}^{30/28}\alpha =\frac{{D}_{30}}{{D}_{28}}\propto {(\frac{{m}_{28}}{{m}_{30}})}^{\beta }$$ where D30 and D28 are the diffusion coefficients of the respective silicon isotopes, m28 and m30 are the masses of the light and heavy isotopes ignoring any water of hydration, and β is the correction factor for monovalent ions, which varies from 0.01 to 0.06 (we used a value of 0.03)39. As a geothermal reservoir is an open system, the isotope fractionation accompanying the continuous leaching of silicon from silicate minerals can be treated as a fractional distillation under equilibrium conditions, described by the Rayleigh equation. From Fig. 4d, it is evident that the continuous leaching of silicon from silicate minerals during fluid circulation leads to simultaneous decreases of δ30Si in the spring water and increases of δ30Si in the altered rocks. According to the leaching-fractionation model, values of δ30Sispring < −0.2‰ (as at the Semi and Kawu sites) represent early alteration, whereas values of δ30Sispring > −0.2‰ reflect an advanced stage of alteration because of the higher δ30Si of the altered host rocks (as at the Dagejia site). In this interpretation, the RAM and boron are mainly transferred into the earliest, hottest geothermal water, which also has the highest Cl content. A silicon isotope fractionation factor, Δ30Sisolution-rocks, of −0.16 to −0.47‰ was roughly estimated for the kinetic fractionation between rock and water at the Semi and Kawu sites. Silicon isotope fractionation during sinter precipitation Silicon isotope fractionation during low-temperature precipitation of silicate minerals depends heavily on system parameters such as the precipitation rate and the solution chemistry. At low-temperature conditions (e.g., in groundwater and geothermal surface water), the minerals are enriched in 28Si, resulting in δ30Si values as low as −5.4‰ and −4.0‰, respectively13,14,40,41. Consequently, the residual solutions are enriched in 30Si through Rayleigh fractionation (Eqs 5–7).
$$\delta^{30}{\rm{Si}}_{\rm{precipitate}}=\delta^{30}{\rm{Si}}_{\rm{solution}}^{\rm{i}}+\Delta^{30}{\rm{Si}}_{\rm{precipitate-solution}}(1+\ln f)$$ $$f={C}_{\rm{solution}}/{C}_{\rm{initial\,solution}}$$ $$\Delta^{30}{\rm{Si}}_{\rm{precipitate-solution}}=1000\ln\alpha_{\rm{precipitate-solution}}=\delta^{30}{\rm{Si}}_{\rm{precipitate}}-\delta^{30}{\rm{Si}}_{\rm{solution}}$$ where δ30Siprecipitate and δ30Sisolutioni denote the Si isotope compositions of the SiO2 precipitates and the initial hydrothermal solution, respectively. The term Δ30Sisolid-solution (i.e. 1000lnαsolid-solution) denotes the fractionation between the hydrothermal fluid and the precipitated SiO2, and f denotes the fraction of silica remaining in solution, as expressed by Eq. 6. The terms Csolution and Cinitial solution are the silicon concentrations in geothermal water venting at the surface and in the hydrothermal fluid at high temperature in the deep reservoir, respectively. Assuming equilibrium, the value of Cinitial solution can be determined from the equilibrium constant for the reaction SiO2 (s) + 2H2O ↔ H4SiO4 at the typical reservoir temperature using Eq. 8 (ref. 42). The δ30Si value of the initial solution was determined from Eq. 9. $$\log K=-8.476-485.24\,{T}^{-1}-2.268\times {10}^{-6}\,{T}^{2}+3.068\log T$$ $$\delta^{30}{\rm{Si}}_{\rm{solution}}^{\rm{i}}=\delta^{30}{\rm{Si}}_{\rm{solution}}-\Delta^{30}{\rm{Si}}_{\rm{precipitate-solution}}\,\ln({C}_{\rm{solution}}/{C}_{\rm{initial\,solution}})$$ The silicon isotope fractionation factor, α, is related to temperature T (K) by the equation \(\alpha ={e}^{-\Delta E/RT}\) (ref. 30), which yields a linear relationship between Δ30Sisolid-solution and 1/T for each sinter mineral (Fig. 4e). At constant temperature, αsolid-solution decreases with decreasing crystallinity in the sequence αopal CT-solution > αopal A-solution > αamorphous silica-solution, reflecting differences in the activation energies, −52.1, −41.6 and −36.9 kJ·mol−1, of their respective formation reactions. As Opal-A was the first stable phase to crystallize from amorphous silica and was subsequently transformed into Opal-CT43, it follows that the δ30Si values of Opal-A provide the best basis for calculating δ30Sisolutioni of the hydrothermal fluid (Table S3). In geothermal systems, the rate of silica precipitation imposes a kinetic effect on silicon isotope fractionation between sinter and solution. This rate was calculated using Eqs 10–12 (ref. 13). $${\rm{rate}}=\frac{d[{H}_{4}Si{O}_{4}]}{dt}=k\left(\frac{A}{M}\right)\left(1-\frac{Q}{K}\right)$$ $$Q={a}_{{H}_{4}Si{O}_{4}}/({a}_{Si{O}_{2}}\,{a}_{{H}_{2}O}^{2})$$ $$\log k=-0.369-7.890\times {10}^{-4}\,T-3438/T$$ where Q and K are the activity product and the equilibrium constant of the reaction H4SiO4 ↔ SiO2 (s) + 2H2O, respectively (Eqs 8, 10), k is the rate constant, which depends on the temperature (Eqs 10, 12), A is the interfacial area and M is the mass of water. The A/M ratio is one of the key parameters for estimating the precipitation kinetics of amorphous silica.
The A/M ratio was estimated by assuming a flat, open stream channel with a uniformly shallow water depth, in which edge effects can be neglected because the depth is small relative to the surface area, and a constant specific volume of water (i.e., independent of the temperature variation). The resulting A/M ratio will be 0.1 if the units are expressed in meters and kilograms for 1 cm of water depth10,13. For this geometry, the precipitation rates are 3.9 × 10−12 to 5.2 × 10−12 mol·L−1·s−1 at the Dagejia site, 2.2 × 10−12 to 3.6 × 10−12 mol·L−1·s−1 at the Semi site and 3.9 × 10−12 to 4.1 × 10−12 mol·L−1·s−1 at the Kawu site (Table S5, Fig. S3). These rates are higher than the rates of 2.0 × 10−13 to 4.7 × 10−13 mol·L−1·s−1 for the Geysir Konungshver springs13. During sinter precipitation, the first step is the formation of SiO2·H2O gel as a result of the sharp drop in the pressure and temperature of the spring water as it vents at the surface. As the solubility of silicic acid increases with increasing pH, the higher pH of the Semi and Kawu springs compared to the Dagejia spring inhibited sinter precipitation. In addition, higher contents of alkali and/or alkaline earth metals tend to neutralize the negatively charged metastable SiO2 particles and promote the coagulation of SiO2·H2O gel. The capacity to enhance coagulation is related to radius and atomic mass in the sequence Cs > Rb > K > Na > Li43. The high RAM concentrations at the study sites lead to higher precipitation rates than in other geothermal systems, such as those in Iceland and the Cistern Spring in Yellowstone National Park, USA10,44. The isotopic fractionation during sinter formation was evaluated by comparing the predicted variations of δ30Sisolid and δ30Sifluid with f for different αsolid-solution values to the average calculated δ30Sisolutioni values of −0.055‰ for the Dagejia site and −0.34‰ (KW-3-6) for the Kawu site (Fig. 4f). The observed distribution of δ30Sisinter and δ30Sispring values corresponds consistently to an αsolid-solution value of ~0.9998–0.9999, showing that rapid coagulation of silica gel from saline solutions largely suppresses isotope fractionation. This explains why silicon isotope fractionation during sinter precipitation in Tibet is less than that in low-salinity geothermal systems in Iceland (αsolid-solution = 0.9993)13 and in experimental abiotic silica precipitation (αsolid-solution = 0.9996)30. Accordingly, the percentages of silicon removed from solution by precipitation at Kawu and Dagejia are 78% and 88%, respectively (Fig. 4f), consistent with the greater amount of siliceous sinter exposed at the Dagejia site. Integrated kinetic effects on silicon isotope signatures Silicon circulation in geothermal systems is a dynamic process, relating the source where silicon is leached from silicate minerals to the sinks where sinter is precipitated around geothermal springs. The rates of water-rock interaction and sinter precipitation follow the orders Semi > Kawu > Dagejia and Dagejia > Kawu > Semi, respectively. The very high dissolution rate and very low precipitation rate at Semi explain why this spring has the highest concentration of soluble Si and the most negative δ30Si. By analogy, the very high precipitation rate and comparatively low dissolution rate at Dagejia explain the very low concentration of soluble Si and the positive δ30Si signature of that spring, which are similar to those of geothermal springs elsewhere. The intermediate dissolution and precipitation rates at the Kawu site lead to δ30Si values closer to those of the host rocks.
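The chain of calculations described in this and the preceding sections (Eqs 2 and 5–12) can be followed with the Python sketch below. All numerical inputs (reservoir temperature and pH, vent temperature, dissolved SiO2, Ea ≈ 60 kJ/mol in magnitude, nθ = −0.28, A/M = 0.1 and Δ30Siprecipitate-solution = −0.1‰) are illustrative assumptions in the spirit of the text, not the site-specific values of Tables 1, S4 and S5.

import numpy as np

R = 8.314  # gas constant, J mol-1 K-1

def k_rel_water_rock(T_K, pH, Ea=60e3, n_theta=-0.28):
    # Relative dissolution rate constant (Eq. 2). The pre-exponential factor A
    # is unknown, so only ratios between sites are meaningful. Ea is taken as
    # 60 kJ/mol in magnitude (the text quotes a mean of ca. -60 kJ/mol) and
    # n_theta = -0.28 (feldspar at pH > 6).
    a_H = 10.0 ** (-pH)
    return np.exp(-Ea / (R * T_K)) * a_H ** n_theta

def log_K_silica(T_K):
    # Equilibrium constant of SiO2(s) + 2H2O <-> H4SiO4 (Eq. 8).
    return -8.476 - 485.24 / T_K - 2.268e-6 * T_K ** 2 + 3.068 * np.log10(T_K)

def log_k_precip(T_K):
    # Rate constant for amorphous-silica precipitation (Eq. 12).
    return -0.369 - 7.890e-4 * T_K - 3438.0 / T_K

def silica_rate(T_K, C_molal, A_over_M=0.1):
    # Eqs. 10-11 with Q approximated by the molal H4SiO4 concentration.
    # Positive values correspond to net dissolution and negative values to net
    # precipitation, because Eq. 10 is the rate of change of dissolved H4SiO4.
    K = 10.0 ** log_K_silica(T_K)
    k = 10.0 ** log_k_precip(T_K)
    return k * A_over_M * (1.0 - C_molal / K)

def rayleigh_sinter(d30Si_vent, C_vent, C_initial, Delta=-0.1):
    # Eqs. 5-7 and 9: back-calculate the initial-fluid d30Si and the
    # instantaneous precipitate d30Si for a fractionation Delta (permil).
    f = C_vent / C_initial
    d30Si_initial = d30Si_vent - Delta * np.log(f)
    d30Si_precip = d30Si_initial + Delta * (1.0 + np.log(f))
    return d30Si_initial, d30Si_precip

# Illustrative inputs only (not the site-specific values of Tables 1, S4, S5):
T_res = 260.0 + 273.15   # hypothetical reservoir temperature, K
pH_res = 8.5             # hypothetical reservoir pH
T_vent = 85.0 + 273.15   # hypothetical vent temperature, K
C_vent = 0.200 / 60.08   # ~200 mg/kg dissolved SiO2 expressed as mol/kg
C_init = 10.0 ** log_K_silica(T_res)  # equilibrium silica in the reservoir

print("relative k_theta:", k_rel_water_rock(T_res, pH_res))
print("silica rate at vent (mol kg-1 s-1):", silica_rate(T_vent, C_vent))
print("d30Si initial / precipitate:", rayleigh_sinter(-0.5, C_vent, C_init))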
In general, silicon isotope fractionation during sinter precipitation (Δ30Siprecipitate-solution is <−0.1‰) is less significant than that in water-rock interaction (Δ30Sisolution-rocks, at least as high as −0.47‰). This enables the δ30Si signatures of spring waters to be used to evaluate the impact of fluid temperature on the intensity of water-rock interaction in geothermal reservoirs. The mantle-derived Gangdese magmatic arc, with which geothermal activity in south Tibet is associated, was emplaced after the India-Asia plate collision and later underwent Oligocene metamorphic and anatectic reworking2. The partial melts from the enriched mantle wedge redistributed incompatible elements in the underthrust terrane, including the geothermal reservoir rocks of the study area. A conceptual model describing silicon cycling and isotope fractionation during circulation is given in Fig. 5. In the Lhasa Terrane, persistent magmatism provides continuous heat-flow that drives geothermal circulation. The δ11B and 3He signatures in the geothermal waters3,4 are consistent with a role for magma degassing in concentrating boron, and probably also the RAM. As a second factor, strong water-rock interaction between the geothermal fluid and granitic rocks appears to lead to the enrichment of RAM in the geothermal springs. The evidence presented above leads to the following general observations for the three geothermal systems considered in this study: (i) enrichment of the RAM correlates with the high Cl− content of the geothermal water and high reservoir temperature, factors that are essential in promoting mineral alteration and the leaching of RAM from the host rocks; (ii) the silicon isotope fractionation during precipitation of amorphous sinter is less significant than that during water-rock interaction, which allows the δ30Si signatures in the springs to be used to evaluate the intensity of water-rock interaction; (iii) the association of the highest RAM concentrations with the lowest δ30Si and the most extreme δ18O shift is consistent with the hypothesis that intense water-rock interaction favors accumulation of RAM in geothermal fluids; (iv) assuming that the evolution of δ30Si in the wall-rocks was due to Rayleigh fractionation, it follows that RAM and boron were removed early in the alteration; (v) the entire δ18O dataset26,27,45 suggests that the geothermal waters of this study most likely recharged thousands of years ago, allowing a large amount of time for water-rock reaction; (vi) sinter formation can further concentrate the RAM. A conceptual model of geothermal circulation, showing silicon cycling and associated silicon isotope fractionation. The δ30Si values and concentrations of dissolved Si in the geothermal waters of the study area have provided a new perspective on the kinetics of silicon migration between silicate minerals and geothermal water, which has allowed us to show that the δ30Si signatures of springs can be used to evaluate the intensity of water-rock interaction and the controls on RAM enrichment. The processes of water-rock interaction and silica precipitation, however, are more complex than envisaged. Firstly, the factors that control water-rock interaction extend far beyond equilibrium solution chemistry, and likely include the mineralogy and mechanical properties of the reservoir rocks (e.g., fracture spacing, permeability, and density) and the physical properties of the fluid (e.g., viscosity and pressure).
Secondly, the precipitation of silica appears to be a non-equilibrium process, resulting in deposits exceeding the size predicted by equilibrium silica solubility. This departure from equilibrium is likely due to the fact that precipitation begins with the nucleation of colloidal amorphous silica particles. In addition to the concentration of dissolved Si, temperature, pH and other cations (e.g., alkali and/or alkaline earth metals, Al3+) also affect the precipitation of silica and its subsequent transformation. These observations need to be addressed in future studies. Geothermal water and siliceous sinter were collected from three geothermal fields (Dagejia, Semi and Kawu) in southern Tibet, along the Indus-Yarlung Zangbo River Suture Zone. River water samples were collected from the Yalu Tsangpo River in the Semi area and the Changmaqu River in the Dagejia area. Silicon isotopes and elemental concentrations of rock and water samples were measured in the State Key Laboratory for Mineral Deposits Research, Nanjing University. Silicon isotope compositions of geological standard materials were measured in this study and compared to their recommended values (Table 2). A full description of the methods and analytical precisions (given as 2σ), including elemental and physicochemical parameters of aqueous and solid samples, crystal structure characteristics and silicon isotope analysis, is given in the following sections. Table 2 Silicon isotope compositions in the standard geological materials. Elemental and physicochemical parameter analysis of aqueous samples Physicochemical parameters of the aqueous samples (e.g. temperature, pH, electrical conductivity (EC) and total dissolved solids (TDS)) were measured immediately at the sampling sites with a calibrated portable multi-parameter analyzer (HQ40d, HACH America). Aqueous samples were stored in acid-washed, high-density polyethylene sampling bottles after being filtered through cellulose filters (0.45 µm) and, in the case of cation analysis, were acidified with 1 M HNO3 to avoid any precipitation. Cation concentrations were analyzed by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES, Agilent-710) with an analytical reproducibility of ±3% in the Public Technical Service Center of Nanjing Institute of Geology and Palaeontology, Chinese Academy of Sciences. Anion concentrations were measured on an ion chromatograph (IC, Thermo Fisher ICS-900) with an analytical reproducibility of ±3%, and trace elements were analyzed by Inductively Coupled Plasma Mass Spectrometry (ICP-MS, Element II, Thermo Fisher Finnigan) with an analytical reproducibility of ±5% at the State Key Laboratory for Mineral Deposits Research, Nanjing University. Major and trace element analysis of solid samples The major and trace element contents of the opal samples were measured by XRF and ICP-MS at the State Key Laboratory for Mineral Deposits Research, Nanjing University. The solid samples were crushed and powdered to 200-mesh using an agate mill. Major element compositions were determined by X-ray fluorescence spectroscopy (XRF) using an AXIOS Mineral Spectrometer, with an analytical uncertainty of <±5% (2σ), following the procedure of Norrish and Hutton46. Trace element concentrations were analyzed on an ICP-MS (Finnigan Element II). About 50 mg of powdered sample was dissolved in high-pressure Teflon containers using an HF + HNO3 acid mixture for 48 h at approximately 160 °C.
Rh was used as an internal standard to monitor signal drift during ICP-MS measurement. The analytical precision was better than ±5% for most trace and rare earth elements. Details of the analytical procedures are described by Gao et al.47.

XRD analyses of solid samples

Samples for X-ray diffraction (XRD; Bede D1 diffractometer, X'TRA, with CuKα radiation) were dried and ground to a grain size of about 200 µm. The analyses were performed over a 2θ interval between 3° and 51°, using a step of 0.02° and an integration time of 0.24 s/step. The XRD spectra were analyzed with Jade 5.0 software to identify the minerals in the solid samples.

Dissolution of solid samples for Si isotope measurement

Standard samples were fused with K2CO3 (SP, Aldrich-Sigma, 99.99%) in platinum crucibles for 150 min at 950 °C in a muffle furnace. After cooling in air, the fusion cake was dissolved in Milli-Q water (resistivity, 18.2 MΩ·cm) and acidified with 3 M HNO3. The sample solution was diluted to about 60 ppm SiO2 and the solution pH was adjusted to ~7.0 with HNO3 in order to prevent polymerization.

Separation and purification of silicon

Both samples and silicon isotope standard materials (including the bracketing standard) were purified using single-column chemistry. The poly-prep 10 mL column (BioRad, USA) is filled with 1.5 mL DOWEX AG 50W-X8 (200–400 mesh) cation exchange resin. The resin was regenerated before each sample loading by sequential washing with 3 mL 3 M HCl, 3 mL 6 M HCl, 3 mL 7 M HNO3, 3 mL 10 M HCl, 3 mL 6 M HCl and 3 mL 3 M HCl, followed by washing with Milli-Q water to neutral pH. At low pH (pH < 8), dissolved Si is present as non-ionic monosilicic acid Si(OH)4 rather than the anionic species H3SiO4−, and is therefore not retained by the resin. For each sample, an aliquot of solution containing about 60 μg Si was loaded onto a column. Samples were eluted in about 10 mL of Milli-Q water and, after purification, were diluted with Milli-Q water to 15 mL, giving about 4 ppm Si for silicon isotope analysis. The typical overall column recovery of Si was greater than 95%.

Silicon isotope analysis

Silicon isotopes were analyzed using the method of Georg et al.48 with a slight modification. The effects of other major anionic species (e.g., SO42−, Cl−) on silicon isotope values were evaluated in this work, and are described in detail in the Supplementary Information. Measurements were made on a Neptune Plus MC-ICP-MS (Thermo Fisher Finnigan, Germany) with an ESI PFA 50 μL/min nebulizer in a quartz cyclonic spray chamber. The 28Si+, 29Si+, and 30Si+ ions were collected by Faraday cups L3, central, and H3, respectively. Potential isobaric interference from 14N16O+ (m/z = 29.997989) on the 30Si+ ion (m/z = 29.97377) was eliminated by operating in medium-resolution mode (resolving power of ∼5000). The silicon content in both the sample solution and the standard material NBS-28 was kept at ∼4 ppm to maintain a 28Si+ signal of ∼9.0 V in wet plasma mode, and the Si signal was washed out to <30 mV between measurements. Instrumental mass bias was corrected by a sample-standard bracketing procedure (SSB). Results are expressed in delta notation (δ30Si) as the per mil (‰) deviation from the standard material NBS-28 (Eq. 13):

$$\delta^{30}\mathrm{Si}=\left\{\frac{(^{30}\mathrm{Si}/^{28}\mathrm{Si})_{\text{sample}}}{0.5\times\left[(^{30}\mathrm{Si}/^{28}\mathrm{Si})_{\text{std-1}}+(^{30}\mathrm{Si}/^{28}\mathrm{Si})_{\text{std-2}}\right]}-1\right\}\times 1000$$

The abbreviations std-1 and std-2 refer to the NBS-28 standard measured before and after each sample, respectively.
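For readers who want to check the arithmetic, the short Python sketch below (not part of the published procedure; the function name and the example beam ratios are invented for illustration) applies the delta-notation equation above to a single sample bracketed by two NBS-28 measurements.

def delta30Si_ssb(r_sample, r_std1, r_std2):
    # Per mil deviation of the sample 30Si/28Si ratio from the mean of the
    # two bracketing NBS-28 measurements (sample-standard bracketing, SSB).
    r_bracket = 0.5 * (r_std1 + r_std2)
    return (r_sample / r_bracket - 1.0) * 1000.0

# Hypothetical background-corrected ion-beam ratios, chosen near the natural
# 30Si/28Si abundance ratio of roughly 0.0335; the values are illustrative only.
std1, sample, std2 = 0.033530, 0.033510, 0.033528
print(round(delta30Si_ssb(sample, std1, std2), 2), "per mil")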
The international standard NBS-28 and the Chinese silicon isotope standard materials GBW-04421 and GBW-04422 (in quartz form) were prepared using the procedure described above. The measured silicon isotope compositions of the standard reference materials (GBW-04421, GBW-04422) agree well with previously reported values (Table 2), ensuring accurate analysis of silicon isotopes in this study. All estimates of reproducibility described in this paper are from replicated measurements (n ≥ 4, 95% confidence limit). The long-term instrumental reproducibility, expressed as the standard deviation for the isotopic reference material NBS-28, was ±0.06‰ (n = 8, 2σ).

Zheng, M. P. et al. A New Type of Hydrothermal Deposit: Cesium-Bearing Geyserite in Tibet. Geological Publishing House, Beijing (in Chinese) (1995). Zhang, Y. F. et al. A new geochemical perspective on hydrochemical evolution of the Tibetan geothermal system. Geochem. Int. 53, 1090–1106 (2015). Palmer, M. R. & Sturchio, N. C. The boron isotope systematics of the Yellowstone National Park (Wyoming) hydrothermal system: A reconnaissance. Geochim. Cosmochim. Ac. 54, 2811–2815 (1990). Hoke, L., Lamb, S., Hilton, D. R. & Poreda, R. J. Southern limit of mantle-derived geothermal helium emissions in Tibet: implications for lithospheric structure. Earth Planet Sc. Lett. 180, 297–308 (2000). Symonds, R. B. & Reed, M. H. Calculation of multicomponent chemical-equilibria in gas-solid-liquid systems: Calculation methods, thermochemical data, and applications to studies of high temperature volcanic gases with examples from Mount St. Helens. Am. J. Sci. 293, 758–864 (1993). Hou, Z. Q. et al. The uplifting processes of the Tibetan Plateau since 0.5 Ma B.P. Sci. China Ser. D 44, 35–44 (2001). Robert, F. & Chaussidon, M. A paleotemperature curve for the Precambrian oceans based on silicon isotopes in cherts. Nature 443, 969–972 (2006). Van den Boorn, S. H. J. M., van Bergen, M. J., Nijman, W. & Vroon, P. Z. Dual role of seawater and hydrothermal fluids in Early Archaean chert formation: evidence from silicon isotopes. Geology 35, 939–942 (2007). Poitrasson, F. & Zambardi, T. An Earth-Moon silicon isotope model to track silicic magma origins. Geochim. Cosmochim. Ac. 167, 301–312 (2015). Douthitt, C. B. The geochemistry of the stable isotopes of silicon. Geochim. Cosmochim. Ac. 46, 1449–1458 (1982). Opfergelt, S. et al. Quantifying the impact of freshwater diatom productivity on silicon isotopes and silicon fluxes: Lake Myvatn, Iceland. Earth Planet Sc. Lett. 305, 73–82 (2011). Opfergelt, S. et al. Riverine silicon isotope variations in glaciated basaltic terrains: implications for the Si delivery to the ocean over glacial-interglacial intervals. Earth Planet Sc. Lett. 369–370, 211–219 (2013). Geilert, S. et al. Silicon isotope fractionation during silica precipitation from hot-spring waters: evidence from the Geysir geothermal field, Iceland. Geochim. Cosmochim. Ac. 164, 403–427 (2015). Geilert, S., Pieter, Z. V. & van Bergen, M. J. Effect of diagenetic phase transformation on the silicon isotope composition of opaline sinter deposits of Geysir, Iceland. Chem. Geol. 433, 57–67 (2016). Ji, W. Q. et al. Zircon U-Pb chronology and Hf isotopic constraints on the petrogenesis of Gangdese batholiths, southern Tibet. Chem. Geol. 262, 229–245 (2009). Zhang, Q. et al. Water environmental effects of Kawu geothermal water in Sajia County, Tibet. Wat. Res. Protect. 31, 45–49 (2015). Gemici, U. & Filiz, S. Hydrochemistry of the Çeşme geothermal area in western Turkey. J. Volcanol.
Geoth. Res. 110, 171–187 (2001). Fournier, R.O. Silica in thermal waters: laboratory and field investigations: In: Proceedings of International Symposium on Hydrogeochemistry and Biogeochemistry, Tokyo. 132–139 (1973). Fournier, R. O. Chemical geothermometers and mixing models for geothermal systems. Geothermics 5, 41–50 (1977). Arnorsson, S., Gunnlaugsson, E. & Svavarsson, H. The chemistry of geothermal waters in Iceland. III. Chemical geothermometry investigations. Geochim. Cosmochim. Ac. 47, 567–577 (1983). Fournier, R. O. Geochemical and hydrological considerations and the use of enthalpy-chloride diagrams in the prediction of underground conditions in hot-spring systems. J. Volcanol. Geoth. Res. 5, 1–16 (1979). Giggenbach, W. F. Geothermal solute equilibria. Derivation of Na-K-Mg-Ca geoindicators. Geochim. Cosmochim. Ac. 52, 2749–2765 (1988). Fournier, R. O. & Truesdell, A. H. An empirical Na-K-Ca geothermometer for natural waters. Geochim. Cosmochim. Ac. 37, 1255–1275 (1973). Yildiray, P. & Umran, S. Geochemical assessment of Simav geothermal field, Turkey. Rev. Mex. Cienc. Geol. 25, 408–425 (2008). Lasaga, A. C. Chemical kinetics of water-rock interactions. J. Geophys. Res. 89, 4009–4025 (1984). Zhao, H. et al. Deuterium excess record in a southern Tibetan ice core and its potential climatic implications. Clim. Dynam. 38, 1791–1803 (2014). Gao, Z. Y., Wang, X. D. & Yin, G. Isotopic effect of runoff in the Yarlung Zangbo River. Chin. J. Geochem. 31, 309–314 (2012). Jasechko, S. et al. Late-glacial to late-Holocene shifts in global precipitation δ18O. Clim. Past. 11, 1375–1393 (2015). Xu, Y. T. et al. Silicon isotope study of thermal springs in Jiaodong Region, Shandong Province. Sci. China Ser. D. 44, 155–159 (2001). Li, Y. H., Ding, T. P. & Wan, D. F. Experimental study of silicon isotope dynamic fractionation and its application in geology. Chin. J. Geochem. 14, 212–219 (1995). Ding, T. P. et al. Silicon Isotope Geochemistry: Geological Publishing House, Beijing (1996). De La Rocha, C. L., Brezinski, M. A. & DeNiro, M. J. A first look at the distribution of the stable isotopes of silicon in natural waters. Geochim. Cosmochim. Ac. 64, 2467–2477 (2000). Wu, Z. Q., Huang, F. & Huang, S. C. Isotope fractionation induced by phase transformation: First-principles investigation for Mg2SiO4. Earth Planet Sc. Lett 409, 339–347 (2015). Georg, R. B., Zhu, C., Reynolds, B. C. & Halliday, A. N. Stable silicon isotopes of groundwater, feldspars, and clay coatings in the Navajo Sandstone aquifer, Black Mesa, Arizona, USA. Geochim. Cosmochim. Ac. 73, 2229–2241 (2009). Ding, T. P. et al. Geochemistry of Silicon Isotopes: published by Deutshe Nationalbibligrafie (2018). Coulon, C. et al. Mesozoic and Cenozoic volcanic rocks from central and southern Tibet: 39Ar-40Ar dating, petrological characteristics and geodynamical significance. Earth Planet Sc. Lett. 79, 281–302 (1986). Hou, Z. Q. et al. Origin of adakitic intrusives generated during mid-Miocene east-west extension in southern Tibet. Earth Planet Sc. Lett. 220, 139–155 (2004). Richter, F. M. et al. Kinetic isotopic fractionation during diffusion of ionic species in water. Geochim. Cosmochim. Ac. 70, 277–289 (2006). Eggenkamp, H. Theoretical and Experimental Fractionation Studies of Chloride and Bromide Isotopes. The Geochemistry of Stable Chlorine and Bromine Isotopes, 75–93 (2014). Geilert, S. et al. Silicon isotope fractionation during abiotic silica precipitation at low temperatures: inferences from flow-through experiments. Geochim. 
Cosmochim. Ac. 142, 95–114 (2014). Basile-Doelsch, I., Meunier, J. D. & Parron, C. Another continental pool in the terrestrial silicon cycle. Nature 433, 399–402 (2005). Gunnarsson, I. & Arnorsson, S. Amorphous silica solubility and the thermodynamic properties of H4SiO4 in the range of 0 to 350 C at Psat. Geochim. Cosmochim. Ac. 64, 2295–2307 (2000). Feng, G. X. et al. Rare gas, hydrogen, rare alkali elements. Science Press, Beijing (1984). Chen, X. Y., Chafetz, H. & Lapen, T. Silicon isotope fractionation between hot spring waters and associated siliceous sinters under natural conditions. Goldschmidt (Abstract) (2017). Tan, H. B. et al. Understanding the circulation of geothermal waters in the Tibetan Plateau using oxygen and hydrogen stable isotopes. Appl. Geochem. 51, 23–32 (2014). Norrish, K. & Hutton, J. T. An accurate X-ray spectrographic method for the analysis of a wide range of geological samples. Geochim. Cosmochim. Ac. 33, 431–453 (1969). Gao, J. F. et al. Analysis of trace elements in rock samples using HR-ICP-MS. J. Nanjing Univ. (Nat. Sci.). 39, 844–850 (2003). Georg, R. B., Reynolds, B. C., Frank, M. & Halliday, A. N. New sample preparation techniques for the precise determination of the Si isotope composition of natural samples using MC-ICP-MS. Chem. Geol. 235, 95–104 (2006). Wan, D. F., Li, Y. H. & Song, H. B. Preparation of silicon isotopic standard materials (in Chinese with English abstract). Ac. Geosci. Sinica. 18, 330–335 (1997).

This research was financially supported by the National Key R&D Program of China (no. 2017YFC0602405) and the National Natural Science Foundation of China (Nos 41673001, 41872074, 41422302).

Author affiliations:
State Key Laboratory of Geological Processes and Mineral Resources, School of Earth Resources, China University of Geosciences, Wuhan, 430074, P.R. China: Wei Wang & Shao-Yong Jiang
State Key Laboratory for Mineral Deposits Research, School of Earth Sciences and Engineering, Nanjing University, Nanjing, 210023, P.R. China: Hai-Zhen Wei, Shao-Yong Jiang, Simon V. Hohl & He-Pin Wu
School of Earth Sciences and Engineering, Hohai University, Nanjing, 210098, P.R. China: Hong-Bing Tan
Department of Geosciences, University of Arizona, Tucson, Arizona, 85721, United States: Christopher J. Eastoe
Department of Earth and Planetary Sciences, McGill University, Montreal, H3A 0E8, Canada: Anthony E. Williams-Jones

Author contributions: H.Z.W. and S.Y.J. conceived of the whole project; W.W. performed the experimental analysis and drafted the manuscript; H.B.T., S.V.H. and H.P.W. collected samples from three field trips in the study areas; C.J.E. and A.E.W. enhanced the interpretation in the manuscript; all authors reviewed the manuscript.

Correspondence to Hai-Zhen Wei, Shao-Yong Jiang or Hong-Bing Tan.

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information: a final clean version of the Supplementary Information accompanies this article.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article: Wang, W., Wei, H., Jiang, S. et al. The origin of rare alkali metals in geothermal fluids of southern Tibet, China: A silicon isotope perspective. Sci Rep 9, 7918 (2019). https://doi.org/10.1038/s41598-019-44249-5
CommonCrawl
Session code: pl
Session type: Plenary
All available abstracts

Monday, Jul 23 [symposia auditorium]
11:00 Gilles Brassard (Université de Montréal), All no-signalling theories are local-realistic
14:00 Subir Sachdev (Harvard University), Solvable models of quantum matter without quasiparticles
Tuesday, Jul 24 [symposia auditorium]
09:00 Thibault Damour (Institut des Hautes Études Scientifiques), Gravitational Waves and Binary Black Holes
10:30 Lai-Sang Young (New York University), Comparing chaotic and random dynamical systems
Wednesday, Jul 25 [symposia auditorium]
09:00 F. Duncan M. Haldane (Princeton University), Flux attachment and non-commutative geometry in the fractional quantum Hall effect: some open mathematical problems
10:30 Anne-Laure Dalibard (Université Pierre et Marie Curie), Recent advances in fluid boundary layer theory
11:30 Alessandro Giuliani (Università di Roma Tre), Universal fluctuations in interacting dimers
Thursday, Jul 26 [symposia auditorium]
09:00 John Cardy (University of California), The $T\overline T$ deformation of quantum field theory
10:30 Slava Rychkov (Institut des Hautes Études Scientifiques), Conformal Field Theory and Critical Phenomena in $d=3$
11:30 Rupert Frank (LMU Munich), From the liquid drop model for nuclei to the ionization conjecture for atoms
Friday, Jul 27 [symposia auditorium]
09:00 Masatoshi Noumi (Kobe University), Elliptic hypergeometric functions and elliptic difference Painlevé equation
10:30 Jean-Christophe Mourrat (École Normale Supérieure Paris), Quantitative stochastic homogenization
11:30 Fabio Toninelli (Université Claude Bernard Lyon 1), (2+1)-dimensional Stochastic Interface Dynamics
Saturday, Jul 28 [symposia auditorium]
09:00 Edward Witten (Institute for Advanced Study), Open and Closed Topological Strings In Two Dimensions
10:30 Richard Kenyon (Brown University), Analytic limit shapes for the 5 vertex model
11:30 Lisa Jeffrey (University of Toronto), Higgs bundles and the triple reduced product

Gilles Brassard
All no-signalling theories are local-realistic
It is generally believed that experimental violations of Bell's inequalities, especially the recent so-called loophole-free experiments, provide evidence that quantum theory cannot be both local and realistic. We demonstrate to the contrary that all reversible-dynamics no-signalling operational theories (including unitary quantum theory) can be given a local-realistic interpretation. Thus, we answer in the negative the 1935 question of Einstein, Podolsky and Rosen: Quantum-mechanical description of physical reality cannot be considered complete (at least not the standard Copenhagen formulation). And moreover yes, it can be completed! However, we also demonstrate that the standard Everettian view, according to which the universal wavefunction is a complete representation of the universe, must be abandoned if locality is postulated as a metaphysical principle. Joint work with Paul Raymond-Robichaud. Based on arXiv:1710.01380 and arXiv:1709.10016 [quant-ph].
Scheduled time: Monday, July 23 from 11:00 to 12:00
Location: Symposia Auditorium

Subir Sachdev
Solvable models of quantum matter without quasiparticles
I will describe mathematical aspects of the Sachdev-Ye-Kitaev (SYK) class of models of interacting fermions. The fermions can occupy any of $N$ quantum states ($N \rightarrow \infty$), and have random all-to-all $q$-fermion interaction terms in the Hamiltonian ($q \geq 4$ and even). The low energy excitations of these models cannot be expressed in a quasiparticle basis, but remarkably many aspects are exactly solvable. I will describe the computation of the low temperature free energy, and the many-body density of states at low energy. There is a non-vanishing entropy in the zero temperature limit. The density of states is determined by a path integral of a quantum gravity theory in two-dimensional anti-de Sitter space. Applications to `strange metal' states of correlated electron materials will be briefly mentioned.
Slides: 1400_sachdev_symposia_mondayr.pdf

Thibault Damour
Institut des Hautes Études Scientifiques
Gravitational Waves and Binary Black Holes
The recent discovery of several gravitational wave events by the two interferometers of the Laser Interferometer Gravitational-Wave Observatory (LIGO), and by the Virgo interferometer, has brought the first direct evidence for the existence of black holes, and has also been the first observation of gravitational waves in the wave-zone. The talk will review the theoretical developments on the motion and gravitational radiation of binary black holes that have been crucial in interpreting the LIGO-Virgo events as being emitted by the coalescence of two black holes.
Scheduled time: Tuesday, July 24 from 09:00 to 10:00

Lai-Sang Young
Comparing chaotic and random dynamical systems
In this talk I will compare and contrast (deterministic) chaotic dynamical systems and their stochastic counterparts, i.e. when small random perturbations are added to such systems to model uncontrolled fluctuations. Three groups of results, some old and some new, will be discussed. The first has to do with how deterministic systems, when sufficiently chaotic, produce observations resembling those from genuinely random processes. The second compares the ergodic theories of chaotic systems and of random maps (as in stochastic flows of diffeomorphisms generated by SDEs). One will see that results on SRB measures, Lyapunov exponents, entropy, fractal dimension, etc. are all nicer in the random setting. I will finish by suggesting that to improve the applicability of existing theory of chaotic systems, a little bit of random noise can go a long way.

F. Duncan M. Haldane
Flux attachment and non-commutative geometry in the fractional quantum Hall effect: some open mathematical problems
"Holomorphic states" play a key role in model many-body fractional quantum Hall states, such as the Laughlin states. The origin of holomorphic structure is usually (incorrectly) explained as being a special property of the "lowest Landau level", but in fact derives from the non-commutative geometry of guiding centers of Landau orbits in any Landau level. The complex structure defines a metric that describes the shape of "flux attachment", which is a hidden variational parameter of the Laughlin state.
The Laughlin and other even-more-interesting model states such as the non-Abelian Moore-Read and Read-Rezayi states have rich mathematical structure (for example they are Jack polynomials on genus-0 manifolds) but so far most quantitative information has come from numerical studies. There are a number of challenging problems that mathematical physics could usefully tackle. For example, the Laughlin states are the highest density states in the kernel of a "pseudopotential model" with close analogies to the AKLT spin chain. Numerical studies clearly show that its excitation gap remains finite in the thermodynamic limit, but while a lower bound to the gaps of the AKLT model was found, this is lacking for the model for which the Laughlin state is the ground state. Other issues such as a modular-invariant formalism on the torus, and a Heisenberg as opposed to Schrödinger formulation appropriate for non-commutative geometry, will be discussed, if time permits.
Scheduled time: Wednesday, July 25 from 09:00 to 10:00

Anne-Laure Dalibard
Université Pierre et Marie Curie
Recent advances in fluid boundary layer theory
In the past few years, theoretical progress has been made in several directions in the understanding of fluid boundary layers: well-posedness and ill-posedness results for the time-dependent Prandtl equation and its variants, separation in time-dependent and stationary settings... In this talk, I will review these results and present some potential next steps.
Slides: 1030_dalibard_symposia_wednesday.pdf

Alessandro Giuliani
Università di Roma Tre
Universal fluctuations in interacting dimers
In the last few years, the methods of constructive Fermionic Renormalization Group have successfully been applied to the study of the scaling limit of several two-dimensional statistical mechanics models at the critical point, including: weakly non-planar 2D Ising models, Ashkin-Teller, 8-Vertex, and close-packed interacting dimer models. In this talk, I will focus on the illustrative example of the interacting dimer model and review some of the universality results derived in this context. In particular, I will discuss a proof of the massless Gaussian free field (GFF) behavior of the height fluctuations. It turns out that GFF behavior is connected with a remarkable identity (`Haldane relation') between an amplitude and an anomalous critical exponent, characterizing the large distance behavior of the dimer-dimer correlations. Based on joint works with V. Mastropietro and F. Toninelli.
Slides: 1130_giuliani_symposia_wednesday.pdf

John Cardy
The $T\overline T$ deformation of quantum field theory
The $T\overline T$ deformation is a modification of local 2d QFT at short distances which is in some sense solvable. I argue that this is because it corresponds to coupling the theory to a random metric whose action is topological. Under the deformation, partition functions satisfy linear diffusion-type equations which describe a kind of Brownian motion in the moduli space of the world sheet manifold.
Slides: 0900_cardy_symposia_thursday.pdf
Scheduled time: Thursday, July 26 from 09:00 to 10:00

Slava Rychkov
Institut des Hautes Études Scientifiques
Conformal Field Theory and Critical Phenomena in $d=3$
I will review the recent progress in understanding critical phenomena in $d=3$ dimensions using the conformal bootstrap approach. In particular, I will discuss results about the critical point of the 3d Ising Model and the $O(N)$ models.
Slides: 1030_slava_rychkov_thursday5.pdf

Rupert Frank
LMU Munich
From the liquid drop model for nuclei to the ionization conjecture for atoms
The liquid drop model is an isoperimetric problem with a competing non-local term. It was originally introduced in the nuclear physics literature in 1930 and has received a lot of attention recently as an interesting problem in the calculus of variations. We discuss some new results and open problems. We show how the insights from this problem allowed us to prove the ionization conjecture in a certain model for an atom in density functional theory.
Slides: 1130_frank_symposia_thursday.pdf

Masatoshi Noumi
Kobe University
Elliptic hypergeometric functions and elliptic difference Painlevé equation
Elliptic hypergeometric functions are a new class of special functions that have been developed over the past two decades. In this talk I will give an overview of various aspects of elliptic hypergeometric series and integrals, with emphasis on connections with integrable systems including the elliptic difference Painlevé equation.
Slides: 900_noumi_symposia_friday.pdf
Scheduled time: Friday, July 27 from 09:00 to 10:00

Jean-Christophe Mourrat
École Normale Supérieure Paris
Quantitative stochastic homogenization
Over large scales, many disordered systems behave similarly to an equivalent "homogenized" system of simpler nature. A fundamental example of this phenomenon is that of reversible diffusion operators with random coefficients. The homogenization of these operators has been well-known since the late 70's. I will present recent results that go much beyond this qualitative statement, reaching optimal rates of convergence and a precise description of the next-order fluctuations. The approach is based on a rigorous renormalization argument and the idea of linearizing around the homogenized limit.
Slides: 1030_mourrat_symposia_friday.pdf

Fabio Toninelli
Université Claude Bernard Lyon 1
(2+1)-dimensional Stochastic Interface Dynamics
The goal of this talk is to discuss large-scale dynamical behavior of discrete interfaces. These stochastic processes model diverse statistical physics phenomena, such as interface growth by random deposition or the motion, due to thermal fluctuations, of the boundary between coexisting thermodynamic phases. While most known rigorous results concern (1+1)-dimensional models, I will present some recent ones in dimension (2+1). On the basis of a few concrete models, I will discuss both: (1) two-dimensional interface growth and the so-called Anisotropic KPZ universality class; and (2) reversible interface dynamics and the emergence of hydrodynamic limits, in the form of non-linear parabolic PDEs.
Slides: 1130_toninelli_symposia_friday.pdf

Edward Witten
Institute for Advanced Study
Open and Closed Topological Strings In Two Dimensions
Several decades ago, three parallel theories of two-dimensional quantum gravity were developed, involving random matrices; Liouville theory coupled to matter; and topological field theory. The first two approaches have fairly straightforward extensions to the case of quantum gravity on a two-manifold with boundary, but the third does not. However, an extension of the third approach was discovered relatively recently by Pandharipande, Solomon, and Tessler, with later work by Buryak and Tessler. In this talk (based on work with R. Dijkgraaf), I will explain their construction in a more physical language.
Slides: 900am_edward_witten_saturday.pdf
Scheduled time: Saturday, July 28 from 09:00 to 10:00

Richard Kenyon
Analytic limit shapes for the 5 vertex model
This is joint work with Jan de Gier and Sam Watson. The Bethe Ansatz is a very old technique, but using some new tools inspired by conformal invariance we can make progress on both the limit shape phenomenon and fluctuations for models beyond free fermionic models. In particular, using the Bethe Ansatz we find an explicit expression for the free energy of the five vertex model. The resulting Euler-Lagrange equation can be reduced to an equation generalizing the complex Burgers' equation. We show how to solve this equation, giving analytic parameterizations for limit shapes.
Slides: 1030_kenyon_symposia_saturdayr.pdf

Lisa Jeffrey
Higgs bundles and the triple reduced product
(Joint work with Jacques Hurtubise, Steven Rayan, Paul Selick and Jonathan Weitsman) We identify the symplectic quotient of the product of three orbits in SU(3) with a space of Higgs bundles over the 2-sphere with three marked points, where the residues of the Higgs fields at the marked points are constrained to lie in the three coadjoint orbits. By considering the spectral curves for the Hitchin system, we identify the moment map for a Hamiltonian circle action on the symplectic quotient. We use the earlier work of Adams, Harnad and Hurtubise to find Darboux coordinates.
Slides: 1130_jeffrey_symposia_saturday.pdf
CommonCrawl
Complex Variable Book Suggestion

What book should I choose to learn complex analysis as a physics undergrad? I only want to use one book which will contain everything I need.
Tags: resource-recommendations, education, mathematics, complex-numbers, analyticity
Asked by Shaikot Jahan Shuvo

Comment: superoles.files.wordpress.com/2015/09/… – Count Iblis, Sep 25 '17 at 21:58

I used Visual Complex Analysis by T. Needham. Not only does this book introduce the reader to the intricacies of complex analysis, it also gives a very intuitive picture and visual reasoning for the subject. Well written, concise, and clear.

I'd recommend Complex Variables and Applications, by Ruel V. Churchill & James Ward Brown. It's known as "Churchill". It starts from the very beginning (what complex numbers are) but it quickly steps into analytic functions, integrals and series. It covers everything quickly but in enough depth. I'd say it's the main book for physicists on this subject.
FGSUZ

My course at the TU Delft used Applied Complex Variables for Scientists and Engineers by Yue Kuen Kwok. I liked it because, as such books go, it is relatively short, quite readable, and full of examples and problems. What I especially liked is that it always stays very close to real applications but refuses to sacrifice rigor to do this. I also liked the pacing. Coming into section 1 you are introduced from scratch to complex numbers, with defined terms like "modulus" and "argument". You are shown how they densely summarize the cosine and sine rules, how they can be used for electrical circuits (impedances, basically), and their connection to stereographic projections. In section 2 you start to learn how to differentiate complex numbers and you are introduced to the Cauchy-Riemann conditions and the notion of an analytic function -- but also how they can be used to solve the heat equation and other places where harmonic functions are useful. Then section 3 gently guides the reader to the correct complex understanding of the exponential function, the logarithm, the trigonometric functions, and the hyperbolic trigonometric functions, as well as fractional (and real) powers. The Riemann surfaces that a multivalued function is "really" defined over are discussed, as are the "branch cuts" that slice-and-dice them down into the more familiar "branch sheets" that work as little well-behaved complex planes. Only after all of this gradual buildup do we get to the really "intense" stuff in section 4 about complex integration, the Cauchy integral theorem and contour deformation, the Cauchy integral formula and its implication that complex-differentiable-once (in a neighborhood) implies infinitely-often-complex-differentiable (in that neighborhood), and the theorems of Liouville (analytic over all of $\mathbb C$ = either unbounded or constant) and of maxima (analytic on a bounded domain = maximum is on the boundary). There are some applications to vector fields as well in there.
As if aware that our minds are blown, section 5 returns to the more tame idea of Taylor series and extends it to Laurent series and their convergence, and to analytically continuing a power series outside of its radius of convergence to cover its entire Riemann surface, and section 6 returns to the Cauchy integral theorem with the residue calculus and the classification of poles, so there is some time for the dense mathematics to sink in. One recurring theme is trying to describe 2D fluid flow. There is also some discussion in these chapters about Fourier transforms and the like, with practice on adding contours that integrate to zero so that we can use the contour-deformation tricks that Cauchy gives us to get sums of residues. Section 7 was somewhat forgettable but introduced the Laplace transform, which was nice, but section 8 on conformal mappings actually became very useful for me later on, as it turns out that the mathematics of bilinear transformations discussed in section 8.2 can be reappropriated to describe proper Lorentz transformations -- in 2-spinor calculus you find out that every Lorentz transform acts on the incoming and outgoing light rays from an observer as a bilinear transform of a point on the complex plane; this point turns out to be a stereographic projection (see section 1!) of its location on the sphere of the sky. You get a really slick derivation both that accelerating observers must see the distant stars and galaxies all "crowd into" one point in the sky, and that their emitted light must "crowd into" the direction that they're going (so-called "relativistic beaming"). That was beyond the scope of the course, but the point is that I've seen this mathematics recur in a couple of places now. (The latest was a use in continued-fraction calculators.)
CR Drost
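To see the bilinear-transformation point concretely, here is a minimal numerical sketch (not from Kwok's book; the 0.6c boost and the variable names are just for the demo). It checks that the Möbius dilation ζ → kζ with k = √((1−β)/(1+β)), acting on the stereographic coordinate ζ = tan(θ/2), reproduces the standard aberration formula cos θ' = (cos θ + β)/(1 + β cos θ), so angles measured from the direction of motion shrink -- the "crowding" described above.

import numpy as np

beta = 0.6                                     # hypothetical boost speed as a fraction of c
k = np.sqrt((1 - beta) / (1 + beta))           # Moebius dilation factor: zeta -> k * zeta

theta = np.linspace(0.01, np.pi - 0.01, 7)     # angles from the boost direction
zeta = np.tan(theta / 2)                       # stereographic coordinate (phi = 0 slice)

theta_moebius = 2 * np.arctan(k * zeta)        # angle after the bilinear (Moebius) map
theta_aberration = np.arccos((np.cos(theta) + beta) / (1 + beta * np.cos(theta)))

print(np.allclose(theta_moebius, theta_aberration))   # True: the two computations agree
print(np.round(np.degrees(theta), 1))                  # original angles
print(np.round(np.degrees(theta_moebius), 1))          # boosted angles, pulled toward 0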
CommonCrawl