Some reflections on the use of inappropriate comparators in CEA José Antonio Sacristán ORCID: orcid.org/0000-0002-7300-93001,2, José-María Abellán-Perpiñán3, Tatiana Dilla2,4, Javier Soto4 & Juan Oliva5

Although the choice of the comparator is one of the aspects with the greatest effect on the results of cost-effectiveness analyses, it is one of the least debated issues in international methodological guidelines. The inclusion of an inappropriate comparator may introduce biases into the outcomes and recommendations of an economic analysis. Although the rules for cost-effectiveness analyses of sets of mutually exclusive alternatives have been widely described in the literature, in practice they are hardly ever applied. In addition, there are many cases where the efficiency of the standard of care has never been assessed, or where the standard of care has been shown to be cost-effective with respect to a non-efficient option. In all these cases the comparator may lie outside the efficiency frontier, so the result of the CEA may be biased. Through some hypothetical examples, the paper shows how the complementary use of an independent reference may help to identify potentially inappropriate comparators and inefficient uses of resources.

The aim of cost-effectiveness analysis (CEA) of health care programmes is to help policy makers to allocate scarce resources among available alternatives in order to maximize health outcomes [1]. Additional costs generated by one intervention over another are compared to the additional quality-adjusted life-years (QALYs) yielded, in the form of an incremental cost-effectiveness ratio (ICER). Decision rules have been developed to maximize the number of QALYs provided by health care interventions subject to a finite budget [2, 3]. According to the "fixed budget rule" [4] or "league table" approach [5], health care interventions are ranked in increasing order of ICER and then successively included in the health benefit basket or national health insurance scheme until the budget is exhausted. The ICER of the least cost-effective intervention that is adopted indicates the "critical ratio" [6] or cost-effectiveness threshold, representing the opportunity cost of funding new programmes. In contrast, according to the "fixed ratio rule" [4] or "threshold approach" [7], a new intervention is adopted if its ICER does not exceed a certain cost-per-QALY-gained threshold or fixed price cut-off point. Both decision rules coincide if the budget implicitly determined by the "fixed ratio rule" is the same as the budget constraint assumed in the "fixed budget rule" [8]. In different countries, reimbursement and pricing decisions for new medicines are based on explicit or implicit cost-per-QALY thresholds [9,10,11,12,13,14]. Different league tables have been published attempting to rank-order an assortment of health interventions by cost-effectiveness [15,16,17,18]. Also, different methodological guidelines provide "reference cases" or "good practice codes" that CEA studies should follow to promote comparability among them [13, 19, 20]. Likewise, health technology assessment agencies have published reimbursement submission guidelines setting out recommendations for conducting economic evaluations [21, 22].
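To make the two decision rules described above concrete, here is a minimal sketch (not taken from the paper; the programme names, costs and ICERs are illustrative assumptions) of the "fixed budget"/league-table rule and the "fixed ratio"/threshold rule.

```python
# Hypothetical illustration of the two decision rules; all values are assumptions.
def league_table(interventions, budget):
    """Fixed budget rule: rank by ICER and fund until the budget is exhausted.
    Returns the funded programmes and the implied "critical ratio" (ICER of the
    least cost-effective programme adopted)."""
    funded, spent, critical_ratio = [], 0.0, None
    for name, cost, icer in sorted(interventions, key=lambda x: x[2]):
        if spent + cost > budget:
            break  # budget exhausted
        funded.append(name)
        spent += cost
        critical_ratio = icer
    return funded, critical_ratio

def threshold_rule(interventions, threshold):
    """Fixed ratio rule: adopt every programme whose ICER does not exceed the threshold."""
    return [name for name, _, icer in interventions if icer <= threshold]

programmes = [("A", 10_000, 5_000), ("B", 40_000, 20_000), ("C", 30_000, 60_000)]
print(league_table(programmes, budget=60_000))      # (['A', 'B'], 20000): critical ratio of $20,000/QALY
print(threshold_rule(programmes, threshold=50_000))  # ['A', 'B']
```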
Although CEA's results may be affected by different assumptions, such as the rate at which future costs and benefits are discounted or the perspective of the analysis, the choice of the comparator is one of the factors that influences the results to the greatest extent [23]. The ICER is a relative concept in which the incremental costs and incremental effects of the analysis depend on the selected comparator (or the starting point of the analysis). The inclusion of an inappropriate comparator may introduce biases into the outcomes and recommendations of an economic analysis. In this article, we describe the limitations of using inappropriate comparators and their impact on the inefficient use of resources, and we propose a potential solution to identify the issue.

Description of the problem: CEA results depend on the starting point of the comparison

ICER results may guide decision making between mutually exclusive alternatives (one patient can only receive one of the treatments for one indication; e.g. an antiulcer drug) or between independent treatment alternatives (e.g. breast cancer screening, oral anticoagulants, vaccination campaigns, etc.), each of which, in turn, can encompass a set of several mutually exclusive alternatives. Most CEAs are conducted between mutually exclusive alternatives. Working on the efficiency frontier (the line on the cost-effectiveness plane connecting the non-dominated treatment alternatives) is the right way to calculate the cost-effectiveness of mutually exclusive interventions. Although the theoretical rules for cost-effectiveness analyses of mutually exclusive alternatives have been widely described in the literature, in practice they are hardly ever applied: not all the mutually exclusive alternatives are systematically identified or ranked according to ICER; strongly dominated alternatives and those subject to extended dominance are not always excluded; and there is no formal process to identify and incorporate the most efficient alternatives into the health care system. A review of 29 pharmacoeconomic guidelines [24] concluded that the most recommended comparator (in 86% of the guidelines) was "the standard of care for local practices" (assuming this is the alternative that would be replaced by the new intervention). However, very often, health care decision makers select a standard of care that is not an efficient alternative itself (e.g. a treatment for a severe disease, or for a rare disease, etc.). In addition, there are many occasions where the efficiency of the standard of care has never been assessed, or where the standard of care has been shown to be cost-effective versus a non-efficient option. In all these cases the result of the CEA could be biased, as the new intervention could seem cost-effective versus another (in relation to a predefined threshold) when in fact it is an inefficient intervention. The potential bias not only occurs in the case of mutually exclusive alternatives but also in the evaluation of independent treatments. Although independent interventions (vaccination, screening, etc.) are not mutually exclusive, they always compete for a limited health care budget. The relevant question here is: if the ICER of intervention A vs A' is $20,000 per QALY and the ICER of intervention B vs B' is $40,000 per QALY (both are efficient interventions considering a threshold of $50,000 per QALY), can we compare the ICERs of both interventions in the same league table if their starting points are different?
In summary, assuming that the standard of care (or the starting point) is always the right comparator for a CEA poses three important limitations. Firstly, the identification of the optimal intervention (i.e. the one deemed most cost-effective) may vary depending on the starting point for the analysis [25]. The addition (or the subtraction) of an alternative may lead to a change in the preference for the alternatives in the original set. This preference reversal challenges a very basic normative requirement of rationality known as invariance, extensionality or independence of irrelevant alternatives [26,27,28,29], according to which "supposedly irrelevant factors", such as the content of the set of options among which the decision-maker has to choose, should not affect the preference order. Secondly, it is frequently assumed that the standard of care is an efficient intervention, ignoring whether the existing interventions against that condition are themselves worth doing [30]. This is equivalent to taking for granted that the current mix of interventions is efficient when, in fact, probably "the starting point is the historical inheritance of a set of insured interventions whose evidential base was poor or left unexplored, many of which were selected for reasons other than a plausibly demonstrated highly effective impact on population health" [31]. Lastly, "the standard of care" (and hence the starting point of the comparisons) differs greatly from one therapeutic area to another, whilst ICERs are valued equally irrespective of their origin. These differences are diverse and do not always respond to efficiency considerations. For example, in the area of oncology, many existing treatments are marginally better and much more expensive than the last treatment used as a comparator. In this case, it may be relatively easy for a new drug to demonstrate a favorable ICER compared to an inefficient standard of care [32]. On the other hand, in areas where only an old low-cost treatment (somewhat less effective than the new intervention) exists, it may be difficult for a new intervention to demonstrate an acceptable ICER. In some way, the attractiveness of a therapeutic option is enhanced by the scope of the area to which it belongs, which resembles a sort of contextual effect [33].

Potential implications of using an inappropriate comparator

The problem described in the previous section may have a significant impact on the efficient allocation of health care resources. In theory, resources in the health sector should be allocated across interventions and population groups in order to maximize population health. If, as in the case of mutually exclusive interventions, the standard of care is not an efficient intervention (or if it seems efficient compared to a non-cost-effective treatment), or if, in the case of independent interventions, the starting point of the analyses generates non-comparable ICERs, the consequence would be an inefficient allocation of health resources. It would be helpful to develop a tool to identify potentially inappropriate comparators in CEA. The use of an independent reference (like the meter as the unit of length in the decimal system) is a possible solution. For example, the development of a "generalized CEA" was proposed by WHO [30] to assess the costs and benefits of each set of mutually exclusive and independent interventions with respect to the "do-nothing" option.
In that way, the cost-effectiveness of all the interventions, including currently funded interventions, would be assessed by applying the classical decision rules for CEA starting from the origin. This paper does not propose a new methodology to conduct CEA, but a system to identify potentially biased CEAs due to the use of inappropriate comparators. Specifically, this work proposes the complementary use of an independent reference (an "independent" or "reference ICER") to identify potential deviations of the "conventional" (context-dependent) ICER from the reference baseline. A high discrepancy (deviation) between both measures could indicate the existence of an inefficient use of resources. Although our approach is similar to the "generalized CEA" of the WHO, we propose that the costs and benefits of the interventions are not evaluated with respect to the counterfactual of the null set of interventions (i.e. doing nothing), but with regard to a selected baseline, which could be similar to the ICER corresponding to some efficient public health interventions (e.g. $20,000/QALY or less). The next sections compare the results of the "conventional ICER" (calculated versus the standard of care) and those obtained using the "independent ICER" (calculated versus an independent comparator).

Outline of the approaches to set up the comparator

Let \({p}_{i}\) stand for a typical programme to be evaluated from the set of available interventions \(P=\left({p}_{1},{p}_{2},\dots ,{p}_{n}\right).\) Programme i is characterized as a pair \(\left({C}_{{p}_{i}},{QALY}_{{p}_{i}}\right)\) where \({C}_{{p}_{i}}\) and \({QALY}_{{p}_{i}}\) denote, respectively, the monetary cost and the number of QALYs attached to intervention \({p}_{i}\). Let \({d}_{i}\) be the condition- or disease-specific comparator (i.e. the current practice) with which programme \({p}_{i}\) is compared, in such a way that each intervention in set P has its related comparator, so \(D=\left({d}_{1},{d}_{2},\dots ,{d}_{n}\right)\). Disease-specific comparator i is characterized as a pair \(\left({C}_{{d}_{i}},{QALY}_{{d}_{i}}\right)\). Let r be a reference or independent comparator common to all the programmes belonging to set P. The reference or context-independent comparator r is described as the pair \(\left({C}_{r},{QALY}_{r}\right)\). The \({ICER}_{\left({p}_{i},{d}_{i}\right)}\) represents the additional monetary cost for each additional QALY obtained with an intervention \({p}_{i}\) over another programme \({d}_{i}\), calculated as follows: $${ICER}_{\left({p}_{i},{d}_{i}\right)}=\frac{\left({C}_{{p}_{i}}-{C}_{{d}_{i}}\right)}{\left({QALY}_{{p}_{i}}-{QALY}_{{d}_{i}}\right)}$$ The \({ICER}_{\left({p}_{i},r\right)}\) of an intervention \({p}_{i}\) over the reference comparator r is computed as: $${ICER}_{\left({p}_{i},r\right)}=\frac{\left({C}_{{p}_{i}}-{C}_{r}\right)}{\left({QALY}_{{p}_{i}}-{QALY}_{r}\right)}$$ Lastly, the indicator of the degree of departure from the "incremental" rule (i.e. the adoption of the standard ICER, which is calculated with reference to the next best alternative) if the independent baseline r were used, \({I}_{\left({p}_{i},d,r\right)}\), is defined by: $${I}_{\left({p}_{i},d,r\right)}=\left(\frac{{ICER}_{\left({p}_{i},r\right)}}{{ICER}_{\left({p}_{i},{d}_{i}\right)}}-1\right)\cdot 100$$ When \({I}_{\left({p}_{i},d,r\right)}\) = 0%, both types of evaluation—that based on a disease-specific comparator and that based on a context-independent comparator—agree.
Conversely, if \({I}_{\left({p}_{i},d,r\right)}\) ≠ 0%, a discrepancy emerges which should be considered by the decision-maker.

Some hypothetical examples

Table 1 shows the costs and outcomes of various hypothetical programmes. Assume firstly that these programmes are not mutually exclusive, but independent ones, so there is a different disease-specific comparator for each of them. In this way, for example, intervention \({p}_{1}\) could be a screening test, \({p}_{2}\) a pharmacological treatment, \({p}_{3}\) a vaccination campaign, and so on. Next, also assume that their ICERs (expressed in terms of dollars per QALY gained) have been calculated by using disease-related comparators. Lastly, assume that a cost-effectiveness ratio of $50,000 per QALY is considered the threshold for efficiency.

Table 1 Conventional incremental cost-effectiveness ratio (ICER) of five new programmes by using five disease-specific comparators

The first three interventions have the same cost ($30,000) and generate the same health benefit (0.8 QALY). Option \({p}_{1}\) has a very favorable ICER ($5,000 per QALY gained) because its cost is marginally higher than that of the comparator ($28,000) and its benefit is double that of the comparator (0.4 QALY). Intervention \({p}_{2}\) is also efficient, although in this case its cost and benefit are just marginally better than those of the comparator ($28,000 and 0.7 QALY). Intervention \({p}_{3}\) is very inefficient ($180,000 per QALY gained), given that its cost is significantly higher than that of its comparator ($12,000) and its additional benefit is only slight (0.1 QALY). Intervention \({p}_{4}\) is as efficient as intervention \({p}_{1}\), even though its cost is double ($60,000) and it generates the same benefits (0.8 QALY). Finally, intervention \({p}_{5}\) is the most expensive intervention ($90,000) in the table, but it is also an efficient choice (equivalent to \({p}_{2}\)), given that its additional cost and QALYs are marginally higher than those of the alternative option. According to a threshold of $50,000/QALY, a decision-maker would recommend the use of all interventions except intervention \({p}_{3}\). Table 1 shows that the efficiency of a given health intervention does not depend only on its own cost and effectiveness, but also on the cost and effectiveness of the alternative with which it is compared. These results raise several questions. For example, is intervention \({p}_{5}\) really more efficient than intervention \({p}_{3}\), when the cost per QALY of the former is three times higher than that of the latter? Or are interventions \({p}_{1}\) and \({p}_{4}\), and interventions \({p}_{2}\) and \({p}_{5}\), actually equivalent in terms of efficiency? The answer to the above questions is that it depends. For example, a high-cost intervention like \({p}_{5}\) may seem very efficient because both its effectiveness and cost are just marginally higher than those of the comparator, which is itself inefficient in relation to the predefined threshold. Or because the comparator, though not cost-effective, was reimbursed thanks to factors other than the ICER, such as the burden of disease or the rarity of the disease. Alternatively, an intervention such as \({p}_{3}\) could appear inefficient because the only available alternative (much cheaper and somewhat less effective) for that indication is an off-patent drug which was approved many years ago.
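Before turning to the detailed comparison, the following minimal sketch (not part of the paper) implements the conventional ICER, the independent ICER and the divergence indicator defined above, and applies them to the five hypothetical programmes. The reference comparator used here ($5,000 and 0.25 QALY, i.e. $20,000/QALY) and the deviations it produces anticipate Tables 2 and 3 discussed below. Costs, QALYs and conventional ICERs are taken from the text; the 0.8 QALY gain used for \({p}_{5}\) is inferred from its stated 673% deviation and is therefore an assumption.

```python
# Reproduces the independent ICERs and deviations discussed in the text.
def icer(cost, qaly, cost_ref, qaly_ref):
    return (cost - cost_ref) / (qaly - qaly_ref)

def deviation(icer_independent, icer_conventional):
    """I = (ICER(p, r) / ICER(p, d) - 1) * 100, in percent."""
    return (icer_independent / icer_conventional - 1.0) * 100.0

REF_COST, REF_QALY = 5_000, 0.25   # independent comparator, equivalent to $20,000/QALY
programmes = {  # name: (cost, QALYs, conventional ICER vs. disease-specific comparator)
    "p1": (30_000, 0.8, 5_000),
    "p2": (30_000, 0.8, 20_000),
    "p3": (30_000, 0.8, 180_000),
    "p4": (60_000, 0.8, 5_000),
    "p5": (90_000, 0.8, 20_000),   # 0.8 QALY is an assumption (see above)
}
for name, (cost, qaly, icer_conv) in programmes.items():
    icer_ind = icer(cost, qaly, REF_COST, REF_QALY)
    print(f"{name}: independent ICER = {icer_ind:,.0f} $/QALY, "
          f"deviation = {deviation(icer_ind, icer_conv):+.0f}%")
# p1-p3: ~45,455 $/QALY (efficient against a $50,000 threshold); p4: 100,000; p5: ~154,545.
# Deviations: p3 ~ -75%, p4 ~ +1900%, p5 ~ +673%, matching the figures quoted below.
```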
As noted in the Introduction, our point is that there are potential contextual effects that can bias the comparison of different ICERs. One source of such biases is, for example, the speed at which "the standard of care" changes due to the innovative dynamism existing in each therapeutic area. We think that the comparison of all the interventions to a common (non-null) reference comparator would make it possible to control for the existing dispersion across therapeutic areas. The result obtained from these comparisons would be a qualitative input that decision-makers could consider in order to prevent a mechanical application of the conventional ICER rule that ignores the possible sources of bias. The reference baseline could be a highly efficient public health intervention or, instead, some accepted efficiency bound. Let us now show how an independent reference comparator would work with the same five hypothetical interventions depicted in Table 1. The ICERs of those interventions when compared with a standard comparator are shown in Table 2. In this case, a cost-effectiveness ratio of $20,000/QALY has been chosen, although, to facilitate calculations, an equivalent cost of $5,000 per 0.25 QALY gained is included in the table. Interventions \({p}_{1}\), \({p}_{2}\), and \({p}_{3}\) are equally efficient, while options \({p}_{4}\) and \({p}_{5}\) are inefficient. As shown in the rightmost column, the ranking of efficiency presented in Table 2 is different from that displayed in Table 1, when disease-related comparators were used.

Table 2 Independent incremental cost-effectiveness ratio (ICER) of five new programmes by using a standard comparator

The relative divergence between both types of ICERs is shown in Table 3. Visual analysis of Table 3 allows comparison of the conventional and independent ICERs. In the case of programmes 1 and 2, both ICERs are below the efficiency threshold, which would suggest that the disease-specific comparator is adequate. In contrast, the discrepancies between both ICERs in programmes 3, 4 and 5 may indicate a potential bias derived from the use of an inadequate disease-specific comparator. In the case of intervention \({p}_{3}\), the discrepancy may indicate that we are facing an apparently inefficient programme (which is efficient when the independent comparator is used), while in the case of programmes \({p}_{4}\) and \({p}_{5}\), we would be facing apparently efficient programmes (which are inefficient when the independent comparator is used).

Table 3 Indicator of the divergence degree (%) between the "reference" or "independent" ICER and the conventional ICER

Table 3 also shows the percentage of deviation from the conventional ICER when the independent comparator ($5,000, 0.25 QALY) is used. In this example, the sign of the deviation of apparently efficient interventions such as \({p}_{4}\) and \({p}_{5}\) (1900 and 673%, respectively) differs from the sign of the deviation of an apparently inefficient programme such as \({p}_{3}\) (−75%). Likewise, the deviations of interventions sharing the same conventional ICER, such as \({p}_{1}\) and \({p}_{4}\) ($5,000/QALY), and \({p}_{2}\) and \({p}_{5}\) ($20,000/QALY), are now quite different (the deviation of \({p}_{4}\) is more than double that of \({p}_{1}\), and the deviation of \({p}_{5}\) is more than five times that of programme \({p}_{2}\)). The key message of this paper is that the inclusion of an inappropriate comparator may introduce biases into the outcomes and recommendations of an economic analysis. As Mason et al.
[34] assert: "Decision makers should satisfy themselves that current practice is itself worth having before using it as a comparison for a new treatment. If the comparison programme is inefficient the analysis will be misleading". As the above examples show, different starting points can lead to different results in CEA. This bias violates basic rationality criteria in a similar way to contextual effects in experiments on individual choices [35]. Apart from this problem, there are also significant differences in the speed at which innovation spreads in diverse therapeutic areas, which makes comparisons among them difficult. This paper proposes the adoption of a common baseline against which new healthcare interventions are compared in order to identify potential biases in the results of CEA. This baseline could be a highly efficient public health intervention. This information would be an "additional factor" to take into account in reimbursement recommendations. Our proposal differs from generalized CEA [30] in that the set of interventions is not evaluated with respect to the counterfactual of the null set. We are aware that there are different constraints that limit the possibility of reallocating resources across different therapeutic areas, but the comparison of all the interventions to the same independent comparator may help to identify inefficiencies between therapeutic areas. The result obtained from these comparisons would be an input to consider in order to prevent the automatic application of the ICER rule. It is important to remark that the main objective of our proposal is not to replace the ICER with the ACER (average cost-effectiveness ratio), but to prevent contextual biases derived from using disease-specific comparators. The use of a common unit of measure, established by consensus, could help to take into account the opportunity cost of including a new intervention and to make divestment decisions. We do not claim to overrule the context of marginal decisions. Rather, we call for a correct implementation of marginal analysis, avoiding starting-point biases and taking into account concerns about the "historical inheritance" of the set of insured interventions in the different therapeutic areas.

Weinstein MC, Stason WB. Foundations of cost-effectiveness analysis for health and medical practices. N Engl J Med. 1977;296:716–21. Johannesson M, Weinstein MC. On the decision rules of cost-effectiveness analysis. J Health Econ. 1993;12(4):459–67. Karlsson G, Johannesson M. The decision rules of cost-effectiveness analysis. Pharmacoeconomics. 1996;9(2):113–20. Maiwenn J, Talitha L, van Hout B. Optimal allocation of resources over health care programmes: dealing with decreasing marginal utility and uncertainty. Health Econ. 2005;14:655–67. Briggs A, Gray A. Using cost effectiveness information. BMJ. 2000;320(7229):246. Weinstein M, Zeckhauser R. Critical ratios and efficient allocation. J Public Econ. 1973;2:147–57. Birch S, Gafni A. The 'NICE' approach to technology assessment: an economics perspective. Health Care Manag Sci. 2004;7(1):35–41. Johannesson M, O'Conor RM. Cost-utility analysis from a societal perspective. Health Policy. 1997;39(3):241–53. Harris AH, Hill SR, Chin G, Li JJ, Walkom E. The role of value for money in public insurance coverage decisions for drugs in Australia: a retrospective analysis 1994–2004. Med Decis Making. 2008;28(5):713–22. NICE. Guide to the methods of technology appraisal 2013. 2013. https://www.nice.org.uk/process/pmg9/chapter/foreword. NICE.
Changes to NICE drug appraisals: what you need to know. NICE; 2017. https://www.nice.org.uk/news/feature/changes-to-nice-drug-appraisals-what-you-need-to-know. Institute for Clinical and Economic Review. Overview of the ICER assessment framework and update for 2017–2019. https://icer-review.org/wp-content/uploads/2017/06/ICER-value-assessment-framework-Updated-050818.pdf. Neumann PJ, Cohen JT, Weinstein MC. Updating cost-effectiveness. The curious resilience of the $50,000-perQALY threshold. N Engl J Med. 2014;371:796–7. Reckers-Droog VT, van Exel NJA, Brower WBF. Looking back and moving forward: on the application of proportional shortfall in health priority setting in the Netherlands. Health Policy. 2018;122:621–9. Tengs TO, Adams ME, Pliskin JS, Safran DG, Siegel JE, Weinstein MC, Graham JD. Five-hundred life-saving interventions and their cost-effectiveness. Risk Anal. 1995;15(3):369–90. Dalziel K, Segal L, Mortimer D. Review of Australian health economic evaluation—245 interventions: what can we say about cost effectiveness? Cost Eff Resour Alloc. 2008;6:9. Horton S, Gelband H, Jamison D, Levin C, Nugent R, Watkins D. Ranking 93 health interventions for low- and middle-income countries by cost-effectiveness. PLoS ONE. 2017;12(8):e0182951. Wilson DK, Christensen A, Jacobsen PB, Kaplan RM. Standards for economic analyses of interventions for the field of health psychology and behavioral medicine. Health Psychol. 2019;38(8):669–71. Gold MR, Siegel JE, Russell LB, Weinstein MC, editors. Cost-effectiveness in health and medicine. New York, NY: Oxford University Press; 1996. Siegel JE, Weinstein MC, Russell LB, Gold MR. Recommendations for reporting cost-effectiveness analyses. Panel on cost-effectiveness in health and medicine. JAMA. 1996;276(16):1339–411. Bracco A, Krol M. Economic evaluations in European reimbursement submission guidelines: current status and comparisons. Expert Rev Pharmacoecon Outcomes Res. 2013;13(5):579–95. Heintz E, Lintamo L, Hultcrantz M, Jacobson S, Levi R, Munthe C, et al. Framework for systematic identification of ethical aspects of healthcare technologies: the SBU approach. Int J Technol Assess Health Care. 2015;31(3):124–30. Neyt M, Van Brabandt H. The importance of the comparator in economic evaluations: working on the efficiency frontier. Pharmacoeconomics. 2011;29(11):913–6. Ziouani S, Granados D, Borget I. How to select the best comparator? An international economic evaluation guidelines comparison. Value Health. 2016;19:A471–A472472. Cantor SB, Ganiats TG. Incremental cost-effectiveness analysis: the optimal strategy depends on the strategy set. J Clin Epidemiol. 1999;52:517–22. Luce RD, Raiffa H. Games and decisions: introduction and critical survey. Hoboken: Wiley; 1957. Keeney RL, Raiffa H. Decisions with multiple objectives: Preferences and value tradeoffs. Cambridge: Cambridge University Press; 1976. Arrow KJ. Risk perception in psychology and economics. Econ Inq. 1982;20:1–9. Kahneman D, Tversky A. Choices, values, and frames. Am Psychologist. 1984;39:341–50. Murray CJ, Evans DB, Acharya A, Baltussen RM. Development of WHO guidelines on generalized cost-effectiveness analysis. Health Econ. 2000;9(3):235–51. Culyer AJ. Cost-effectiveness thresholds in health care: a bookshelf guide to their meaning and use. Health Econ Policy Law. 2016;11(4):415–32. Bach P. New math on drug cost-effectiveness. N Engl J Med. 2016;373:1797–9. Tversky A. Elimination by aspects: a theory of choice. Psychol Rev. 1972;79:281–99. Mason J, Drummond M, Torrance G. 
Some guidelines on the use of cost effectiveness league tables. BMJ. 1993;306(6877):570–2. Tversky A, Simonson I. Context-dependent preferences. Manage Sci. 1993;39:1179–89. No financial support was received for this work. Department of Preventive Medicine and Public Health, School of Medicine, Universidad Autónoma de Madrid, Avenida Arzobispo Morcillo s/n. 28029, Madrid, Spain José Antonio Sacristán Medical Department, Lilly, Madrid, Spain José Antonio Sacristán & Tatiana Dilla Universidad de Murcia, Murcia, Spain José-María Abellán-Perpiñán Universidad Carlos III, Madrid, Spain Tatiana Dilla & Javier Soto Universidad de Castilla La Mancha, Toledo, Spain Juan Oliva Tatiana Dilla JAS generated the initial idea and wrote the first draft of the manuscript. All authors made relevant contributions to the work. All authors read and approved the final manuscript. Correspondence to José Antonio Sacristán. JAS and TD are also employees of Eli Lilly. JS is employee of Pfizer. The views or opinions presented in this work are solely those of the authors and do not represent those of the companies. Sacristán, J.A., Abellán-Perpiñán, JM., Dilla, T. et al. Some reflections on the use of inappropriate comparators in CEA. Cost Eff Resour Alloc 18, 29 (2020). https://doi.org/10.1186/s12962-020-00226-8 Incremental cost-effectiveness ratio Social perspective
www.springer.com The European Mathematical Society Pages A-Z StatProb Collection Project talk Spectral synthesis From Encyclopedia of Mathematics The reconstruction of the invariant subspaces of a family of linear operators from the eigen or root subspaces of this family contained in such subspaces. More precisely, let $ {\mathcal A} $ be a commutative family of operators on a topological vector space $ X $ and let $ \sigma _ {p} ( {\mathcal A} ) $ be its point spectrum, i.e. the set of numerical functions $ \lambda = \lambda ( A ) $ on $ {\mathcal A} $ for which the eigen subspaces $$ N _ {\mathcal A} ( \lambda ) = \cap _ {A \in {\mathcal A} } \mathop{\rm Ker} ( A- \lambda ( A) I) $$ are distinct from zero, and let $$ K _ {\mathcal A} ( \lambda ) = \cap _ {A \in {\mathcal A} } \cup _ {n \in \mathbf N } \mathop{\rm Ker} ( A- \lambda ( A) I) ^ {n} $$ be the root subspaces corresponding to the points $ \lambda \in \sigma _ {p} ( {\mathcal A} ) $( cf. Spectrum of an operator). A subspace $ L \subset X $ which is invariant under $ {\mathcal A} $ admits spectral synthesis if $ L $ coincides with the closure of the root subspaces contained in it. If all $ {\mathcal A} $- invariant subspaces admit spectral synthesis, then it is said that the family $ {\mathcal A} $ itself admits spectral synthesis. Examples of families admitting spectral synthesis are as follows: any compact commutative group of operators on a Banach space and, more generally, any group with relatively compact trajectories. If $ \mathop{\rm dim} X < \infty $, then every one-element family admits spectral synthesis in view of the existence of the Jordan decomposition. In the general case, for an operator $ A $ to admit spectral synthesis it is necessary at least to require that the whole of $ X $ admits spectral synthesis with respect to $ A $, that is, $ A $ should have a complete system of root subspaces. But this condition is not sufficient, even for normal operators on a Hilbert space. In order that a normal operator $ A $ admits spectral synthesis it is necessary and sufficient that $ \sigma _ {p} ( A) $ does not contain the support of a measure orthogonal to the polynomials. This condition holds if and only if for any domain $ G \subset \mathbf C $ there is an analytic function $ f $ in $ G $ for which $$ \sup _ {z \in G } | f( z) | < \sup _ {z \in G \cap \sigma _ {p} ( A) } | f( z) | . $$ In particular, unitary complete and self-adjoint complete operators (cf. Complete operator; Self-adjoint operator; Unitary operator) admit spectral synthesis. Spectral synthesis is also possible for complete operators that are "close" to unitary or self-adjoint ones (such as dissipative operators, cf. Dissipative operator, with a nuclear imaginary component, and operators with spectrum on a circle and with normal growth of the resolvent as one approaches the circle). The completeness of the system of root subspaces does not guarantee spectral synthesis of invariant subspaces even if one imposes the further condition that the operator be compact: The restriction of a complete compact operator to an invariant subspace need not have eigenvectors and can even coincide with any compact operator given in advance. The problems of spectral synthesis of invariant subspaces include not only the clarification of the possibility of approximating their elements by linear combinations of root vectors, but also the construction of an approximating sequence and the estimation of its rate of convergence. 
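As a small illustration of the finite-dimensional remark above (this example is not part of the original article), consider the one-element family formed by a single 2×2 Jordan block on $ X = \mathbf C ^ {2} $: $$ A = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} ,\ \ \sigma _ {p} ( A) = \{ \lambda \} ,\ \ N _ {A} ( \lambda ) = \mathop{\rm Ker} ( A - \lambda I) = \mathop{\rm span} \{ e _ {1} \} ,\ \ K _ {A} ( \lambda ) = \mathop{\rm Ker} ( A - \lambda I) ^ {2} = X. $$ The invariant subspaces are $ \{ 0 \} $, $ \mathop{\rm span} \{ e _ {1} \} $ and $ X $; every non-zero one coincides with the linear span of the root (generalized eigen-) vectors it contains, so spectral synthesis holds, in line with the Jordan decomposition remark above.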
In the case of operators with a countable spectrum, the approximating sequence is usually constructed by averaging the sequence of partial sums of the formal Fourier series $ x \approx \sum _ {\lambda \in \sigma _ {p} ( A) } \epsilon _ \lambda x $, where $ \epsilon _ \lambda $ is the Riesz projector: $$ \epsilon _ \lambda x = \frac{1}{2 \pi i } \int\limits _ {\Gamma _ \lambda } ( z- A) ^ {-1} x \, dz. $$ Here, $ \Gamma _ \lambda $ is a contour separating the point $ \lambda \in \sigma _ {p} ( A) $ from the rest of the spectrum. If a space $ X $ consists of functions on a locally compact Abelian group and $ {\mathcal A} $ coincides with the family of all shift operators, then the eigenspaces for $ {\mathcal A} $ are the one-dimensional subspaces generated by the characters of the group. Thus, the theory of spectral synthesis of invariant subspaces includes the classical problems of harmonic synthesis on a locally compact Abelian group (see Harmonic analysis, abstract), which consists of finding conditions under which the subspaces that are invariant under the translations in some topological vector space of functions on a group are generated by the characters contained in them. In particular, the possibility of spectral synthesis on compact groups or, more generally, in spaces of almost-periodic functions on groups is a consequence of the result stated above on the spectral synthesis for groups of operators with relatively compact trajectories. Moreover, the problems of spectral synthesis are closely connected with problems of synthesis of the ideals in a regular commutative Banach algebra: A closed ideal is the intersection of maximal ones ("it admits spectral synthesis") if and only if its annihilator in the adjoint space admits spectral synthesis with respect to the family of operators adjoint to the operators of multiplication by elements of the algebra. The above definition of spectral synthesis can be extended in such a way that it also covers families of operators without an extensive point spectrum (and even non-commutative families). In that case it is replaced by the requirement of a one-to-one correspondence between the invariant subspaces and the spectral characteristics of the restrictions to these subspaces of a given family of operators. In this sense one talks of spectral synthesis for modules over a regular commutative Banach algebra, and for representations of a locally compact Abelian group. [1] E. Hewitt, K.A. Ross, "Abstract harmonic analysis" , 1–2 , Springer (1979) [2] N.K. Nikol'skii, "Invariant subspaces in the theory of operators and theory of functions" J. Soviet Math. , 5 : 2 (1976) pp. 129–249 Itogi Nauk. i Tekhn. Mat. Anal. , 12 (1974) pp. 199–412 [3] J.J. Benedetto, "Spectral synthesis" , Teubner (1975) According to [a2], p. 140, the term "spectral synthesis" was introduced around 1947 by A. Beurling. Since then it has been a subject of much research in commutative harmonic analysis, i.e. in the context of the commutative Banach algebra $ L _ {1} ( G) $, $ G $ a locally compact Abelian group. The elements of the dual group $ \widehat{G} $ can be identified with the closed maximal ideals of $ L _ {1} ( G) $. The cospectrum of a closed ideal $ I $ in $ L _ {1} ( G) $ is the closed set in $ \widehat{G} $ consisting of all closed maximal ideals containing $ I $.
To every closed subset $ E $ of $ \widehat{G} $ corresponds a natural closed ideal in $ L _ {1} ( G) $ having $ E $ as cospectrum, namely the intersection of all closed maximal ideals corresponding to the points of $ E $. $ E $ is called a set of spectral synthesis (or a Wiener set, [a2]) if this intersection is the only closed ideal having $ E $ as cospectrum. The classical approximation theorem, proved for $ G = \mathbf R $ by N. Wiener (1932), can be stated as: The empty set is a set of spectral synthesis. The first example of a set that is not a set of spectral synthesis (also called a "set of non-spectral synthesis") was obtained in 1948 by L. Schwartz, who showed that spheres in $ \widehat{G} = \mathbf R ^ {n} $ ($ n \geq 3 $) are such. That sets of non-spectral synthesis exist in $ \widehat{G} $ for all non-compact $ G $ was proved by P. Malliavin (1959). A completely different proof of this fact, using tensor algebra, was obtained in 1965 by N.Th. Varopoulos. A famous unsolved problem in this area is whether the union of two sets of spectral synthesis is again such a set (the union problem). See [1], [3], [a1], [a2] for many more details. [a1] C.C. Graham, O.C. McGehee, "Essays in commutative harmonic analysis" , Springer (1979) pp. Chapt. 5 [a2] H. Reiter, "Classical harmonic analysis and locally compact groups" , Clarendon Press (1968) How to Cite This Entry: Spectral synthesis. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Spectral_synthesis&oldid=48764 This article was adapted from an original article by V.S. Shul'man (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
Messy Interconnections of Innovation by jim on February 23, 2019 · Rare Science Books In 1986, the co-founder of the Massachusetts Institute of Technology's AI laboratory, cognitive scientist Marvin Minsky (1927-2016), published The Society of Mind. The book describes a theory which attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts. He proposed that each mind is made of many small processes which can only do thoughtless, simple things. Intelligence, he said, is the result of joining these parts with multiple cross-connections in societies of tangled webs. Minsky concluded that much of the brain's power stems merely from the messy ways these processes are interconnected. The book's primary accomplishment was to put information into terms written for the general public, explaining the functions of synapses (junctions between two nerve cells, consisting of a minute gap across which impulses pass by diffusion of a neurotransmitter) and neurotransmitters (chemical messengers which transmit signals across a chemical synapse). These basic units of the organization of the nervous system were represented by individual cellular elements, which Wilhelm Gottfried von Waldeyer-Hartz (1836-1921) christened "neurons" in 1891. It was, however, the work conducted by the Spanish neuroscientist Santiago Ramón y Cajal (1852-1934) that made possible the unsurpassed discovery of the independent functionality of neurons within the nervous system. Cajal observed and described in detail these points of contact in which various chemical substances intervene, and he did it at a time when there were no instruments that allowed the physiological verification of his brilliant deduction. He was fiercely opposed to the idea that the nervous system was made up of a network of continuous elements, as had been stated by Joseph von Gerlach (1820-1896) and supported by Camillo Golgi (1843-1926). For Cajal, it was very clear that nerve cells worked independently in a nervous system in which the current had to follow a certain direction: from the dendrites to the neuron body and from this to the axon, which in turn transmits the impulse to the dendrites of other cells. Santiago Ramón y Cajal drawings Source: El Instituto Cajal del CSIC Cajal's opus, "Textura del Sistema Nervioso del Hombre y los Vertebrados" (1894-1904), was made available to the international scientific community in a French translation, "Histologie du Système Nerveux de l'Homme et des Vertébrés", translated by Dr. L. Azoulay and published in 1911 in 2 volumes by Maloine, Paris. The English translation, by N. and L.W. Swanson, was published in 1994 by Oxford University Press. The book provided the foundation of modern neuroanatomy, with a detailed description of nerve cell organization in the central and peripheral nervous system of numerous animal species, illustrated by Cajal's renowned drawings. These drawings are still reproduced in neuroscience textbooks today. The techniques used to create artificial intelligence that are inspired by neurons in the human brain are known as neural networks. Neural networks power deep learning systems. They are composed of layers of interconnected artificial "neurons" that automatically learn about the features of a specific object based on large amounts of training data. For example, by looking at images of dogs, a neural network can learn about a dog's features by tweaking the connections between neurons.
If it has learned those patterns well, it should be able to look at an image and correctly identify it as a dog. However, adversarial samples, created by slightly altering a few pixels of the data an AI system is classifying, may cause the system to misclassify the depiction of a dog. Adversarial networks, a technique which pairs two neural networks with two different goals (one making accurate classifications, the other altering samples to trigger misclassifications), are the latest development in machine learning. They provide a way to conduct unsupervised learning, in which a machine could make logical inferences without requiring as much human training data and with a reduction in errors. Last October, Christie's sold an AI-generated portrait of Edmond De Belamy for US $432,500, over 43 times its highest pre-sale estimate. The portrait was created from 15,000 portrait images. It was drawn by an algorithm created by Ian J. Goodfellow, who is currently a research scientist in machine learning at Google Brain and the inventor of the AI algorithm named GAN (Generative Adversarial Network). The 27 ½ x 27 ½ in (700 x 700 mm) portrait is signed at the bottom right with part of the algorithm code that produced it: $\min _{\mathcal {G}}\max _{\mathcal {D}}E_{x}\left[\log({\mathcal {D}}(x))\right]+E_{z}\left[\log(1-{\mathcal {D}}({\mathcal {G}}(z)))\right]$ AI-generated portrait of Edmond De Belamy Source: Christie's The prediction in this MIT Technology Review report, that "AI's chief legacy might not be driverless cars or image search or even Alexa's ability to take orders, but its ability to come up with new ideas to fuel innovation itself," is already taking shape. Tagged as: Innovation, Neural Networks, Science Data scientist, book collector – Jim Sekkes
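To make the generator/discriminator idea described in the post concrete, here is a minimal sketch of the min-max objective shown in the portrait's signature, written in PyTorch. It is not the code behind the portrait, and the network sizes and hyperparameters are illustrative assumptions.

```python
# Minimal GAN sketch: D learns to separate real from generated samples,
# G learns to fool D (the min-max objective quoted above). Illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumed sizes)

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator step: maximize E_x[log D(x)] + E_z[log(1 - D(G(z)))]
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_D = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: fool the discriminator (non-saturating form of the min_G term)
    loss_G = bce(D(G(torch.randn(b, latent_dim))), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```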
Prof. Peter Clarke Professor Deputy Director of NeSC Research Theme: Particle Physics Experiment [email protected] http://www.ph.ed.ac.uk/people/peter-clarke School of Physics and Astronomy, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh, EH9 3FD, United Kingdom Peter Clarke is Professor of Physics at the University of Edinburgh. He has a 1st Class Honours degree in Electronics Engineering (Southampton University,1980) and a D.Phil in Particle Physics (Oxford 1985). He was a CERN Fellow before being appointed as a lecturer first at Brunel University in 1987 and then moving to University College London in 1993. He was promoted to Reader and then Professor in 2001 and was Head of the Particle Physics Research Group between 2001-04. He moved to the University of Edinburgh in 2004 to take up the Chair of eScience and later become Director of the National eScience Centre 2006-09. He is a Fellow of the Institute of Physics and the Institute of Engineering and Technology. His early research work included the first direct measurements of CP violation in the Kaon system at CERN; Working at the SLD experiment at the Stanford Linear Collider (USA) and then the LEP electron positron collider at CERN he worked on precision measurements of the electro-weak interaction, the properties of the Z and W bosons and indirect searches for the Higgs boson. At UCL he worked on construction of the ATLAS experiment for the Large Hadron Collider He was involved in UK e-Science since its inception. He was a founder of the Centre of Excellence in Networked Systems at UCL and was prominent in advancing national and international networking for research. He has held roles in international grid computing infrastructure projects including the management board of the UK grid for particle physics (GridPP), the European Data Grid and the EGEE projects. He was a member of the Steering Committee of the Global Grid Forum international standards body between 2002-04 and co-Director of the Data Area. His present research is as a member of the LHCb experiment at the Large Hadron Collider at CERN. LHCb is searching for the signals associated with the imbalance between the interactions of mattter and anti-matter. He has produced the worlds most precise measurement of a CP violating phase called "phis" and he is deputy computing coordinator of the experiment. 1st year introduction to physics (P1B) 2005-2010. 1st year UG laboratories 2005-2009 2nd year undergraduate laboratories 2012-present Numerical recipes 2013-present Measurement of Z -> tau(+)tau(-) production in proton-proton collisions at root s=8 TeV DOI LHCb collaboration, M. Alexander, S. Ali, J. E. Andrews, S. Benson, R. Calabrese, L. Carson, M. G. Chapman, C. Chen, S. Chen et al., Journal of High Energy Physics, 9 (2018) Observation of the decay $\overline{B_s^0} \rightarrow χ_{c2} K^+ K^- $ in the $\varphi$ mass region DOI LHCB Collaboration, L. Carson, P. E. L. Clarke, G. A. Cowan, D. C. Craik, S. Eisenhardt, E. Gabriel, S. Gambetta, K. Gizdov, F. Muheim et al., Journal of High Energy Physics (2018) Measurement of the $\Upsilon$ polarizations in $pp$ collisions at $\sqrt{s}$=7 and 8TeV DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, Journal of High Energy Physics, 1712, p. 
110 (2017) Measurement of the shape of the $\Lambda_b^0\to\Lambda_c^+ \mu^- \overline{\nu}_{\mu}$ differential decay rate DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, Physical Review, D96, 11 , p. 112005 (2017) Bose-Einstein correlations of same-sign charged pions in the forward region in $pp$ collisions at $\sqrt{s}$ = 7 TeV DOI P E L Clarke, G A Cowan, S Eisenhardt, F Muheim, M Needham, S Playfer and LHCb Collaboration, Journal of High Energy Physics, 1712, p. 025 (2017) Measurement of the $B^{\pm}$ production cross-section in pp collisions at $\sqrt{s} =$ 7 and 13 TeV DOI First Observation of the Rare Purely Baryonic Decay $B^0\to p\bar p$ DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, Physical Review Letters, 119, 23 , p. 232001 (2017) Updated search for long-lived particles decaying to jet pairs DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, European Physical Journal C: Particles and Fields, C77, 12 , p. 812 (2017) χc1 and χc2 Resonance Parameters with the Decays χc1,c2→J/ψμ+μ− DOI Peter Clarke, Greig Cowan, Stephan Eisenhardt, Franz Muheim, Matthew Needham, Stephen Playfer and LHCb Collaboration, Physical Review Letters, 119, 22 (2017) Measurement of $CP$ violation in $B^0\rightarrow J/\psi K^0_\mathrm{S}$ and $B^0\rightarrow\psi(2S) K^0_\mathrm{S}$ decays DOI Show all 365 research outputs Last updated: 19 Feb 2018 at 21:11
Critical hydraulic gradients for seepage-induced failure of landslide dams Austin Chukwueloka-Udechukwu Okeke1 & Fawu Wang1 Geoenvironmental Disasters volume 3, Article number: 9 (2016) Cite this article Landslide dams formed by rock avalanche processes usually fail by seepage erosion. This has been related to the complex sedimentological characteristics of rock avalanche dams which are mostly dominated by fragmented and pulverized materials. This paper presents a comprehensive experimental programme which evaluates the critical hydraulic and geometrical conditions for seepage-induced failure of landslide dams. The experiments were conducted in a flume tank specifically designed to monitor time-dependent transient changes in pore-water pressures within the unsaturated dam materials under steady-state seepage. Dam models of different geometries were built with either mixed or homogeneous materials. Two critical hydraulic gradients corresponding to the onset of seepage erosion initiation and collapse of the dam crest were determined for different upstream inflow rates, antecedent moisture contents, compactive efforts, grain size ranges, and dam geometries. Two major types of dam failure were identified: Type I and Type II. These were further subdivided into minor failure processes which include exfiltration, sapping, downstream toe bifurcation, and undermining of the downstream face. The critical hydraulic gradients for seepage erosion initiation varied from 0.042 to 0.147. Experiments conducted with the mixed materials indicate that the critical hydraulic gradients for collapse of the dam crest increased with an increase in uniformity coefficient. The deformation behaviour of the dams was significantly influenced by particle density, pore geometry, hydraulic conductivity, and the amount of gravel and pebbles present in the materials. The results indicate that the critical seepage velocity for failure of the dams decreased with an increase in downstream slope angle, but increased with an increase in pore geometry, dam height, dam crest width, upstream inflow rate, and antecedent moisture content. Landslide dams and other natural river blockages such as moraine dams and glacier-ice dams are formed in narrow valleys bordered by oversteepened slopes. Active geological processes in these settings such as erosion and weathering often lead to the availability of highly fractured and hydrothermally altered bedrock which constitute source materials for hillslope processes and landslide dam formation (Costa and Schuster 1988; Clague and Evans 1994; Korup et al. 2010). These potentially dangerous natural phenomena occur mostly in seismically-active regions where high orographic precipitations on rugged mountain terrain associated with frequent earthquakes and snowmelt contribute to several geological processes that lead to mass wasting and river-damming landslides (Korup and Tweed 2007; Allen et al. 2011; Evans et al. 2011; Crosta et al. 2013). Failure of landslide dams could trigger the sudden release of stored water masses from lakes created by these damming events. This consequently produces catastrophic outburst floods and debris flows that inundate the downstream areas, causing loss of lives and infrastructural damage (O'Connor and Costa 2004; Bonnard 2011; Plaza et al. 2011). For example, the worst recorded case of landslide dam disaster occurred during the 1786 Kangding-Luding earthquake in Sichuan Province, southwest China (Dai et al. 2005). 
The earthquake triggered a huge landslide which dammed the Dadu River but failed ten days later and generated a catastrophic outburst flood that drowned more than 100,000 people. Similarly, Chai et al. (2000) presented a comprehensive account of the catastrophic failure of three landslide dams (Dahaizi, Xiaohaizi, and Deixi), triggered by the August 1933, Ms 7.5 earthquake in Diexi town, Sichuan Province, China. These landslide dams failed two months later, triggering catastrophic outburst floods that traveled more than 250 km downstream, and claimed about 2,423 lives. Therefore, timely evaluation of landslide dams is important for prevention of catastrophic dam failures and mitigation of disasters caused by downstream flooding of the released water masses. Seepage erosion is one of the undermining factors affecting the stability and long-term performance of landslide dams and embankment dams. Many civil engineering and geoenvironmental studies have defined subsurface erosion processes by several terms such as piping, heave or blowout, seepage erosion, tunneling or jugging, internal erosion and sapping or spring sapping (Zasłavsky and Kassiff 1965; Jones 1981; Higgins 1982, 1984; Hutchinson 1982; Hagerty 1991; Wörman 1993; Terzaghi et al. 1996). However, a few researchers have made clear distinctions between the different processes involved in soil destabilization caused by seepage and piping (Jones 1981; Bryan and Yair 1982; Dunne 1990). The role of seepage in increasing positive pore-water pressure and causing apparent reduction of matric suction (u a -u w ) in unsaturated soils has been documented in the literature (Fredlund et al. 1978; Lam et al. 1987; Fredlund et al. 2012). Generally, landslide dams, stream banks and soil slopes are composed of unconsolidated materials which exist in unsaturated conditions. The stability of landslide dams in unsaturated conditions depends on the presence of matric suction which increases the shear strength of the soil τ, as described by the equation proposed by Fredlund et al. (1978): $$ \tau =c\hbox{'}+\left({\sigma}_n-{u}_a\right) \tan \varphi \hbox{'}+\left({u}_a-{u}_w\right) \tan {\varphi}^b $$ where c' = effective cohesion of the soil, (σ n -u a ) = net normal stress on the failure plane, ϕ' = effective friction angle with respect to the net normal stress, (u a -u w ) = matric suction, ϕ b = angle that denotes the rate of increase in shear strength relative to matric suction. Transient changes from unsaturated to saturated conditions under steady-state seepage initiate high hydraulic gradients that accentuate subsequent reduction of apparent cohesion of the soil. This, in turn, increases seepage forces that accelerate soil mobilization, exfiltration and downstream entrainment of the eroded soil particles, as described by the equation: $$ {F}_s={\gamma}_wi $$ where F s = seepage force per unit volume, i = hydraulic gradient, γ w = unit weight of water. Detailed research on seepage erosion processes in unsaturated soils and the effects of pore-water pressure on the stability of soil slopes have been carried out by Hutchinson (1982), Iverson and Major (1986), Howard and McLane (1988), Fredlund (1995), Skempton and Brogan (1994), Crosta and Prisco (1999), Rinaldi and Casagli (1999), Dapporto et al. (2001), Lobkovsky et al. (2004), Wilson et al. (2007), Fox et al. (2007), Cancienne et al. (2008), and Pagano et al. (2010). 
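As a quick numerical illustration of the two expressions above (a sketch only; the soil parameters below are assumed, not taken from the paper), the unsaturated shear strength and the seepage force per unit volume can be evaluated as follows.

```python
import math

GAMMA_W = 9.81  # unit weight of water, kN/m^3

def shear_strength(c_eff, net_normal_stress, phi_eff_deg, matric_suction, phi_b_deg):
    """tau = c' + (sigma_n - u_a) tan(phi') + (u_a - u_w) tan(phi_b), in kPa."""
    return (c_eff
            + net_normal_stress * math.tan(math.radians(phi_eff_deg))
            + matric_suction * math.tan(math.radians(phi_b_deg)))

def seepage_force(hydraulic_gradient, gamma_w=GAMMA_W):
    """F_s = gamma_w * i, seepage force per unit volume in kN/m^3."""
    return gamma_w * hydraulic_gradient

# Assumed values: c' = 5 kPa, (sigma_n - u_a) = 50 kPa, phi' = 35 deg,
# (u_a - u_w) = 10 kPa, phi_b = 15 deg, i = 0.25
tau = shear_strength(5.0, 50.0, 35.0, 10.0, 15.0)   # ~42.7 kPa
F_s = seepage_force(0.25)                           # ~2.45 kN/m^3
print(f"tau = {tau:.1f} kPa, F_s = {F_s:.2f} kN/m^3")
```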
The concept of hydraulic criteria for assessing the likelihood of initiation of internal erosion in soils is based on the hydraulic load acting on a soil particle which must exceed the drag forces of the seeping water. This is related to the critical hydraulic gradient i c , defined as the hydraulic gradient at which the effective stress of the soil becomes negligible. Apparently, a large number of theoretical and experimental approaches have been used to obtain critical hydraulic gradients in embankment dams, levees, dykes and other water-retaining structures. For example, Terzaghi (1943) obtained i c value of 1 for upward directed seepage flow as described by the following equation: $$ {i}_c=\frac{\gamma \hbox{'}}{\gamma_w} $$ where γ' = submerged unit weight of soil, and γ w = unit weight of water. However, Skempton and Brogan (1994) observed selective erosion of fines in internally unstable cohesionless soils for upward flow conditions at critical hydraulic gradients (i c = 0.2 ~ 0.34) lower than that obtained from Terzaghi's classical approach. Similarly, Den Adel et al. (1988) carried out tests for horizontal seepage flow and obtained critical hydraulic gradient values of 0.16 to 0.17 and 0.7 for unstable and stable soils, respectively. Ahlinhan and Achmus (2010) performed experiments with unstable soils for upward and horizontal seepage flows and obtained critical hydraulic gradient values of 0.18 to 0.23. Ke and Takahashi (2012) obtained critical hydraulic gradients of 0.21 to 0.25 for internal erosion with binary mixtures of silica sands under one-dimensional upward seepage flow. Whilst a lot of research has been done on critical hydraulic gradients for internal erosion, problems still exist in defining and ascribing limit values of hydraulic gradients for seepage erosion. For instance, Samani and Willardson (1981) proposed the hydraulic failure gradient i f , defined as the hydraulic gradient at which the shear strength of a confined soil is reduced by the drag forces of the seeping water. Wan and Fell (2004) introduced i start and i boil to represent critical hydraulic gradients for the onset of internal erosion and boiling, respectively. However, the conventional one-dimensional upward seepage tests can only be used to determine the hydraulic criteria for seepage erosion in granular materials with the exclusion of other factors such as dam geometry (dam height, dam crest width, upstream and downstream slope angles), and rate of inflow into the upstream reservoir. Hence, elaborate evaluation of the influence of these geometrical and hydraulic factors on seepage processes in landslide dams would require carrying out flume experiments where the characteristic deformation behaviour of the dam models would allow for accurate determination of the limit values of these hydraulic parameters. Brief review of seepage erosion in soils Comprehensive research on seepage erosion and piping mechanisms in landslide dams (Meyer et al. 1994; Davies and McSaveney 2011; Wang et al. 2013; Okeke and Wang 2016; Wang et al. in press), levees and earth embankments (Richards and Reddy 2007), hillslopes (Ghiassian and Ghareh 2008), and stream banks (Fox and Wilson 2010), have all been completed. Variations in experimental results and opinions are strictly based on the design and method of experiment adopted, coupled with size and scale effects arising from the nature of material tested. 
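The contrast between Terzaghi's classical value and the much lower onset gradients reported in the studies cited above can be illustrated with a short sketch (the saturated unit weight below is an assumed value, not a result from the paper).

```python
GAMMA_W = 9.81  # kN/m^3

def terzaghi_critical_gradient(gamma_sat, gamma_w=GAMMA_W):
    """i_c = gamma' / gamma_w, with gamma' = gamma_sat - gamma_w (upward seepage)."""
    return (gamma_sat - gamma_w) / gamma_w

i_c = terzaghi_critical_gradient(gamma_sat=19.5)  # assumed saturated unit weight
print(f"Terzaghi critical gradient i_c ~ {i_c:.2f}")  # ~0.99, i.e. close to 1
# Reported onset gradients for internally unstable soils are far lower, e.g.
# 0.16-0.17 (Den Adel et al. 1988) and 0.2-0.34 (Skempton and Brogan 1994).
```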
Seepage erosion involves the detachment and entrainment of finer soil particles through a porous medium under a hydraulic gradient caused by the seeping water (Cedergren 1977). The various processes involved in seepage erosion mechanisms in hillslopes and landslide dams have been identified. For example, sapping, as defined by Hagerty (1991), involves exfiltration over a broad area on a sloping surface such that large lenticular cavities appear as a result of concentrated seepage which removes soil particles at the exit point and increases the diameter of the evolving channel over time. Iverson and Major (1986) derived a generalized analytical method for the evaluation of seepage forces considering static liquefaction and Coulomb failure under steady uniform seepage in any direction within a hillslope. They observed that slope destabilization occurred as a result of the seepage force vector, a body force that corresponds to the hydraulic gradient potential. They concluded that slope instability will invariably occur when the direction of the seepage flow is such that λ = 90°-ϕ, whereas the existence of a vertically upward seepage component results in Coulomb failure at conditions similar to those required for static liquefaction, especially when the slope angle is approximately equal to ϕ. Howard (1988) used flume experiments and numerical simulations to evaluate sapping processes and sapping zone morphology in homogeneous, isotropic sand mixtures. His experiments identified three distinct zones at the sapping face: the mass wasting zone, the sapping zone and the fluvial transport zone, whereas numerical simulations performed by Howard and McLane (1988) revealed that the rate of mass wasting at the sapping face is dependent on the rate of sediment transport through the fluvial transport zone. Perzlmaier et al. (2007) presented an overview of empirically derived critical hydraulic gradients for initiation of backward erosion in a range of soil types based on field experience in several dams and levees (Table 1). Richards and Reddy (2010) evaluated piping potential in earth structures using a modified triaxial system, referred to as the true triaxial piping test apparatus (TTPTA). This apparatus was designed for controlling confining stresses and determining critical hydraulic gradients and critical velocities required for initiation of internal erosion. Their tests found that the critical hydraulic gradient and the critical seepage velocity for internal erosion in uniform fine-grained quartz sand varied from 1.8 × 10−3 to 2.4 × 10−3 and 8.1 × 10−3 to 1.1 × 10−3 m/s, respectively. They concluded that the critical seepage velocity is an essential parameter for evaluation of piping potentials in non-cohesive soils. Moffat et al. (2011) used a rigid wall permeameter to study internal erosion susceptibility in widely graded cohesionless soils by imposing a unidirectional flow in either upward or downward directions such that a constant average hydraulic gradient was maintained across the specimen. They found that suffusion occurred by 'episodic migration' of the finer fraction when the imposed average hydraulic gradient was increased. Chang and Zhang (2012) determined the critical hydraulic gradients for internal erosion under complex stress states using a computer-controlled triaxial testing apparatus which allowed for independent control of hydraulic gradient and stress states.
They found that under isotropic stress states, the initiation hydraulic gradient i start increased with an increase in effective mean stress. They further observed that under the same confining stress, the initiation gradients obtained under compression stress states were higher than those obtained under extension stress states. These findings may have cleared up some of the ambiguities associated with critical hydraulic gradients determined under one-dimensional seepage tests as noted by Fell and Fry (2013), due to the inability of the conventional method to monitor stress states of soils. Table 1 Comparison of empirically-derived critical average gradients i c for initiation of backward erosion and piping in different soil types (Perzlmaier et al. 2007) However, despite the wealth of research done so far, not much has been reported on the influence of geometrical and hydraulic conditions for seepage erosion development in landslide dams. This paper presents a comprehensive experimental programme conducted to investigate transient pore-water pressure variations and the critical hydraulic gradients for seepage-induced failure of landslide dams. A series of experiments were conducted in a flume tank modified to accurately determine the limit values of hydraulic gradients at the various stages of the dam failure process. This is in contrast to the conventional one-dimensional upward directed seepage tests performed in a modified triaxial chamber. The main objectives of this research are summarized as follows: (1) to determine the critical hydraulic gradients required for initiation i ini and failure i f of landslide dams under different geometrical and hydraulic conditions, as well as the critical seepage velocities for erosion and debris flow mobilization; (2) to investigate the effects of pore-water pressure during seepage processes and its role in initiating seepage erosion and dam failure; and (3) to identify the various failure mechanisms of landslide dams under steady-state seepage conditions. The experiments were conducted in a rectangular flume tank 2 m long, 0.45 m wide and 0.45 m high. The flume tank was made of 5 mm-thick acrylic sheets (plexiglass) of high transparency which enables visual observation of wetting front propagation, deformation and failure mechanism of the dam models. The flume was tilted to make a bed slope of ψ = 5°. The downstream end of the flume was equipped with two 4 cm-diameter holes for outflow of fluidized sediments. The water entering the upstream reservoir was provided by a rubber hose attached to a water tap while discharge into the upstream reservoir was controlled by a flowmeter connected to the drainage hose. The generation and dissipation of pore-water pressures during the experiments were monitored with three pore-water pressure sensors, hereafter referred to as p1, p2, and p3, with rated capacity of 50 kPa each (Fig. 1a). The sensors were fixed underneath the center of the flume bed through three 10 mm-diameter holes drilled on a horizontal line at the center of the flume bed. The sensors were separated by horizontal distances of 0.1 m and 0.103 m, respectively. Each of the pore-water pressure sensors was equipped with an L-shaped manometer attached to the outer wall of the flume to ensure an equal balance between the fluid pressure and atmospheric pressure. Transient variation in upstream reservoir level was monitored with a water level probe positioned near the toe of the upstream slope. 
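The apparatus description can be condensed into a small configuration record; the sketch below simply collects the stated flume dimensions and pore-water pressure sensor layout in one place (the field names are hypothetical and chosen for illustration, and the laser displacement sensors described next are omitted).

```python
from dataclasses import dataclass

@dataclass
class FlumeSetup:
    """Key flume and instrumentation parameters as described in the text."""
    length_m: float = 2.0                    # flume length
    width_m: float = 0.45                    # flume width
    height_m: float = 0.45                   # flume height
    bed_slope_deg: float = 5.0               # tilt of the flume bed (psi)
    pwp_sensor_capacity_kpa: float = 50.0    # rated capacity of p1, p2, p3
    p1_p2_spacing_m: float = 0.1             # horizontal spacing between p1 and p2
    p2_p3_spacing_m: float = 0.103           # horizontal spacing between p2 and p3
    outlet_hole_diameter_m: float = 0.04     # downstream outflow holes

setup = FlumeSetup()
print(setup)
```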
Deformations and settlements caused by seepage and pore-water pressure buildup were monitored with two 0.1 m-range CMOS multi-function analog laser displacement sensors attached to a wooden overboard (Fig. 1b). The two sensors, hereafter referred to as H d1 and H d2, were separated by a distance of 0.04 m.
a Experimental setup. H d Laser displacement sensors; Ups Upstream water level probe; p1, p2, and p3 Pore-water pressure sensors. b Side view of the flume tank before the commencement of an experiment
Soil characteristics
A series of experiments were conducted using different soils and testing conditions. Table 2 shows a summary of all the experiments conducted under different testing conditions while the results of the critical pore-water pressures and critical seepage velocities obtained from the tests are summarized in Table 3. Uniform commercial silica sand no. 8 was used to build the dam models, except in Exp 1 to 3 where the dam models were composed of different proportions of silica sand nos. 5 and 8, including industrial pebbles and gravel, hereafter referred to as sandfill dam (SD), gravelly dam I (GV-I), and gravelly dam II (GV-II), respectively. The grain size distribution curves of all the materials used are shown in Fig. 2. The mechanical and hydraulic characteristics of the materials used in the experiments are summarized in Table 4. Silica sand nos. 5 and 8 are generally composed of subangular to angular grains with dry repose angles of 32 and 35°, respectively. Constant-head permeability tests and other soil property tests were carried out on the soils based on the physical conditions (bulk density and antecedent moisture content) used in building the dam models in accordance with standards of the Japanese Geotechnical Society (JGS).
Table 2 Summary of all the experiments at different testing conditions
Table 3 Summary of results of critical pore-water pressures and critical seepage velocities obtained from the tests
Grain size distribution curves of the dam materials. GV-I Gravelly dam I, GV-II Gravelly dam II, SD Sandfill dam, SS-8 Silica sand no. 8
Table 4 Mechanical and hydraulic characteristics of the materials used in the experiments
Landslide dam model construction and experimental procedure
Landslide dam models of different geometries were built approximately 0.4 m downslope from the upstream water inlet (Fig. 3a). Effort was made in building the dam models so as to simulate naturally existing landslide dam prototypes. Mechanically mixed soils were placed in the flume tank in equal lifts using the moist tamping method. Initially, oven-dried soils were mixed with a known volume of water and then compacted to obtain the desired moisture content and bulk density. All the experiments were conducted with an antecedent moisture content of 5 %, except in Exp 8 to 15 where the antecedent moisture content was varied from 5 to 20 %. The geometrical characteristics of the dam models are shown in Fig. 3b. The dam height H d and the dam crest width D crw were varied from 0.15 to 0.3 m and 0.1 to 0.25 m, respectively. The angles α and β representing the upstream and downstream slope angles were varied from 35 to 40° and 30 to 60°, respectively.
a Plan view of the flume tank indicating the position of the dam model and monitoring sensors.
b Schematic diagram of the dam geometry (not to scale)
Seven series of experiments, comprising 27 test runs in total, were carried out, each intended to assess transient pore-water pressure variations and the critical hydraulic gradients for seepage erosion initiation and dam failure under steady-state seepage. The main experiments were conducted after carrying out a series of initial tests which were mostly done to check sensor reliability, result validation, test repeatability and selection of appropriate mixtures of materials. However, the results of experiments conducted on dams built with dam crest width D crw of 0.1 and 0.15 m are excluded from this paper owing to problems with the monitoring sensors. The initial conditions set for all the tests assumed that the upstream reservoir was empty. Filling of the upstream reservoir was carried out with a rubber hose attached to a water tap, and connected to a manually-operated flowmeter. A steady-state seepage through the dam models was achieved by ensuring that the upstream reservoir level remained constant at approximately two-thirds of the dam height. Real-time data were acquired by connecting all the sensors to a standard high-speed monitoring and recording workstation comprised of two synchronized universal recorders (PCD-330B-F) and a laptop computer. Sampling frequency was set at 50 Hz for all the tests. At the beginning of each experiment, discharge into the upstream reservoir was set at the desired value using a manually-operated flowmeter. The discharge was maintained until the upstream reservoir level equaled two-thirds of the dam height. Afterward, an equilibrium hydraulic head was established by ensuring that the upstream reservoir level remained constant prior to the collapse of the dam crest. The change from unsaturated to saturated state began during the filling of the upstream reservoir. Consequently, loss of matric suction due to positive pore-water pressure buildup under steady-state seepage, as observed from sensor p3 (Fig. 4), marked the onset of static liquefaction and exfiltration of water from the downstream toe, which further led to debris flow mobilization and dam failure.
Schematic diagram for the determination of hydraulic gradients
Determination of critical hydraulic gradients
Variations in hydraulic gradients (i 1 and i 2) through the dam models were determined from pore-water pressure values obtained from the experiments. Darcy (1856), as cited in Fredlund et al. (2012), postulated that the rate of water flow through a soil mass is proportional to the hydraulic gradient, as described by the equation: $$ v_w = -k_w \frac{\partial h_w}{\partial z} $$ where v w = flow rate of water per unit area (m/s), k w = coefficient of permeability with respect to the water phase (m/s), ∂h w /∂z = hydraulic gradient in the z-direction. Hydraulic heads h 1, h 2, and h 3 at three different locations within the dam models were computed from pore-water pressure values using the following equation (Fig. 4): $$ h = \frac{u_w}{\gamma_w \cos^2\psi} $$ where u w = pore-water pressure (kPa), γ w = unit weight of water (kN/m3), ψ = flume bed slope angle (degree).
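The head conversion above, together with the between-sensor gradient expressions derived in the next paragraph, can be mechanized in a few lines; the sensor spacings are the ones quoted in the apparatus description, while the pore-water pressures and bed elevations below are placeholders chosen purely for illustration.

```python
import math

GAMMA_W = 9.81  # unit weight of water, kN/m^3

def hydraulic_head(u_w_kpa, bed_slope_deg=5.0):
    """Hydraulic head (m) from pore-water pressure u_w (kPa): h = u_w / (gamma_w * cos^2(psi))."""
    return u_w_kpa / (GAMMA_W * math.cos(math.radians(bed_slope_deg)) ** 2)

def hydraulic_gradient(h_up, h0_up, h_down, h0_down, spacing_m, bed_slope_deg=5.0):
    """Gradient between two sensors: i ~ -[(h_down + h0_down) - (h_up + h0_up)] / (L / cos(psi))."""
    length_along_bed = spacing_m / math.cos(math.radians(bed_slope_deg))
    return -((h_down + h0_down) - (h_up + h0_up)) / length_along_bed

# Illustrative pore-water pressures at p1, p2, p3 (kPa) and bed elevations above the base (m);
# these numbers are invented for the example and are not measured values.
u_w = {"p1": 1.30, "p2": 0.95, "p3": 0.40}
h = {name: hydraulic_head(p) for name, p in u_w.items()}
h0 = {"p1": 0.018, "p2": 0.009, "p3": 0.000}

i_1 = hydraulic_gradient(h["p1"], h0["p1"], h["p2"], h0["p2"], spacing_m=0.100)
i_2 = hydraulic_gradient(h["p2"], h0["p2"], h["p3"], h0["p3"], spacing_m=0.103)
print(f"i_1 = {i_1:.3f}, i_2 = {i_2:.3f}")
```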
Therefore, the hydraulic gradient i 1 between sensors p1 and p2 was determined as described by the equation below: $$ i_1 \approx \frac{-\left[\left(h_2 + h_{02}\right) - \left(h_1 + h_{01}\right)\right]}{L_1 / \cos\psi} $$ Similarly, the hydraulic gradient i 2 between sensors p2 and p3 was determined as follows: $$ i_2 \approx \frac{-\left[\left(h_3 + h_{03}\right) - \left(h_2 + h_{02}\right)\right]}{L_2 / \cos\psi} $$ where h 01, h 02 and h 03 represent the corresponding vertical distances between the flat firm base and the slope bed, whereas L 1 and L 2 are the horizontal distances between p1 and p2, and p2 and p3, respectively. Two limit values of hydraulic gradients, corresponding to the onset of initiation of seepage erosion i ini and collapse of the dam crest i f , were determined based on results obtained from the initial tests.
General characteristics of the experiments
Two characteristic types of dam failure (Type I and Type II) were observed during the experiments and were found to depend on the geometry and hydromechanical characteristics of the dam materials. These were further subdivided into several interrelated failure processes which included wetting front propagation, downstream slope saturation, exfiltration, sapping/seepage-face erosion, toe bifurcation, undermining and progressive sloughing of the downstream face, and late-stage overtopping. Type I involves failures which could be related to static liquefaction of the soil mass under steady-state seepage that reduced the apparent cohesion of the soil and led to debris flow mobilization. This type of failure was primarily initiated by sapping erosion which occurred as a result of steady exfiltration of water from the downstream toe, which, by extension, triggered gradual undercutting and debuttressing of the downstream slope as the mobilized mass 'flowed' downstream, thus lowering the dam height (Fig. 5a). Dam failure occurred by overtopping as the upstream reservoir level reached the tip of the partially saturated dam material, eroding the entire crest to form a wide breach channel. This type of failure was characteristic of experiments conducted with low upstream inflow rates, low compactive effort (e o = 1.76), high downstream slope angle (β ≥ 40°), and dam crest width greater than 0.15 m.
Typical failure mechanisms of the dams. a Type I - Upslope propagation of wetting front, exfiltration, sapping and sloughing of the fluidized soil mass. b Type II - Downslope propagation of wetting front, bifurcation, and undermining of the slope toe
Type II involves failures triggered by downslope propagation of the wetting front and subsequent mobilization of the fluidized material at the upper part of the downstream face. This failure mechanism was characterized by downstream toe bifurcation and abrupt collapse of a large flank of the slope due to intense saturation which originated from the dam crest and progressed towards the downstream toe (Fig. 5b). Dam failure occurred by the formation of a hydraulic crack aligned perpendicular to the downstream face due to the reduction of the effective stress of the soil. This type of failure occurred mostly in dams of low downstream slope angle (β ≤ 40°), high shear strength of the soil relative to the shear stress of the seeping water, and high compactive effort (e o = 1.21).
Influence of dam composition
Three types of materials (SD, GV-I, and GV-II) were used to investigate transient changes in pore-water pressures and variations in hydraulic gradients under steady-state seepage through the dam models (Exp 1 ~ 3; Tables 2 and 3).
The dam models were built to obtain initial void ratios of 1.41, 0.71 and 0.84 for SD, GV-I, and GV-II materials, respectively. The resulting trends of pore-water pressures within the dam models indicate gross anisotropy and heterogeneity in dams composed of GV-I and GV-II, whereas the low critical pore-water pressures obtained in the dam built with homogeneous SD material demonstrate the liquefaction potential of cohesionless and isotropic sands (Fig. 6). The failure mechanism of the SD dam was basically characteristic of the Type I failure pattern. Enlargement of the sapping zone was characterized by occasional mass failures which were enhanced by a decrease in the effective stress of the soil as the energy of the exfiltrating water increased. In contrast, GV-I material showed the Type II failure mechanism, whereas the failure mechanism of GV-II material evolved from Type II to Type I (Fig. 7). Critical pore-water pressure values (p crit-1) determined at p1, which correspond to the onset of failure of the dams, were 1.30, 1.64 and 1.45 kPa for SD, GV-I, and GV-II, respectively. The observed trends of pore-water pressures within the dams were found to be inversely proportional to the initial void ratio e o (Table 3), and directly proportional to the coefficient of uniformity C u of the dam materials (Table 4). This could potentially be caused by capillary rise within the materials, which depends on the grain size distribution and bulk density of the constituent soil mass and, in turn, affected the porosity of the soil. Thus, the stability and deformation characteristics of the dams increased as the grain size distribution changed from poorly to well graded. Similarly, the critical hydraulic gradients for seepage erosion initiation i ini increased with a decrease in pore size, while the critical hydraulic gradient for collapse of the dam crest i f was influenced by the grain size distribution. The effect of grain size distribution on the development of seepage in the dams was evidenced by the variations in seepage velocity as the dynamics of the seeping water changed from laminar flow to turbulent flow (Table 3; Additional file 1: Video S1). The fact that the longevity of the dam built with GV-II material (v crit-2, 1.21 × 10−6 m/s) was higher than that of the dams built with SD and GV-I materials (v crit-2, 5.68 × 10−6 and 5.39 × 10−6 m/s, respectively) demonstrates that other physical parameters such as particle density, hydraulic conductivity and gravel content affect seepage development in landslide dams and soil slopes (Kokusho and Fujikura 2008).
Time-dependent transient changes in pore-water pressures and trends of hydraulic gradients in dams built with (a) Sandfill dam (b) Gravelly dam I and (c) Gravelly dam II
Images of seepage-induced failure of dams built with (a) Gravelly dam I and (b) Gravelly dam II
Rate of inflow into the upstream reservoir
Exp 4~7 were conducted to evaluate the influence of inflow rate Q in into the upstream reservoir. The dam models were built with uniform geometrical and physical characteristics (Table 2). Figure 8 shows the variations in pore-water pressures through the dams at steady-state inflow rates of 1.67 × 10−5 m3/s, 5 × 10−5 m3/s, 1 × 10−4 m3/s, and 1.67 × 10−4 m3/s. The filling rate of the upstream reservoir initiated seepage processes that changed the dynamics of the pore-water pressures. The critical hydraulic gradients for initiation of seepage erosion (i ini-1 and i ini-2) varied from 0.067 to 0.122.
A low p crit-1 value of 1.52 kPa was determined in the experiment conducted with Q in of 1.67 × 10−4 m3/s, relative to Q in of 5 × 10−5 m3/s (p crit-1 = 1.65 kPa) and 1 × 10−4 m3/s (p crit-1 = 1.68 kPa) (Table 3). This could be attributed to a rapid increase in the hydraulic head which initiated high seepage gradients that reduced the effective stress of the soil, leading to differential settlement, hydraulic cracking, and lowering of the dam crest. Thus, the rate of reduction of the shear strength of the soil due to a decrease in matric suction depends on the rate of inflow into the upstream reservoir Q in and the rate of propagation of the wetting front. Trends of hydraulic gradients through the dams indicate that i f1 decreased with an increase in Q in , whereas i f2 increased with an increase in Q in , suggesting a corresponding increase in seepage velocity between sensors p1 and p2 (Table 2; Fig. 13 in Appendix 1). Critical seepage velocities determined from the tests show that v crit-2 increased from 7.39 × 10−7 m/s for Q in of 1.67 × 10−5 m3/s to 1.01 × 10−6 m/s for Q in of 1.67 × 10−4 m3/s. Exfiltration, sapping and undercutting of the downstream toe, characteristic of the Type I failure mechanism, occurred at low inflow rates as a result of slowly developing seepage that led to liquefaction and collapse of the dam crest (Exp 4 and 5). In contrast, hydraulic cracking, downstream face saturation, and toe bifurcation, characteristic of the Type II failure mechanism, occurred in experiments conducted with high inflow rates (Exp 6 and 7). The experimental results demonstrate that the stability and time of collapse of the dam crest T b decreased with an increase in inflow rate into the upstream reservoir. This was evidenced by the characteristic failure mechanism of the dam models, which evolved from Type I to Type II with a corresponding increase in Q in (Table 3).
Transient variations in pore-water pressures in experiments conducted with upstream inflow rates of (a) 1.67 × 10−5 m3/s (b) 5 × 10−5 m3/s (c) 1 × 10−4 m3/s (d) 1.67 × 10−4 m3/s
Influence of material condition
Soil wetting is a major cause of shear strength reduction and volume change in unsaturated soils and is a commonly occurring factor in collapsible and expansive soils. Exp 8~11 were conducted to assess the influence of antecedent moisture content w on the deformation behaviour of landslide dams under steady-state seepage. Antecedent moisture contents of the soils were increased in increments of 5 % during soil preparation and dam model construction. Figure 9 shows the resulting trends of hydraulic gradients through the dams. A linear relationship was observed between the antecedent moisture content and the rate of deformation and collapse of the dam models (Fredlund 1999). It is noteworthy that the critical hydraulic gradients (i f1 and i f2) coincided with the onset of dam deformation and crest settlement. Measured critical hydraulic gradients for seepage erosion initiation varied from 0.053 to 0.118, while the critical hydraulic gradient for failure of the dams increased with an increase in antecedent moisture content. Similarly, the reduction of capillary forces due to an increase in soil moisture content caused the critical seepage velocity to decrease from 1.31 × 10−6 m/s for w = 5 % to 9.52 × 10−7 m/s for w = 20 %. The failure mechanism of the dams evolved from Type II to Type I as antecedent moisture content increased through the dams.
The rate of exfiltration and sapping erosion at the downstream toe increased with the degree of saturation, from slightly saturated to highly saturated soils. This was attributed to the reduction of matric suction caused by wetting, resulting in high void ratios that accentuated the abrupt collapse of the dams.
Trends of hydraulic gradients in dams built with an e o of 1.76 and antecedent moisture contents of (a) 5 % (b) 10 % (c) 15 % (d) 20 %
Figure 10 shows trends of hydraulic gradients and the failure mechanism of dam models built with the same antecedent moisture contents (5, 10, 15 and 20 %), but packed at a higher compactive effort, e o = 1.21 (Exp 12~15). The characteristic trends displayed by the hydraulic gradients, as well as the low critical seepage velocities determined from the experiments, indicate that the initial void ratio e o of the soil affected the failure mechanism of the dams. It is important to note that i f1 and i f2 increased with an increase in antecedent moisture content, thus suggesting that the dynamics of the seeping water were mainly characteristic of a laminar flow. The stability of the dam models increased as antecedent moisture content decreased from 20 to 5 %, as observed from T b and p crit-3, which indicates the effect of pore-water pressures in reducing the effective stress of the soil (Tables 2 and 3). This effect can be related to the influence of matric suction on the liquefaction potential and shear strength reduction in partially saturated soils (Simon and Collison 2001; Okamura and Soga 2006). Comparison between Exp 8 ~ 11 and Exp 12 ~ 15 shows that the deformation and collapse mechanisms of the dam models were more pronounced in dams with an e o of 1.76 (Exp 8 ~ 11) than in those with an e o of 1.21 (Exp 12 ~ 15) (Figs. 16 and 17 in Appendix 2). Similarly, a comparison between the critical hydraulic gradient measurements in Exp 8 ~ 11 and Exp 12 ~ 15 shows that the critical hydraulic gradient decreased with a decrease in initial void ratio. The observed trends of wetting front propagation and the transient changes in pore-water pressures suggest that seepage flow through the dam materials was not essentially controlled by matric suction but by a hydraulic head gradient (Fredlund and Rahardjo 1993).
Influence of dam geometry
The geometry of landslide dams is one of the major factors contributing to seepage erosion and slope instability. The two major factors that control the critical hydraulic gradient for instability in soil slopes are the downstream slope angle β and the gradient of the soil layer ψ (Iverson and Major 1986; Budhu and Gobin 1996). Basically, the internal friction angle of a dry cohesionless soil, at zero external pressure, is equal to the maximum stable slope angle of the soil. However, the soil mass collapses to a lower slope angle if steady-state seepage occurs. A series of experiments were conducted to evaluate the effects of downstream slope angle β on the critical hydraulic gradients for failure of landslide dams (Exp 16~19). The downstream slope angles were varied from 30 to 60°. A close examination of the results indicates that the stability of the dams increased as the downstream slope angle decreased from 60 to 30° (Table 2). The time of collapse of the dam crest increased from β = 60° (T b , 900 s) to β = 30° (T b , 2300 s). Similarly, i f1 increased with an increase in β, whereas i f2 decreased with an increase in β (Fig. 14 in Appendix 1).
Also, the critical seepage velocity decreased with an increase in β, indicating high failure potentials in dams of high downstream slope angles (Table 3). The variations in pore-water pressures and the failure mechanism of the dams are shown in Fig. 18 (Appendix 2). The failure mechanism of the dams built with β in the range of 30 to 40° was initiated by the bifurcation of the downstream toe (Type II), whereas exfiltration, sapping and undermining of the downstream toe were characteristic of dams with β in the range of 41 to 60° (Type I). Budhu and Gobin (1996) remarked that, for a soil with ϕ of 30°, the exit hydraulic gradient at the slope face varies from 1 (when λ = β) to a limit value of sin β (when λ = 90°). The influence of dam height on the stability and longevity of landslide dams under steady-state seepage was evaluated in dams built with different dam heights H d , ranging from 0.15 m to 0.3 m (Tables 2 and 3). The experiments were conducted at a constant upstream inflow rate of 1.2 × 10−4 m3/s (Exp 20~23). A positive correlation was observed between the critical hydraulic gradients for dam failure (i f1 and i f2) and the dam height. The values of i f1 and i f2 increased from 1.17 and 0.55 for H d = 0.15 m, to 1.35 and 0.85 for H d = 0.30 m (Table 2; Fig. 15 in Appendix 1). Critical pore-water pressure values correlating with the onset of failure of the dams increased from 1.13 kPa (H d = 0.15 m) to 1.72 kPa (H d = 0.30 m) (Fig. 19 in Appendix 2). The results show that at constant α and β, the stability of the dams increased with a decrease in dam height H d . This was further evidenced by the failure mechanism of the dams, which evolved from Type I for H d = 0.15 m to Type II for H d = 0.30 m. The results indicate that the height of landslide dams is an important parameter for assessing the stability of natural river blockages. Exp 24~25 were conducted to evaluate the influence of dam crest width D crw on the failure mechanism of landslide dams. A steady-state seepage was maintained at a constant upstream inflow of 1.67 × 10−4 m3/s. The results of transient variations in pore-water pressures and the corresponding trends of hydraulic gradients in the dams built with D crw of 0.20 m and 0.25 m (Exp 24 and 25) are shown in Fig. 11. The critical hydraulic gradients for seepage erosion initiation (i ini-1 and i ini-2) varied from 0.081 to 0.118. Exfiltration, sapping and debuttressing of the downstream toe, characteristic of the Type I failure pattern, were the major failure mechanisms of the dams (Fig. 12). The rate of propagation of the wetting front through the dams was strongly influenced by D crw /H d . High D crw /H d resulted in high values of i f1, i f2, and v crit . The continual propagation of the wetting front through the dams resulted in a gradual reduction of the effective stress of the soil, and subsequent mobilization of the liquefied mass which travelled downstream with an initial speed of 1.2 × 10−5 m/s. The episodic occurrence of hydraulic cracks and undermining and sloughing of the fluidized slope mass continued until the dam breached by overtopping. The results demonstrate that at constant hydraulic and geometrical conditions (H d , α and β), i f1 and i f2, as well as v crit , increased with an increase in D crw , indicating that the critical seepage velocity and the critical hydraulic gradient for seepage erosion in landslide dams are influenced by dam crest width D crw and D crw /H d .
Evolution of pore-water pressures and hydraulic gradients in dams built with dam crest widths of (a) 0.20 m (b) 0.25 m
Exfiltration, sapping and downstream toe debuttressing under steady-state seepage in dams built with dam crest widths of (a) 0.20 m (b) 0.25 m
An extensive experimental programme was carried out to investigate the effects of transient variations in pore-water pressures and the critical hydraulic gradients for seepage-induced failure of landslide dams using a flume tank specifically designed for accurate determination of these hydraulic parameters. A steady-state seepage was maintained by ensuring that the upstream reservoir level remained constant prior to the collapse of the dam crest. Limit values of hydraulic gradients and seepage velocities were determined for different hydromechanical and geometrical conditions. Based on the experimental results, the following conclusions can be drawn:
Sapping was the most dominant mechanism of slope destabilization observed in all the experiments. Other significant interrelated failure processes of the dam models included wetting front propagation, downstream face saturation, exfiltration, hydraulic cracking, toe bifurcation, downstream slope undercutting, sloughing and late-stage overtopping.
Two characteristic types of failure, which depend on the geometrical and hydromechanical properties of the dams, were observed: Type I and Type II. Type I commonly occurred in dams built with low compactive effort (e o = 1.76), high downstream slope angle (β ≥ 40°), crest width greater than 0.15 m, and moisture content lower than 15 %. This type of failure was initiated by exfiltration, sapping, and upslope propagation of the wetting front towards the dry upper region of the dam crest. The Type I failure mechanism shares characteristics with the three distinct zones of slope deformation triggered by sapping (the mass wasting, sapping and fluvial transport zones) reported by Howard and McLane (1988). In contrast, Type II was found in dams of low downstream slope angle (β ≤ 40°), dam height greater than 0.25 m, high upstream inflow rates and high compactive effort (e o = 1.21). Failure in these dams was triggered by downslope propagation of the wetting front, bifurcation of the damp lowermost part of the downstream toe, sapping erosion and sloughing of the fluidized slope material.
The build-up of positive pore-water pressure under steady-state seepage and its effects on the apparent cohesion of the soil were evaluated for different upstream inflow rates and antecedent moisture contents. The results indicated that the stability and longevity of the dam models increased with a decrease in upstream inflow rate and antecedent moisture content, demonstrating the significance of pore geometry, particle density, gradation, and hydraulic conductivity of the materials forming landslide dams in the development of seepage processes.
In all the experiments, the critical hydraulic gradients for seepage erosion initiation (i ini-1 and i ini-2) ranged from 0.042 to 0.147. The critical hydraulic gradient for collapse of the dam crest i f was strongly influenced by several factors, such as the initial void ratio (compactive effort), antecedent moisture content, particle density, grain size distribution, inflow rate into the upstream reservoir and the geometrical characteristics of the dams. In the dams built with mixed materials, i f1 and i f2 increased with an increase in uniformity coefficient.
The critical hydraulic gradient for collapse of the dam crest i f increased with an increase in inflow rate into the upstream reservoir (filling rate). Similarly, i f1 and i f2 were controlled by the combined effects of antecedent moisture content and porosity of the soil. At low void ratios, i f1 decreased with an increase in antecedent moisture content, whereas i f2 increased as antecedent moisture content increased through the dams. However, at high void ratios, over the same range of antecedent moisture contents, i f1 and i f2 increased with an increase in antecedent moisture content, suggesting seepage flow dynamics typical of laminar flow. Furthermore, both i f1 and i f2 increased with an increase in H d and D crw , whereas i f1 increased with an increase in β, and i f2 decreased as β increased. This indicates that the critical hydraulic gradient for dam failure for near-horizontal flow (ψ = 5°) depends on β.
These experiments demonstrate that seepage mechanisms in landslide dams comprised of unsaturated homogeneous and isotropic cohesionless materials are influenced by the hydraulic properties of the materials, as well as the geometrical characteristics of the dams. The textural characteristics of the materials used in these experiments are typical of landslide dams formed by rock avalanche processes, where fragmentation and pulverization of the rock materials cause seepage processes to develop in the upper blocky carapace layer. However, further research should be done considering a wide range of sediment sizes and the addition of commercially available kaolinite clay to evaluate the mechanism of shear strength reduction under steady-state seepage. It is believed that performing unsaturated seepage analysis and limit equilibrium analysis, with regard to the results and conditions set for these experiments, could give further insights into the critical conditions for stability of landslide dams under steady-state seepage.
C c = coefficient of curvature
C u = coefficient of uniformity
D crw (m) = dam crest width
D 50 (mm) = median grain size
e o = initial void ratio
F s (kN/m3) = seepage force per unit volume
H d (m) = height of the dam
i 1 = hydraulic gradient (between sensors p1 and p2)
i ini-1 = critical hydraulic gradient for seepage erosion initiation (between sensors p1 and p2)
i f1 = critical hydraulic gradient for collapse of the dam crest (between sensors p1 and p2)
K (m/s) = coefficient of permeability
p crit-1 (kPa) = critical pore-water pressure for collapse of the dam crest at p1
Q in (m3/s) = inflow rate into the upstream reservoir
T b (s) = time of collapse of the dam crest
u w (kPa) = pore-water pressure
V crit-1 (m/s) = critical seepage velocity determined at p1
w (%) = antecedent moisture content
α (degree) = upstream slope angle
β (degree) = downstream slope angle
γ' (kN/m3) = submerged unit weight of soil
γ w (kN/m3) = unit weight of water
λ = seepage direction
ρ dry (Mg/m3) = dry bulk density
ϕ (degree) = internal friction angle
ψ (degree) = flume bed slope angle
Ahlinhan, M.F., and M. Achmus. 2010. Experimental investigation of critical hydraulic gradients for unstable soils. In Proceedings of the Fifth International Conference on Scour and Erosion, San Francisco, California, ed. S.E. Burns, S.K. Bhatia, C.M.C. Avila, and B.E. Hunt, 599–608. doi:10.1061/41147(392)58. Allen, S.K., S.C. Cox, and I.F. Owens. 2011. Rock avalanches and other landslides in the central Southern Alps of New Zealand: a regional study considering possible climate change impacts.
Landslides 8(1): 33–48. Bligh, W.G. 1910. Dams, barrages and weirs on porous foundations. Engineering News 64(26): 708–710. Bonnard, C. 2011. Technical and human aspects of historic rockslide-dammed lakes and landslide dam breaches. In Natural and artificial rockslide dams, ed. S.G. Evans, R.L. Hermanns, A. Strom, and G. Scarascia-Mugnozza, 101–122. Berlin Heidelberg: Springer. Bryan, R.B., and A. Yair. 1982. Badland Geomorphology and Piping, 408. Norwich: Geobooks. Budhu, M., and R. Gobin. 1996. Slope instability from ground-water seepage. Journal of hydraulic Engineering 122(7):415–417. Cancienne, R.M., G.A. Fox, and A. Simon. 2008. Influence of seepage undercutting on the stability of root-reinforced streambanks. Earth Surface Processes and Landforms 33(11): 1769–1786. Cedergren, H.R. 1977. Seepage, drainage, and flow nets (Vol. 16), 1–534. New York: Wiley. Chai, H.J., H.C. Liu, Z.Y. Zhang, and Z.W. Xu. 2000. The distribution, causes and effects of damming landslides in China. Journal of Chengdu University of Technology 27: 302–307. Chang, D.S., and L.M. Zhang. 2012. Critical hydraulic gradients of internal erosion under complex stress states. Journal of Geotechnical and Geoenvironmental Engineering 139(9): 1454–1467. Chugaev, R.R. 1962. Gründungsumriss von Wasserbauwerken (in Russian).. Moskau – Leningrad. Clague, J.J., and S.G. Evans. 1994. Formation and failure of natural dams in the Canadian Cordillera, Geological Survey of Canada Bulletin, 464. Costa, J.E., and R.L. Schuster. 1988. The formation and failure of natural dams. Geological Society of America Bulletin 100(7): 1054–1068. Crosta, G., and C.D. Prisco. 1999. On slope instability induced by seepage erosion. Canadian Geotechnical Journal 36(6): 1056–1073. Crosta, G.B., P. Frattini, and F. Agliardi. 2013. Deep seated gravitational slope deformations in the European Alps. Tectonophysics 605: 13–33. Dai, F.C., C.F. Lee, J.H. Deng, and L.G. Tham. 2005. The 1786 earthquake-triggered landslide dam and subsequent dam-break flood on the Dadu River, southwestern China. Geomorphology 65(3): 205–221. Dapporto, S., M. Rinaldi, and N. Casagli. 2001. Failure mechanisms and pore water pressure conditions: analysis of a riverbank along the Arno River (Central Italy). Engineering Geology 61(4): 221–242. Darcy, H. 1856. Histoire des Foundataines Publique de Dijon, 590–594. Paris: Dalmont. Davies, T.R., and M.J. McSaveney. 2011. Rock-avalanche size and runout–implications for landslide dams. In Natural and Artificial Rockslide Dams, ed. S.G. Evans, R.L. Hermanns, A. Strom, and G. Scarascia-Mugnozza, 441–462. Berlin Heidelberg: Springer. Den Adel, H., K.J. Bakker, and M. Klein Breteler. 1988. Internal Stability of Minestone. In Proceedings of the International Symposium on Modelling Soil–Water–Structure Interaction, International Association for Hydraulic Research (IAHR), Netherlands, ed. P.A. Kolkman, J. Lindenberg, and K.W. Pilarczyk, 225–231. Rotterdam: Balkema. Dunne, T. 1990. Hydrology, mechanics, and geomorphic implications of erosion by subsurface flow. In Groundwater Geomorphology: The Role of Subsurface Water in Earth-Surface Processes and Landforms, Geol. Soc. Am. Spec. Pap. 252, ed. C.G. Higgins and D.R. Coates, 1–28. Evans, S.G., K.B. Delaney, R.L. Hermanns, A. Strom, and G. Scarascia-Mugnozza. 2011. The formation and behaviour of natural and artificial rockslide dams; implications for engineering performance and hazard management. In Natural and artificial rockslide dams, ed. S.G. Evans, R.L. Hermanns, A. Strom, and G. 
Scarascia-Mugnozza, 1–75. Berlin Heidelberg: Springer. Fell, R., and J.J. Fry. 2013. State of the art on the likelihood of internal erosion of dams and levees by means of testing. In Erosion in Geomechanics Applied to Dams and Levees, Chapter 1, ed. S. Bonelli, 1–99. London: ISTE-Wiley. Fox, G.A., and G.V. Wilson. 2010. The role of subsurface flow in hillslope and stream bank erosion: a review. Soil Science Society of America Journal 74(3): 717–733. Fox, G.A., G.V. Wilson, A. Simon, E.J. Langendoen, O. Akay, and J.W. Fuchs. 2007. Measuring streambank erosion due to groundwater seepage: correlation to bank pore water pressure, precipitation and stream stage. Earth Surface Processes and Landforms 32(10): 1558–1573. Fredlund, D.G. 1995. The stability of slopes with negative pore-water pressures. In The Ian Boyd Donald Symposium on Modern Developments in Geomechanics, vol. 3168, ed. C.M. Haberfield, 99–116. Clayton: Monash University, Department of Civil Engineering. Fredlund, D.G. 1999. The scope of unsaturated soil mechanics: an overview. In The Emergence of Unsaturated Soil Mechanics: Fredlund Volume, ed. A.W. Clifton, G.W. Wilson, and S.L. Barbour, 140–156. Ottawa: NRC Research Press. Fredlund, D.G., N.R. Morgenstern, and R.A. Widger. 1978. The shear strength of unsaturated soils. Canadian Geotechnical Journal 15(3): 313–321. Fredlund, D.G., H. Rahardjo. 1993. Soil mechanics for unsaturated soils. John Wiley & Sons. New York. Fredlund, D.G., H. Rahardjo, and M.D. Fredlund. 2012. Unsaturated soil mechanics in engineering practice, 926. New York: Wiley. Ghiassian, H., and S. Ghareh. 2008. Stability of sandy slopes under seepage conditions. Landslides 5(4): 397–406. Hagerty, D.J. 1991. Piping/sapping erosion: 1 Basic considerations. Journal of Hydraulic Engineering 117: 991–1008. Higgins, C.G. 1982. Drainage systems developed by sapping on Earth and Mars. Geology 10(3): 147–152. Higgins, C.G. 1984. Piping and sapping; development of landforms by groundwater outflow, 18–58. Boston: Allen & Unwin. Howard, A.D. 1988. Groundwater sapping experiments and modeling. Sapping Features of the Colorado Plateau: A Comparative Planetary Geology Field Guide 491: 71–83. Howard, A.D., and C.F. McLane. 1988. Erosion of cohesionless sediment by groundwater seepage. Water Resources Research 24(10): 1659–1674. Hutchinson, J.N. 1982. Damage to slopes produced by seepage erosion in sands. In Landslides and mudflows, 250–265. Moscow: Centre of International Projects, GKNT. Iverson, R.M., and J.J. Major. 1986. Groundwater seepage vectors and the potential for hillslope failure and debris flow mobilization. Water Resources Research 22(11): 1543–1548. Jones, J.A.A. 1981. The nature of soil piping: A review of research. In Brit. Geomorphol. Res. Group Res. Monogr. Serie 3. Norwich: Geobooks. Ke, L., and A. Takahashi. 2012. Influence of internal erosion on deformation and strength of gap-graded non-cohesive soil. In Proceedings of the Sixth International Conference on Scour and Erosion, Paris, 847–854. Kokusho, T., and Y. Fujikura. 2008. Effect of particle gradation on seepage failure in granular soils. In 4th Int'l Conf. on Scour and Erosion, Tokyo, Japan, 497–504. Korup, O., and F. Tweed. 2007. Ice, moraine, and landslide dams in mountainous terrain. Quaternary Science Reviews 26(25): 3406–3422. Korup, O., A.L. Densmore, and F. Schlunegger. 2010. The role of landslides in mountain range evolution. Geomorphology 120(1): 77–90. Lam, L., D.G. Fredlund, and S.L. Barbour. 1987. 
Transient seepage model for saturated-unsaturated soil systems: a geotechnical engineering approach. Canadian Geotechnical Journal 24(4): 565–580. Lane, E.W. 1935. Security from under-seepage masonry dams on earth foundations. Transactions of the American Society of Agricultural Engineers 60(4): 929–966. Lobkovsky, A.E., B. Jensen, A. Kudrolli, and D.H. Rothman. 2004. Threshold phenomena in erosion driven by subsurface flow. Journal of Geophysical Research 109: F04010. doi:10.1029/2004JF000172. Meyer, W., R.L. Schuster, and M.A. Sabol. 1994. Potential for seepage erosion of landslide dam. Journal of Geotechnical Engineering 120(7): 1211–1229. Moffat, R., R.J. Fannin, and S.J. Garner. 2011. Spatial and temporal progression of internal erosion in cohesionless soil. Canadian Geotechnical Journal 48(3): 399–412. Müller-Kirchenbauer, H., M. Rankl, and C. Schlötzer. 1993. Mechanism for regressive erosion beneath dams and barrages. In Proceedings of the First International Conference on Filters in Geotechnical and Hydraulic Engineering, ed. J. Brauns, M. Heibaum, and U. Schuler, 369–376. Rotterdam: Balkema. O'Connor, J.E., and J.E. Costa. 2004. The world's largest floods, past and present: their causes and magnitudes. In U.S Geological Survey Circular, 1254, 13. Okamura, M., and Y. Soga. 2006. Effects of pore fluid compressibility on liquefaction resistance of partially saturated sand. Soils and Foundations 46(5): 695–700. Okeke, A.C.U., and F. Wang. 2016. Hydromechanical constraints on piping failure of landslide dams: an experimental investigation. Geoenvironmental Disasters 3(1):1–17. Pagano, L., E. Fontanella, S. Sica, and A. Desideri. 2010. Pore water pressure measurements in the interpretation of the hydraulic behaviour of two earth dams. Soils and Foundations 50(2): 295–307. Perzlmaier, S., P. Muckenthaler, and A.R. Koelewijn. 2007. Hydraulic criteria for internal erosion in cohesionless soil. Assessment of risk of internal erosion of water retaining structures: dams, dykes and levees. In Intermediate Report of the European Working Group of ICOLD, 30–44. Munich: Technical University of Munich. Plaza, G., O. Zevallos, and É. Cadier. 2011. La Josefina Landslide Dam and Its Catastrophic Breaching in the Andean Region of Ecuador. In Natural and artificial rockslide dams, ed. S.G. Evans, R.L. Hermanns, A. Strom, and G. Scarascia-Mugnozza, 389–406. Berlin Heidelberg: Springer. Richards, K.S., and K.R. Reddy. 2007. Critical appraisal of piping phenomena in earth dams. Bulletin of Engineering Geology and the Environment 66(4): 381–402. Richards, K.S., and K.R. Reddy. 2010. True triaxial piping test apparatus for evaluation of piping potential in earth structures. Journal of ASTM Geotech Test 33(1): 83–95. Rinaldi, M., and N. Casagli. 1999. Stability of streambanks formed in partially saturated soils and effects of negative pore water pressures: the Sieve River (Italy). Geomorphology 26(4): 253–277. Samani, Z.A., and L.S. Willardson. 1981. Soil hydraulic stability in a subsurface drainage system. Transactions of the American Society of Agricultural Engineers 24(3): 666–669. Simon, A., and A.J. Collison. 2001. Pore‐water pressure effects on the detachment of cohesive streambeds: seepage forces and matric suction. Earth Surface Processes and Landforms 26(13): 1421–1442. Skempton, A.W., and J.M. Brogan. 1994. Experiments on piping in sandy gravels. Geotechnique 44(3): 449–460. Terzaghi, K. 1943. Theoretical soil mechanics, 1–510. New York: Wiley. Terzaghi, K., R.B. Peck, and G. Mesri. 1996. 
Soil Mechanics in Engineering Practice, 3rd ed. New York: Wiley. Wan, C.F., and R. Fell. 2004. Experimental investigation of internal instability of soils in embankment dams and their foundations, UNICIV Report No. R429. Sydney: University of New South Wales. Wang, G., R. Huang, T. Kamai, and F. Zhang. 2013. The internal structure of a rockslide dam induced by the 2008 Wenchuan (M w 7.9) earthquake, China. Engineering Geology 156:28–36. Wang, F.W., Y. Kuwada, A.C. Okeke, M. Hoshimoto, T. Kogure, and T. Sakai. In press. Comprehensive study on failure prediction of landslide dams by piping. In Proceedings of the 12th International Symposium on Landslides, Napoli, Italy. Weijers, J.B.A., and J.B. Sellmeijer. 1993. A new model to deal with the piping mechanism. In Proceedings of the First International Conference on Filters in Geotechnical and Hydraulic Engineering, ed. J. Brauns, M. Heibaum, and U. Schuler, 345–355. Rotterdam: Balkema. Wilson, G.V., R.K. Periketi, G.A. Fox, S.M. Dabney, F.D. Shields, and R.F. Cullum. 2007. Soil properties controlling seepage erosion contributions to streambank failure. Earth Surface Processes and Landforms 32(3): 447–459. Wörman, A. 1993. Seepage-induced mass wasting in coarse soil slopes. Journal of Hydraulic Engineering 119(10): 1155–1168. Zasłavsky, D., and G. Kassiff. 1965. Theoretical formulation of piping mechanism in cohesive soils. Geotechnique 15(3): 305–316.
This investigation was financially supported by JSPS KAKENHI Grant Number A-2424106 for landslide dam failure prediction. Dr Solomon Obialo Onwuka (University of Nigeria, Nsukka) is gratefully acknowledged for his valuable comments and suggestions. The authors would like to thank the anonymous reviewers for reviewing the draft version of the manuscript.
Department of Geoscience, Graduate School of Science and Engineering, Shimane University, 1060 Nishikawatsu-cho, Matsue, Shimane, 690-8504, Japan
Austin Chukwueloka-Udechukwu Okeke & Fawu Wang
Correspondence to Austin Chukwueloka-Udechukwu Okeke.
FW acquired the laboratory materials used in the research. ACO designed and conducted the experiments. FW supervised the research and made suggestions on the initial method adopted for the experiments. ACO analyzed the experimental results and wrote the first draft of the manuscript. All authors read and approved the final manuscript.
Failure mechanism of Sandfill Dam (Experiment 1).
(MP4 187388 kb)
Trends of hydraulic gradients in experiments carried out with upstream inflow rates of (a) 1.67 × 10−5 m3/s (b) 5 × 10−5 m3/s (c) 1 × 10−4 m3/s (d) 1.67 × 10−4 m3/s
Trends of hydraulic gradients in dams built with downstream slope angles of (a) 30° (b) 40° (c) 50° (d) 60°
Trends of hydraulic gradients in dams built with dam heights of (a) 0.15 m (b) 0.20 m (c) 0.25 m (d) 0.30 m
Evolution of pore-water pressures in dams built with an e o of 1.76 and antecedent moisture contents of (a) 5 % (b) 10 % (c) 15 % (d) 20 %
Variations in pore-water pressures in dams built with downstream slope angles of (a) 30° (b) 40° (c) 50° (d) 60°
Transient changes in pore-water pressures in dams built with dam heights of (a) 0.15 m (b) 0.20 m (c) 0.25 m (d) 0.30 m
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Okeke, A.C.U., and Wang, F. Critical hydraulic gradients for seepage-induced failure of landslide dams. Geoenviron Disasters 3, 9 (2016). https://doi.org/10.1186/s40677-016-0043-z
Keywords: Sapping; Hydraulic gradient; Critical seepage velocity; Wetting front propagation; Downstream slope saturation; Landslide dams
The Journal of Samara State Technical University, Ser. Physical and Mathematical Sciences is a scientific periodical published by Samara State Technical University since 1996. For a long time the journal mainly published new scientific results of Russian scientific schools. It is now aimed at both Russian and foreign scientists working in the priority research areas of Samara State Technical University, since its main purpose is the open dissemination of scientific knowledge among Russian and foreign researchers. Since 2011 the journal has been published quarterly (four issues a year), with an issue size of about 200 pages; articles appear in Russian and English. The journal is published in printed and electronic versions. The editorial board accepts and evaluates manuscripts irrespective of race, gender, nationality, heritage, citizenship, occupation, employment, residence, political, philosophic, religious and any other views of the author. A submitted article should present completed scientific research and should not have been published, or be under consideration for publication, elsewhere. The manuscript should contain novel scientific results in the priority research areas of Samara State Technical University, including "Differential Equations and Mathematical Physics", "Mechanics of Solids", "Mathematical Modeling, Numerical Methods and Software Systems". The journal is published at the publisher's expense; all materials are published free of charge, and no author fee is paid. All materials of the electronic version are freely available. The target audience of the journal is scientists working in the following areas: "Differential Equations and Mathematical Physics", "Deformable Solid Body Mechanics", "Mathematical Modeling, Numerical Methods and Software Systems". The journal is included in the Russian Science Citation Index database on the Web of Science platform. The journal is included in the VINITI abstracts databases. The issue details are published in ULRICH'S Periodical Directory. The journal articles are indexed in Scholar.Google.com, zbMATH, CyberLeninka.ru, and Math-Net.ru. The journal is integrated into the CrossRef and FundRef systems. URL: https://journals.eco-vector.com/1991-8615/issue/view/2521
On a boundary value problem for a third-order parabolic-hyperbolic type equation with a displacement boundary condition in its hyperbolicity domain
Balkizov Z.A.
In the article, we investigate a boundary-value problem for a third-order inhomogeneous parabolic-hyperbolic equation with a wave operator in the hyperbolicity domain. A linear combination, with variable coefficients, of derivatives of the sought function on the independent characteristics, as well as on the line where the type and order of the equation change, is specified as the boundary condition. We establish necessary and sufficient conditions that guarantee existence and uniqueness of a regular solution to the problem under study. In some cases, a representation of the solution is written out explicitly.
Journal of Samara State Technical University, Ser. Physical and Mathematical Sciences. 2020;24(2):211-225
Group classification, invariant solutions and conservation laws of nonlinear orthotropic two-dimensional filtration equation with the Riemann–Liouville time-fractional derivative
Lukashchuk V.O., Lukashchuk S.Y.
A nonlinear two-dimensional orthotropic filtration equation with the Riemann–Liouville time-fractional derivative is considered.
It is proved that this equation can admit only linear autonomous groups of point transformations. The Lie point symmetry group classification problem for the equation in question is solved with respect to the coefficients of piezoconductivity. These coefficients are assumed to be functions of the square of the pressure gradient absolute value. It is proved that if the order of fractional differentiation is less than one then the considered equation with arbitrary coefficients admits a four-parameter group of point transformations in the orthotropic case, and a five-parameter group in the isotropic case. For the power-law piezoconductivity, the group admitted by the equation is five-parametric in the orthotropic case, and six-parametric in the isotropic case. Also, a special case of the power function of piezoconductivity is determined for which there is an additional extension of the admitted groups by the projective transformation. There is no analogue of this case for the integer-order filtration equation. It is also shown that if the order of fractional differentiation $\alpha \in (1,2)$ then the dimensions of the admitted groups are incremented by one in all cases, since an additional translation symmetry exists. This symmetry corresponds to an additional particular solution of the fractional filtration equation under consideration. Using the group classification results for the orthotropic case, the representations of group-invariant solutions are obtained for two-dimensional subalgebras from optimal systems of symmetry subalgebras. Examples of reduced equations obtained by the symmetry reduction technique are given, and some exact solutions of these equations are presented. It is proved that the considered time-fractional filtration equation is nonlinearly self-adjoint and therefore the corresponding conservation laws can be constructed. The components of the obtained conserved vectors are given in explicit form.
Sobolev spaces and boundary-value problems for the curl and gradient-of-divergence operators
Saks R.S.
We study boundary value and spectral problems in a bounded domain $G$ with smooth boundary for the operators $\operatorname{rot} +\lambda I$ and $\nabla \operatorname{div} +\lambda I$ in the Sobolev spaces. For $\lambda\neq 0$ these operators are reducible (by the method of B. Veinberg and V. Grushin) to elliptic matrix operators, and the boundary value problems satisfy V. Solonnikov's ellipticity conditions. Useful properties of solutions of these spectral problems follow from this theory and its estimates. The $\nabla \operatorname{div}$ and $\operatorname{rot}$ operators have self-adjoint extensions $\mathcal{N}_d$ and $\mathcal{S}$ in the orthogonal subspaces $\mathcal{A}_{\gamma }$ and $\mathbf{V}^0$ formed from potential and vortex fields in $\mathbf{L}_{2}(G)$. Their eigenvectors form orthogonal bases in $\mathcal{A}_{\gamma }$ and $\mathbf{V}^0$; their elements are represented by Fourier series, and the operators act as transformations of these series. We define analogues $\mathbf{A}^{2k}_{\gamma }$ and $\mathbf{W}^m$ of the Sobolev spaces of orders $2k$ and $m$ in the classes of potential and vortex fields, and classes $C(2k,m)$ of their direct sums. It is proved that if $\lambda\neq \operatorname{Sp}(\operatorname{rot})$, then the operator $\operatorname{rot}+\lambda I$ maps the class $C(2k,m+1)$ onto the class $C(2k,m)$ one-to-one and continuously. If $\lambda\neq \operatorname{Sp}(\nabla \operatorname{div})$, then the operator $\nabla \operatorname{div}+\lambda I$ maps the class $C(2(k+1), m)$ onto the class $C(2k,m)$ in the same manner.
Creep and long-term strength of metals under unsteady complex stress states (Review) Lokoshchenko A.M., Fomin L.V., Teraud W.V., Basalov Y.G., Agababyan V.S. This article is an analytical review of experimental and theoretical studies of creep and creep rupture strength of metals under unsteady complex stress states published over the past 60 years. The first systematic studies of the creep of metals under complex stress conditions were published in the late 1950s and early 1960s in the Soviet Union (L. M. Kachanov and Yu. N. Rabotnov) and Great Britain (A. E. Johnson). Pioneering work on creep rupture strength first appeared in the USSR (L. M. Kachanov and Yu. N. Rabotnov). Subsequently, Yu. N. Rabotnov developed the kinetic theory of creep and creep rupture strength, with the help of which it is possible to describe efficiently various features of the creep process of metals up to fracture under various loading programs. Different versions of the kinetic theory use either a scalar damage parameter, a vector parameter, a tensor parameter, or a combination of them. Following the work of L. M. Kachanov and Yu. N. Rabotnov, continuum damage mechanics began to develop in Europe, in Asia, and then in the USA. The hypothesis of proportionality between the stress deviators and the deviators of the creep strain rates is accepted as the main relation between the components of the stress and creep strain tensors. When modeling experimental data, the proportionality coefficient in this dependence takes different forms. The main problem in the development of this field is the difficulty of obtaining experimental data under arbitrary loading programs. This review provides the main results of studies conducted by scientists from different countries. Besides Yu. N. Rabotnov and L. M. Kachanov, significant contributions to the development of this field of science were made by the Russian scientists N. N. Malinin, A. A. Ilyushin, V. S. Namestnikov, S. A. Shesterikov, A. M. Lokoshchenko, Yu. P. Samarin, O. V. Sosnin, A. F. Nikitenko, et al. Exact solutions to generalized plane Beltrami–Trkal and Ballabh flows Prosviryakov E.Y. Nonstationary plane flows of a viscous incompressible fluid in a potential field of external forces are considered. An elliptic partial differential equation is obtained, each solution of which is the stream function of a vortex flow described by an exact solution to the Navier–Stokes equations. The obtained solutions generalize the Beltrami–Trkal and Ballabh flows. Examples of such new solutions are given. They are intended for verifying numerical algorithms and computer programs. Research of a retrial queueing system with exclusion of customers and three-phase phased by follow-up Nazarov A.A., Izmailova Y.E. In this paper, we consider a retrial queueing system (RQ-system) whose input is a Poisson flow with a given intensity. If the server is busy when a customer arrives, the customer currently being served is displaced. Customers that are not successfully served go into orbit and, after a random exponentially distributed delay, turn to the server again for service. It is shown that the limiting characteristic function of the number of customers in orbit and of the server states corresponds to a three-dimensional Gaussian distribution. The mean vector and covariance matrix are obtained for this distribution. A stationary probability distribution of the server states is also found.
Stochastic calculation of curves dynamics of enterprise Saraev A.L., Saraev L.A. The article proposes mathematical models of the stochastic dynamics of the development of single-factor manufacturing enterprises through internal and external investments. Balance equations for such enterprises are formulated, describing random processes of continuous increase in output and growth of production factors. The interaction of proportional, progressive and digressive depreciation with internal and external investments is investigated. Equations are obtained to determine the equilibrium state of the enterprise, and the limiting values of the factors of production are calculated. The cases of stable progressive development of the enterprise, suspension of its work during the re-equipment of production, and a temporary crisis of production shutdown during equipment replacement are considered. The algorithm for the numerical solution of the stochastic differential equations of enterprise development is constructed in accordance with the Euler–Maruyama method. For each run of this algorithm, the corresponding stochastic trajectories are constructed for the random function of the production factor. A variant of the method for calculating the expectation of the random function of a factor of production is developed, and the corresponding differential equation is obtained for it. It is shown that the numerical solution of this equation and the average value of the function of the production factor calculated from two hundred realizations of stochastic trajectories give almost identical results. Numerical analysis of the developed models showed good agreement with the known statistical data of the production enterprise. Couette flow of hot viscous gas Khorin A.N., Konyukhova A.A. A new exact solution is found for the equations of motion of a viscous gas for a stationary shear flow of hot (800–1500 K) gas between two parallel plates moving at different speeds (an analogue of the incompressible Couette flow). One of the plates was considered thermally insulated. For the dependence of the viscosity coefficient on temperature, the Sutherland formula is adopted. Unlike other known exact solutions, instead of a linear relation between the viscosity and thermal conductivity coefficients, a more accurate formula was used to calculate the thermal conductivity coefficient, having the same accuracy in the temperature range under consideration as the Sutherland formula (2 %). Using the obtained exact solution, the qualitative effect of compressibility on the friction stress and on the temperature and velocity profiles was investigated. It is shown that the compressibility of the gas leads to an increase in the friction stress if one of the plates is thermally insulated. The new exact solution was compared with the known exact solution (Golubkin, V.N. & Sizykh, G.B., 2018) obtained using the Sutherland formula for the viscosity coefficient and the Reynolds analogy for the thermal conductivity coefficient. It was found that both solutions lead to the same conclusions about the qualitative effect of compressibility on the friction stress and on the temperature and velocity profiles. However, the increase in friction stress caused by the compressibility of the gas turned out to be underestimated by a factor of two when using the Reynolds analogy. This shows that the assumption of a linear relationship between the coefficients of viscosity and thermal conductivity can lead to noticeable quantitative errors.
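As a side note on the Euler–Maruyama scheme mentioned in the enterprise-dynamics abstract above, the following is a minimal illustrative sketch. The drift and diffusion functions, all parameter values, and the variable names are hypothetical assumptions made here for illustration; they are not taken from the paper.

import numpy as np

def euler_maruyama(drift, diffusion, x0, t_end, n_steps, rng):
    # Simulate one trajectory of dX = drift(X) dt + diffusion(X) dW.
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))           # Brownian increment
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw
    return x

# Hypothetical single-factor growth model: saturating drift, multiplicative noise.
drift = lambda k: 0.4 * k * (1.0 - k / 10.0)        # net investment minus depreciation
diffusion = lambda k: 0.2 * k                       # volatility of investment inflows

rng = np.random.default_rng(0)
paths = np.array([euler_maruyama(drift, diffusion, 1.0, 20.0, 2000, rng)
                  for _ in range(200)])
print(paths[:, -1].mean())                          # sample mean of the production factor at t = 20

Averaging many such trajectories, as the abstract does over two hundred realizations, approximates the expectation that the authors also obtain from a separate differential equation.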
$\alpha$-Differentiable functions in complex plane Pashaei R., Pishkoo A., Asgari M.S., Ebrahimi Bagha D. In this paper, the conformable fractional derivative of order $\alpha$ is defined in the complex plane. With regard to the multi-valued function $z^{1-\alpha}$, we obtain fractional Cauchy–Riemann equations which, in the case $\alpha=1$, reduce to the classical Cauchy–Riemann equations. Properties of the complex conformable fractional derivative of certain functions in the complex plane are considered. We then discuss two complex conformable differential equations and their solutions together with the corresponding Riemann surfaces. For some values of the order of the derivative $\alpha$, we compare their plots. An undamped oscillation model with two different contact angles for a spherical droplet impacting on a solid surface Chen S., Cong B., Zhang D., Liu X., Shen S. In order to further elucidate the dynamics of a droplet oscillating on a solid surface, a new method for handling the contact angle of the droplet during oscillation was developed, based on the spherical model. The influence of gravity on the contact angle and spreading radius was discussed. Thus, an equation relating the spreading radius of the droplet to time was established. The results of the theoretical calculation were compared with smoothed numerical results.
April 2013, 9(2): 391-409. doi: 10.3934/jimo.2013.9.391 A penalty-free method for equality constrained optimization Zhongwen Chen, Songqiang Qiu and Yujie Jiao, School of Mathematics Science, Soochow University, Suzhou, 215006, China. Received September 2011; Revised January 2013; Published February 2013. A penalty-free method is introduced for solving nonlinear programming with nonlinear equality constraints. This method does not use any penalty function, nor a filter. It uses a trust region technique to compute trial steps. By comparing the measures of feasibility and optimality, the algorithm either tries to reduce the value of the objective function by solving a normal subproblem and a tangential subproblem, or tries to improve feasibility by solving a normal subproblem only. In order to guarantee global convergence, the measure of constraint violation in each iteration is required not to exceed a progressively decreasing limit. Under the usual assumptions, we prove that the given algorithm is globally convergent to first-order stationary points. Preliminary numerical results on CUTEr problems are reported. Keywords: penalty-free method, trust region, nonlinear equality-constrained optimization, global convergence. Mathematics Subject Classification: Primary: 65K05; Secondary: 90C26, 90C3. Citation: Zhongwen Chen, Songqiang Qiu, Yujie Jiao. A penalty-free method for equality constrained optimization. Journal of Industrial & Management Optimization, 2013, 9 (2): 391-409. doi: 10.3934/jimo.2013.9.391
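To make the composite-step idea in the abstract more concrete, here is a heavily simplified, hypothetical sketch of that general structure: a Gauss–Newton normal step that improves feasibility, a projected-gradient tangential step that improves optimality within the remaining trust-region budget, and a progressively decreasing bound on constraint violation. This is not the authors' algorithm; the toy problem, step rules, acceptance test, and constants are all assumptions made here for illustration only.

import numpy as np

# Hypothetical toy problem (not from the paper): minimize f subject to c(x) = 0.
def f(x):      return x[0]**2 + 2.0 * x[1]**2
def grad_f(x): return np.array([2.0 * x[0], 4.0 * x[1]])
def c(x):      return np.array([x[0] + x[1] - 1.0])
def jac_c(x):  return np.array([[1.0, 1.0]])

def solve(x, delta=1.0, tol=1e-8, max_iter=200):
    h_max = max(np.linalg.norm(c(x)), 1.0)          # progressively decreasing violation limit
    for _ in range(max_iter):
        g, A = grad_f(x), jac_c(x)
        # Normal step: least-squares (Gauss-Newton) step toward feasibility,
        # restricted to a fraction of the trust region.
        n = -np.linalg.pinv(A) @ c(x)
        if np.linalg.norm(n) > 0.8 * delta:
            n *= 0.8 * delta / np.linalg.norm(n)
        # Tangential step: steepest descent projected onto the null space of A,
        # restricted to the remaining trust-region budget.
        P = np.eye(x.size) - np.linalg.pinv(A) @ A
        t = -P @ g
        budget = np.sqrt(max(delta**2 - n @ n, 0.0))
        if np.linalg.norm(t) > budget:
            t *= budget / max(np.linalg.norm(t), 1e-16)
        s = n + t
        trial = x + s
        pred = -(g @ s)                             # predicted decrease of f (linear model)
        ared = f(x) - f(trial)                      # actual decrease of f
        feas_ok = np.linalg.norm(c(trial)) <= h_max
        opt_ok = pred > 0 and ared >= 0.1 * pred    # fraction of the predicted decrease achieved
        gain_feas = (np.linalg.norm(c(trial)) <= 0.9 * np.linalg.norm(c(x))
                     and np.linalg.norm(c(x)) > tol)
        if feas_ok and (opt_ok or gain_feas):       # accept the trial point
            x = trial
            h_max = max(0.9 * h_max, tol)           # tighten the violation limit
            if pred > 0 and ared >= 0.75 * pred:
                delta = min(2.0 * delta, 10.0)
        else:                                       # reject: shrink the trust region
            delta *= 0.5
        if np.linalg.norm(c(x)) <= tol and np.linalg.norm(P @ grad_f(x)) <= tol:
            break
    return x

print(solve(np.array([3.0, -1.0])))                 # expected to approach (2/3, 1/3)

In the actual method the subproblems are solved more carefully and the acceptance test compares feasibility and optimality measures as described in the paper; the sketch only mirrors the overall loop structure.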
Serial dependence in the perceptual judgments of radiologists Mauro Manassi (ORCID: orcid.org/0000-0003-4210-7570), Cristina Ghirardo, Teresa Canas-Bajo, Zhihang Ren, William Prinzmetal & David Whitney In radiological screening, clinicians scan myriads of radiographs with the intent of recognizing and differentiating lesions. Even though they are trained experts, radiologists' visual search is not perfect: average daily error rates are estimated at around 3–5%. A main underlying assumption in radiological screening is that visual search on a current radiograph occurs independently of previously seen radiographs. However, recent studies have shown that human perception is biased by previously seen stimuli; the bias in our visual system to misperceive current stimuli towards previous stimuli is called serial dependence. Here, we tested whether serial dependence impacts radiologists' recognition of simulated lesions embedded in actual radiographs. We found that serial dependence affected radiologists' recognition of simulated lesions; perception on an average trial was pulled 13% toward the 1-back stimulus. Simulated lesions were perceived as biased towards those seen in the previous 1 or 2 radiographs. Similar results were found when testing lesion recognition in a group of untrained observers. Taken together, these results suggest that perceptual judgements of radiologists are affected by previous visual experience, and thus some of the diagnostic errors exhibited by radiologists may be caused by serial dependence from previously seen radiographs. In a medical screening setting, radiologists repeatedly search for signs of tumors in radiological scan images, classifying them, judging their size, class, position and so on. An underlying assumption about visual search in this setting is that current perceptual experience is independent of our previous perceptual experience. Here, we show that perceptual judgments of radiologists are biased by serial dependence. We found that radiologists' recognition of simulated lesions was strongly biased by their past visual experience. This source of error, unlike a mere response bias, extended over 10 seconds back in time (was temporally tuned), occurred only between similar lesions (was featurally tuned), and occurred only within a limited spatial region (was spatially tuned). Our experiments provide evidence for a newly pinpointed source of error in radiological screening. Crucially, our results show limited and precise boundaries within which the detrimental effects of serial dependence occur in radiologists, and open the path to potential strategies that may mitigate these effects. Cancer diagnosis in medical images is crucial for the health of millions of people, but it is still far from perfect. For example, within mammography, false negative and false positive rates have been reported to be 0.15% and 9%, respectively (Nelson et al., 2016). Some of these misdiagnoses are due to misperceptions and misinterpretations of radiographs by clinicians (Berlin, 2007; Croskerry, 2003). Interpretive errors in radiology are defined as the discrepancy in interpretation between the radiologist and peer consensus (Bruno et al., 2015; Waite et al., 2017), and it has been proposed that perceptual errors account for 60–80% of the total (Funaki et al., 1997; Kim & Mansfield, 2014).
Some sources of interpretive error have been identified and characterized, including search and recognition errors (Carmody et al., 1980; Nodine et al., 1996), cognitive biases (Croskerry, 2003; Lee et al., 2013), search satisfaction (Ashman et al., 2000; Berbaum & Franken Jr, 2011), subsequent search misses (Birdwell et al., 2001; Boyer et al., 2004; Harvey et al., 1993), and low prevalence (Wolfe et al., 2005, 2007; Rich et al., 2008; Menneer et al., 2010; Evans et al., 2013; Horowitz, 2017; Kunar et al., 2017). However, some other errors in cancer image interpretation are still without explanation (Bruno et al., 2015; Waite et al., 2017, 2019). Given the importance of this issue, a great deal of research has been carried out in the last decades to understand how to identify and characterize the source of these mistakes in order to mitigate them as much as possible. When looking at a radiograph, clinicians are typically asked to localize lesions (if present), and then to classify them by judging their size, class, and so on. Importantly, during this visual search task, radiologists often examine dozens or hundreds of images in batches, sometimes seeing several related images one after the other. During this process, a main underlying assumption is that radiologists' percepts and decisions about a current image are completely independent of prior perceptual events. Recent theoretical and empirical research has raised the possibility that this is not true. The visual system is characterized by visual serial dependency, a type of sequential effect in which what was previously seen influences (captures) what is seen and reported at this moment (Cicchini et al., 2014; Fischer & Whitney, 2014). Serial dependencies can manifest in several domains, such as perception (Cicchini et al., 2017, 2018; Fischer & Whitney, 2014; Manassi et al., 2018), decision making (Abrahamyan et al., 2016; Fernberger, 1920), and memory (Barbosa & Compte, 2020; Fornaciai & Park, 2020; Kiyonaga et al., 2017), and they occur with a variety of features and objects, including orientation, position, faces, attractiveness, ambiguous objects, ensemble coding of orientation, and numerosity (Bliss et al., 2017; Corbett et al., 2011; Fischer & Whitney, 2014; Fornaciai & Park, 2018; Kondo et al., 2012; Liberman et al., 2018; Manassi et al., 2017; Taubert & Alais, 2016; Taubert et al., 2016a; Wexler et al., 2015; Xia et al., 2016). Serial dependence is characterized by three main kinds of tuning. First, feature tuning: serial dependence occurs only between similar features and not between dissimilar ones (Fischer & Whitney, 2014; Fritsche et al., 2017; Manassi et al., 2017, 2018). Second, temporal tuning: serial dependence gradually decays over time (Fischer & Whitney, 2014; Manassi et al., 2018; Wexler et al., 2015). Third, spatial tuning: serial dependence occurs only within a limited spatial window; it is strongest when previous and current objects are presented at the same location, and it gradually decays as the relative distance increases (Bliss et al., 2017; Collins, 2019; Fischer & Whitney, 2014; Manassi et al., 2018). In addition, attention is a necessary component for serial dependence (Fischer & Whitney, 2014; Fritsche & de Lange, 2019; Kim et al., 2020). 
The empirical results above prompted our theoretical suggestion that perception occurs through Continuity Fields—temporally and spatially tuned operators or filters that bias our percepts towards previous stimuli through serial dependence (Alais et al., 2017; Cicchini et al., 2017; Fischer & Whitney, 2014; Taubert et al., 2016a, 2016b). Continuity Fields are a helpful mechanism for promoting perceptual stability because they produce a smoothed percept that better matches the autocorrelations in the world in which we live (Fischer & Whitney, 2014; Liberman et al., 2014; Manassi et al., 2017). In contrast to the highly structured and stable physical world, retinal images are constantly changing due to external and internal sources of noise and discontinuities from eye blinks, occlusions, shadows, camouflage, retinal motion, and other factors. Rather than processing each momentary image or object as being independent of preceding ones, the visual system favors recycling previously perceived features and objects. By incorporating serially dependent perceptual interpretations, the visual system smooths perception (and decision making and memory; Kiyonaga et al., 2017) over time and helps us perceive a continuous and stable world despite noise and change. The benefits of serial dependence arise because the world we encounter is usually autocorrelated. But this is not always the case. In some artificial, human-contrived situations, the world is not autocorrelated. One obvious example is the set of visual stimuli attended in laboratory experiments (in visual psychophysics, cognition, psychology, neurophysiology, and many other domains). Often stimuli are randomly ordered, with the assumption that trials are treated independently by the brain (Mulder et al., 2012; Winkel et al., 2014). Serial dependence negatively impacts the ability to measure performance in these cases (Fischer & Whitney, 2014; Fründ et al., 2014; Liberman et al., 2014). Visual search in clinical settings, such as reading radiographs or pathology slides, is an even more striking example where stimuli may not be autocorrelated. When seeing and judging lesions under such circumstances, serial dependence could introduce a bias in perceptual judgments that may result in a significant reduction in sensitivity and increase in errors. The negative impacts of serial dependence in search tasks would be especially prominent in cases where there is low signal, high noise, high uncertainty, or where fine discriminations are required (Bliss et al., 2017; Cicchini et al., 2014, 2017, 2018; Fischer & Whitney, 2014; Manassi et al., 2017). These are exactly the challenging situations that radiologists routinely face when searching scans. We hypothesize that because of serial dependence, radiologists' perceptual decisions on any given current radiograph could be biased towards the previous images they have seen. To preview our results, we measured recognition of simulated tumors in trained clinicians and found that their perceptual judgments were significantly affected by serial dependence. Observers and apparatus All experimental procedures were approved by and conducted in accordance with the guidelines and regulations of the UC Berkeley Institutional Review Board. Participants provided informed consent in accordance with the IRB guidelines of the University of California at Berkeley. All participants had normal or corrected-to-normal vision, and were all naïve to the purpose of the experiment.
Fifteen trained radiologists (gender: 4 female, 11 male; qualification: 11 experts, 3 residents, & 1 fellow; age: 27–72 years) participated in Experiment 1. They were recruited at RSNA, the Radiological Society of North America Annual Meeting (Chicago, US, December 1st–6th, 2019). Of the fifteen, two participants did not complete the study, and their data were excluded. Sample size was determined based on radiologists' availability at RSNA, and was similar to current studies of serial dependence (Cicchini et al., 2018; Manassi et al., 2019; Pascucci et al., 2017). Eleven non-expert observers (7 female; aged 19–21 years) participated in Experiment 2. They were recruited from a student pool at UC Berkeley. Stimuli were generated on a 13.3 inch 2017 MacBook Pro with a 28.7 cm × 18 cm screen with PsychoPy (Peirce, 2007, 2009). The refresh rate of the display was 60 Hz and the resolution 1440 × 900 pixels. Stimuli were viewed from a distance of approximately 57 cm. Observers used a laptop keyboard for all responses. Stimuli and design To simulate the screening performed by radiologists, we created three objects with random shapes and generated 48 morph shapes in between each pair (147 shapes in total; Fig. 1A). We used these shapes as simulated lesions. On each trial, radiologists viewed a random simulated lesion superimposed on a mammogram section and were then asked to adjust a shape to match the simulated lesion they previously saw. The stimuli consisted of light-gray shapes based on 3 original prototype shapes (A/B/C; Fig. 1A). A set of 48 morph shapes was created between these prototypes, resulting in a morph continuum of 147 shapes. The shapes were approximately 3.7° in width and height. Each shape was blurred by using a Gaussian blur function in OpenCV with a Gaussian kernel size of 1.55°. On each trial, a random shape was presented at a random angular location relative to central fixation (0.35°) in the peripheral visual field (4.4° eccentricity, from center to center). The shape was embedded in a random mammogram (30% transparency level) and was presented for 500 ms (Fig. 1B). Mammograms were taken from The Digital Database for Screening Mammography (Bowyer et al., 1996; 100 possible alternatives) and enlarged to fit the screen. The mammograms (~ 2000 × 4500 pixels) were enlarged three times and cut at a central position such that about 15% of each x-ray was displayed. This resulted in breast tissue covering the entire screen. Next, we presented a mask composed of random Brownian noise background (1/f² spatial noise). After the mask, a random shape drawn from the morph continuum (width and height: 3.7°; color: light-gray) appeared at the fixation point location, and observers were asked to adjust the shape to match the perceived shape using the left/right arrow keys (continuous report, adjustment task). The starting shape was randomized on each trial. Observers were allowed to take as much time as necessary to respond and pressed the spacebar to confirm the chosen shape. Following the response and a 250 ms delay, the next trial started. Stimuli and design of the Experiments 1 and 2. A We created three objects with random shapes (prototypes A/B/C, shown in a bigger size) and generated 48 morph shapes in between each pair (147 shapes in total). We used these shapes as simulated lesions during radiological screening.
B Observers were presented with a random shape (simulated lesion) hidden in a mammogram section, followed by a noise mask. Radiologists were then asked to adjust the shape to match the simulated lesion they previously saw, and pressed spacebar to confirm. During the inter-trial interval, a red fixation dot appeared in the center. The size of the shape adjustment is identical to the size of the simulated lesion, but it was enlarged for illustrative purposes. After a 250 ms inter-trial interval, the next trial started. During the experiment, observers were asked to continuously fixate a red dot in the center (0.35° radius). On each trial, they were first presented with a shape in a random location at 4.4° eccentricity, followed by a noise mask (Fig. 1). Observers were then asked to adjust a shape to match the one they previously saw (adjustment task). Observers performed 3 blocks of 85 trials each (Fig. 1B). In a preliminary session, observers completed a practice block of 10 trials. Mean adjustment time was 3240 ± 804 ms in Experiment 1 and 2980 ± 578 ms in Experiment 2. The only difference between Experiments 1 and 2 was the participants. In Experiment 1, we tested trained radiologists, whereas in Experiment 2, we tested students from the UC Berkeley population. Equipment and experimental design were otherwise identical. Feature tuning analysis We measured response errors on the adjustment task to determine whether a subject's judgment of each simulated lesion was influenced by the previously seen lesions. Response error was computed as the shortest distance along the morph wheel between the match morph and the target one (current response – current shape morph). For each participant's data, trials were considered lapses and were excluded if adjustment error exceeded 3 standard deviations from the absolute mean adjustment error or if the response time was longer than 20 s. Less than 2% of data was excluded on average. Response error was compared to the difference in shape between the current and previous trial, computed as the shortest distance along the morph wheel between the previous target lesion (n-back) and the current target shape (previous shape morph – current shape morph). We quantified feature tuning by fitting a von Mises distribution to each subject's data points (see details below). Additionally, for each observer, we computed the running circular average within a 20-morph-unit window. Figure 3A-B shows the average of the moving averages across all the observers, and the corresponding von Mises fit. Figure 3E-F shows the half-amplitudes of the von Mises derivative fits for individual observers. Temporal tuning analysis We quantified temporal tuning by fitting a derivative of von Mises to each subject's data using the following equation: $$y = - \frac{a\kappa \sin (x - \mu)\, e^{\kappa \cos (x - \mu)}}{2\pi I_{0}(\kappa)}$$ where parameter \(y\) is the response error on each trial, \(x\) is the relative shape difference of the previous trial, \(a\) is the amplitude modulation parameter of the derivative-of-von-Mises, \(\mu\) indicates the symmetry axis of the von Mises derivative, \(\kappa\) indicates the concentration of the von Mises derivative, and \(I_{0}(\kappa)\) is the modified Bessel function of order 0. In our experiments, \(\mu\) is set to 0. We fitted the von Mises derivative using constrained nonlinear minimization of the residual sum of squares.
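For concreteness, below is a minimal sketch of how a curve of this form could be fitted to trial-wise response errors. The circular-difference helper, the conversion of morph units to angles on the wheel, the synthetic data, and all parameter values are assumptions made here for illustration; this is not the authors' analysis code.

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import i0

N_MORPHS = 147                                    # length of the circular morph continuum

def circ_diff(a, b):
    # Shortest signed distance from b to a along the morph wheel (morph units).
    return (a - b + N_MORPHS / 2) % N_MORPHS - N_MORPHS / 2

def dvm(x, amp, kappa):
    # Derivative of a von Mises with symmetry axis mu = 0; x is in radians.
    return -amp * kappa * np.sin(x) * np.exp(kappa * np.cos(x)) / (2 * np.pi * i0(kappa))

# Hypothetical trial sequence: previous/current morphs and noisy responses
# that are mildly attracted toward the previous morph.
rng = np.random.default_rng(1)
prev = rng.integers(0, N_MORPHS, 1000)
curr = rng.integers(0, N_MORPHS, 1000)
rel = circ_diff(prev, curr)                       # previous minus current, morph units
err = 0.2 * rel * np.exp(-(rel / 25.0) ** 2) + rng.normal(0.0, 3.0, rel.size)

to_rad = 2.0 * np.pi / N_MORPHS                   # express morph units as angles on the wheel
params, _ = curve_fit(dvm, rel * to_rad, err, p0=[1.0, 2.0],
                      bounds=([-np.inf, 0.01], [np.inf, 50.0]))
fit = dvm(np.linspace(-np.pi, np.pi, 721), *params)
half_amplitude = 0.5 * (fit.max() - fit.min())    # half the peak-to-trough amplitude of the fit
print(round(half_amplitude, 2), "morph units")

With real data, rel and err would come from the recorded previous/current morphs and adjustment responses, and the same fit would be repeated for 2-back and 3-back trials and over bootstrapped resamples to obtain confidence intervals.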
As a measure of serial dependence, we reported half the peak-to-trough amplitude of the derivative-of-von-Mises (Figure 3E, F). We used the half amplitude of the von Mises, the \(a\) parameter in the above equation, to measure the degree to which observers' reports of simulated lesions were pulled in the direction of n-back simulated lesions. For example, if subjects' perception of a lesion was repelled by the 1-back simulated tumor (e.g., because of a negative aftereffect), or not influenced by the 1-back lesion (because of independent, bias-free perception on each trial), then the half-amplitude of the von Mises should be negative or close to zero, respectively. For each subject's data, we generated confidence intervals by calculating a bootstrapped distribution of the model-fitting parameter values. For each observer, we resampled the data with replacement 5000 times (Efron & Tibshirani, 1986). The relationship on each trial between response error and relative difference in shape (between the current and previous trial) was maintained. On each iteration, we fitted a new von Mises to obtain a bootstrapped half-amplitude and width for each subject. Previous research has recently shown that individual observers can have idiosyncratic biases in object recognition and localization, which are unrelated to serial dependence. For example, there are individual stable differences in perceived position and size, originating from a heterogeneous spatial resolution that carries across the visual hierarchy (Kosovicheva & Whitney, 2017; Wang et al., 2020). For this reason, we conducted an additional control analysis to remove such potential unrelated biases before fitting the von Mises derivative function. We plotted each observer's error values (current response – current shape morph) as a function of the actual stimulus presented (current shape morph), and fit a radial basis function (30 Gaussian kernels) to the data. This allowed us to quantify the idiosyncratic bias for each observer. For example, observers may make a consistent error in reporting a simulated lesion of 20 morph units as being 10, thus creating a systematic error of − 10 morph units. Conversely, if there was no systematic error, all error would approximate zero. We then regressed out the bias quantified by the radial basis fit by subtracting it from the observer's error. This subtraction left us with residual errors that did not include the idiosyncratic biases unrelated to serial dependence. Importantly, the addition of this control analysis—removing systematic biases unrelated to serial effects—had no significant impact on the serial dependence results. It did not generate or increase the measured serial dependence. As an additional method to rule out potential unrelated biases on the serial dependence effect, we explored the effect of future trials on the current response (Fornaciai & Park, 2020; Maus et al., 2013). That is, we compared the current trial response error to the difference in shape between the current and following trial (n-forward). Since observers have not seen the future trial shape, their current response in a given trial should not be in any way related to the shape that will be presented to them next. Spatial tuning analysis In order to measure the spatial tuning of serial dependence, we binned trials according to the distance between the current and previous shape angular locations (Fig. 4).
First, we divided trials from each observer into 3 main relative angular distance groups: 0°–60°, 61°–120°, and 121°–180° for 1-back trials. For example, a relative angular distance of 0° indicates that previous and current lesions were presented at the same location (for example, 45° and 45° of angular distance in previous and current trials). Similarly, a relative angular distance of 60° indicates that previous and current lesions were presented at 30° and 90° of angular distance. The distance between successive shape locations was computed as \(\sqrt{(x_{\text{current}} - x_{\text{previous}})^{2} + (y_{\text{current}} - y_{\text{previous}})^{2}}\). Second, we extracted 60 random trials from each observer for each distance group, and collapsed all the trials from all the observers into three super-subject groups. Third, for each super-subject we fitted a derivative of von Mises and computed the half amplitudes. Fourth, we performed a regression line analysis across the three half amplitudes of the distance groups. For each super-subject, this analysis yielded a slope of the regression line, which reflects how much serial dependence varies as a function of distance between sequential stimuli. We repeated the procedure 5000 times, by resampling the data with replacement on each iteration. We tested whether serial dependence influenced recognition of simulated lesions when viewing consecutive images of mammogram tissues in radiologists and untrained observers. Response error (y-axis) was computed as the shortest distance along the morph wheel between the match shape and the simulated lesion. Average response error was similar across groups; 9.2 ± 1.8 morph units in Experiment 1 (radiologists) and 8.9 ± 1.8 in Experiment 2 (untrained observers; t(22) = 0.34, p = 0.74). To further quantify discriminability of the simulated lesions, we fit a von Mises function to each observer's response error frequency distribution (Fig. 2A) and computed the corresponding Cumulative Distribution Function (CDF; Fig. 2B). The CDF was generated with floor and ceiling parameters of 0.1 and 0.9, respectively, and a free x-axis shift parameter to allow any observer bias to be taken into account. For each observer's individual CDF, a Continuous Report Discrimination index (C.R.D.) was defined as half of the difference between the 25th and 75th percentile of their Cumulative Distribution Function (Fig. 2C). This measure can be considered as the equivalent of the JND (Just Noticeable Difference) for continuous reports. The mean CRD was 3.97 ± 0.26 morph units for radiologists and 4.08 ± 0.25 morph units for untrained observers. Continuous Report Discrimination index (C.R.D). A For each observer, we plotted a frequency histogram of the adjustment errors and fitted a von Mises to quantify adjustment performance. B We then converted the von Mises fit into a Cumulative Distribution Function. The Continuous Report Discrimination index was calculated by taking half the difference between the 25th and 75th percentiles in terms of adjustment error morph units. C Each dot shows the CRD index for individual observers in the two groups.
Bars indicate average in Experiment 1 and 2, and error bars indicate standard error To test whether radiologists' lesion perception was pulled by lesions in previous mammograms, we plotted the adjustment error on the current trial in relation to the difference in shape between the current and previous trial, computed as the shortest distance along the morph wheel between the previous lesion and the current lesion. A derivative-of-von Mises curve was then fitted to the observers' data (Fig. 3A, B, see Feature Tuning analysis). We bootstrapped each subject's data 5000 times and reported the mean bootstrapped half-amplitude as a metric of the sequential dependence (Fig. 3E, F). Serial dependence in the perception of simulated lesions by expert radiologists and untrained observers. A, B In units of shape morph steps, the x-axis is the shortest distance along the morph wheel between the current and one-back simulated lesion, and the y-axis is the shortest distance along the morph wheel between the selected match shape and current simulated lesion. Positive x axis values indicate that the one-back simulated lesion was clockwise on the shape morph wheel relative to the current simulated lesion, and positive y axis values indicate that the current adjusted shape was also clockwise relative to the current simulated lesion. The average of the running averages across observers (blue line) reveals a clear trend in the data, which followed a derivative-of-von-Mises shape (model fit depicted as black solid line; fit on average of running averages). Light-blue shaded error bars indicate standard error across observers. Lesion perception was attracted toward the morph seen on the previous trial. Importantly, it was tuned for similarity between previous and current morph (feature tuning). C, D The derivative-of-von Mises was converted into its source von Mises function (y-axis), and the relative morph difference was plotted in terms of CRD units (x-axis). Violet shaded error bars indicate 95% confidence interval. The curve indicates the proportion of change in response predicted by the change in the sequential stimulus. E, F Bootstrapped half amplitudes of derivative of von Mises fit for 1, 2, and 3 trials back. Half amplitude for 1-forward is shown as a comparison (grey bars). Each filled dot represents the bootstrapped half amplitude (morph units) for a single observer. Bars indicate the group bootstrap and error bars are bootstrapped 95% confidence intervals In Experiment 1, all participants except for one displayed a positive von Mises half-amplitude, indicating that lesion perception on a given trial was significantly pulled in the direction of the lesion presented in the previous trial (p < 0.001, group bootstrap, n = 13, Fig. 3E). Even the lesion two trials in the past influenced current judgments (p = 0.01, group bootstrap, Fig. 3E). No attraction was found for 3-trials back (p = 0.09, group bootstrap, Fig. 3E). A similar pattern of results was found in Experiment 2 with untrained observers. Lesion perception on a given trial was significantly pulled in the direction of lesions presented in the previous trial for 1 and 2 trials back (n = 11; 1-Back; p < 0.001, 2-Back; p < 0.001, group bootstrap, Fig. 3F) but not for 3-back (n = 11; p = 0.128, group bootstrap, Fig. 3F). There was no statistical difference between radiologists and untrained observers for 1-back and 2-back (Fig. 
3; 1-back, p = 0.88; 2-back, p = 0.19), whereas there was a statistical difference for 3-back (p = 0.02; but no serial dependence was detected in those conditions). As a control for possible confounds or artifacts, we checked whether lesion perception could have been biased by lesions one, two, or three trials in the future. As expected, lesion perception was not significantly influenced by future stimuli for radiologists (1-forward, group bootstrap half amplitude: 0.27 morph units, p = 0.50; 2-forward, group bootstrap half amplitude: 0.35 morph units, p = 0.5; 3-forward, group bootstrap half amplitude: 0.5 morph units, p = 0.38). The same was true for naïve observers (1-forward, group bootstrap half amplitude: − 0.83 morph units, p = 0.16; 2-forward, group bootstrap half amplitude: 0.22 morph units, p = 0.72; 3-forward, group bootstrap half amplitude: 0.23 morph units, p = 0.67). Average response time was similar across experiments; 3244 ± 845 ms in Experiment 1 and 2980 ± 578 ms in Experiment 2 (t(22) = 0.834, p = 0.41). Lesion recognition was therefore strongly attracted toward lesions in previous mammograms seen more than 5 s or 10 s ago (Fig. 3E, F). These results suggest a featural tuning (Fig. 3A, B) and a temporal tuning of 5–10 s (Fig. 3E, F), in accordance with previous literature (Fischer & Whitney, 2014; Fritsche et al., 2017; Manassi et al., 2018; Moors et al., 2015; Taubert et al., 2016a; Wexler et al., 2015). In order to further characterize the strength of the serial dependence effect, we computed how much the current simulated lesion was captured by lesions in the previous trial. We converted the derivative-of-von Mises into its source von Mises function. In order to compare our effect with shape discriminability, we divided the relative morph difference (previous tumor – current tumor; x-axis) by the average CRD index (from Fig. 2C). The plots in Fig. 3C, D show the proportion of change in response (efficiency) predicted by the change in the sequential stimulus. Serial dependence captured the current (simulated) tumor with peaks of 22–25%, and extended over a large discriminability range (from − 10 to + 10 CRD units). As an additional analysis, we investigated whether adjustment errors were biased towards the shape category presented on the previous trial more than towards the other shape categories. Shape categories A/B/C were defined as prototype A/B/C ± 24 morph units (49 morph units in total). Adjustment responses were coded as indicating category A/B/C. We computed the percentage of mistakes towards the shape category in 1-back trials, and normalized the index by subtracting 33.33% (chance percentage level) from each percentage index (see Fig. 2 in Manassi et al., 2019 for an in-depth explanation of the analysis). Observers misclassified the simulated lesion on a current trial as the lesion in 1-back trials 8% more often than expected by chance. In order to further quantify the strength of the 1-back serial dependence effect, we conducted a linear regression analysis on the response error as a function of the relative morph difference (from − 17 to + 17 morph units on the x-axis in Fig. 3A, B, 25% of the central range). Average slope was 0.132 ± 0.10 in Experiment 1 and 0.143 ± 0.10 in Experiment 2, meaning that both radiologists and untrained participants exhibited a perceptual pull of ~ 13% towards simulated lesions viewed 1 trial back (Fig.
4, radiologists; 1-back, p < 0.01; 2-back, p = 0.30; 3-back, p = 0.09; naïve observers; 1-back, p < 0.01; 2-back, p < 0.001; 3-back, p = 0.01). Serial dependence effect size estimation. A, B Blue lines indicate the average of the running averages across observers (same data as Fig. 3). Light-blue shaded error bars indicate standard error across observers. We fitted a linear regression on the response error as a function of the relative morph difference from − 17 to + 17 morph units (model fit depicted as green dashed line; fit on average of running averages). Dark green shaded areas indicate the relative morph difference considered in the regression analysis. C, D Bootstrapped regression slopes for 1, 2, and 3 trials back. Each filled dot represents the regression slope for a single observer. Bars indicate the group bootstrap slope and error bars are bootstrapped 95% confidence intervals. As previously mentioned, an important property of serial dependence is spatial tuning (Bliss et al., 2017; Cicchini et al., 2017; Fischer & Whitney, 2014; Fornaciai & Park, 2018; Manassi et al., 2018). We therefore investigated whether serial dependence in simulated radiological screening is affected by the spatial distance between current and previous lesions. On each trial, the simulated lesion was presented at a fixed distance from the center but at a random angular location. Hence, we predicted that serial dependence would be highest when current and previous lesions are presented at a close relative distance, and would gradually decay as relative distance increases. For each participant, we divided the trials into three groups based on the relative distance of the 1-trial-back stimulus (Fig. 5; see Spatial tuning analysis section). Spatial tuning of serial dependence. A refers to Experiment 1, whereas B refers to Experiment 2. Each red dot refers to a different relative angular distance between the current lesion and the lesion in the 1-back trial, super-subject bootstrapped mean. For example, a bin distance of 0° indicates that the current and previous simulated tumors were presented at the same location (both at 30° of angular position, for example). Error bars are bootstrapped 95% confidence intervals. Dashed line indicates half-amplitude zero (no bias). In Experiment 1, serial dependence occurred for the angular distance groups of 0°–60° and 61°–120° (0°–60°: p < 0.001; 61°–120°: p < 0.001; group bootstrapped distribution; Fig. 5A), whereas no serial dependence occurred for the angular distance group of 121°–180° (121°–180°: p = 0.20; group bootstrapped distribution; Fig. 5A). There was no statistical difference across the two groups for relative distances of 0°–60° (p = 0.29), 61°–120° (p = 0.11) and 121°–180° (p = 0.42). In order to further characterize spatial tuning for 1-trial back, we performed a regression analysis on the three distance groups. The regression slope was significantly different from zero, thus indicating a gradual decay of serial dependence with increased relative distance (slope = − 0.89; p = 0.05; group bootstrapped distribution). These results are consistent with prior findings that serial dependence is modulated by the relative location of the sequential targets. Therefore, in a radiological screening environment, the current lesion may be misperceived as more similar to the previous one if current and previous lesions are presented at similar locations.
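As an illustration of the spatial-tuning analysis just described, the sketch below bins trials by the relative angular distance between the current and 1-back lesion locations and fits a regression line across per-bin serial-dependence estimates. The stimulus geometry, the synthetic per-bin half-amplitude stand-ins, and all names are hypothetical assumptions made here for illustration; they are not taken from the authors' code.

import numpy as np

ECC = 4.4                                          # stimulus eccentricity in degrees (from the Methods)
rng = np.random.default_rng(2)
n = 3000
theta = rng.uniform(0.0, 360.0, n)                 # angular position of each lesion (degrees)

# Relative angular distance between the current and 1-back lesion locations (0-180 degrees).
d_theta = np.abs(theta[1:] - theta[:-1]) % 360.0
d_theta = np.minimum(d_theta, 360.0 - d_theta)
bins = np.digitize(d_theta, [60.0, 120.0])         # 0: 0-60, 1: 61-120, 2: 121-180

# The same grouping could be done on the Euclidean distance between the two screen positions.
xy = ECC * np.column_stack([np.cos(np.radians(theta)), np.sin(np.radians(theta))])
eucl = np.linalg.norm(np.diff(xy, axis=0), axis=1)
print("mean Euclidean separation (deg):", round(eucl.mean(), 2))

# Stand-in serial-dependence estimate per bin: synthetic half-amplitudes that shrink
# with distance, mimicking the attraction-plus-noise one would measure in each bin.
half_amp = np.array([2.0, 1.4, 0.4]) + rng.normal(0.0, 0.2, 3)

# Regression of the per-bin estimates on bin center: the slope summarizes how serial
# dependence decays as the two lesions appear farther apart.
centers = np.array([30.0, 90.0, 150.0])            # bin centers in degrees
slope, intercept = np.polyfit(centers, half_amp, 1)
print("trials per bin:", np.bincount(bins, minlength=3))
print("slope per degree of separation:", round(slope, 4))

In the text's analysis, the per-bin estimates are the bootstrapped half-amplitudes of the derivative-of-von-Mises fits rather than the synthetic values used here, and the regression is repeated over 5000 resamples to obtain the reported slope distribution.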
Interestingly, untrained observers from Experiment 2 did not show the same spatial tuning: serial dependence occurred at all tested angular distance groups (0°–60°: p < 0.05; 61°–120°: p < 0.001; 121°–180°: p < 0.05; group bootstrapped distribution; Fig. 5) with no gradual decay as a function of spatial separation. When performing a regression analysis on the three distance groups, regression slope was not significantly different from zero (slope = − 0.05; p = 0.90; group bootstrapped distribution; Fig. 5B). The implications of this result will be discussed in the next section. Taken together, our results show that simulated tumor recognition is strongly biased towards previously presented simulated lesions up to 10 s in the past. Importantly, this sequential effect occurs with expert radiologists and exhibits all the defining properties of traditional serial dependence: feature tuning (Fig. 3A, B), temporal tuning (Fig. 3E, F) and spatial tuning (Fig. 5A). We found that the perceptual decisions of radiologists were subject to serial dependence. Simulated lesion recognition was biased towards simulated tumors presented up to 10 s in the past (Fig. 3A). Importantly, radiologists exhibited a perceptual pull of ~ 13% towards previously seen tumors (Fig. 4). Moreover, serial dependence alone resulted in 8% more miscategorizations than were expected by chance or due to noise. This perceptual pull exhibited all three tuning characteristics of Continuity Fields: feature tuning (Fig. 3A, B), temporal tuning (Fig. 3E, F) and spatial tuning (Fig. 5A). In Experiment 2, we found largely similar results with untrained observers, with the exception that less clear spatial tuning was found. Taken together, these results show that radiologists' perceptual judgements are affected by serial dependence. Our results extend previous work, which investigated the impact of serial dependence in a simulated clinical search task (Manassi et al., 2019). In untrained observers, it was found that shape classification performance was strongly impaired by recent visual experience, biasing classification judgments toward the previous image content. Whereas those results can be considered as a proof of concept that serial dependence can be detrimental in clinical tasks, the present study extended this in several ways including (1) testing trained radiologists, (2) using actual mammogram textured backgrounds as stimuli and (3) implementing a more thorough continuous report task instead of a classification judgment. The results thus show that trained radiologists, as well as naïve observers, suffer from serial dependence. Future research will investigate whether this kind of error occurs in a more realistic radiological screening setting. Interestingly, we did not find spatial tuning in Experiment 2 with untrained observers. Whereas this seems like a somewhat surprising result, it must be considered that the maximum relative distance in our experiments was 8.8° (double the radius), and previous literature has shown that the spatial window where serial dependence occurs is around 10°–15° or even larger (Collins, 2019; Fischer & Whitney, 2014; Manassi et al., 2019). The potentially interesting result, therefore, is the finding of narrower spatial tuning with expert radiologist observers. The reason for this narrowed spatial tuning is unknown, but it does raise questions about the role of familiarity and expertise. 
Serial dependence is known to scale with uncertainty (Cicchini et al., 2017), and it is possible that the spatial tuning of serial dependence varies with familiarity as well. In addition to differences in expertise and familiarity, an additional difference between the two groups of observers in these experiments could be attentional. Previous literature has shown that serial dependence is gated by attention (Fischer & Whitney, 2014; Fornaciai & Park, 2018; Liberman et al., 2016; Rafiei et al., 2021). In comparison to untrained observers, radiologists may pay more attention to the stimuli or attend to different features of the stimuli; therefore, serial dependence tuning may differ with expertise. It might be argued that our results can be explained by a mere motor response bias, i.e. the motor response during the adjustment task may be biased towards the previous motor response. However, a large literature has shown that serial dependence still occurs when no adjustment is given in the previous trial, thus ruling out a mere motor effect (Fischer & Whitney, 2014; Manassi et al., 2017, 2018). In addition, a simple motor bias cannot explain why serial dependence was tuned for the relative spatial location, biasing simulated tumor judgments only when current and previous tumors were presented at a close angular distance (Fig. 5A). Neither can it explain relative featural difference, biasing tumor adjustment only when current and previous tumors were similar enough (Fig. 3A, B). Beyond the motor component, there is an intense debate on the underlying mechanism(s) of serial dependence. Among others, serial dependence was proposed to occur on the perception (Cicchini et al., 2017; Fischer & Whitney, 2014; Manassi et al., 2018), decision (Fritsche et al., 2017; Pascucci et al., 2017) and memory level (Barbosa et al., 2020; Bliss et al., 2017). Our results do not allow us to disentangle on which level(s) serial dependence actually occurs. There is psychophysical evidence that serial dependence acts on perception, thus biasing object appearance towards the past (Cicchini et al., 2017; Fischer & Whitney, 2014; Fornaciai & Park, 2019). How serial dependence in perception actually occurs is still a matter of debate; it was recently shown that awareness is required for serial dependence to occur, thus suggesting that a top-down feedback from high level areas is crucial for serial dependence (Fornaciai & Park, 2019; Kim et al., 2020). It may be argued that the duration of the mammogram presentation (500 ms) is too short and radiologists observe mammograms for a much longer period of time. In fact, the average duration of radiograph fixation for hitting the first mass has been reported as 1.8–2 s, which is surprisingly brief (Krupinski, 1996; Nodine et al., 1996). Interestingly, sufficiently long mammogram exposure durations may lead to the opposite effect, i.e. negative aftereffect. It was found that when adapting normal observers to image samples of dense or fatty tissues, exposure to fatty images caused an intermediate image to appear more dense (and vice versa) (Kompaniez et al., 2013; Kompaniez-Dunigan et al., 2015, 2018). Importantly, mammogram perception was biased away from the past. Future research will establish under which conditions these two biases (perception biased towards or away from the past) arise in radiological screening. Limitations of current study Our results show that radiologists suffer from significant serial dependence in their perceptual judgments. 
Whether these significant serial dependencies are left at the door of the reading room is as-yet untested. However, the results here show that radiologists are not immune from sequential effects in perceptual decisions. This is only a first step, and there are many improvements required to optimize the ecological validity of our findings. Future improvements will be implemented in order to fully address the impact of serial dependence in a clinical setting. First, the stimuli. Our study tested serial dependence with a generated set of shape stimuli, but actual tumor images will be required to test the role of serial dependence in radiological screening. In addition, within a radiograph, there can be a variety of features which may be interpreted as tumors, from actual masses, to microcalcifications, architectural distortions, and focal asymmetries. Future research will test whether these features, as well as actual lesions, suffer from serial dependence. Second, the task. We chose a continuous report paradigm in our experiments, as it provides precise trial-wise errors and has proven to be very reliable in measurements of serial dependence in the past (Cicchini et al., 2017, 2021; Fritsche & de Lange, 2019; Fischer & Whitney, 2014; Fritsche et al., 2017; Liberman et al., 2014). Given the radiologists' time constraints and resulting limited number of trials, we considered this task to be relatively efficient. The untrained observer data provides a useful baseline in this respect. A previous paper that used a 3AFC classification task found a similar amount of serial dependence in untrained observers as that found here (Manassi et al., 2019). Nevertheless, as the actual task of the radiologist involves classifying lesions and localizing them, implementing more realistic tasks with radiologists will be important in future studies. Third, mammogram duration. Although radiologists fixate radiographs for slightly longer durations (500 ms in the present and 1.8–2 s reported in the literature; Krupinski, 1996; Nodine et al., 1996), they were shown to perform above chance in detecting abnormalities in chest radiographs with 200 ms duration (Kundel & Nodine, 1975). It will be interesting to test which biases arise with increasing stimulus duration, whether a positive one (as shown by our results), a negative one (Kompaniez et al., 2013; Kompaniez-Dunigan et al., 2015, 2018), or no bias at all. Finally, whereas our results may indicate that radiological screening is detrimentally affected by serial dependence, they also open avenues to mitigate this bias. Since serial dependence was shown to occur only under restricted featural, spatial, and temporal conditions, some strategies could be implemented to induce perceptual decisions outside of these conditions. For example, mammograms could be presented at different spatial locations. Because of spatial tuning, the relative distance between lesions would be so large that serial dependence would no longer occur. Other strategies may be implemented based on temporal and featural tuning as well. All relevant data are available from the authors under request. Abrahamyan, A., Silva, L. L., Dakin, S. C., Carandini, M., & Gardner, J. L. (2016). Adaptable history biases in human perceptual decisions. Proceedings of the National Academy of Sciences USA, 113(25), E3548-3557. Alais, D., Leung, J., & Van der Burg, E. (2017). Linear summation of repulsive and attractive serial dependencies: Orientation and motion dependencies sum in motion perception. 
Journal of Neuroscience, 37(16), 4381–4390. Ashman, C. J., Yu, J. S., & Wolfman, D. (2000). Satisfaction of search in osteoradiology. American Journal of Roentgenology, 175(2), 541–544. Barbosa, J., & Compte, A. (2020). Build-up of serial dependence in color working memory. Scientific Reports, 10.1(2020), 1–7. Barbosa, J., Stein, H., Martinez, R. L., Galan-Gadea, A., Li, S., Dalmau, J., Adam, K. C., Valls-Solé, J., Constantinidis, C., & Compte, A. (2020). Interplay between persistent activity and activity-silent dynamics in the prefrontal cortex underlies serial biases in working memory. Nature Neuroscience, 23(8), 1016–1024. Berbaum, K. S., & Franken, E. A., Jr. (2011). Satisfaction of search in radiographic modalities. Radiology, 261(3), 1000–1001. author reply 1001. Berlin, L. (2007). Accuracy of diagnostic procedures: Has it improved over the past five decades? American Journal of Roentgenology, 188(5), 1173–1178. Birdwell, R. L., Ikeda, D. M., O'Shaughnessy, K. F., & Sickles, E. A. (2001). Mammographic characteristics of 115 missed cancers later detected with screening mammography and the potential utility of computer-aided detection. Radiology, 219(1), 192–202. Bliss, D. P., Sun, J. J., & D'Esposito, M. (2017). Serial dependence is absent at the time of perception but increases in visual working memory. Science and Reports, 7(1), 14739. Bowyer, K., Kopans, D., Kegelmeyer, W., Moore, R., Sallam, M., Chang, K., & Woods, K. (1996). The digital database for screening mammography. In Third international workshop on digital mammography. Boyer, B., Hauret, L., Bellaiche, R., Gräf, C., Bourcier, B., & Fichet, G. (2004). Retrospectively detectable carcinomas: Review of the literature. Journal De Radiologie, 85(12 Pt 2), 2071–2078. Bruno, M. A., Walker, E. A., & Abujudeh, H. H. (2015). Understanding and confronting our mistakes: The epidemiology of error in radiology and strategies for error reduction. Radiographics, 35(6), 1668–1676. Carmody, D. P., Nodine, C. F., & Kundel, H. L. (1980). An analysis of perceptual and cognitive factors in radiographic interpretation. Perception, 9(3), 339–344. Cicchini, G. M., Anobile, G., & Burr, D. C. (2014). Compressive mapping of number to space reflects dynamic encoding mechanisms, not static logarithmic transform. Proceedings of the National Academy of Sciences USA, 111(21), 7867–7872. Cicchini, G. M., Mikellidou, K., & Burr, D. (2017). Serial dependencies act directly on perception. Journal of Vision, 17(14), 6. Cicchini, G. M., Mikellidou, K., & Burr, D. C. (2018). The functional role of serial dependence. Proceedings of the Biological Sciences, 285(1890), 20181722. Cicchini, G. M., Benedetto, A., & Burr, D. C. (2021). Perceptual history propagates down to early levels of sensory analysis. Current Biology, 31(6), 1245-1250.e2. Collins, T. (2019). The perceptual continuity field is retinotopic. Scientific Reports, 9(1), 1–6. Corbett, J. E., Fischer, J., & Whitney, D. (2011). Facilitating stable representations: Serial dependence in vision. PLoS ONE, 6(1), e16701. Croskerry, P. (2003). The importance of cognitive errors in diagnosis and strategies to minimize them. Academic Medicine, 78(8), 775–780. Efron, B., & Tibshirani, R. (1986). Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science, 1(1), 54–75. Evans, K. K., Birdwell, R. L., & Wolfe, J. M. (2013). If you don't find it often, you often don't find it: Why some cancers are missed in breast cancer screening. PLoS ONE, 8(5), e64366. 
Fernberger, S. W. (1920). Interdependence of judgments within the series for the method of constant stimuli. Journal of Experimental Psychology, 3(2), 126. Fischer, J., & Whitney, D. (2014). Serial dependence in visual perception. Nature Neuroscience, 17(5), 738–743. Fornaciai, M., & Park, J. (2018). Serial dependence in numerosity perception. Journal of Vision, 18(9), 15. Fornaciai, M., & Park, J. (2019). Spontaneous repulsive adaptation in the absence of attractive serial dependence. Journal of Vision, 19(5), 21–21. Fornaciai, M., & Park, J. (2020). Attractive serial dependence between memorized stimuli. Cognition, 200, 104250. Fritsche, M., & de Lange, F. P. (2019). The role of feature-based attention in visual serial dependence. Journal of Vision, 19(13), 21–21. Fritsche, M., Mostert, P., & de Lange, F. P. (2017). Opposite effects of recent history on perception and decision. Current Biology, 27(4), 590–595. Fründ, I., Wichmann, F. A., & Macke, J. H. (2014). Quantifying the effect of intertrial dependence on perceptual decisions. Journal of Vision, 14(7), 9–9. Funaki, B., Szymski, G. X., & Rosenblum, J. D. (1997). Significant on-call misses by radiology residents interpreting computed tomographic studies: Perception versus cognition. Emergency Radiology, 4(5), 290–294. Harvey, J. A., Fajardo, L. L., & Innis, C. A. (1993). Previous mammograms in patients with impalpable breast carcinoma: Retrospective vs blinded interpretation. 1993 ARRS President's Award. AJR. American Journal of Roentgenology, 161(6), 1167–1172. Horowitz, T. S. (2017). Prevalence in visual search: From the clinic to the lab and back again. Japanese Psychological Research, 59(2), 65–108. Kim, S., Burr, D., Cicchini, G. M., & Alais, D. (2020). Serial dependence in perception requires conscious awareness. Current Biology, 30(6), R257–R258. Kim, Y. W., & Mansfield, L. T. (2014). Fool me twice: Delayed diagnoses in radiology with emphasis on perpetuated errors. American Journal of Roentgenology, 202(3), 465–470. Kiyonaga, A., Scimeca, J. M., Bliss, D. P., & Whitney, D. (2017). Serial dependence across perception, attention, and memory. Trends in Cognitive Sciences, 21(7), 493–497. Kompaniez, E., Abbey, C. K., Boone, J. M., & Webster, M. A. (2013). Adaptation aftereffects in the perception of radiological images. PLoS ONE, 8(10), e76175. Kompaniez-Dunigan, E., Abbey, C. K., Boone, J. M., & Webster, M. A. (2015). Adaptation and visual search in mammographic images. Attention, Perception, & Psychophysics, 77(4), 1081–1087. Kompaniez-Dunigan, E., Abbey, C. K., Boone, J. M., & Webster, M. A. (2018). Visual adaptation and the amplitude spectra of radiological images. Cognitive Research: Principles and Implications, 3(1), 1–12. Kondo, A., Takahashi, K., & Watanabe, K. (2012). Sequential effects in face-attractiveness judgment. Perception, 41(1), 43–49. Kosovicheva, A., & Whitney, D. (2017). Stable individual signatures in object localization. Current Biology, 27(14), R700–R701. Krupinski, E. A. (1996). Visual scanning patterns of radiologists searching mammograms. Academic Radiology, 3(2), 137–144. Kunar, M. A., Watson, D. G., Taylor-Phillips, S., & Wolska, J. (2017). Low prevalence search for cancers in mammograms: Evidence using laboratory experiments and computer aided detection. Journal of Experimental Psychology: Applied, 23(4), 369. Kundel, H. L., & Nodine, C. F. (1975). Interpreting chest radiographs without visual search. Radiology, 116(3), 527–532. Lee, C. S., Nagy, P. G., Weaver, S. J., & Newman-Toker, D. E. (2013). 
Cognitive and system factors contributing to diagnostic errors in radiology. American Journal of Roentgenology, 201(3), 611–617. Liberman, A., Fischer, J., & Whitney, D. (2014). Serial dependence in the perception of faces. Current Biology, 24(21), 2569–2574. Liberman, A., Manassi, M., & Whitney, D. (2018). Serial dependence promotes the stability of perceived emotional expression depending on face similarity. Attention, Perception, & Psychophysics, 80(6), 1461–1473. Liberman, A., Zhang, K., & Whitney, D. (2016). Serial dependence promotes object stability during occlusion. Journal of Vision, 16(15), 16. Manassi, M., Liberman, A., Chaney, W., & Whitney, D. (2017). The perceived stability of scenes: Serial dependence in ensemble representations. Science and Reports, 7(1), 1971. Manassi, M., Liberman, A., Kosovicheva, A., Zhang, K., & Whitney, D. (2018). Serial dependence in position occurs at the time of perception. Psychonomic Bulletin & Review, 25(6), 2245–2253. Manassi, M., Kristjánsson, Á., & Whitney, D. (2019). Serial dependence in a simulated clinical visual search task. Scientific Reports, 9(1), 1–10. Maus, G. W., Chaney, W., Liberman, A., & Whitney, D. (2013). The challenge of measuring long-term positive aftereffects. Current Biology, 23(10), R438–R439. Menneer, T., Donnelly, N., Godwin, H. J., & Cave, K. R. (2010). High or low target prevalence increases the dual-target cost in visual search. Journal of Experimental Psychology: Applied, 16(2), 133. Moors, P., Stein, T., Wagemans, J., & van Ee, R. (2015). Serial correlations in Continuous Flash Suppression. Neuroscience of Consciousness, 2015(1), niv010. Mulder, M. J., Wagenmakers, E.-J., Ratcliff, R., Boekel, W., & Forstmann, B. U. (2012). Bias in the brain: A diffusion model analysis of prior probability and potential payoff. The Journal of Neuroscience, 32(7), 2335–2343. Nelson, H. D., O'Meara, E. S., Kerlikowske, K., Balch, S., & Miglioretti, D. (2016). Factors associated with rates of false-positive and false-negative results from digital mammography screening: An analysis of registry data. Annals of Internal Medicine, 164(4), 226–235. Nodine, C. F., Kundel, H. L., Lauver, S. C., & Toto, L. C. (1996). Nature of expertise in searching mammograms for breast masses. Academic Radiology, 3(12), 1000–1006. Pascucci, D., Mancuso, G., Santandrea, E., Della Libera, C., Plomp, G., & Chelazzi, L. (2017). Laws of concatenated perception: Vision goes for novelty, Decisions for perseverance. bioRxiv, 15, 929. Peirce, J. W. (2007). PsychoPy—Psychophysics software in Python. Journal of Neuroscience Methods, 162(1–2), 8–13. Peirce, J. W. (2009). Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2, 10. Rafiei, M., Hansmann-Roth, S., Whitney, D., Kristjansson, A., & Chetverikov, A. (2021). Optimizing perception: Attended and ignored stimuli create opposing perceptual biases. Attention, Perception, & Psychophysics, 83(3), 1230–1239. Rich, A. N., Kunar, M. A., Van-Wert, M. J., Hidalgo-Sotelo, B., Horowitz, T. S., & Wolfe, J. M. (2008). Why do we miss rare targets? Exploring the boundaries of the low prevalence effect. Journal of Vision, 8(15), 11–17. Taubert, J., & Alais, D. (2016). Serial dependence in face attractiveness judgements tolerates rotations around the yaw axis but not the roll axis. Visual Cognition, 24(2), 103–114. Taubert, J., Alais, D., & Burr, D. (2016a). Different coding strategies for the perception of stable and changeable facial attributes. Science and Reports, 6, 32239. 
Taubert, J., Van der Burg, E., & Alais, D. (2016b). Love at second sight: Sequential dependence of facial attractiveness in an on-line dating paradigm. Science and Reports, 6, 22740. Waite, S., Grigorian, A., Alexander, R. G., Macknik, S. L., Carrasco, M., Heeger, D. J., & Martinez-Conde, S. (2019). Analysis of perceptual expertise in radiology–Current knowledge and a new perspective. Frontiers in Human Neuroscience, 13, 213. Waite, S., Scott, J., Gale, B., Fuchs, T., Kolla, S., & Reede, D. (2017). Interpretive error in radiology. American Journal of Roentgenology, 208(4), 739–749. Wang, Z., Murai, Y., & Whitney, D. (2020). Idiosyncratic perception: A link between acuity, perceived position and apparent size. Proceedings of the Royal Society b: Biological Sciences, 287(1930), 20200825. Wexler, M., Duyck, M., & Mamassian, P. (2015). Persistent states in vision break universality and time invariance. Proc Natl Acad Sci U S A, 112(48), 14990–14995. Winkel, J., Keuken, M. C., van Maanen, L., Wagenmakers, E.-J., & Forstmann, B. U. (2014). Early evidence affects later decisions: Why evidence accumulation is required to explain response time data. Psychonomic Bulletin & Review, 21(3), 777–784. Wolfe, J. M., Horowitz, T. S., & Kenner, N. M. (2005). Cognitive psychology: Rare items often missed in visual searches. Nature, 435(7041), 439. Wolfe, J. M., Horowitz, T. S., Van Wert, M. J., Kenner, N. M., Place, S. S., & Kibbi, N. (2007). Low target prevalence is a stubborn source of errors in visual search tasks. Journal of Experimental Psychology: General, 136(4), 623–638. Xia, Y., Leib, A. Y., & Whitney, D. (2016). Serial dependence in the perception of attractiveness. Journal of Vision, 16(15), 28. We would like to thank Yuki Murai for helpful comments on data analysis. This work was supported by the Swiss National Science Foundation fellowship P2ELP3_158876 (M.M.) and the National Institutes of Health Grant R01 CA236793. Mauro Manassi, Cristina Ghirardo and Teresa Canas-Bajo have contributed equally to this work School of Psychology, King's College, University of Aberdeen, Aberdeen, UK Mauro Manassi Department of Psychology, University of California, Berkeley, CA, USA Cristina Ghirardo, Teresa Canas-Bajo, Zhihang Ren, William Prinzmetal & David Whitney Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA David Whitney Vision Science Group, University of California, Berkeley, CA, USA Teresa Canas-Bajo, Zhihang Ren & David Whitney Cristina Ghirardo Teresa Canas-Bajo Zhihang Ren William Prinzmetal MM, WP and DW designed the study. MM and CG conducted the experiments, MM, CG, TC-B and ZR analyzed the data, MM wrote the first draft of the manuscript, and DW edited the manuscript. All authors read and approved the final manuscript. Correspondence to Mauro Manassi. The ethics approval for the study was obtained from the Human Research Ethics Committee of UC Berkeley, and the experiment was conducted in accordance with the approved guidelines and regulations. All participants provided informed consent to take part in the experiments. All participants gave permission for the publication of their data. The authors declare no competing financial and non-financial interests. Manassi, M., Ghirardo, C., Canas-Bajo, T. et al. Serial dependence in the perceptual judgments of radiologists. Cogn. Research 6, 65 (2021). 
https://doi.org/10.1186/s41235-021-00331-z
Keywords: Serial dependence, Radiological screening, Sequential effects, Sequential dependence
11.3: Outer measure and null sets

Outer measure

Before we characterize all Riemann integrable functions, we need to make a slight detour. We introduce a way of measuring the size of sets in \({\mathbb{R}}^n\).

Let \(S \subset {\mathbb{R}}^n\) be a subset. Define the outer measure of \(S\) as \[m^*(S) := \inf\, \sum_{j=1}^\infty V(R_j) ,\] where the infimum is taken over all sequences \(\{ R_j \}\) of open rectangles such that \(S \subset \bigcup_{j=1}^\infty R_j\). In particular, \(S\) is of measure zero or a null set if \(m^*(S) = 0\). We will only need measure zero sets and so we focus on these. Note that \(S\) is of measure zero if for every \(\epsilon > 0\) there exists a sequence of open rectangles \(\{ R_j \}\) such that \[S \subset \bigcup_{j=1}^\infty R_j \qquad \text{and} \qquad \sum_{j=1}^\infty V(R_j) < \epsilon.\] Furthermore, if \(S\) is of measure zero and \(S' \subset S\), then \(S'\) is of measure zero. We can in fact use the exact same rectangles.

The set \({\mathbb{Q}}^n \subset {\mathbb{R}}^n\) of points with rational coordinates is a set of measure zero.

Proof: The set \({\mathbb{Q}}^n\) is countable, so let us write it as a sequence \(q_1,q_2,\ldots\). For each \(q_j\) find an open rectangle \(R_j\) with \(q_j \in R_j\) and \(V(R_j) < \epsilon 2^{-j}\). Then \[{\mathbb{Q}}^n \subset \bigcup_{j=1}^\infty R_j \qquad \text{and} \qquad \sum_{j=1}^\infty V(R_j) < \sum_{j=1}^\infty \epsilon 2^{-j} = \epsilon .\]

In fact, the example points to a more general result. A countable union of measure zero sets is of measure zero.

Proof: Suppose \[S = \bigcup_{j=1}^\infty S_j ,\] where the \(S_j\) are all measure zero sets. Let \(\epsilon > 0\) be given. For each \(j\) there exists a sequence of open rectangles \(\{ R_{j,k} \}_{k=1}^\infty\) such that \[S_j \subset \bigcup_{k=1}^\infty R_{j,k}\] and \[\sum_{k=1}^\infty V(R_{j,k}) < 2^{-j} \epsilon .\] Then \[S \subset \bigcup_{j=1}^\infty \bigcup_{k=1}^\infty R_{j,k} .\] As \(V(R_{j,k})\) is always positive, the sum over all \(j\) and \(k\) can be done in any order. In particular, it can be done as \[\sum_{j=1}^\infty \sum_{k=1}^\infty V(R_{j,k}) < \sum_{j=1}^\infty 2^{-j} \epsilon = \epsilon . \qedhere\]

The next example is not just interesting, it will be useful later. [mv:example:planenull] Let \(P := \{ x \in {\mathbb{R}}^n : x^k = c \}\) for a fixed \(k=1,2,\ldots,n\) and a fixed constant \(c \in {\mathbb{R}}\). Then \(P\) is of measure zero.

Proof: First fix \(s\) and let us prove that \[P_s := \{ x \in {\mathbb{R}}^n : x^k = c, \left\lvert {x^j} \right\rvert \leq s \text{ for all $j\not=k$} \}\] is of measure zero. Given any \(\epsilon > 0\), define the open rectangle \[R := \{ x \in {\mathbb{R}}^n : c-\epsilon < x^k < c+\epsilon, \left\lvert {x^j} \right\rvert < s+1 \text{ for all $j\not=k$} \} .\] It is clear that \(P_s \subset R\). Furthermore, \[V(R) = 2\epsilon {\bigl(2(s+1)\bigr)}^{n-1} .\] As \(s\) is fixed, we can make \(V(R)\) arbitrarily small by picking \(\epsilon\) small enough. Next we note that \[P = \bigcup_{j=1}^\infty P_j\] and that a countable union of measure zero sets is of measure zero.

If \(a < b\), then \(m^*([a,b]) = b-a\).

Proof: In the case of \({\mathbb{R}}\), open rectangles are open intervals.
Since \([a,b] \subset (a-\epsilon,b+\epsilon)\) for every \(\epsilon > 0\), we have \(m^*([a,b]) \leq b-a+2\epsilon\), and hence \(m^*([a,b]) \leq b-a\). Let us prove the other inequality. Suppose that \(\{ (a_j,b_j) \}\) are open intervals such that \[[a,b] \subset \bigcup_{j=1}^\infty (a_j,b_j) .\] We wish to bound \(\sum (b_j-a_j)\) from below. Since \([a,b]\) is compact, finitely many of these open intervals still cover \([a,b]\). As throwing out some of the intervals only makes the sum smaller, we only need to consider that finite number of intervals covering \([a,b]\). If \((a_i,b_i) \subset (a_j,b_j)\), then we can throw out \((a_i,b_i)\) as well. Therefore we have \([a,b] \subset \bigcup_{j=1}^k (a_j,b_j)\) for some \(k\), and we assume that the intervals are sorted such that \(a_1 < a_2 < \cdots < a_k\). Note that since \((a_2,b_2)\) is not contained in \((a_1,b_1)\) we have that \(a_1 < a_2 < b_1 < b_2\). Similarly \(a_j < a_{j+1} < b_j < b_{j+1}\). Furthermore, \(a_1 < a\) and \(b_k > b\). Thus, \[\sum_{j=1}^\infty (b_j-a_j) \geq \sum_{j=1}^k (b_j-a_j) \geq \sum_{j=1}^{k-1} (a_{j+1}-a_j) + (b_k-a_k) = b_k-a_1 > b-a .\] As this holds for every cover of \([a,b]\) by open intervals, \(m^*([a,b]) \geq b-a\).

[mv:prop:compactnull] Suppose \(E \subset {\mathbb{R}}^n\) is a compact set of measure zero. Then for every \(\epsilon > 0\), there exist finitely many open rectangles \(R_1,R_2,\ldots,R_k\) such that \[E \subset R_1 \cup R_2 \cup \cdots \cup R_k \qquad \text{and} \qquad \sum_{j=1}^k V(R_j) < \epsilon.\]

Proof: Find a sequence of open rectangles \(\{ R_j \}\) such that \[E \subset \bigcup_{j=1}^\infty R_j \qquad \text{and} \qquad \sum_{j=1}^\infty V(R_j) < \epsilon.\] By compactness, finitely many of these rectangles still cover \(E\). That is, there is some \(k\) such that \(E \subset R_1 \cup R_2 \cup \cdots \cup R_k\). Hence \[\sum_{j=1}^k V(R_j) \leq \sum_{j=1}^\infty V(R_j) < \epsilon. \qedhere\]

The image of a measure zero set under a continuous map is not necessarily a measure zero set. However, if we assume the mapping is continuously differentiable, then the mapping cannot "stretch" the set too much. The proposition does not require compactness, and this is left as an exercise. [prop:imagenull] Suppose \(U \subset {\mathbb{R}}^n\) is an open set and \(f \colon U \to {\mathbb{R}}^n\) is a continuously differentiable mapping. If \(E \subset U\) is a compact measure zero set, then \(f(E)\) is measure zero.

Proof: As FIXME: distance to boundary, did we do that? We should! FIXME: maybe this closed/open rectangle business should be addressed above Let \(\epsilon > 0\) be given. FIXME: Let \(\delta > 0\) be the distance to boundary Let us "fatten" \(E\) a little bit. Using compactness, there exist finitely many open rectangles \(T_1,T_2,\ldots,T_k\) such that \[E \subset T_1 \cup T_2 \cup \cdots \cup T_k \qquad \text{and} \qquad V(T_1) + V(T_2) + \cdots + V(T_k) < \epsilon .\] Since a closed rectangle has the same volume as an open rectangle with the same sides, we could take \(R_j\) to be the closure of \(T_j\). Furthermore, a closed rectangle can be written as a union of finitely many small closed rectangles. Consequently, for some \(\ell\) there exist finitely many closed rectangles \(R_1,R_2,\ldots,R_\ell\) of side at most \(\frac{\sqrt{n}\delta}{2}\)
such that \[E \subset R_1 \cup R_2 \cup \cdots \cup R_\ell \qquad \text{and} \qquad V(R_1) + V(R_2) + \cdots + V(R_\ell) < \epsilon .\] Let \[E' := R_1 \cup R_2 \cup \cdots \cup R_\ell .\] It is left as an exercise (see Exercise ). As \(f\) is continuously differentiable, the function that takes \(x\) to \(\left\lVert {Df(x)} \right\rVert\) is continuous, therefore \(\left\lVert {Df(x)} \right\rVert\) achieves a maximum on \(E\). Thus there exists some \(C > 0\) such that \(\left\lVert {Df(x)} \right\rVert \leq C\) on \(E\). FIXME: may need the fact that the derivative exists AND is continuous on a FATTER E which is still compact and of size \(\epsilon\). FIXME: Then use the whole Lipschitz thing we have, so we can assume that on \(E\) FIXME: FIXME: Cantor set, fat Cantor set, can be done in \({\mathbb{R}}^n\) FIXME: maybe too much

If \(A \subset B\), then \(m^*(A) \leq m^*(B)\).

Show that if \(R \subset {\mathbb{R}}^n\) is a closed rectangle, then \(m^*(R) = V(R)\).

Prove a version of [prop:imagenull] without using compactness: a) Mimic the proof to first prove that the proposition holds only if \(E\) is relatively compact; a set \(E \subset U\) is relatively compact if the closure of \(E\) in the subspace topology on \(U\) is compact, or in other words if there exists a compact set \(K\) with \(K \subset U\) and \(E \subset K\). Hint: The bound on the size of the derivative still holds, but you may need to use countably many rectangles. Be careful as the closure of \(E\) need no longer be measure zero. b) Now prove it for any null set \(E\). Hint: First show that \(\{ x \in U : d(x,y) \geq \nicefrac{1}{M} \text{ for all } y \notin U \text{ and } d(0,x) \leq M \}\) is a compact set for any \(M > 0\).

Let \(U \subset {\mathbb{R}}^n\) be an open set and let \(f \colon U \to {\mathbb{R}}\) be a continuously differentiable function. Let \(G := \{ (x,y) \in U \times {\mathbb{R}}: y = f(x) \}\) be the graph of \(f\). Show that \(G\) is of measure zero.
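As a small added illustration (not part of the original text), the simplest instance of the definition can be worked out explicitly: a single point is a null set. For a point \(p \in {\mathbb{R}}^n\) and a given \(\epsilon > 0\), pick for each \(j\) an open rectangle \(R_j\) centered at \(p\) with \(V(R_j) < \epsilon 2^{-j}\) (for instance a cube whose side is \({(\epsilon 2^{-j-1})}^{1/n}\)). Then \[\{p\} \subset \bigcup_{j=1}^\infty R_j \qquad \text{and} \qquad \sum_{j=1}^\infty V(R_j) < \sum_{j=1}^\infty \epsilon 2^{-j} = \epsilon ,\] so \(m^*(\{p\}) = 0\). Combined with the proposition on countable unions, this recovers the fact that every countable subset of \({\mathbb{R}}^n\) is a null set, of which \({\mathbb{Q}}^n\) above is a special case.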
A $ G^{\delta, 1} $ almost conservation law for mCH and the evolution of its radius of spatial analyticity
A. Alexandrou Himonas 1 and Gerson Petronilho 2
1. University of Notre Dame, Department of Mathematics, Notre Dame, IN 46556, USA
2. Universidade Federal de São Carlos, Departamento de Matemática, São Carlos, SP 13565-905, Brazil
Received May 2020; Published October 2020
The Cauchy problem of the modified Camassa-Holm (mCH) equation with initial data $ u(0) $ that are analytic on the line and have uniform radius of analyticity $ r(0) $ is considered. First, by using bilinear estimates for the nonlocal nonlinearity in analytic Bourgain spaces, it is shown that this equation is well-posed in analytic Gevrey spaces $ G^{\delta, s} $, with useful solution lifespan $ T_0 $ and size estimates. This shows that the radius of spatial analyticity $ r(t) $ persists during the time interval $ [-T_0, T_0] $. Then, exploiting the fact that solutions to this equation conserve the $ H^1 $ norm, and utilizing the available bilinear estimates, an almost conservation law in $ G^{\delta,1} $ spaces is proved. Finally, using this almost conservation law, it is shown that the solution $ u(t) $ exists for all time $ t $ and a lower bound for the radius of spatial analyticity is provided.
Keywords: Modified Camassa-Holm equation, Cauchy problem, analytic spaces, uniform radius of analyticity, bilinear estimates, algebraic decrease, approximate conservation law.
Mathematics Subject Classification: Primary: 35Q53.
Citation: A. Alexandrou Himonas, Gerson Petronilho. A $ G^{\delta, 1} $ almost conservation law for mCH and the evolution of its radius of spatial analyticity. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2020351
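For orientation, one commonly used convention for the analytic Gevrey spaces mentioned in the abstract is given below; the paper's precise normalization may differ.

$$ \lVert f \rVert_{G^{\delta,s}} \;=\; \bigl\lVert\, e^{\delta \lvert \xi \rvert}\,(1+\lvert\xi\rvert^2)^{s/2}\,\widehat{f}(\xi)\,\bigr\rVert_{L^2_\xi(\mathbb{R})}, \qquad \delta>0,\; s\in\mathbb{R}. $$

By a Paley–Wiener type argument, a function with finite $ G^{\delta,s} $ norm extends holomorphically to the strip $ \{ x+iy : \lvert y\rvert < \delta \} $, which is why $ \delta $ serves as a lower bound for the uniform radius of spatial analyticity $ r(t) $ discussed in the abstract.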
Distinct transcriptional and metabolic profiles associated with empathy in Buddhist priests: a pilot study Junji Ohnishi1,2, Satoshi Ayuzawa3,4, Seiji Nakamura5, Shigeko Sakamoto2, Miyo Hori2, Tomoko Sasaoka4, Eriko Takimoto-Ohnishi2,6, Masakazu Tanatsugu7 & Kazuo Murakami2,8 Human Genomics volume 11, Article number: 21 (2017) Cite this article Growing evidence suggests that spiritual/religious involvement may have beneficial effects on both psychological and physical functions. However, the biological basis for this relationship remains unclear. This study explored the role of spiritual/religious involvement across a wide range of biological markers, including transcripts and metabolites, associated with the psychological aspects of empathy in Buddhist priests. Ten professional Buddhist priests and 10 age-matched non-priest controls were recruited. The participants provided peripheral blood samples for the analysis of gene expression and metabolic profiles. The participants also completed validated questionnaires measuring empathy, the Health-Promoting Lifestyle Profile-II (HPLP-II), and a brief-type self-administered diet history questionnaire (BDHQ). The microarray analyses revealed that the distinct transcripts in the Buddhist priests included up-regulated genes related to type I interferon (IFN) innate anti-viral responses (i.e., MX1, RSAD2, IFIT1, IFIT3, IFI27, IFI44L, and HERC5), and the genes C17orf97 (ligand of arginyltranseferase 1; ATE1), hemoglobin γA (HBG1), keratin-associated protein (KRTAP10-12), and sialic acid Ig-like lectin 14 (SIGLEC14) were down-regulated at baseline. The metabolomics analysis revealed that the metabolites, including 3-aminoisobutylic acid (BAIBA), choline, several essential amino acids (e.g., methionine, phenylalanine), and amino acid derivatives (e.g., 2-aminoadipic acid, asymmetric dimethyl-arginine (ADMA), symmetric dimethyl-arginine (SMDA)), were elevated in the Buddhist priests. By contrast, there was no significant difference of healthy lifestyle behaviors and daily nutrient intakes between the priests and the controls in this study. With regard to the psychological aspects, the Buddhist priests showed significantly higher empathy compared with the control. Spearman's rank correlation analysis showed that empathy aspects in the priests were significantly correlated with the certain transcripts and metabolites. We performed in vivo phenotyping using transcriptomics, metabolomics, and psychological analyses and found an association between empathy and the phenotype of Buddhist priests in this pilot study. The up-regulation of the anti-viral type I IFN responsive genes and distinct metabolites in the plasma may represent systemic biological adaptations with a unique signature underlying spiritual/religious practices for Buddhists. Spirituality/religiosity is one of several unique aspects of human social environments and generally consists of psychological aspects accompanied by one's behavior and social relationships. According to psycho-neuro-immune models of health regulation, spirituality/religiosity can be positively associated with psychological and physical health [1, 2]. Spirituality/religiosity provide coping resources that may improve mental health by increasing the frequency of positive psychological aspects and positive emotions [3, 4]. These positive psychological aspects include empathy and altruism, and the beneficial positive emotions include general well-being, happiness, and self-esteem [1, 5]. 
Coping resources also reduce negative stress that could result in emotional disorders, such as depression or anxiety. Because greater physical activity is associated with better mental health [6, 7], spirituality/religiosity should have a favorable impact on the physical state by balancing mental health. Furthermore, longitudinal studies by Miller et al. revealed that a high personal importance of spirituality/religiosity was associated with thicker cortices in certain brain regions (i.e., the left and right parietal and occipital regions, the mesial frontal lobe of the right hemisphere, and the cuneus and precuneus in the left hemisphere) and may confer protective benefits against the depressive symptoms in individuals with a high familial risk of major depression [8]. The surrounding social environments can change the basal expression profiles of certain genes (i.e., basal transcriptome) that are critical for the internal biological processes in our body [9, 10]. Notably, these effects are reliably associated with psychological states and are often induced by individuals' subjective perceptions of their experiences in the surrounding social environments [11]. Stressors can activate the autonomic nervous system and/or the hypothalamic-pituitary-adrenal (HPA) axis to allow for physiological adaptation [12]. Individuals can manage these perceived stressors if the stressors fall within their coping abilities. However, once events or environmental demands exceed one's coping ability, psychological stress ensues [11, 12]. Perceived adverse environmental events, such as psychological stress (e.g., perceived social isolation, bereavement, or long-term caregiving), could elicit negatively affected states, such as feelings of anxiety or depression. These negative psychological states, in turn, directly influence neural-endocrine processes with an immune dysfunction to enhance susceptibility to disease (e.g., viral infection, cardiovascular disease, and type II diabetes) and shape complex behavioral phenotypes (e.g., poorer sleep or appetite and addiction) [11, 12]. Meanwhile, these adversity-associated processes can influence the genome function. Consequently, these stressors can change the basal gene expression activity in peripheral blood leukocytes by up-regulating the expression of pro-inflammatory genes (e.g., interleukin-1β (IL1B), interleukin-8 (IL8), and tumor necrosis factor (TNF)) and down-regulating the expression of genes involved in type I interferon (IFN) innate anti-viral responses (e.g., IFIT-family genes and MX-family genes) [9, 10]. Due to the sensitivity of leukocytes to social adversity, these adverse experiences may shift the leukocyte's basal transcriptional resources from a default anti-viral defense mode to an inflammation-promoting mode. In contrast, Fredrickson et al. have recently shown that a positive psychological state and social conditions (e.g., well-being) could oppose the adversity-associated transcriptome in peripheral blood [13, 14]. In another experimental design, we presented evidence that mirthful laughter with positive emotion can up-regulate distinct genes in the peripheral leukocytes of type II diabetes patients and adjust the activity of natural killer (NK) cells [15]. Notably, most studies reporting a positive connection between spirituality/religiosity and health have focused on "Christian" populations. These studies have typically been conducted in Western and Middle Eastern countries that are generally religious and economically well developed [2]. 
In contrast, a limited number of studies have been performed in non-Christian unique religious cultures in China [16] and Japan. In this pilot study, we investigated the impacts of spirituality/religiosity on the functional activity of the human genome and metabolic reactions required for social regulation in qualified Buddhist priests. The objectives of this study were to (1) characterize the basal gene expression and metabolite profiles of qualified Japanese Buddhist priests; (2) evaluate the empathy psychological aspects in the priests, which are core elements of the fundamental concept of Buddhism [17, 18]; and (3) examine the association between empathy and the identified molecular markers. Recruitment of participants The data in this study were collected as a part of a research project that was conducted over 3 years to explore the role of spiritual/religious involvement across a wide range of biological markers. Ten male Japanese Buddhist priests and 10 male healthy age-matched Japanese non-priest controls were recruited from six different urban or rural areas (e.g., Tokyo, Kanagawa, Ibaraki, Kyoto, Gunma, and Wakayama in Japan) via leaflets and electronic media. All Buddhist participants in this study belonged to the "Shingon" sect of Japanese Buddhism [19] and have performed the spiritual/religious main practices and duties for more than 7 years (median length, 15.50 years; range 7–35 years). These participants were qualified priests and commuted daily from their own homes to temples in towns and cities. All of the non-priest controls had steady full-time jobs and commuted daily from their own residences to their workplaces in towns and cities. Their occupational titles included manager, professional, technician, office worker, and service worker. Additionally, the non-priest controls had not received any training related to meditation or religious practices. The commuting methods of both the priests and the non-priest controls were similar and included travel on foot, using one's own car or via public transportations, with commutes completed within approximately 50 min. Both the priests and the non-priest controls met the following criteria: (i) decent health status with no subjective symptoms in the previous month and (ii) no history of receiving any dietary counseling or therapy from a doctor or dietitian. All participants were given a complete explanation of the study, which was approved on September 11, 2013, by the ethics committee of Tsukuba University of Technology (approval number TUT 20130911) and provided written informed consent. All participants were paid for their participation. During the study, the participants were asked a series of questions about their age, physical health, and medication use. The participants included a heterogeneous group of middle-aged adults (29–52 years old) as shown in Additional files 1, 2, 3 and 4. One priest had a previous history of venous thrombosis and had been administered an anticoagulant agent (Warfarin). He was also taking an antihypertensive drug (candesartan). Two priests suffered from diabetes; one of which was prescribed oral hypoglycemic agents (glimepiride and sitagliptin phosphate hydrate) and rosuvastatin to treat the accompanying hypercholesterolemia, and the other priest declared his diabetic nephropathy. In this pilot study, we did not exclude these three priests due to the limited number of recruited priests. A general serum chemical analysis and hematological tests were performed to examine their general conditions. 
These results are presented in Additional file 1. All participants had relative proportions of the peripheral blood cell types within normal ranges (27–90% neutrophils, 20–51% lymphocytes, 2–12% monocytes, 0–3% basophils, and 0–10% eosinophils).

Genome-wide expression profiling

For RNA preparation, 2 × 2.5 mL samples of peripheral blood were drawn into PaxGene Blood RNA tubes (PreAnalytiX/QIAGEN Inc., Valencia, CA) from each subject. After blood collection, the PaxGene tubes were left to stand at room temperature (RT) for 2 h to ensure complete lysis of all blood cells and then stored at 4 °C overnight. All tubes were stored at − 80 °C until RNA isolation. Based on the convenience of the participants, we performed blood sample collection on three different days (Dec 8, 2013, Apr 28, 2014, and Jun 8, 2014) within a 6-month period. Total RNA was isolated within 6 months after storage using the PaxGene Blood RNA Kit (PreAnalytiX/QIAGEN) according to the manufacturer's instructions. The quantity and purity of the RNA were assessed using the NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA) and the Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA). Only high-quality RNA samples containing intact 18S and 28S RNA were used for the subsequent microarray and quantitative RT-PCR analyses. One hundred nanograms of the total RNA was converted to complementary DNA (cDNA), amplified, and labeled with Cy3-labeled CTP using the Low Input Quick Amp Labeling Kit (Agilent Technologies, Santa Clara, CA) according to the protocol supplied by the manufacturer. Following labeling and clean-up, the amplified RNA and dye incorporation were quantified using the NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA). We used Agilent SurePrint G3 Human GE v2 8x60k Microarrays (Agilent Technologies, Santa Clara, CA) containing 50,599 unique genes. All samples were assayed in a single batch. After hybridization at 65 °C for 17 h, the arrays were subsequently washed using the Gene Expression Wash Pack (Agilent Technologies, Santa Clara, CA). The microarrays were scanned using the Agilent Scanner, and the fluorescence intensities of the scanned images were quantified using Feature Extraction software ver. 10.7.3.1 (Agilent Technologies, Santa Clara, CA). Normalization was performed using Agilent GeneSpring GX version 13.1.1 (per-chip normalization, 75th percentile shift; per-gene normalization, none) so that expression intensities were comparable across microarrays. Only those genes whose expression data were available in more than 50% of hybridizations were included for further analysis [20]. Microarray raw data were deposited in the National Center for Biotechnology Information Gene Expression Omnibus (accession number GSE77676).

Differential expression analysis: the rank products method

Due to the small sample sizes (n = 10 for each group) in this study, we could not reliably evaluate whether the transcriptional data were normally distributed. Therefore, we chose to employ the Rank Products method, a non-parametric test, as a cautious approach to determining significantly differentially expressed genes. This method is reported to be relatively powerful, especially for small sample sizes and when the data are non-homogeneous [21]. A gene was considered significantly differentially expressed if the adjusted P value (false discovery rate, FDR) was equal to or less than 5% (0.05).
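To make the Rank Products idea above concrete, here is a minimal sketch of a two-group rank-product statistic with a permutation-based per-gene p-value. It is an illustration only, written in Python with simulated placeholder data; it is not the pipeline used in the study, and in practice one would typically use a dedicated implementation (for example, the Bioconductor RankProd package) and apply an FDR adjustment before the 0.05 cutoff.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical normalized log2 expression matrix: rows = genes, columns = samples.
# These values are simulated placeholders, not the study's microarray data.
n_genes = 2000
priests = rng.normal(0.0, 1.0, size=(n_genes, 10))
controls = rng.normal(0.0, 1.0, size=(n_genes, 10))
priests[:20] += 1.5  # spike a few genes so the demo has something to find

def rank_product(group_a, group_b):
    """Geometric mean, over all sample pairs, of each gene's up-regulation rank in group_a."""
    log_rank_sum = np.zeros(group_a.shape[0])
    n_pairs = 0
    for i in range(group_a.shape[1]):
        for j in range(group_b.shape[1]):
            fold_change = group_a[:, i] - group_b[:, j]          # log2 fold change for this pair
            ranks = (-fold_change).argsort().argsort() + 1       # rank 1 = most up-regulated
            log_rank_sum += np.log(ranks)
            n_pairs += 1
    return np.exp(log_rank_sum / n_pairs)

rp_observed = rank_product(priests, controls)

# Permutation null: shuffle sample labels and recompute the rank products.
pooled = np.hstack([priests, controls])
n_perm = 100
rp_null = np.empty((n_perm, n_genes))
for b in range(n_perm):
    perm = rng.permutation(pooled.shape[1])
    rp_null[b] = rank_product(pooled[:, perm[:10]], pooled[:, perm[10:]])

# Per-gene permutation p-value; in practice these would be FDR-adjusted
# (e.g., Benjamini-Hochberg) before applying the 0.05 threshold used in the paper.
p_values = (rp_null <= rp_observed).mean(axis=0)
print("genes with unadjusted p <= 0.05:", int((p_values <= 0.05).sum()))
```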
Quantitative RT-PCR analysis

We quantified the expression of 12 selected genes identified in the priests, including anti-viral signaling molecules (MX1, RSAD2, IFIT1, IFIT3, IFI27, IFI44L, HERC5), DEFA4, FOLR3, HBG1, C17orf97, and S100P, through quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR). cDNA was synthesized from 1000 ng total RNA using a High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific). PCR reactions were carried out using the ABI7500 Real Time PCR System (Thermo Fisher Scientific). Specific sets of primers and TaqMan probes were obtained from Thermo Fisher Scientific. The expression level of each target transcript was normalized to that of an endogenous control gene (GAPDH). The data were normalized using the ∆CT method (∆CT = CT target − CT control), i.e., by taking the difference in cycle threshold (CT) between each candidate gene and the internal control gene, GAPDH. Expression level was described inversely as "− ∆CT"; thus, higher relative quantities indicate greater expression of target genes [20]. Differences in each transcript between the priests and the controls were compared using the Mann–Whitney U test; genes were considered significantly differentially expressed if the P value was equal to or less than 5% (0.05).

Metabolomics analysis

Peripheral blood samples were collected in Vacutainer tubes containing ethylenediaminetetraacetic acid (VP-NA070K; Terumo Corporation, Tokyo, Japan) and immediately centrifuged at 1200×g for 10 min to separate plasma. The plasma samples were then frozen on dry ice and stored at − 80 °C until use. Human Metabolome Technologies, Inc. (Tsuruoka, Japan) performed all metabolome analyses [22]. For "hydrophilic" metabolites, capillary electrophoresis coupled to time-of-flight mass spectrometry (CE-TOF/MS) was performed. Briefly, 50 μL of plasma was mixed with 450 μL methanol containing internal standards (solution ID H3304-1002, Human Metabolome Technologies, Inc., Tsuruoka, Japan) at 0 °C in order to inactivate enzymes. Then, 500 μL chloroform and 200 μL Milli-Q water were added, and the mixed solution was centrifuged at 2300×g for 5 min at 4 °C. A 350 μL aliquot of the upper aqueous phase was filtered through a Millipore 5-kDa cutoff filter to remove proteins. The filtrate was centrifugally lyophilized and dissolved in 50 μL of Milli-Q water for the subsequent CE-TOF/MS analysis. CE-TOF/MS was carried out using an Agilent CE Capillary Electrophoresis System equipped with an Agilent 6210 Time of Flight mass spectrometer, Agilent 1100 isocratic HPLC pump, Agilent G1603A CE-MS adapter kit, and Agilent G1607A CE-ESI-MS sprayer kit (Agilent Technologies, Waldbronn, Germany). The systems were controlled by Agilent G2201AA ChemStation software version B.03.01 for CE (Agilent Technologies, Waldbronn, Germany). The metabolites were analyzed using a fused-silica capillary (50 μm i.d. × 80 cm total length), with commercial electrophoresis buffer (solution ID H3301-1001 for cation analysis and H3302-1021 for anion analysis, Human Metabolome Technologies) as the electrolyte. The sample was injected at a pressure of 50 mbar for 10 s (approximately 10 nL) in cation analysis and 25 s (approximately 25 nL) in anion analysis. The spectrometer was scanned from m/z 50 to 1000. For "hydrophobic" metabolites, liquid chromatography time-of-flight mass spectrometry (LC-TOFMS) was performed.
Briefly, 500 μL of plasma was mixed with 1500 μL of 1% formic acid/acetonitrile containing the internal standard solution (solution ID H3304-1002, Human Metabolome Technologies, Inc., Tsuruoka, Japan) at 0 °C to inactivate enzymes. The mixture was then centrifuged at 2300×g for 5 min at 4 °C. The supernatant was filtered through a Hybrid SPE phospholipid cartridge (55261-U; Supelco, Bellefonte, PA, USA) to remove phospholipids. A 400 μL aliquot of the filtrate was lyophilized and dissolved in 100 μL of 50% isopropanol/Milli-Q water for analysis. LC-TOFMS was performed using an Agilent LC System (Agilent 1200 series RRLC system SL) equipped with an Agilent 6230 time-of-flight mass spectrometer (Agilent Technologies, Waldbronn, Germany). The system was controlled by Agilent G2201AA ChemStation software version B.03.01 (Agilent Technologies, Waldbronn, Germany). The cationic and anionic compounds were measured using an ODS column (2 × 50 mm, 2 μm). Peaks were extracted using the MasterHands automatic integration software ver. 2.16.0.15 (Keio University, Tsuruoka, Japan) to obtain peak information, including the m/z ratio, migration time (MT) for CE-TOF/MS measurements, retention time (RT) for LC-TOFMS measurements, and peak area. Signal peaks corresponding to isotopomers, adduct ions, and other ions of known metabolites were excluded. The remaining peaks were annotated with putative metabolites from the metabolite database and the Known-Unknown library database established at HMT, based on the MT/RT and m/z values determined by TOF/MS. The tolerance range for peak annotation was set at ± 0.5 min for MT/RT and ± 10 ppm for the m/z ratio. In addition, the peak areas were normalized against those of the internal standards, and the resulting relative area values were further normalized by sample amount. For the metabolomics analysis, the relative area was defined as the relative concentration of each metabolite. Human Metabolome Technologies, Inc. performed hierarchical cluster analysis (HCA) and principal component analysis (PCA) with its proprietary software, PeakStat and SampleStat, respectively. Detected metabolites were plotted on metabolic pathway maps using VANTED (Visualization and Analysis of Networks containing Experimental Data) software.

Health-promoting lifestyle profile (HPLP)
The HPLP was originally developed by Walker, Sechrist, and Pender in 1987 and revised as the HPLP-II in 1995 [23]. Wei et al. developed the Japanese version of the HPLP-II and established its validity and reliability, with a Cronbach's α internal consistency coefficient of 0.90 [24]. The HPLP-II is a 52-item questionnaire composed of two main categories and six sub-dimension scales. The health-promoting behaviors category includes the health responsibility (9 items), physical activity (8 items), and nutrition (9 items) subscales. The psychosocial well-being category includes the spiritual growth (9 items), interpersonal relationship (9 items), and stress management (8 items) subscales. Each subscale can be used independently. Higher scores indicate a healthier lifestyle. All items on the HPLP-II are affirmative, with no reverse-scored questions. The answers were provided on a four-point Likert-type scale, and ratings of "never," "sometimes," "frequently," and "regularly" were scored as 1, 2, 3, and 4 points, respectively. Therefore, the total scores on the HPLP-II ranged between 52 and 208.
The scores on the "physical activity" and "stress management" subscales range between 8 and 32, and the scores on the other four subscales range between 9 and 36. In this study, we divided the total score and the sum of each subscale score of the HPLP-II by the corresponding number of items; the mean item score therefore ranged from 1 to 4. The internal consistency of the HPLP-II scale used in this study was demonstrated by a Cronbach's α of 0.902. We divided the HPLP-II ratings into three groups: "good," "moderate," and "poor." For the total HPLP-II scale, a score of 3.01–4 was considered "good," a score of 2.01–3 "moderate," and a score of 1–2 "poor." All measurements obtained using the HPLP-II are summarized in Additional file 2.

Brief-type self-administered diet history questionnaire (BDHQ)
The BDHQ is a 58-item short version of the self-administered diet history questionnaire (DHQ) that assesses Japanese dietary habits. The BDHQ has been reported to rank the energy-adjusted intakes of many nutrients satisfactorily in healthy Japanese adults [25]. The BDHQ assesses dietary habits during the preceding month and consists of the following five sections: (i) intake frequency of 46 food and non-alcoholic beverage items; (ii) daily intake of rice, including the type of rice (refined or unrefined, etc.) and miso soup; (iii) frequency of drinking alcoholic beverages and the amount per drink for five different types of alcoholic beverages; (iv) usual cooking methods; and (v) general dietary behaviors. Most food and beverage items listed on the DHQ are commonly consumed in Japan, with some modifications made using a food list provided by the National Health and Nutrition Survey of Japan as additional information. Standard portion sizes and adult sizes of bowls for rice and cups for miso soup were derived from several recipe books for Japanese dishes. All measurements obtained using the BDHQ are summarized in Additional file 3.

Empathy process scale
Empathy was evaluated using the Empathetic Process Scale, which was recently developed by Hayama et al. based on the Interpersonal Reactivity Index proposed by Davis [26]. This Japanese 30-item questionnaire was designed with six sub-dimension scores to assess both the cognitive and emotional aspects of empathy [27]. This scale focuses on more detailed emotional aspects than the scale developed by Davis. The emotional aspects of empathy include "Sharing positive emotions with others," "Good feeling for others' positive emotions," "Sharing negative emotions with others," and "Sympathy for others' negative emotions." The cognitive aspects of empathy include "Perspective taking" and "Sensibility about others' emotions." The self-reported responses were provided on a 5-point Likert-type scale anchored at 1 = Strongly disagree and 5 = Strongly agree. All six subscales consisted of five items, with scores ranging from 5 to 25. The internal consistency of the Empathy Scale used in this study was demonstrated by a Cronbach's α of 0.821. Two professional native-speaking English translators at Tokyo Kasei University translated the Japanese questionnaire into English (Additional file 5). All data are presented as medians with interquartile ranges (25–75th percentile). All statistical analyses were performed using SPSS version 19 (IBM Corp., Armonk, NY, USA) and GraphPad Prism 6.0 (GraphPad Software Inc., San Diego, CA, USA).
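As a concrete illustration of the HPLP-II scoring rules described above, the short sketch below converts item ratings (1–4) into mean item scores and assigns the good/moderate/poor category used for the total scale. The subscale item counts follow the questionnaire description; the function names and input format are illustrative assumptions.

```python
# HPLP-II subscale item counts, as described above.
HPLP_SUBSCALES = {
    "health_responsibility": 9, "physical_activity": 8, "nutrition": 9,
    "spiritual_growth": 9, "interpersonal_relationship": 9, "stress_management": 8,
}

def mean_item_scores(ratings):
    """ratings: dict mapping subscale name -> list of item ratings (1-4).

    Returns (per-subscale mean item scores, overall mean item score),
    checking that each subscale has the expected number of items.
    """
    for name, items in ratings.items():
        assert len(items) == HPLP_SUBSCALES[name], f"unexpected item count for {name}"
    per_subscale = {name: sum(items) / len(items) for name, items in ratings.items()}
    all_items = [r for items in ratings.values() for r in items]
    return per_subscale, sum(all_items) / len(all_items)

def hplp_category(total_mean_item_score):
    """Good/moderate/poor cut-offs used in this study for the total HPLP-II scale."""
    if total_mean_item_score > 3:
        return "good"      # 3.01-4
    if total_mean_item_score > 2:
        return "moderate"  # 2.01-3
    return "poor"          # 1-2
```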
Because the sample sizes in this study were relatively small (n = 10 in each group), we could not determine whether the transcripts, metabolites, blood indices, and psychological indices followed a normal distribution. We therefore chose the non-parametric Mann–Whitney U test to compare the measured values between the priest group and the control group. A P value of 0.05 or less was considered significant. The effect sizes (r) for the Mann–Whitney U test were calculated from the Z value as shown below, where N represents the total sample size. Cohen's guidelines indicate that a large effect size is r = 0.5, a medium effect size is r = 0.3, and a small effect size is r = 0.1 [28].

$$ r=\frac{Z}{\sqrt{N}} $$

To explore the significance of the altered transcripts and metabolites in the spiritually/religiously trained priests, Spearman's rank correlation coefficients were calculated to assess the correlations between the psychological indices (empathy scales) and the significant transcriptional or metabolic candidates. Statistical significance was defined as P < 0.05.

Descriptive characteristics of the participants' lifestyle behaviors and blood chemistries
Lifestyle behaviors are considered essential components that influence well-being and health, and dietary habits, regular physical exercise, and religiosity/spirituality are major factors associated with these behaviors [29]. Therefore, we first compared the healthy lifestyle behaviors and dietary habits between the priests (n = 10) and the age-matched non-priest controls (n = 10). The healthy lifestyle behaviors and dietary habits were evaluated using the HPLP-II and BDHQ questionnaires, respectively. The outcomes of the six HPLP-II sub-dimensions are dot-plotted in Additional file 4 and summarized in Additional file 2. Measurements from the BDHQ are summarized in Additional file 3. As shown in Additional files 2, 3 and 4, no significant differences were observed in the healthy lifestyle behaviors and daily nutrient intakes between the priests and the non-priest controls. Additional file 1 presents the serum chemistries of the participants. The priests and the non-priest controls were matched with respect to age, gender (all males), ethnic group (all Japanese), and BMI, and there were no significant differences between the groups regarding these parameters (P > 0.05). For most of the blood indices, no significant differences were observed at the basal state between the priests (n = 10) and the non-priest controls (n = 10). Only the sodium (Na) and potassium (K) concentrations differed significantly between the groups (P = 0.03 and 0.04, respectively); however, these values were within the normal healthy ranges. Two priests with diabetes had high blood sugar (133 and 164 mg/dL) and HbA1c (6.2 and 7.6%) levels, and one priest also showed a high level of plasma creatinine (1.87 mg/dL) due to diabetic nephropathy. Notably, no significant difference (P = 0.47) in the level of the systemic inflammatory marker C-reactive protein (CRP) was observed compared with the other healthy priests or all non-priest controls. The median CRP level was 0.05 mg/dL (0.03–0.08, 25–75%) in the priest group and 0.04 mg/dL (0.03–0.05, 25–75%) in the non-priest control group.
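The Mann–Whitney comparisons and effect sizes reported above and throughout the Results can be reproduced with a few lines of code. Below is a minimal sketch assuming two numeric vectors of measurements; the example values are illustrative only, not data from this study. Z is obtained from the normal approximation of U without a tie correction, so r may differ slightly from tie-corrected software output, and its sign depends on the group order.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mwu_with_effect_size(x, y):
    """Two-sided Mann-Whitney U test with effect size r = Z / sqrt(N).

    Z is derived from the normal approximation of U (no tie correction).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    u, p = mannwhitneyu(x, y, alternative="two-sided")
    mu_u = n1 * n2 / 2.0
    sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu_u) / sigma_u
    r = z / np.sqrt(n1 + n2)   # effect size as defined in the formula above
    return u, p, r

# Hypothetical example with n = 10 per group (values are illustrative only).
group1 = [0.05, 0.03, 0.08, 0.04, 0.06, 0.05, 0.07, 0.03, 0.05, 0.08]
group2 = [0.04, 0.03, 0.05, 0.04, 0.03, 0.05, 0.04, 0.03, 0.04, 0.05]
print(mwu_with_effect_size(group1, group2))
```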
Notably, the results of the comparison of the serum nutritional variables between the groups shown in Additional file 1 (protein, cholesterol, triglyceride, glucose, Na, and K) were consistent with the findings obtained using the BDHQ (protein, cholesterol, fat, carbohydrate, sodium, and potassium) shown in Additional file 3.

Microarray results and selection of candidate genes
To determine the gene expression profiles of the qualified priests at the basal state, we performed one-color microarray experiments on the Agilent platform. We tested 50,599 probe sets to investigate the expression of 23,284 unique human genes. To obtain an initial list of candidate genes, we performed a Rank Products analysis to compare the ranks of the top differentially expressed genes between the priests and the controls. With the threshold set at a P value < 0.05, 162 probe sets representing 111 genes (42 up-regulated genes and 69 down-regulated genes) were differentially expressed between the priests and the non-priest controls after passing the filtering criterion of a 5% false discovery rate (FDR). Additional file 6 provides a list of all 111 differentially expressed candidate genes. The 42 up-regulated genes and 69 down-regulated genes were subjected to a Gene Ontology (GO) analysis using AmiGO (http://amigo2.geneontology.org/amigo). The most significantly enriched functions included "immune effector process (GO:0002252)," "type I interferon signaling pathway (GO:0060337)," "cellular response to type I interferon (GO:0071357)," "innate immune response (GO:0045087)," "response to type I interferon (GO:0034340)," "defense response (GO:0006952)," and "defense response to virus (GO:0051607)." Tables 1 and 2 show the expression values of representative ranked genes that were differentially expressed by more than 1.5-fold, together with their corresponding P values, FDRs, and average raw signals (n = 10, priests) in the microarray experiments. Table 1 lists the genes that were up-regulated in the priests, including type I IFN innate anti-viral response genes (e.g., MX1, RSAD2, IFIT1, IFIT3, IFI27, IFI44L, and the E3 ubiquitin ligase HERC5), defensin alpha 4 (DEFA4), and the soluble-type folate receptor (FOLR3). Table 2 lists the genes that were down-regulated in the priests, including hemoglobin γA (HBG1), a keratin-associated protein (KRTAP10-12), sialic acid Ig-like lectin 14 (SIGLEC14), the calcium-binding protein S100P, and C17orf97. Table 1 Up-expressed genes in the priests. Table 2 Down-expressed genes in the priests.

Result validation by qRT-PCR
To verify the microarray results with an independent technique, we performed quantitative real-time RT-PCR (qRT-PCR) and compared the basal expression levels of 12 selected representative transcripts (MX1, RSAD2, IFIT1, IFIT3, IFI27, IFI44L, HERC5, DEFA4, FOLR3, HBG1, C17orf97, and S100P) between the priests and the non-priest controls. As shown in Fig. 1, the priests showed significantly up-regulated expression of seven genes involved in type I IFN anti-viral responses (MX1, IFI27, IFIT1, IFIT3, RSAD2, IFI44L, and HERC5). C17orf97 was validated by qRT-PCR as a significantly down-regulated gene in the priests. This gene product is currently known as LIAT1, or ligand of arginyltransferase 1 (ATE1), because C17orf97 interacts with ATE1, a component of the N-end rule pathway of protein degradation [30]. Notably, C17orf97 is also down-regulated in the hedonic well-being status [13].
No significant differences were observed in the expression levels of the remaining examined genes between the groups (P > 0.05).

Fig. 1 Dot plots of the representative transcriptional markers identified in the priests. Dots represent subjects (circles, priests; squares, controls), and lines represent medians. Eight genes were selected for validation by quantitative real-time PCR (qRT-PCR). qRT-PCR data are normalized to the housekeeping gene GAPDH. Differences in each transcript were compared using the Mann–Whitney U test; P values and effect sizes r are indicated. a MX1 (P = 0.0015, r = − 0.676), b IFI27 (P = 0.0115, r = − 0.558), c IFIT1 (P = 0.0433, r = − 0.456), d IFIT3 (P = 0.0115, r = − 0.558), e RSAD2 (P = 0.0068, r = − 0.592), f IFI44L (P = 0.0433, r = − 0.456), g HERC5 (P = 0.0029, r = − 0.642), and h C17orf97 (P = 0.0089, r = − 0.583). Statistical significance was defined as P < 0.05. Cohen's guidelines for the effect sizes (r) for the Mann–Whitney U test are that a large effect is 0.5, a medium effect is 0.3, and a small effect is 0.1 [28]

To clarify the underlying metabolic alterations in the spiritually trained Buddhists, we performed global metabolomics profiling to compare the priest participants with the non-priest controls. The complete data sets are shown in Additional file 7. In total, 275 metabolites were detected in the plasma of the participants: 149 hydrophilic metabolites (105 and 44 metabolites in cationic and anionic modes, respectively) and 126 hydrophobic metabolites (62 and 64 metabolites in positive and negative modes, respectively). We detected 20 candidate metabolites whose relative concentrations were significantly higher in the priests than in the non-priest controls based on the non-parametric Mann–Whitney U test (P < 0.05, Additional file 7). From these 20 candidates, we selected 14 metabolites whose peaks were detected in all 20 participants. Table 3 shows the selected metabolites in the priests, including 3-aminoisobutyric acid (BAIBA), choline, amino acids (methionine, phenylalanine, histidine, valine, isoleucine, and leucine), amino acid derivatives (symmetric dimethylarginine (SDMA), asymmetric dimethylarginine (ADMA), and 2-aminoadipic acid (AABA)), creatine, and an acylcarnitine (13:1). Figure 2 shows the following four representative metabolites that were significantly higher (P < 0.01) in the priests than in the non-priest controls: methionine (1.43-fold, P = 0.0002, r = − 0.761); BAIBA (3.06-fold, P = 0.0015, r = − 0.676); phenylalanine (1.36-fold, P = 0.0029, r = − 0.642); and choline (1.38-fold, P = 0.0052, r = − 0.608). Table 3 Fourteen metabolites profiled in the priests.

Fig. 2 Four representative plasma metabolite markers identified in the priests. Dots represent subjects (circles, priests; squares, controls), and lines represent medians. a Methionine in the plasma of priests vs. the controls (P = 0.0002, r = − 0.761). b 3-Aminoisobutyric acid (BAIBA) in the plasma of priests vs. the controls (P = 0.0015, r = − 0.676). c Phenylalanine in the plasma of priests vs. the controls (P = 0.0029, r = − 0.642). d Choline in the plasma of priests vs. the controls (P = 0.0052, r = − 0.608). The plasma level of each metabolite is shown as the relative area value calculated in the metabolomics analysis by Human Metabolome Technologies, Inc. The relative area value was defined as the relative concentration of each metabolite.
Differences in each plasma metabolite were compared using the Mann–Whitney U test; P values and effect sizes r are indicated. Statistical significance was defined as P < 0.05. Cohen's guidelines for the effect sizes (r) for the Mann–Whitney U test are that a large effect is 0.5, a medium effect is 0.3, and a small effect is 0.1 [28]

Distribution of empathy sub-dimension scores
We evaluated the psychological trait of empathy in the priests and the non-priest controls. Table 4 presents the empathy sub-dimension scores in the two groups. The priest group exhibited significantly higher empathy than the non-priest controls in the following four sub-dimensions: "Sharing positive emotions with others" (priests 22.0 vs. controls 18.0, P = 0.018, r = − 0.521); "Good feeling for others' positive emotions" (priests 22.0 vs. controls 20.0, P = 0.029, r = − 0.487); "Sharing negative emotions with others" (priests 18.5 vs. controls 16.0, P = 0.039, r = − 0.462); and "Sensibility about others' emotions" (priests 22.0 vs. controls 18.5, P = 0.042, r = − 0.455). Table 4 Empathy scores in the priests and the controls.

Bivariate correlations (Spearman's ρ) between the identified transcripts, metabolites, and the empathy sub-dimension scores
To examine the correlations among the identified transcripts, metabolites, and the four empathy sub-dimensions, we performed a correlation analysis using Spearman's rank correlation coefficients (two-tailed) in the combined samples (n = 10 + 10: priests + non-priest controls). We observed significant correlations (ρ = − 0.636, or ranging from 0.448 to 0.700) between the identified transcripts and metabolites (Table 5), between the identified transcripts and the four empathy sub-dimension scores (Table 6), and between the identified metabolites and the four empathy sub-dimension scores (Table 7). Table 5 Bivariate correlations (Spearman's ρ) between the transcriptional markers and metabolite markers. Table 6 Bivariate correlations (Spearman's ρ) between the transcriptional markers and empathy sub-dimension scores. Table 7 Bivariate correlations (Spearman's ρ) between the metabolite markers and empathy sub-dimension scores.

As shown in Table 5, the metabolite BAIBA was significantly correlated with six distinct transcripts: BAIBA was negatively correlated with C17orf97 (ρ = − 0.636, P = 0.026, Fig. 3) and positively correlated with five anti-viral genes, namely RSAD2 (ρ = 0.571, P = 0.008), HERC5 (ρ = 0.559, P = 0.010), MX1 (ρ = 0.556, P = 0.011), IFIT3 (ρ = 0.516, P = 0.020), and IFIT1 (ρ = 0.501, P = 0.025). Meanwhile, in addition to BAIBA, the MX1 gene was correlated with five different metabolites, including methionine, SDMA, phenylalanine, and glycodeoxycholic acid.

Fig. 3 Scatter plots of representative correlations between the molecular markers identified in the priests and empathy. Spearman's rank correlation coefficients were calculated. Dots represent subjects (circles, priests; squares, controls). a C17orf97 transcript vs. 3-aminoisobutyric acid (BAIBA) metabolite (ρ = − 0.636). b IFIT3 transcript vs. the empathy aspect "Sharing negative emotions with others" (ρ = 0.603). c 3-Aminoisobutyric acid (BAIBA) metabolite vs.
the empathy aspect "Sharing positive emotions with others" (ρ = 0.700).

As shown in Table 6, "Sharing positive emotions with others" was associated with HERC5 (ρ = 0.595, P = 0.006), RSAD2 (ρ = 0.537, P = 0.015), MX1 (ρ = 0.511, P = 0.019), IFIT3 (ρ = 0.483, P = 0.031), IFI44L (ρ = 0.479, P = 0.033), and C17orf97 (ρ = − 0.459, P = 0.042), while "Sharing negative emotions with others" was associated with IFIT3 (ρ = 0.603, P = 0.005, Fig. 3), IFIT1 (ρ = 0.569, P = 0.009), IFI44L (ρ = 0.536, P = 0.015), HERC5 (ρ = 0.487, P = 0.030), and RSAD2 (ρ = 0.448, P = 0.047). "Sensibility about others' emotions" was correlated with IFIT3 (ρ = 0.496, P = 0.026) and HERC5 (ρ = 0.470, P = 0.037). "Good feeling for others' positive emotions" was negatively correlated with C17orf97 (ρ = − 0.528, P = 0.017). As shown in Table 7, "positive" emotion-related empathy was correlated with five distinct metabolites. "Sharing positive emotions with others" was associated with BAIBA (ρ = 0.700, P = 0.001, Fig. 3), ADMA (ρ = 0.602, P = 0.005), SDMA (ρ = 0.561, P = 0.010), and methionine (ρ = 0.477, P = 0.033). "Good feeling for others' positive emotions" was associated with SDMA (ρ = 0.649, P = 0.002), ADMA (ρ = 0.569, P = 0.009), histidine (ρ = 0.491, P = 0.028), and BAIBA (ρ = 0.455, P = 0.044). Finally, "Sensibility about others' emotions" was positively related to creatinine (ρ = 0.573, P = 0.008).

Spirituality/religiosity is one of the unique aspects of human social environments and can be positively associated with psychological and physical health according to psycho-neuro-immune models of health regulation [1]. Our cross-sectional study defined the systemic signatures of gene expression and metabolic profiles in Buddhist priests compared with those in non-priest controls. The list of identified transcripts included components of the type I IFN responses involved in innate anti-viral protection, and the identified metabolites indicated that enhanced proteolytic metabolism might occur in the priests. In contrast, no significant differences were observed in healthy lifestyle behaviors and daily nutrient intake between the two groups. Interestingly, we observed significant correlations between empathy and the molecular markers identified in the priests.

Among the top-ranked list of significantly up-regulated transcripts in the priests, seven gene products (MX1, RSAD2, IFIT1, IFIT3, IFI27, IFI44L, and HERC5) are involved in anti-viral protection mechanisms with type I IFN signaling [31,32,33]. It is unclear whether the observed basal physiological state in the priest participants controls inflammatory responses or pathological inflammatory states. However, acute infectious illnesses or chronic inflammation are unlikely to have caused the up-regulation of the type I IFN responsive genes in the priests, because no apparent signs or symptoms of infection were observed in any of the participants on the day of the experiment, and no significant difference was detected between the priests and the non-priest controls in the plasma level of CRP, a systemic inflammatory marker (P = 0.47, Additional file 1). Notably, the fold changes of these seven transcripts (1.77- to 2.99-fold) appeared to be within the normal range of physiological variation and were an order of magnitude lower than those observed under infection conditions [34, 35]. Furthermore, no significant differences were detected between the two groups in the expression of inflammation-related genes.
One unique feature of the type I IFN system is the weak constitutive production of IFN-α, which is critical for "revving up" efficient and robust responses to viral infection in innate immune cells [36]. The up-regulation of the seven genes might therefore provide a foundation for more efficient or sensitive cellular responses for anti-viral protection. "Trained immunity" was recently described as a concept related to memory-like innate immune function after microbial encounters [37]. Repeated exposure to specific environments or conditions during common religious/spiritual practices might trigger long-lasting changes, as "allostatic responses" [38], in the anti-viral responses of the type I IFN pathways. Furthermore, the trained spiritual/religious priests may have conditioned themselves to respond more quickly or with higher sensitivity to environmental changes, particularly viral infections. Notably, six of the seven identified anti-viral genes were conversely down-regulated under socially adverse conditions, such as chronic loneliness [9, 10]. In this pilot study, we did not exclude three priests with known physical ailments from the statistical analyses because of the limited number of recruited priests; nonetheless, analyses performed with these priests excluded yielded similar levels of significance (Additional file 8).

We found that the Buddhist priests had higher levels of plasma free amino acids and amino acid derivatives than the non-priest controls, suggesting that the priests might exhibit higher intracellular protein turnover/metabolism. The two major systems involved in cellular proteolysis are the ubiquitin-proteasome system [39] and the autophagy system [40]. Both systems play crucial roles in adapting and properly sustaining protein turnover (i.e., proteostasis) under a large variety of environmental conditions. In particular, the autophagy pathway plays a crucial role in resistance against infections and inflammatory conditions [41]. Therefore, autophagy-mediated cytoprotective processes that maintain protein and organelle quality control may operate more effectively in the Buddhist priests than in the non-priest controls. Among the identified plasma metabolites, BAIBA was correlated with the expression of six transcripts, namely C17orf97, RSAD2, HERC5, MX1, IFIT3, and IFIT1 (Table 5). BAIBA can be generated by the catabolism of the branched-chain amino acid valine and functions as a myokine secreted from skeletal muscle cells to alter the functions of other tissues. For example, BAIBA increases the expression of brown-adipocyte-specific genes in white adipocytes and β-oxidation genes in hepatocytes through a peroxisome proliferator-activated receptor alpha (PPARα)-dependent mechanism [42]. Although we cannot determine a causal relationship underlying the observed correlations, we speculate that circulating BAIBA may affect leukocyte gene expression. Future studies are needed to determine the direct action of BAIBA on the up-regulation of anti-viral genes in peripheral leukocytes. Choline, another notable metabolite in the priests, serves as a component of blood and membrane phospholipids and structural lipoproteins and is a precursor of the neurotransmitter acetylcholine. Choline and its oxidation product betaine, which serves as a methyl group donor, are important sources of one-carbon units. In this study, the priests showed significantly higher levels of plasma choline (8.1E − 3 vs.
6.7E − 3: 1.38-fold, P = 0.0052, r = − 0.608, Table 3) than the non-priest controls, and their levels of betaine were also higher (2.7E − 2 vs. 2.3E − 2: 1.17-fold, P = 0.052, Additional file 7). Choline promotes homocysteine remethylation to methionine and affects the concentration of the universal methyl donor S-adenosylmethionine (SAM). Altered concentrations of SAM may influence DNA methylation at cytosine bases, thereby influencing gene transcription, genomic imprinting, and genomic stability [43]. Plasma choline is delivered across the blood-brain barrier by a specific transporter [44]. Therefore, the higher levels of plasma choline in this study likely increased brain choline levels in the priests compared with those in the non-priest controls. Experimental studies suggest that increased levels of brain choline and phosphatidylcholine improve memory performance and cognitive function as well as enhance neuroprotection and neurorepair activities [45]. Additionally, plasma choline levels are inversely associated with high anxiety levels [46]. Altogether, we hypothesize that the higher levels of plasma choline in the priests may have beneficial effects on neuropsychological status, such as improved cognitive function and reduced anxiety. Troen et al. showed that cognitive dysfunction in folate-deficient rats was related to depletion of phosphatidylcholine in the brain and, notably, that dietary methionine could prevent both the cognitive impairment and the low phosphatidylcholine levels [47]. Therefore, the higher levels of plasma methionine in the priests in this study could provide further beneficial contributions to cognitive function.

A highlight of this study was the significant correlation observed between empathy and the molecular markers identified in the priests (Tables 6 and 7). The Buddhist priests showed higher levels of empathy than the non-priest controls (Table 4). Previous studies have shown that religious people tend to perceive themselves as pro-social and report higher levels of altruism or charitable deeds compared with non-religious people [5]. Individual Buddhists lead their daily lives according to Buddhist values and virtues. Compassion (or loving-kindness) is a fundamental Buddhist concept of interpersonal relationships and is defined as the deep wish to relieve others' suffering, coupled with the motivation to alleviate such suffering [17]. Empathy is a core element of both compassion (the wish to relieve others' suffering) and loving-kindness (the wish of happiness for others) and is an affective response that arises from the comprehension of another's emotional state [18]. Although compassion is interpersonal, it has also been empirically linked to personal benefits, including increased positive emotions [48], improved physical health [49], and a reduced immunological stress response [50]. Emerging literature suggests that positive emotional styles (higher activation/arousal of positive emotions) are associated with improved protective immune responses [51, 52]. Notably, this study also showed that the positive emotional aspect of empathy is significantly correlated with the five anti-viral response transcripts (IFIT3, HERC5, RSAD2, IFI44L, and MX1), as shown in Table 6.
Meanwhile, C17orf97, a down-regulated gene in the priests, and three representative metabolites, i.e., BAIBA, SDMA, and ADMA, were significantly correlated with the positive empathy aspects "Sharing positive emotions with others" and "Good feeling for others' positive emotions" (Tables 6 and 7). The current results are the first to demonstrate a link between the molecular signatures of Buddhist priests and the psychological aspects of empathy.

Finally, a large body of research indicates that lifestyle interventions such as physical exercise and a healthy diet are effective in inducing phenotypic alterations in circulating cells and influencing immune function as a non-clinical human model of the body's response to physiological stresses [53]. Additionally, both physical exercise and a healthy diet influence human gene expression profiles in leukocytes [54, 55]. In the current pilot study, no significant differences in dietary habits or physical activity were observed between the priests and the non-priest controls (Additional files 2, 3 and 4). Despite the small sample sizes (n = 10 in each group), the overall HPLP-II scores in both groups were within the ranges obtained for 512 male residents (mean age 45.5 ± 13.1 years) of a mixed rural-urban representative area in Japan, whose lifestyles (n = 1176 community residents in total: 512 males and 664 females) were recently investigated using the same HPLP-II questionnaire used in this study [24]. The identified transcript and metabolite profiles in the priest participants may potentially serve as markers and mediators of daily spiritual/religious practices.

However, this study had several limitations. First, the participant groups were too small to allow generalization of the observed results. Second, the cross-sectional design made it impossible to determine cause-and-effect relationships among the identified markers (transcriptomics and metabolomics), spiritual/religious practices, and the empathy aspects. Third, the psychological aspects of empathy were based on subjective self-reports, leading to a potential recall bias that could have influenced the accuracy of the reported data. Fourth, no direct biological measures of immune cells were available in this study (e.g., NK cell or dendritic cell activities or effector responses to an immunologic challenge). Finally, the priests selected for this study were from only one religious affiliation, Shingon Buddhism; therefore, the basal transcriptome/metabolome profiles of priests from other religious traditions must be compared carefully before these results can be generalized. Despite the relatively small sample size, we were able to detect inter-individual consistency in the transcriptional/metabolic alterations in the Buddhist priests. In the future, it will be important to determine whether Buddhists acquire the representative transcriptome and metabolome profiles at baseline or during the course of the training needed to become a Buddhist priest [19]. Fancourt et al. recently demonstrated that group-drumming interventions produce psychological benefits associated with a shift toward an anti-inflammatory immune profile [56]. Therefore, we need to examine the correlations between the identified molecular markers and other characteristic factors of daily Buddhist practices (e.g., the number of Buddhist prayers one offers each day or the number of hits of a Buddhist drum each day).
The up-regulated set of anti-viral responsive genes may indicate that the trained priests have gained unique immunological traits to protect their bodies from environmental parasitic infections based on the "behavioral-immune response hypothesis." In this pilot study, we integrated in vivo phenotyping with transcriptomics, metabolomics, and psychological analyses, thereby identifying distinguishing biological characteristics and their association with empathy in Buddhist priests. The identification of the distinct transcripts and metabolites in the priests may be a first step toward understanding the molecular context of systemic biological alterations in the unique signature of spirituality/religiosity in Buddhists.

Abbreviations
AABA: 2-Aminoadipic acid; ADMA: Asymmetric dimethylarginine; ATE1: Arginyltransferase 1; BAIBA: 3-Aminoisobutyric acid; BDHQ: Brief-type self-administered diet history questionnaire; CRP: C-reactive protein; DEFA4: Defensin alpha 4; FOLR3: Folate receptor 3; HBG1: Hemoglobin γA; HERC5: HECT and RLD domain containing E3 ubiquitin protein ligase 5; HPLP-II: Health-Promoting Lifestyle Profile-II; IFI44L: Interferon induced protein 44 like; IFIT: Interferon induced protein with tetratricopeptide repeats; IFN: Interferon; IL: Interleukin; KRTAP10-12: Keratin-associated protein; LIAT1: Ligand of arginyltransferase 1; MX1: MX dynamin-like GTPase 1; NK: Natural killer; PPARα: Peroxisome proliferator-activated receptor alpha; RSAD2: Radical S-adenosyl methionine domain containing 2; SDMA: Symmetric dimethylarginine; TNF: Tumor necrosis factor

References
Koenig HG. Religion, spirituality, and health: The research and clinical implications. ISRN Psychiatry. 2012;2012:33. Koenig HG, Zaben FA, Khalifa DA. Religion, spirituality and mental health in the West and the Middle East. Asian J Psychiatr. 2012;5(2):180–2. Loewenthal KM, MacLeod AK, Goldblatt V, Lubitsh G, Valentine JD. Comfort and joy? Religion, cognition, and mood in Protestants and Jews under stress. Cognit Emot. 2000;14(3):355–74. McIntosh DN, Poulin MJ, Silver RC, Holman EA. The distinct roles of spirituality and religiosity in physical and mental health after collective trauma: a national longitudinal study of responses to the 9/11 attacks. J Behav Med. 2011;34(6):497–507. Batson CD. Altruism in Humans. New York: Oxford University Press; 2011. ISBN 978-0-19-534106-5. Hill PC, Pargament KI. Advances in the conceptualization and measurement of religion and spirituality. Implications for physical and mental health research. Am Psychol. 2003;58(1):64–74. Steinmo S, Hagger-Johnson G, Shahab L. Bidirectional association between mental health and physical activity in older adults: Whitehall II prospective cohort study. Prev Med. 2014;66:74–9. Miller L, Bansal R, Wickramaratne P, Hao X, Tenke CE, Weissman MM, Peterson BS. Neuroanatomical correlates of religiosity and spirituality: a study in adults at high and low familial risk for depression. JAMA Psychiatry. 2014;71(2):128–35. Slavich GM, Cole SW. The emerging field of human social genomics. Clin Psychol Sci. 2013;1(3):331–48. Cole SW. Human social genomics. PLoS Genet. 2014;10(8):e1004601. Irwin MR, Cole SW. Reciprocal regulation of the neural and innate immune systems. Nat Rev Immunol. 2011;11(9):625–32. Glaser R, Kiecolt-Glaser JK. Stress-induced immune dysfunction: implications for health. Nat Rev Immunol. 2005;5(3):243–51. Fredrickson BL, Grewen KM, Coffey KA, Algoe SB, Firestine AM, Arevalo JMG, et al. A functional genomic perspective on human well-being. Proc Natl Acad Sci U S A.
2013;110(33):13684–9. Fredrickson BL, Grewen KM, Algoe SB, Firestine AM, Arevalo JMG, Ma J, et al. Psychological well-being and the human conserved transcriptional response to adversity. PLoS One. 2015;10(3):e0121839. Takimoto-Ohnishi E, Ohnishi J, Murakami K. Mind–body medicine: effect of the mind on gene expression. Person Med Univ. 2012;1(1):2–6. Wang Z, Koenig HG, Zhang Y, Ma W, Huang Y. Religious involvement and mental disorders in mainland China. PLoS One. 2015;10(6):e0128800. Wallace BA. Intersubjectivity in Indo-Tibetan Buddhism. J Consci Stud. 2001;8(5–7):209–30. de Waal FBM. Putting the altruism back into altruism: the evolution of empathy. Annu Rev Psychol. 2008;59:279–300. Sharf RH. Thinking through Shingon ritual. J Internat Associ Buddhist Stud. 2003;26(1):51–96. Yamaoka M, Maeda N, Nakamura S, Kashine S, Nakagawa Y, Hiuge-Shimizu A, et al. A pilot investigation of visceral fat adiposity and gene expression profile in peripheral blood cells. PLoS One. 2012;7(10):e47377. Jeffery IB, Higgins DG, Culhane AC. Comparison and evaluation of methods for generating differentially expressed gene lists from microarray data. BMC Bioinformatics. 2006;7:359. Yamashita A, Zhao Y, Matsuura Y, Yamasaki K, Moriguchi-Goto S, Sugita C, et al. Increased metabolite levels of glycolysis and pentose phosphate pathway in rabbit atherosclerotic arteries and hypoxic macrophage. PLoS One. 2014;9(1):e86426. Walker SN, Sechrist KR, Pender NJ. The health-promoting lifestyle profile II. Omaha: University of Nebraska Medical Center, College of Nursing; 1995. http://www.unmc.edu/nursing/faculty/health-promoting-lifestyle-profile-II.html. Zhang S-C, Wei C-N, Harada K, Ueda K, Fukumoto K, Matsuo H, et al. Relationship between lifestyle and lifestyle-related factors in a rural-urban population of Japan. Environ Health Prev Med. 2013;18(4):267–74. Kobayashi S, Honda S, Murakami K, Sasaki S, Okubo H, Hirota N, Notsu A, Fukui M, Date C. Both comprehensive and brief self-administered diet history questionnaires satisfactorily rank nutrient intakes in Japanese adults. J Epidemiol. 2012;22(2):151–9. Davis MH. Measuring individual differences in empathy: evidence for a multidimensional approach. J Pers Soc Psychol. 1983;44(1):113–26. Sawada M, Hayama D. Dispositional vengeance and anger on schadenfreude. Psychol Rep. 2012;111(1):322–34. Fritz CO, Morris PE, Richler JJ. Effect size estimates: current use, calculations, and interpretation. J Exp Psychol Gen. 2012;141(1):2–18. Ali NS, Ali OSJ. Stress perception, lifestyle behaviors, and emotional intelligence in undergraduate nursing students. Nurs Edu Pract. 2016;6(10):16–22. Brower CS, Rosen CE, Jones RH, Wadas BC, Piatkov KI, Varshavsky A. Liat1, an arginyltransferase-binding protein whose evolution among primates involved changes in the numbers of its 10-residue repeats. Proc Natl Acad Sci U S A. 2014;111(46):E4936–45. Sadler AJ, Williams BR. Interferon-inducible antiviral effectors. Nat Rev Immunol. 2008;8(7):559–68. Seo JY, Yaneva R, Cresswell P. Viperin: a multifunctional, interferon-inducible protein that regulates virus replication. Cell Host Microbe. 2011;10(6):534–9. Diamond MS, Farzan M. The broad-spectrum antiviral functions of IFIT and IFITM proteins. Nat Rev Immunol. 2013;13(1):46–57. Ioannidis I, McNally B, Willette M, Peeples ME, Chaussabel D, Durbin JE, et al. Plasticity and virus specificity of the airway epithelial cell immune response during respiratory virus infection. J Virol. 2012;86(10):5422–36.
Zhai Y, Franco LM, Atmar RL, Quarles JM, Arden N, Bucasas KL, et al. Host transcriptional response to influenza and other acute respiratory viral infections––a prospective cohort study. PLoS Pathog. 2015;11(6):e1004869. Takaoka A, Taniguchi T. New aspects of IFN-alpha/beta signalling in immunity, oncogenesis and bone metabolism. Cancer Sci. 2003;94(5):405–11. Netea MG, Latz E, Mills KH, O'Neill LA. Immune memory: a paradigm shift in understanding host defense. Nat Immunol. 2015;16(7):675–9. Karatsoreos IN, McEwen BS. Resilience and vulnerability: a neurobiological perspective. F1000Prime Rep. 2013;5:13. Schmidt M, Finley D. Regulation of proteasome activity in health and disease. Biochim Biophys Acta. 2014;1843(1):13–25. Martinez-Lopez N, Athonvarangkul D, Singh R. Autophagy and aging. Adv Exp Med Biol. 2015;847:73–87. Levine B, Mizushima N, Virgin HW. Autophagy in immunity and inflammation. Nature. 2011;469(7330):323–35. Roberts LD, Boström P, O'Sullivan JF, Schinzel RT, Lewis GD, Dejam A, et al. β-Aminoisobutyric acid induces browning of white fat and hepatic β-oxidation and is inversely correlated with cardiometabolic risk factors. Cell Metab. 2014;19(1):96–108. Ueland PM. Choline and betaine in health and disease. J Inherit Metab Dis. 2011;34(1):3–15. Geldenhuys WJ, Allen DD. The blood-brain barrier choline transporter. Cent Nerv Syst Agents Med Chem. 2012;12(2):95–9. Alvarez-Sabín J, Román GC. The role of citicoline in neuroprotection and neurorepair in ischemic stroke. Brain Sci. 2013;3(3):1395–414. Bjelland I, Tell GS, Vollset SE, Konstantinova S, Ueland PM. Choline in anxiety and depression: the Hordaland health study. Am J Clin Nutr. 2009;90(4):1056–60. Troen AM, Chao WH, Crivello NA, D'Anci KE, Shukitt-Hale B, Smith DE, Selhub J, Rosenberg IH. Cognitive impairment in folate-deficient rats corresponds to depleted brain phosphatidylcholine and is prevented by dietary methionine without lowering plasma homocysteine. J Nutr. 2008;138(12):2502–9. Dunn EW, Aknin LB, Norton MI. Spending money on others promotes happiness. Science. 2008;319(5870):1687–8. Kok BE, Coffey KA, Cohn MA, Catalino LI, Vacharkulksemsuk T, Algoe SB, et al. How positive emotions build physical health: perceived positive social connections account for the upward spiral between positive emotions and vagal tone. Psychol Sci. 2013;24(7):1123–32. Pace TW, Negi LT, Adame DD, Cole SP, Sivilli TI, Brown TD, et al. Effect of compassion meditation on neuroendocrine, innate immune and behavioral responses to psychosocial stress. Psychoneuroendocrinology. 2009;34(1):87–98. Marsland AL, Cohen S, Rabin BS, Manuck SB. Trait positive affect and antibody response to hepatitis B vaccination. Brain Behav Immun. 2006;20(3):261–9. Dhabhar FS. Enhancing versus suppressive effects of stress on immune function: implications for immunoprotection versus immunopathology. Allergy Asthma Clin Immunol. 2008;4(1):2–11. Gleeson M, Bishop NC, Stensel DJ, Lindley MR, Mastana SS, Nimmo MA. The anti-inflammatory effects of exercise: mechanisms and implications for the prevention and treatment of disease. Nat Rev Immunol. 2011;11(9):607–15. Olsen KS, Skeie G, Lund E. Whole-blood gene expression profiles in large-scale epidemiological studies: what do they tell? Curr Nutr Rep. 2015;4(4):377–86. Ntanasis-Stathopoulos J, Tzanninis HG, Philippou A, Koutsilieris M. Epigenetic regulation on gene expression induced by physical exercise. J Musculoskelet Neuronal Interact. 2013;13(2):133–46. 
Fancourt D, Perkins R, Ascenso S, Carvalho LA, Steptoe A, Williamon A. Effects of group drumming interventions on anxiety, depression, social resilience and inflammatory immune response among mental health service users. PLoS One. 2016;11(3):e0151136.

The authors are eternally grateful to Dr. Honnen Nakamura for his continuous encouragement during this research. We acknowledge Drs. Yukio Ichitani and Eisho Yoshikawa for critical reading of this manuscript and offering constructive suggestions. This study was supported by the Koyasan University Fujikin Shuhei Ogawa Memorial Fund for the Promotion of Projects and Research and by the Mind and Gene Institute of the Foundation for Advancement of International Science. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Microarray raw data have been deposited in the National Center for Biotechnology Information Gene Expression Omnibus (accession number GSE77676). All relevant data are available in Additional files 1, 2, 3, 5, and 6.

Author affiliations: Department of Food and Nutrition, Tokyo Kasei University, Itabashi, Tokyo, Japan (Junji Ohnishi); Foundation for Advancement of International Science, Kasuga, Tsukuba, Japan (Shigeko Sakamoto, Miyo Hori, Eriko Takimoto-Ohnishi & Kazuo Murakami); Division of Health Sciences, Tsukuba University of Technology, Kasuga, Tsukuba, Japan (Satoshi Ayuzawa); Department of Neurosurgery, Center for Integrative Medicine, Kasuga, Tsukuba, Japan (Tomoko Sasaoka); DNA Chip Research Inc., Minato, Tokyo, Japan (Seiji Nakamura); National Center for Child Health and Development, Tokyo, Japan (Eriko Takimoto-Ohnishi); Graduate School of Medicine, Kyoto Prefectural University of Medicine, Kyoto, Japan (Masakazu Tanatsugu); The Institute of Esoteric Culture, Koyasan University, Ina, Wakayama, Japan (Kazuo Murakami).

JO, SS, MH, SN, TS, ET-O, MT, SA, and KM conceived and designed the experiments. JO, SS, MH, TS, and SA performed the experiments. JO, SN, SS, and MH analyzed the data. SN contributed reagents/materials/analysis tools for the microarray analysis. JO, SN, and SA wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Junji Ohnishi.

All participants were given a complete explanation of the study, which was approved on September 11, 2013, by the ethics committee of Tsukuba University of Technology (approval number TUT 20130911), and provided written informed consent. All participants were paid for their participation. During the study, the participants were asked a series of questions about their age, physical health, and medication use. Data from all participants were analyzed anonymously.

Additional file 1: Physiological characteristics of all participants. Values are expressed as median and interquartile range (25–75th percentile). A P value < 0.05 is statistically significant by Mann–Whitney U test. Cohen's guidelines for the effect sizes (r) for the Mann–Whitney U test are that a large effect is 0.5, a medium effect is 0.3, and a small effect is 0.1 (Fritz et al. 2012).
BMI, body mass index; T-protein, total protein; AST, aspartate transaminase; ALT, alanine transaminase; γ-GT, gamma-glutamyl transferase; CRP, C-reactive protein; HDL-Chol, high-density lipoprotein cholesterol; LDL-Chol, low-density lipoprotein cholesterol; BUN, blood urea nitrogen; HbA1c, hemoglobin A1c; RBC, red blood cell count; WBC, white blood cell count; MCV, mean corpuscular volume; MCH, mean corpuscular hemoglobin; MCH-C, mean corpuscular hemoglobin concentration; PLT-C, platelet count. (DOCX 533 kb)

Additional file 2: Comparison of health-promoting behaviors between the priests and the controls estimated by the health-promoting lifestyle profile-II (HPLP-II). Values are expressed as median and interquartile range (25–75th percentile). A P value < 0.05 is statistically significant by Mann–Whitney U test. Cohen's guidelines for the effect sizes (r) for the Mann–Whitney U test are that a large effect is 0.5, a medium effect is 0.3, and a small effect is 0.1 [28]. (DOCX 27 kb)

Additional file 3: Comparison of mean daily energy intake and crude and energy-adjusted nutrient intakes estimated by BDHQ between the priests and the controls. Representative mean daily crude nutrient intakes were estimated by BDHQ. Values are expressed as median and interquartile range (25–75th percentile). A P value < 0.05 is statistically significant by Mann–Whitney U test. Cohen's guidelines for the effect sizes (r) for the Mann–Whitney U test are that a large effect is 0.5, a medium effect is 0.3, and a small effect is 0.1 [28]. (DOCX 535 kb)

Additional file 4: Comparisons of health-promoting lifestyle profiles (HPLP-II) between the priests and the controls. The HPLP-II is a 52-item questionnaire composed of two main categories and six sub-dimension scales. The health-promoting behaviors category includes the health responsibility, physical activity, and nutrition subscales. The psychosocial well-being category includes the spiritual growth, interpersonal relationship, and stress management subscales. Dots represent subjects (●, priests n = 10; □, controls n = 10), and lines represent medians. Differences in each sub-dimension of the HPLP-II were compared using the Mann–Whitney U test; P values and effect sizes r are indicated. (A) health responsibility (P = 0.864, r = −0.042), (B) physical activity (P = 0.837, r = −0.051), (C) nutrition (P = 0.423, r = −0.187), (D) spiritual growth (P = 0.678, r = −0.103), (E) interpersonal relationship (P = 0.615, r = −0.119), (F) stress management (P = 0.593, r = −0.127). Statistical significance was defined as P < 0.05. Cohen's guidelines for the effect sizes (r) for the Mann–Whitney U test are that a large effect is 0.5, a medium effect is 0.3, and a small effect is 0.1 [28]. (TIFF 260 kb)

Additional file 5: The empathetic process scale questionnaire. (DOCX 526 kb)

Additional file 6: The list of distinct transcripts identified in the priests. (XLSX 22 kb)

Additional file 7: The metabolome analysis. (XLSX 136 kb)

Additional file 8: The statistical analysis excluding the data of the three priest participants with known physical ailments (priests, n = 7 vs. non-priest controls, n = 10). Values are expressed as median and interquartile range (25–75th percentile). A P value < 0.05 is statistically significant by Mann–Whitney U test. Cohen's guidelines for the effect sizes (r) for the Mann–Whitney U test are that a large effect is 0.5, a medium effect is 0.3, and a small effect is 0.1 [28]. (XLSX 41 kb)

Ohnishi, J., Ayuzawa, S., Nakamura, S. et al. Distinct transcriptional and metabolic profiles associated with empathy in Buddhist priests: a pilot study.
Hum Genomics 11, 21 (2017) doi:10.1186/s40246-017-0117-3
iOmicsPASS: network-based integration of multiomics data for predictive subnetwork discovery
Hiromi W. L. Koh, Damian Fermin, Christine Vogel, Kwok Pui Choi, Rob M. Ewing & Hyungwon Choi
npj Systems Biology and Applications volume 5, Article number: 22 (2019)

Computational tools for multiomics data integration have usually been designed for unsupervised detection of multiomics features explaining large phenotypic variations. To achieve this, some approaches extract latent signals in heterogeneous data sets from a joint statistical error model, while others use biological networks to propagate differential expression signals and find consensus signatures. However, few approaches directly consider molecular interactions as a data feature, the essential linker between different omics data sets. The increasing availability of genome-scale interactome data connecting different molecular levels motivates a new class of methods to extract interactive signals from multiomics data. Here we developed iOmicsPASS, a tool to search for predictive subnetworks consisting of molecular interactions within and between related omics data types in a supervised analysis setting. Based on user-provided network data and relevant omics data sets, iOmicsPASS computes a score for each molecular interaction and applies a modified nearest shrunken centroid algorithm to the scores to select densely connected subnetworks that can accurately predict each phenotypic group. iOmicsPASS detects a sparse set of predictive molecular interactions without loss of prediction accuracy compared to alternative methods, and the selected network signature immediately provides mechanistic interpretation of the multiomics profile representing each sample group. Extensive simulation studies demonstrate a clear benefit of interaction-level modeling.
iOmicsPASS analysis of TCGA/CPTAC breast cancer data also highlights a new transcriptional regulatory network underlying the basal-like subtype as positive protein markers, a result not seen through analysis of individual omics data.

Today's systems biology research frequently employs two or more omics platforms such as massively parallel sequencing and mass spectrometry to identify systemic patterns in biological signals from different types of molecules. It is a complex task to synthesize findings from multiple sets of heterogeneous data and to tease out easily interpretable feature sets explaining phenotypic variation, often from a limited number of observations. Therefore, efficient computational frameworks that can integrate data sets with proper biological priors are of paramount importance. Numerous data analysis software packages are already available for multiomics data integration in different contexts.1 Sample clustering via multiomics data integration is a popular application, as unsupervised analysis creates abundant opportunities to extract different types of signals such as latent factors (LFs), without being confined to prespecified sample groups that may or may not be a major source of variation in the data. For example, iCluster2 and its recent extension iClusterPlus3 are successful model-based solutions that extract shared LFs with varying contributions from individual omics data sets and cluster subjects in the space of identified factors. Patient-specific data fusion also offers a highly flexible approach to model each individual's multiomics profile as an outcome of subject-specific feature sets, while providing stratification of subjects into clusters with automatic selection of an optimal number of clusters, all achieved by Bayesian nonparametric inference.4 A more recent approach called Multi-Omics Factor Analysis (MOFA) provides a computationally efficient group factor analysis method equipped with mean field approximation-based Bayesian inference to account for various types of quantitation (i.e. intensities, counts, and binary status), with the ability to tease out factors that are shared across different omics data and those that are unique to each data source.5 While model-based approaches have proven to be efficient for sample-level analysis, not all prioritized data features, such as the loading scores of individual molecules in LFs, immediately align well with known mechanistic links between molecules within each omics type (e.g., physical binding of two proteins) or between different omics types (e.g., regulation between a transcription factor (TF) protein and the mRNAs of its target genes). Network-based approaches address this lack of biological interpretability by incorporating experimentally acquired genome-scale biological network data into the analysis. Instead of completely relying on mathematical deconvolutions to identify latent structures, these approaches borrow prior information from experimentally tested or predicted interactions to overlay heterogeneous multiomics data and overcome the inherent noise in data sets of a small sample size.
The latter class of methods is best exemplified by PARADIGM,6 an unsupervised analysis method to infer patient-specific pathway activation and deactivation status by formulating the underlying probability model of multiomics data as factor graphs.6,7 LemonTree finds coexpressed gene clusters and reconstructs regulatory programs involving other upstream omics data as network modules.7 Other methods have also taken system-level data summarization approaches such as network propagation algorithms to merge signals from mutations and gene expression data, detecting gene signatures that would otherwise be missed in association analysis for disease phenotypes if the individual data sets had been analyzed in isolation.8 Despite these developments, few software implementations offer a data integration approach that combines multiomics measurements over networks in a way that (i) prioritized molecular features immediately reveal functional relations between themselves and (ii) the molecular levels of the features are directly relevant to the given type of networks. For example, many network-based integrative analyses have merged mutation data and/or transcriptomic data over protein–protein interaction (PPI) data.9 However, given the relatively modest correlation among DNA copy number, mRNA, and protein expression noted over a number of studies,10,11 it is more desirable to integrate protein expression data of two physically binding proteins rather than at the DNA or mRNA level data. Likewise, if a TF is known to regulate expression of a target gene, then the relevant data types are protein abundance of the TF and mRNA expression of the target gene. To fill this gap, we developed a network-based method iOmicsPASS to integrate multiomics profiles over genome-scale biological networks and identify sparse subnetworks predictive of prespecified phenotypes. iOmicsPASS performs two main tasks: (i) integrate quantitative multiomics data consisting of DNA copy number (optional), transcriptomics and proteomics data by computing interaction scores for a given network and (ii) discover a set of molecular interactions whose joint expression patterns predict phenotypic groups the best, i.e., predictive subnetworks. We first show that iOmicsPASS accurately identifies key interactions underlying phenotype-predictive signals using simulation studies, with high sensitivity under varying network coverage. In particular, we show that our adaptive centroid calculation and group-specific shrinkage operator yield locally connected predictive subnetworks with improved predictive performance over other modes of predictive feature selection, especially against the obvious alternative of applying machine learning algorithms on the concatenated data.12 We next illustrate the utility of iOmicsPASS through the analysis of The Cancer Genome Atlas (TCGA) breast cancer (BRCA) data, where we integrated multiple omics profiles for mRNA expression and protein abundance, with and without the normalization of the mRNA data by the DNA copy number variation. Not only does iOmicsPASS recapitulate a network of hormone receptors and transcription regulators defining BRCA subtypes, it also expands the subtype-specific subnetworks to additional markers that have literature evidence of interactive or regulatory mechanisms relevant for subtype characterization. The scoring adjustment we introduced especially highlighted (TF) regulatory networks positively regulated in the basal-like subtypes, where few positive markers have been delineated. 
Overview of iOmicsPASS workflow iOmicsPASS takes quantitative multiomics data and biological networks as input, and it calculates interaction scores for all molecular interactions in the network. The interaction scores are subsequently used for predictive subnetwork discovery. Some biological networks are nondirectional in nature (e.g., physical or genetic interactions), while others are inherently directional (e.g., TF regulatory networks). iOmicsPASS treats all network data as undirected graphs in the derivation of interaction scores and avoids modeling the full conditional probability structure for the directional networks. This design was chosen considering the fact that most input data sets used for analysis are cross-sectional expression data sets and thus it is more sensible to focus on coexpression of two interacting molecules, rather than directly incorporating the fact that one molecule 'regulates'' the expression of the other. This choice renders the algorithmic design generic to the integration of various types of multiomics data (directional and nondirectional), including pairs of template and product molecules (DNA and mRNA), physically binding partners (proteins), or transcription/translation regulatory element and its target. Our current implementation focuses on the integration of mRNA and protein data over TF regulatory networks and PPI networks (with or without DNA copy number variation data.) Figure 1a shows the three analysis modules in the iOmicsPASS workflow: (1) transformation of quantitative multiomics data into scores for biological interactions; (2) selection of predictive subnetworks from the composite network by a modified shrunken gene-centroid algorithm;13 and (3) reporting of biological pathways enriched in the subnetwork selected for each phenotypic group. a iOmicsPASS workflow. iOmicsPASS takes multiomics data, biological network data, and sample meta information as input. The omics data sets are integrated via interaction scores for all interactions in the network. Subnetwork discovery module discovers the subnetwork signatures distinguishing phenotypic groups, and pathway enrichment module reports associated biological processes. The software also produces a set of text files containing the details of the selected subnetworks and the materials for visualization of networks in the Cytoscape software. b Each omics data is first standardized into Z-scores and converted to interaction scores over the network. Two TFs (gene 1 and gene 3) and their common target gene (gene 2) are shown as an example. Interaction scores are computed for the PPI between protein 1 and protein 3 and the transcription factor regulation between the two TF proteins and mRNA molecule of their target gene. c The resulting interaction scores are used as an input to select the predictive edges for phenotypic groups using the modified nearest shrunken centroid algorithm The key first step in our workflow is transforming the quantitative data of individual molecules into interaction scores (Fig. 1b). We derive a score for each edge connecting two interacting molecules from their respective Z-scores, assuming that simultaneously high or low expression indicates high or low chance of the interaction, respectively (see "Methods"). For example, protein abundance of a TF gene and mRNA expression of its target gene can be integrated to infer the activation potential of a TF regulatory network. 
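This transformation can be illustrated with a small, self-contained sketch in R (mirroring the style of the supplemental R codes distributed with the tool). All matrices, gene names, and edges below are hypothetical and stand in for real data; the exact scoring formulas, including the optional DNA copy-number normalization, are given in "Methods".

```r
# Minimal sketch of the interaction-score transformation.
# Assumed layout: rows = molecules, columns = samples, values = log2 abundances.
set.seed(1)
prot <- matrix(rnorm(3 * 6), nrow = 3,
               dimnames = list(c("TF1", "TF3", "P9"), paste0("s", 1:6)))
mrna <- matrix(rnorm(2 * 6), nrow = 2,
               dimnames = list(c("gene2", "gene5"), paste0("s", 1:6)))

# Row-wise Z-scores (each molecule standardized across samples).
zscore <- function(m) t(scale(t(m)))
z_prot <- zscore(prot)
z_mrna <- zscore(mrna)

# Hypothetical network: two TF->target edges and one PPI edge.
tf_edges  <- data.frame(tf = c("TF1", "TF3"), target = c("gene2", "gene2"))
ppi_edges <- data.frame(a = "TF1", b = "TF3")

# Edge score = sum of the two interacting molecules' Z-scores in each sample.
tf_scores  <- z_prot[tf_edges$tf, , drop = FALSE] +
              z_mrna[tf_edges$target, , drop = FALSE]
ppi_scores <- z_prot[ppi_edges$a, , drop = FALSE] +
              z_prot[ppi_edges$b, , drop = FALSE]

edge_scores <- rbind(tf_scores, ppi_scores)      # edges x samples matrix
rownames(edge_scores) <- c("TF1->gene2", "TF3->gene2", "TF1--TF3")
```

Each row of such an edge-by-sample matrix is then treated as a single feature in the downstream selection step.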
If both the protein abundance of the TF gene and the mRNA expression of the target genes have high Z-scores in a sample (leading to high interaction scores), then we assume that elevated abundance of the TF has contributed to the upregulation of mRNAs of the target genes in that sample. Likewise, if two physically binding proteins have simultaneously high Z-scores, it indicates that there is an increased chance of physical binding between the two, although additional determinants of actual binding such as post-translational modifications will have to adjudicate this conclusion. Next, the resulting interaction scores are used as input to the subnetwork discovery module. The module embodies a modified version of the nearest shrunken centroid (NSC) classification algorithm, a simple yet powerful method originally developed for gene expression microarray data.13 The NSC algorithm treats individual features as independent Gaussian random variables and selects a sparse set of predictive features (Fig. 1c). The method is our preferred choice for further adaptation to build a network-oriented feature selection method since, unlike regression-based methods, it does not require the choice of a reference group and is thus naturally amenable to multiclass classification problems. In iOmicsPASS, we introduced a group-specific shrinkage operator to render the network signature selection unbiased for networks of varying sizes across sample groups, and adjusted the calculation of centroids in a way that favors densely connected networks over scattered networks as predictive signatures. Based on the interaction scores, our modified NSC algorithm searches for sparse and well-connected subnetworks that predict phenotypes with the smallest misclassification error (estimated by cross validation). See "Methods" for details. The analysis pipeline in iOmicsPASS reports several key results in separate text files: (i) a file containing interaction scores, which can be used for further analysis such as principal component analysis (PCA); (ii) predictive subnetworks for phenotypic groups with group-specific centroids; (iii) data files to visualize the predictive subnetworks in the Cytoscape environment;14 and (iv) a table of pathways enriched in the predictive subnetworks. We illustrate these functionalities below. Simulations: iOmicsPASS recovers dense predictive networks We first conducted comprehensive simulation studies to evaluate the prediction performance of iOmicsPASS in comparison with other approaches: (i) the NSC algorithm applied to concatenated multiomics data; (ii) a Support Vector Machine (SVM), a widely used kernel learning algorithm in machine learning; and (iii) iOmicsPASS without the modified centroid shrinkage that favors densely connected predictive subnetworks. Here our primary goal is to show that, when the underlying data generation scheme is completely or partially captured by the given network, iOmicsPASS's feature selection method not only achieves comparable prediction performance to the kernel learning methods but also produces a sparse set of easily interpretable biological interactions, a property not usually offered by prediction tools of high complexity. Supplementary Fig. 1 shows the overall simulation design and parameters used to generate the quantitative data sets.
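For reference, comparison approach (i), the original (unmodified) NSC algorithm, is available in the pamr R package; a hedged sketch of how it could be applied to a concatenated multiomics matrix is shown below. The matrices and group labels are placeholders rather than the simulated data described next, and this does not reproduce the modified, network-aware version used inside iOmicsPASS.

```r
# Baseline (i): original nearest shrunken centroid on concatenated omics data.
# Assumes 'prot' and 'mrna' are molecules x samples matrices with the same
# sample order, and 'groups' is a factor of phenotypic labels per sample.
library(pamr)

set.seed(2)
prot   <- matrix(rnorm(50 * 40), nrow = 50)
mrna   <- matrix(rnorm(200 * 40), nrow = 200)
groups <- factor(rep(c("A", "B"), each = 20))

concat <- rbind(prot, mrna)                    # simple feature concatenation
rownames(concat) <- paste0("f", seq_len(nrow(concat)))

dat <- list(x = concat, y = groups, geneid = rownames(concat))
fit <- pamr.train(dat)                         # fit shrunken centroids
cv  <- pamr.cv(fit, dat)                       # cross-validated error by threshold
cv$error                                       # misclassification rate per threshold
```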
Protein expression data for 1000 TF genes and mRNA expression data for 5000 target genes were simulated 100 times, in which true TF activation signals were planted for 10% of the TF proteins and their direct mRNA gene targets with probabilities proportional to the number of TFs that target them. The signal-to-noise ratio (setup A, B, and C) and the assay sensitivity parameter (PAS) were set to reflect the properties of the TCGA BRCA data set, and the TF network was also derived from the network data assembled for human data (see "Methods") in order to emulate the topology of TF regulatory networks in human cells. Here, 48,682 possible TF-target regulatory interactions were constructed between the molecules, and of those, 6742 (13.8%) were regarded as edges with true signals. We first compared iOmicsPASS with the original NSC algorithm applied to the concatenated dual-omics data sets. Figure 2 shows that the receiver operating characteristic curves of iOmicsPASS (red lines) are consistently superior to those of the NSC algorithm (black lines) with concatenated data across the combinations of signal-to-noise ratio and assay sensitivity. When assay sensitivity of proteomics was set at 0.7, the AUC improved more than 20% in setting A (AUC of the original NSC = 0.701; AUC of iOmicsPASS = 0.847), where the planted signal-to-noise ratio was the strongest. The comparative performance remained similar in settings B and C: iOmicsPASS with AUC values of 0.821 and 0.822, respectively, the NSC-based predictions had the smallest AUC (AUC = 0.641) in setting B with weaker signals. Overall, the simulation studies suggest that there is a clear benefit in selecting predictive features from a list of interactions based on interaction scores, rather than from measurements of individual molecules. a Simulation results using the NSC algorithm applied to the concatenated data (black lines), the NSC algorithm to the interaction scores (blue lines), and the modified NSC algorithm to the interaction scores in iOmicsPASS (red lines). Six different parameters determining the levels of signal and noise were used to simulate data based on a biological network sampled from a real TF and PPI network. b Area under the curve (AUC) of three approaches, each represented by one colored line in a, using three simulation setups at assay sensitivity values of 0.7 and 0.8 Furthermore, we evaluated the impact of the score adjustment in the NSC algorithm to secure locally dense subnetwork signatures in iOmicsPASS. Classification without this adjustment led to poorer performance (blue lines, Fig. 2) than that of NSC with the modified scoring algorithm (red lines). The effect of this modification was more visible in simulation setups with greater noise (simulation settings B and C compared to A): the difference in AUC was 0.061 in setting C, the largest among all three settings. Overall, the results of the simulation studies suggest that the enforcement of network modularity in the predictive signature can improve prediction accuracy considerably. In addition, we also compared the cross-validated misclassification error rates of iOmicsPASS to that of SVM applied to the same hundred simulated data sets (concatenated data) in all three simulation settings using assay sensitivity of 0.7. We tuned the SVM classifier for optimal kernel (gamma) and cost parameters using the first set of data and applied the same parameters across all the rest of the data as this part of the implementation took long computation time. 
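As a rough illustration of this comparator, the sketch below shows how an SVM with a radial-basis kernel and k-fold cross validation could be set up with the e1071 R package, using the parameter values reported in the next paragraph. The data matrix and labels are placeholders, not the simulated data sets themselves, and the sketch only illustrates the cross-validation call, not the full simulation pipeline.

```r
# SVM baseline on a concatenated (noisy) data matrix with 10-fold cross validation.
library(e1071)

set.seed(3)
x <- matrix(rnorm(60 * 300), nrow = 60)        # samples x features (placeholder)
y <- factor(rep(c("A", "B", "C"), each = 20))  # placeholder phenotypic groups

# Radial-basis kernel with small gamma and cost values.
fit <- svm(x = x, y = y, kernel = "radial",
           gamma = 0.001, cost = 0.1, cross = 10)

fit$tot.accuracy      # cross-validated accuracy (%); error rate = 100 - this value
fit$tot.nSV           # total number of support vectors
```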
The cross-validated error rates of the SVM (radial basis function kernel, gamma 0.001, cost 0.1) were on average 57% across all three settings, suggesting that even the most flexible kernel learning algorithm was unable to find classification decision boundaries robustly when two noisy, heterogeneous data sets are concatenated. When we investigated the SVM classifiers across data sets, we discovered that the number of support vectors was consistently above 90 (of 100), supporting the interpretation that the algorithm could not reasonably simplify the classification boundaries from the training data. Finally, we tested the performance of iOmicsPASS when the network information is incomplete and noisy, i.e., when the user-provided network contains spurious interactions (false positives) and lacks bona fide interactions (false negatives). To this end, we simulated noisy networks that include spurious interactions and lack a portion of true interactions (see "Supplementary Methods"). The results show that, even with a partially complete network, iOmicsPASS still outperformed the NSC algorithm applied to the concatenated data and the NSC algorithm applied to the interaction scores across all three simulation setups (Supplementary Fig. 2). TF and PPI networks predictive of breast cancer subtypes Next, we tested the ability of iOmicsPASS to discover predictive subnetworks for BRCA subtypes. We used the invasive ductal BRCA data of TCGA as a benchmark data set, with four intrinsic subtypes defined by the mRNA-based PAM50 signature as phenotypic groups.15 The objective of our analysis is twofold: (i) evaluate the ability of iOmicsPASS to correctly classify tumors into predefined mRNA-based subtypes using multiomics data and (ii) identify combined TF regulatory and PPI subnetworks predictive of each subtype. To this end, we integrated DNA copy number, transcriptomics, and proteomics data produced by the TCGA16 and the Clinical Proteomic Tumor Analysis Consortium (CPTAC),17 respectively. The TCGA BRCA cohort has a total of 1098 tumor samples, and 103 of those had all three omics data types available. TCGA assigned 24 of these samples to the basal-like subtype, 18 to HER2E, 29 to luminal A, and 32 to luminal B. In our main analysis, all three types of omics data were provided as input to the software. iOmicsPASS used DNA copy number to normalize the transcriptomic data of the respective genes and mapped the transcriptomics and proteomics data to the TF and PPI networks. Supplementary Fig. 3 shows PCA plots using all features in each individual omics data set, illustrating the level of heterogeneity of the three data sets in terms of the contribution to the largest variation (e.g., principal component 1). In particular, the plot for the proteomic data suggests the presence of variation unrelated to the separation of the four subtypes. This turned out to be a data quality issue for a portion of samples, and we will return to discuss this point later in the "Results" section. Nevertheless, the supervised iOmicsPASS analysis overcame the heterogeneity of the data sets, demonstrating good separation of subtypes in the integrated space. Supplementary Fig. 4a shows the PCA plot of the integrated interaction scores for the subnetworks selected by iOmicsPASS, suggesting that there exist subnetwork signatures that separate the four subtypes in the integrated data.
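Because iOmicsPASS writes the interaction scores out as a text file, this kind of sample-level projection can be reproduced directly by the user. The sketch below uses a random edge-by-sample matrix and arbitrary subtype labels as stand-ins for the actual output file and PAM50 assignments.

```r
# PCA of an edge x sample interaction-score matrix, colored by subtype.
# In practice 'scores' would be read from the interaction-score file written by
# iOmicsPASS; here a small random matrix stands in for it.
set.seed(4)
scores  <- matrix(rnorm(500 * 40), nrow = 500)
subtype <- factor(rep(c("Basal", "HER2E", "LumA", "LumB"), each = 10))

pc <- prcomp(t(scores))                         # samples become rows
plot(pc$x[, 1], pc$x[, 2], col = as.integer(subtype), pch = 19,
     xlab = "PC1", ylab = "PC2", main = "PCA of interaction scores")
legend("topright", legend = levels(subtype),
       col = seq_along(levels(subtype)), pch = 19)
```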
One notable feature is that the identity of HER2E subtype is unclear in this sample projection plot, which can be largely considered as a part of luminal subtypes (with some of them later misclassified as luminal A in the supervised analysis.) The largest variation along the first principal component represents the separation between the luminal A subtype and the rest, while the second principal component separates the basal-like subtype from the rest. These observations clearly suggest that the proteomic data captured additional heterogeneity within and across the PAM50 subtypes. Meanwhile, the supervised analysis via the predictive subnetwork module identified subtype-specific subnetworks consisting of 2880 molecular interactions including 647 proteins and 871 mRNAs (Supplementary Table 1). Although the overall cross-validated test error rate was ~30%, the misclassification errors were observed to be the highest for HER2E subtype (training error 55.6% and test error 85.0% from cross validation). This is consistent with the observation from the PCA plot, where out of the 18 HER2E tumors, eight were classified as luminal A subtype (green triangles) and two were classified as basal-like subtype (green solid circles) as shown in Supplementary Fig. 3b. The heatmap of interaction scores in Fig. 3a provides further insight to why the integrated scores do not completely separate HER2E subtype from the rest. Although iOmicsPASS captured simultaneous upregulation of Erb-b2 (HER2) and GRB7 protein expression in the HER2E predictive signature, this predictive subnetwork was overwhelmed by the network of luminal A-predictive TF regulations and PPIs in the DNA replication and DNA damage response network that divided HER2E tumors into two sub-groups. The heterogeneity in this subnetwork completely masks the differences in hormone receptors (ESR1, PGR, and AR) and HER2 levels, leading to classification of those tumors into luminal A subtype. This drawback can be addressed by alternative ways to compute discriminant scores and assigning subjects to classes, and we further discuss this as a future extension in the "Discussion" section. a Heatmap of the interaction scores for the union of all four subnetworks in the BRCA data. The cyan color bar on the right-side highlights the subtype-specific subnetworks. b Heatmap of statistical significance scores for the pathway enrichment in the subtype-specific subnetworks. The significance score was calculated as minus the logarithm (base 10) of Benjamini–Hochberg adjusted p-value. For downregulated pathways that were enriched with genes or proteins with lower interaction scores, we multiplied −1 to the significance score to make the score negative. Red and blue represent the direction of interaction scores (positive and negative, respectively) Visualization of group-specific subnetworks and pathways Figure 4 shows the organization of the subnetworks across the four subtypes selected by iOmicsPASS, generated by plugging its text output into Cytoscape (proteins in circles and mRNAs in triangles.) The network organization clearly demonstrates that the incorporation of proteomics data not only singled out the diagnostic markers of the hormone receptors specifically but also captured protein-level downregulation of the DNA repair machinery in luminal A subtype, the group known to have the best prognosis. In the latter, the downregulated subnetwork highlighted in blue in luminal A subtype in Fig. 
4, consists of a multitude of protein complexes and (TF) regulatory networks involving DNA replication and DNA-repair pathways (see Supplementary Table 1 for detailed subnetwork information.) Subnetwork signatures predictive of the four intrinsic subtypes of BRCA illustrated in Cytoscape. Red and blue lines (edges) are interactions (TF regulation or PPI) with higher and lower interaction scores compared to the overall centroid, i.e. average profile in the data set. In each subtype-specific network, the proteins are indicated by cyan-colored circles and the mRNAs are green-colored triangles. Gray-colored nodes are not a part of the predictive subnetwork in a given subtype. Yellow-colored nodes indicate hub proteins of subnetworks for each breast cancer subtype One of the most interesting features in the network diagram is that several transcriptional regulators, including CEBPB, NFIB, WWTR1, and WDR74, were selected as positive, not negative, markers of the basal-like subtype. We circle back to this point later when we discuss the impact of mRNA expression data normalization by the DNA copy number. Nonetheless, the edge-level analysis clearly highlighted protein-level evidence of the transcription regulators as the unique driver of basal-like subtype, which was not discovered when each individual omics data was analyzed in isolation. Consistent with this visualization, Fig. 3b shows the summary of pathways enriched in the subtype-specific subnetworks, drawn from the table generated by the software. The luminal B subtype had upregulation of estrogen receptor signaling, while luminal A subtype showed strong enrichment of downregulated cell cycle-related pathways. luminal B subtype showed enrichment of the hormone receptor and signal transduction pathways as well as FOXA1 and AP1 transcription regulatory network, indicating the protein-level evidence of ER signaling network is stronger in luminal B subtype than luminal A subtype in these data. As expected, HER2E subtype showed enrichment of upregulated signaling pathways that are not upregulated in the luminal subtypes, including epidermal growth factor receptor and innate immune response. The basal-like subtype largely showed upregulation of the PLK1 signaling cascade, APC/C-mediated degradation of cell cycle proteins, and small GTPase-mediated signal transduction, with characteristic downregulation of all hormone receptor-related pathways. Impact of DNA copy number-based normalization of mRNA data We next compared the subnetworks reported by iOmicsPASS with normalization of mRNA data by DNA copy number data with the subnetworks acquired from the data without copy number-based normalization. If the user decides to normalize mRNA data by DNA copy number, this implies that each TF edge in the predictive networks is interpreted through the interaction scores of TF protein and its target gene's mRNA per DNA copy (default option when all three omics data are available). If the user does not normalize the mRNA data by DNA copy number, then its interpretation becomes the interaction scores of TF protein and its target mRNA, regardless of the copy number variation. While the former option leads to the interpretation of interaction scores for TF edges with respect to the rate of transcription per DNA copy of a gene, it also requires generation of DNA copy number data and its inclusion may introduce additional noise into the combined data. 
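The difference between the two options amounts to whether the target gene's mRNA Z-score is used as is or is first adjusted by its copy-number Z-score. A small sketch for one TF-target edge is shown below; the vectors are illustrative Z-scores, not real data, and the exact formulas are given in "Methods".

```r
# TF-target edge scores for one edge across samples, with and without
# DNA copy-number normalization (all vectors are illustrative Z-scores).
set.seed(5)
z_prot_TF  <- rnorm(6)    # protein Z-scores of the TF
z_mrna_tgt <- rnorm(6)    # mRNA Z-scores of the target gene
z_cnv_tgt  <- rnorm(6)    # DNA copy-number Z-scores of the target gene

edge_no_cnv   <- z_prot_TF + z_mrna_tgt                 # mRNA regardless of copy number
edge_with_cnv <- z_prot_TF + (z_mrna_tgt - z_cnv_tgt)   # mRNA per DNA copy

round(cbind(edge_no_cnv, edge_with_cnv), 2)
```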
The analysis without DNA copy number data produced a smaller subnetwork at its optimal threshold (2578 edges, Supplementary Fig. 5), with a similar level of cross-validated error rates. When we compared the two subnetworks, the two analyses shared 1851 edges (see the analysis without normalization in Supplementary Table 2). The edges with larger centroid values were retained in both analyses, especially the PPI edges and the TF edges that are directly related to the hormone receptors such as the ESR1 and ERBB2 proteins. One major difference is that the predictive signature of the luminal A subtype with DNA copy number normalization does not contain the TF network of ESR1 and its connection to FOXA1, as the subnetwork representing DNA damage response and repair was more densely connected and our scoring adjustment determined that the latter network was more predictive of subtype than the former. Moreover, the positive signature of the CEBPB TF protein in the basal-like subtype was considerably weakened in the analysis without copy number-based normalization, which suggests that the normalization can help enhance the transcription regulation signature. Comparison of iOmicsPASS to other methods Next, we compared the classification performance of iOmicsPASS in the BRCA data to that of other multiomics data integration approaches based on cross validation. Since we limit our feature selection space to the user-provided network, it is important to ensure that the predictive power is not compromised by the bounded feature search space. Our method is inherently a supervised classification method, but network-based supervised classification methods that can take multiomics data are scarce. Hence we compared iOmicsPASS to the few alternative tools available. We note that we attempted the nonlinear SVM classifier, yet the implementation in R (e1071 library) did not finish the analysis within a few days, and thus we excluded the kernel learning algorithm from this comparison. We first concatenated the two omics data sets into a single data matrix and applied the NSC method. Supplementary Fig. 6 shows that the cross-validated misclassification rates of the NSC algorithm were slightly smaller than those of iOmicsPASS, especially for the HER2E subtype. However, when we examined the predictive features in this analysis, the method not only identified a large number of molecular features as the predictive signature (above 5000 in total: 4996 mRNAs and 170 proteins), but 96.7% of these predictive nodes were mRNA molecules. In sum, despite seemingly better prediction performance, the concatenation-based integration coupled with the original NSC algorithm did not effectively incorporate additional predictive information provided by the proteomic data—the molecular level where the expression of functionally active gene products is observed. By contrast, the predictive subnetworks reported by the iOmicsPASS analysis merged both molecular types in a more balanced manner, as all molecular interactions were forced to include protein-level information, whether from the TF network or the PPI network. We next explored the connection of iOmicsPASS to existing state-of-the-art multiomics data integration methods. As the majority of data integration approaches have been developed in an unsupervised analysis setting, we chose MOFA as a representative method for the class of LF analysis of multiomics data.
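For readers who wish to set up this kind of comparison themselves, the sketch below outlines a MOFA-style factor analysis in R. It assumes the MOFA2 Bioconductor package and its current workflow functions (create_mofa, prepare_mofa, run_mofa, get_factors, get_weights), which may differ from the MOFA release used at the time of this analysis, requires a working Python backend, and uses placeholder matrices rather than the TCGA/CPTAC data.

```r
# Hedged sketch of an unsupervised MOFA-style analysis of mRNA + protein data.
# Assumes the MOFA2 package interface; matrices are features x samples with
# matching sample columns, and are placeholders for the real data.
library(MOFA2)

set.seed(6)
data <- list(mRNA    = matrix(rnorm(300 * 40), nrow = 300),
             protein = matrix(rnorm(100 * 40), nrow = 100))
for (v in names(data)) {
  rownames(data[[v]]) <- paste0(v, "_", seq_len(nrow(data[[v]])))
  colnames(data[[v]]) <- paste0("sample", 1:40)
}

mofa  <- create_mofa(data)
mofa  <- prepare_mofa(mofa)                 # default data/model/training options
model <- run_mofa(mofa, use_basilisk = TRUE)  # self-managed Python environment

factors <- get_factors(model)[[1]]          # sample x factor matrix
weights <- get_weights(model)               # per-view loading matrices
head(sort(abs(weights$protein[, 1]), decreasing = TRUE))  # top protein loadings on LF1
```

Inspecting factor coordinates and loading magnitudes in this way corresponds to the kind of factor-level examination described in the next paragraph.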
Interestingly, the MOFA analysis of the same mRNA-protein data set detected the first LF with prominent contribution from a large number of proteins. We found that this observation was an artifact of data quality issues in the proteomics data. As pointed out by Mertins et al. and the CPTAC,17 the iTRAQ ratios had aberrant global distributions for approximately a quarter of the samples (see Supplementary Fig. 7). We subsequently verified that LF2 and LF4 were the factors that separated the samples into the four intrinsic subtypes (Supplementary Fig. 8b). Using the coordinates of samples in LF2 and LF4, we also identified that HER2E subtype did not form an independent cluster in this analysis. In fact, consistent with the iOmicsPASS analysis, many HER2E subjects were instead clustered with the luminal subtypes. We then considered the proteins with large loading scores on both LFs (LF2 and LF4) in the MOFA analysis. Although MOFA identified the key genes such as ESR1, FOXA1, GATA3, and AR in the top feature set, almost all top features in terms of the magnitude of loading scores were mRNAs, not proteins (Supplementary Fig. 8b). iOmicsPASS captured most of the same genes at the protein level since the method searches for predictive features within the bounds of interactions involving protein molecules, a prior that the user indirectly imposes in pursuit of predictive features. In this work, we presented a supervised learning method for integrating multiomics data in the space of biological networks and extracting network signatures predictive of each phenotypic group in supervised analysis. The key difference of iOmicsPASS framework is that the predictive features are searched within the given set of molecular interactions and the scoring algorithm favors densely connected subnetworks enriched with predictive signals. Despite the constraints, iOmicsPASS was able to extract key interaction signatures without compromising its prediction performance in both simulated data and BRCA data, while the original NSC algorithm showed consistently poor classification performance in simulation data and picked heavily mRNA-centric gene signatures in the BRCA data. An important advantage of iOmicsPASS is that the selected predictive signature forms densely connected subnetworks, rather than molecules that are scattered across the network. In other words, the method forcefully limits the search space of predictive features to the known interactions, and by doing so, the user essentially specifies a biological prior in the predictive feature selection. We showed that this framework was able to identify biological networks specifically relevant to subtype prediction without sacrificing prediction error rates. This guided analysis approach may provide advantages especially in data sets with a relatively modest sample size (e.g., tens of samples per phenotypic group), where the most advanced kernel learning algorithm failed to find clear decision boundaries. Meanwhile, the method has room for future improvement. First, the current implementation does not provide functionalities for prediction of phenotypic groups in external data sets yet. This was a deliberate choice, since it is difficult to expect molecular profiling studies with all three omics platforms for hundreds of tumor samples, especially when MS-based proteomics is a part of the omics repertoire. Hence our immediate future work is to develop a prediction module to make phenotype predictions for a new data set with incomplete multiomics data. 
Second, as we integrate more diverse types of omics data, some omics data will exert more influence than others. As such, future work will explore optimal weighting of different omics data sets in the predictions. Third, our current implementation discards molecules that are not represented in the user-provided network data. Hence important markers that are poorly represented in biological networks can be lost in the analysis. Our future development will consider maintaining these predictive 'singleton' nodes. Lastly, and most importantly, future versions of iOmicsPASS will make probabilistic predictions that allow multiplicity in assignments. As we demonstrate in the examples above, there will be a subset of tumor samples that share characteristics of multiple subtypes, given typical heterogeneity in tumors. Hence a prediction method that allows for this possibility will provide a more biologically realistic framework than the current mutually exclusive subtyping. iOmicsPASS will be one of the first methods that systematically address this issue in prediction. Calculation of interaction scores for biological networks The first module of iOmicsPASS computes an interaction score for each interaction represented in a user-provided biological network. For the integration of mRNA transcript and protein data, two types of networks are relevant: (1) a PPI network to link different proteins and (2) a TF regulatory network to link TF proteins with the mRNAs of their target genes. DNA copy number can also be incorporated as a normalizing constant for mRNA abundance, since the ratio of mRNA to DNA copy number can be considered as the "output" of gene transcription per DNA copy, i.e., transcription efficiency. For the derivation of interaction scores in the context of mRNA and protein data integration, we let p denote the total number of edges, n the number of samples, and t the type of interaction data (t = 1 for the TF regulatory network, t = 2 for the PPI network). For i = 1,…,p, j = 1,…,n, and t = 1 or 2, the interaction score for edge i in sample j, e_ijt, is calculated as follows. When DNA copy number data are not provided, $$e_{ij1} = z_{{\rm{prot}}_A,j} + z_{{\rm{mRNA}}_B,j},$$ $$e_{ij2} = z_{{\rm{prot}}_A,j} + z_{{\rm{prot}}_B,j}.$$ When DNA copy number data are provided, $$e_{ij1} = z_{{\rm{prot}}_A,j} + \left( {z_{{\rm{mRNA}}_B,j} - z_{{\rm{dna}}_B,j}} \right),$$ $$e_{ij2} = z_{{\rm{prot}}_A,j} + z_{{\rm{prot}}_B,j},$$ where z represents the Z-score of the log-transformed measurement (base 2) of each molecule in the respective omics data set, and hence addition and subtraction of these Z-scores are equivalent to multiplication and division of the abundance data in the original scale. When the edge comes from the TF regulatory network (i.e., t = 1), prot_A represents the protein of a TF gene A, and mRNA_B is the mRNA of target gene B. dna_B refers to the DNA copy number of the same gene (target gene B). On the other hand, when the edge comes from the PPI network (i.e., t = 2), prot_A and prot_B represent the proteins of the respective genes A and B. From here on, we use the term "feature" to refer to edges or their interaction scores. Subnetwork discovery module Using the feature data created above, iOmicsPASS identifies a sparse subset of interactions whose interaction scores are predictive of phenotypic groups. iOmicsPASS applies the NSC method, originally introduced in the Prediction Analysis of Microarrays (PAM) method,13 to interaction scores.
The NSC algorithm is known to have bias in assigning samples to groups of a larger sample size,18,19 which motivated us to extend the original algorithm in several ways, to account for the dependence between features and the selection bias in sampling. First, we computed the centroid for feature i considering the sample size of each phenotypic group (e.g., tumor subtypes). Specifically, we compute it as $$\bar x_i = \frac{1}{K}\mathop {\sum }\limits_{k = 1}^K \bar x_{ik} = \frac{1}{K}\mathop {\sum }\limits_{k = 1}^K \left( {\frac{1}{{n_k}}\mathop {\sum }\limits_{j \in C_k} x_{ij}} \right),$$ where Ck represents the indices of nk samples in phenotypic group k and K is the number of phenotypic groups. This calculation avoids the sampling bias towards the phenotypic groups with large sample sizes. Second, in the NSC method, dik is defined to be the t-statistic for each gene i: $$d_{ik} = \frac{{\bar x_{ik} - \bar x_i}}{{m_k\,(s_i + s_0)}},$$ where \(\bar x_{ik}\) represents the centroid of gene i in phenotypic group k and \(\bar x_i\) represents the overall centroid of gene i. The denominator serves as a normalizing constant, where \(m_k = \sqrt {1/n_k + 1/n}\), si is the pooled within-class standard deviation of gene i, and s0 is the median value of si across all features, a positive constant. In iOmicsPASS, instead of genes, the term dik refers to the test statistic for edge i in phenotypic group k. We add an extra term to dik, which accounts for the consistency of interaction scores in neighbor edges, i.e., edges that share common nodes with the current edge i. For every edge ei, we define neighbor edges of ei as the ones that share at least one of the two nodes with it. We now define a new test statistic for edge i, \(d_{ik}^ \ast\), as: $$d_{ik}^ \ast = d_{ik} + \left( {\psi _{i,k} \times \frac{{|N_{e_i,1}\,|\mathop {\sum }\nolimits_{s \in N_{e_i,1}} d_{sk} + |N_{e_i,2}\,|\mathop {\sum }\nolimits_{r \in N_{e_i,2}} d_{rk}}}{{\left| {N_{e_i}} \right|}}} \right),$$ where \(N_{e_i}\) represents the set of neighbor edges of ei, and \(|N_{e_i}|\) denotes the number of edges in the neighborhood set \(N_{e_i}\). The set \(N_{e_i}\) is further partitioned into two subsets, \(N_{e_i,1}\) and \(N_{e_i,2}\), to represent the set of TF and PPI edges, respectively. The multiplicative factor \(\psi _{i,k}\) for phenotypic group k represents the proportion of agreement in sign (direction of change) between ei and its neighbor edges. Specifically, it is calculated by: $$\psi _{i,k} = \frac{{2e^{5\left( {p_{ik} - 0.5} \right)}}}{{1 + e^{5\left( {p_{ik} - 0.5} \right)}}},$$ $$p_{ik} = \frac{{\mathop {\sum }\nolimits_{j \in N_{e_i},i \ne j} {\mathrm{sign}}\left( {d_{ik}} \right) = {\mathrm{sign}}\left( {d_{jk}} \right)}}{{|N_{e_i}|}}.$$ We tested other variants of this function and the functional form did not impact the shape of the subnetworks significantly as long as its range is between 0 and 2, and it is greater than 1 if at least half of the neighbor edges have consistently up- or down-regulated interaction scores with that of edge ei. These adjustments ensure that the local subnetworks densely populated by predictive edges with identical sign are favored to the ones with predictive edges scattered randomly. 
Third, for each phenotypic group k, a group-specific threshold Δk is derived from the shrinkage parameter, Δ, calculated by: $$\Delta _k = \frac{\Delta }{{\Delta _{{\rm{max}}}}} \times {\rm{max}} _i\,d_{ik},$$ $$\Delta _{{\rm{max}}} = \frac{1}{K}\mathop {\sum }\limits_{k = 1}^K {\rm{max}} _i\,d_{ik}.$$ We set the values of Δ on a grid of 30 equally spaced values arranged in an increasing order (i.e., \(\Delta \in \{ 0,\Delta _1,\Delta _2, \ldots ,\Delta _{{\mathrm{max}}}\}\)) from zero to a value sufficiently large such that the group-specific centroids of all the features are reduced towards the overall centroid as Δ increases. Using a soft-thresholding approach, each feature's score is recomputed as \(d_{ik}^\prime\), reducing it by an absolute shrinkage amount incrementally, until it reaches zero: $$d_{ik}^\prime = {{\rm{sign}}}\left( {d_{ik}^ \ast } \right)\left( {\left| {d_{ik}^ \ast } \right| - \Delta _k} \right)_ +,$$ where \(\left( {\left| {d_{ik}^ \ast } \right| - \Delta _k} \right)_ + = \max \left( {0,\left| {d_{ik}^ \ast } \right| - \Delta _k} \right)\)and \({{\rm{sign}}}\left( {d_{ik}^ \ast } \right) = 1\) if \(d_{ik}^ \ast > 0\), \({{\rm{sign}}}\left( {d_{ik}^ \ast } \right) = - 1\) if \(d_{ik}^ \ast < 0\) and \({{\rm{sign}}}\left( {d_{ik}^ \ast } \right) = 0,\) otherwise. We choose the optimal value for Δ based on cross validation to yield a sparse set of predictive features. Nonzero score (i.e. \(|d_{ik}^\prime | > 0\)) indicate that the feature has certain level of predictiveness and will be used to form a subnetwork that best classifies samples to their phenotypic groups using the discriminant scores defined below. Suppose that we have a test sample with edge-level interaction scores \(x_1^ \ast = \left( {x_1^ \ast ,x_2^ \ast , \ldots ,x_p^ \ast } \right),\) then we compute the discriminant score for group k as: $$\delta _k\left( {x^ \ast } \right) = \mathop {\sum }\limits_{i = 1}^p \frac{{\left( {x_i^ \ast - \bar x\prime _{ik}} \right)^2}}{{\left( {s_i + s_0} \right)^2}} - 2\log \left( {\pi _k} \right),$$ where πk is the prior probability of the kth class (equal prior πk = 1/K by default). Then we assign each test sample to the phenotypic group with the smallest discriminant score using the classification rule: \(C\left( {x^ \ast } \right) = \ell\) where \(\delta _\ell \left( {x^ \ast } \right) = \mathop {{\min }}\limits_k \delta _k\left( {x^ \ast } \right)\). Using the discriminant scores, the estimated probability of sample x* membership to group k is computed as: $$\hat p_k\left( {x^ \ast } \right) = \frac{{e^{ - \frac{1}{2}\delta _k\left( {x^ \ast } \right)}}}{{\mathop {\sum }\nolimits_{\ell = 1}^K e^{ - \frac{1}{2}\delta _\ell \left( {x^ \ast } \right)}}}.$$ Pathway enrichment module for subtype-specific networks Lastly, iOmicsPASS tests enrichment of biological functions and pathways in the selected subnetwork for each phenotypic group. We first separate the edges into those with positive and negative \(d_{ik}^\prime\) scores, separately for each phenotypic group. Then, we apply hypergeometric test to compute the probability of overrepresentation of those edges in pathways. We set all possible edges in the user-provided network that were available in the data as the background. See Supplementary Methods for details. 
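The enrichment computation itself reduces to a standard overrepresentation test. The sketch below shows, with entirely hypothetical counts and pathway names, how a hypergeometric p-value, its Benjamini-Hochberg adjustment, and the signed significance score of the kind used in Fig. 3b could be computed in base R.

```r
# Overrepresentation of selected edges in one pathway (hypothetical counts).
N <- 5000    # background: all network edges present in the data
K <- 120     # background edges annotated to the pathway
n <- 300     # edges selected in the subtype-specific subnetwork
k <- 25      # selected edges annotated to the pathway

p <- phyper(k - 1, K, N - K, n, lower.tail = FALSE)   # P(overlap >= k)

# Across several pathways: BH adjustment and signed significance score.
pvals     <- c(pathwayA = p, pathwayB = 0.03, pathwayC = 0.40)
direction <- c(pathwayA = +1, pathwayB = -1, pathwayC = +1)  # sign of interaction scores
adj <- p.adjust(pvals, method = "BH")
signed_score <- direction * -log10(adj)   # negative for downregulated pathways
signed_score
```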
Multiomics data, pathways and biological networks The breast cancer transcriptomic and copy number data were downloaded from TCGA using the Genomics Data Commons (GDC) data portal, and the proteomics data were downloaded from the CPTAC.20 The transcriptomic and copy number variation data were processed using GDC processing pipelines (with GRCh38 as the reference genome). To standardize the type of gene identifiers used across the different omics data, all identifiers were mapped to HGNC gene symbols. ENSEMBL identifiers were converted to gene symbols in the mRNA data, and gene symbols were already provided in the proteomics data. For the CNV data, the chromosomal positions of the segments were used to identify overlapping genes and assign gene symbols. If a gene overlapped two or more segments, a weighted average of the segment mean values was computed for that gene, with weights proportional to the length of the gene covered by each segment; this yielded one segment mean value per gene for each individual. We downloaded two types of biological networks: a PPI network and a TF regulatory network. For the PPI network, the two sources were iRefIndex21 and BioPlex 2.0,22 from which we removed redundancies and compiled an integrated database. For the TF regulatory network, we collected and merged the interactions between TF proteins and target genes from the following sources: TRED,23 ITFP,24 ENCODE, and TRRUST.25 The final network consisted of 16,266 proteins forming 197,664 edges in the PPI network and 2486 TFs and 14,796 target genes forming 101,272 edges in the TF network. For the pathway data, we used biological pathways from the ConsensusPathDB (CPDB)26 and Gene Ontology (GO).27 The pathways from the CPDB include data from multiple sources such as KEGG, PharmGKB, SMPDB, HumanCyc, BioCarta, EHMN, Reactome, NetPath, Pathway Interaction Database, and Wikipathways. For GO, we considered the biological processes only. As a result, the final pathway collection consisted of a total of 14,598 pathways involving 17,250 genes. All network and pathway files are distributed along with the tool. Data and code availability iOmicsPASS is a platform-independent tool written in C++. It is freely available through the GitHub repository at https://github.com/cssblab/iOmicsPASS. The tool is distributed in a zip folder, along with sample data sets and a software manual. Supplemental R codes are provided for the visualization of misclassification errors (cross validation) and for the choice of appropriate thresholds. The processed data sets (copy number variation, mRNA, and protein expression data) used for the TCGA breast cancer analysis, as well as the R codes used to produce the heatmaps and plots in this paper, can be downloaded from https://github.com/Hiromikwl/DataCodes_iOmicsPASS. The output of the tool used for network visualization in Cytoscape is provided as Supplementary Tables, which are freely available at the npj Systems Biology and Applications website. The R codes to generate simulation data sets are also provided through the GitHub site. Huang, S., Chaudhary, K. & Garmire, L. X. More is better: recent progress in multi-omics data integration methods. Front. Genet. 8, 84 (2017). Shen, R., Olshen, A. B. & Ladanyi, M. Integrative clustering of multiple genomic data types using a joint latent variable model with application to breast and lung cancer subtype analysis. Bioinformatics 25, 2906–2912 (2009). Mo, Q. et al.
Pattern discovery and cancer gene identification in integrated cancer genomic data. Proc. Natl Acad. Sci. USA 110, 4245–4250 (2013). Yuan, Y., Savage, R. S. & Markowetz, F. Patient-specific data fusion defines prognostic cancer subtypes. PLoS Comput. Biol. 7, e1002227 (2011). Argelaguet, R. et al. Multi-omics factor analysis—a framework for unsupervised integration of multi-omics data sets. Mol. Syst. Biol. 14, e8124 (2018). Vaske, C. J. et al. Inference of patient-specific pathway activities from multi-dimensional cancer genomics data using PARADIGM. Bioinformatics 26, i237–245 (2010). Bonnet, E., Calzone, L. & Michoel, T. Integrative multi-omics module network inference with Lemon-Tree. PLoS Comput Biol. 11, e1003983 (2015). Ruffalo, M., Koyuturk, M. & Sharan, R. Network-based integration of disparate omic data to identify "silent players" in cancer. PLoS Comput. Biol. 11, e1004595 (2015). Hofree, M., Shen, J. P., Carter, H., Gross, A. & Ideker, T. Network-based stratification of tumor mutations. Nat. Methods 10, 1108–1115 (2013). Maier, T., Guell, M. & Serrano, L. Correlation of mRNA and protein in complex biological samples. FEBS Lett. 583, 3966–3973 (2009). Vogel, C. & Marcotte, E. M. Insights into the regulation of protein abundance from proteomic and transcriptomic analyses. Nat. Rev. Genet. 13, 227–232 (2012). Ritchie, M. D., Holzinger, E. R., Li, R., Pendergrass, S. A. & Kim, D. Methods of integrating data to uncover genotype-phenotype interactions. Nat. Rev. Genet. 16, 85–97 (2015). Tibshirani, R., Hastie, T., Narasimhan, B. & Chu, G. Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proc. Natl Acad. Sci. USA 99, 6567–6572 (2002). Shannon, P. et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 13, 2498–2504 (2003). Parker, J. S. et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 27, 1160–1167 (2009). Cancer Genome Atlas, N. Comprehensive molecular portraits of human breast tumours. Nature 490, 61–70 (2012). Mertins, P. et al. Proteogenomics connects somatic mutations to signalling in breast cancer. Nature 534, 55–62 (2016). He, H. & Garcia, E. A. Learning from imbalanced data. IEEE Trans. Knowl. Data Eng. 21, 1263–1284 (2009). Blagus, R. & Lusa, L. Class prediction for high-dimensional class-imbalanced data. BMC Bioinform. 11, 523 (2010). Edwards, N. J. et al. The CPTAC data portal: a resource for cancer proteomics research. J. Proteome Res. 14, 2707–2713 (2015). Razick, S., Magklaras, G. & Donaldson, I. M. iRefIndex: a consolidated protein interaction database with provenance. BMC Bioinform. 9, 405 (2008). Huttlin, E. L. et al. Architecture of the human interactome defines protein communities and disease networks. Nature 545, 505–509 (2017). Zhao, F., Xuan, Z., Liu, L. & Zhang, M. Q. TRED: a transcriptional regulatory element database and a platform for in silico gene regulation studies. Nucleic Acids Res. 33, D103–107 (2005). Zheng, G. et al. ITFP: an integrated platform of mammalian transcription factors. Bioinformatics 24, 2416–2417 (2008). Han, H. et al. TRRUST: a reference database of human transcriptional regulatory interactions. Sci. Rep. 5, 11432 (2015). Kamburov, A. et al. ConsensusPathDB: toward a more complete picture of cell biology. Nucleic Acids Res. 39, D712–717 (2011). Ashburner, M. et al. Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat. Genet. 25, 25–29 (2000). 
This work was supported in part by a grant from the Singapore Ministry of Education (to H.C. and K.C.; MOE2016-T2-1-001), the support of the Institute of Molecular & Cell Biology, A*STAR, and the National Medical Research Council of Singapore (to H.C.; NMRC-CG-M009).
Department of Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore: Hiromi W. L. Koh & Hyungwon Choi. Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore. University of Michigan Medical School, Ann Arbor, MI, USA: Damian Fermin. Center for Genomics and Systems Biology, Department of Biology, New York University, New York, NY, 10003, USA: Christine Vogel. Department of Statistics and Applied Probability, National University of Singapore, Singapore, Singapore: Kwok Pui Choi. School of Biological Sciences, University of Southampton, Southampton, UK: Rob M. Ewing. Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore: Hyungwon Choi.
H.K. and H.C. conceived the project. All the authors contributed to the development of the algorithm and H.K., D.F., and H.C. implemented the software. R.E. and C.V. led biological interpretation of the data analysis results. H.K. and H.C. wrote the manuscript with input from all the authors. H.C. supervised the project. Correspondence to Hyungwon Choi. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Koh, H.W.L., Fermin, D., Vogel, C. et al. iOmicsPASS: network-based integration of multiomics data for predictive subnetwork discovery. npj Syst Biol Appl 5, 22 (2019). https://doi.org/10.1038/s41540-019-0099-y
Publications of HSE
Intersection cohomology of the Uhlenbeck compactification of the Calogero-Moser space
Selecta Mathematica, New Series. 2016. Vol. 22. No. 4. P. 2491-2534. Finkelberg M. V., Ginzburg V., Ionov A., Kuznetsov A. G. We study the natural Gieseker and Uhlenbeck compactifications of the rational Calogero–Moser phase space. The Gieseker compactification is smooth and provides a small resolution of the Uhlenbeck compactification. We use the resolution to compute the stalks of the IC-sheaf of the Uhlenbeck compactification. Priority areas: mathematics. Keywords: Calogero-Moser systems; Uhlenbeck compactification. Publication based on the results of: Algebraic Geometry and Its Applications: Derived Categories; Homological and Motivic Methods in Noncommutative Geometry; Special Varieties; Classical Geometry; Geometric Representation Theory; Arithmetic Geometry (2016). Finkelberg M. V., Ginzburg V., Ionov A. et al. arxiv.org. math. Cornell University, 2015. We study the natural Gieseker and Uhlenbeck compactifications of the rational Calogero–Moser phase space. The Gieseker compactification is smooth and provides a small resolution of the Uhlenbeck compactification. This allows computing the IC stalks of the latter. Dunkl operators at infinity and Calogero-Moser systems Sergeev A. International Mathematical Research Notes. 2015. Added: Sep 7, 2015 A finite analog of the AGT relation I: finite W-algebras and quasimaps' spaces Braverman A., Rybnikov L. G., Feigin B. L. et al. Communications in Mathematical Physics. 2011. Vol. 308. No. 2. P. 457-478.
Recently Alday, Gaiotto and Tachikawa proposed a conjecture relating 4-dimensional super-symmetric gauge theory for a gauge group G with certain 2-dimensional conformal field theory. This conjecture implies the existence of certain structures on the (equivariant) intersection cohomology of the Uhlenbeck partial compactification of the moduli space of framed G-bundles on P^2. More precisely, it predicts the existence of an action of the corresponding W-algebra on the above cohomology, satisfying certain properties. We propose a "finite analog" of the (above corollary of the) AGT conjecture. Induced dynamics A.K.Pogrebkov. Journal of Nonlinear Mathematical Physics. 2020. Vol. 27. No. 2. P. 324-336. Construction of new integrable systems and methods of their investigation is one of the main directions of development of the modern mathematical physics. Here we present an approach based on the study of behavior of roots of functions of canonical variables with respect to a parameter of simultaneous shift of space variables. Dynamics of singularities of the KdV and Sinh–Gordon equations, as well as rational cases of the Calogero–Moser and Ruijsenaars–Schneider models are shown to provide examples of such induced dynamics. Some other examples are given to demonstrates highly nontrivial collisions of particles and Liouville integrability of induced dynamical systems. Instanton moduli spaces and $\mathscr W$-algebras Braverman A., Finkelberg M. V., Nakajima H. arxiv.org. math. Cornell University, 2014. No. 2381. We describe the (equivariant) intersection cohomology of certain moduli spaces ("framed Uhlenbeck spaces") together with some structures on them (such as e.g.\ the Poincar\'e pairing) in terms of representation theory of some vertex operator algebras ("W-algebras"). Added: Oct 2, 2014 Absolutely convergent Fourier series. An improvement of the Beurling-Helson theorem Vladimir Lebedev. arxiv.org. math. Cornell University, 2011. No. 1112.4892v1. We obtain a partial solution of the problem on the growth of the norms of exponential functions with a continuous phase in the Wiener algebra. The problem was posed by J.-P. Kahane at the International Congress of Mathematicians in Stockholm in 1962. He conjectured that (for a nonlinear phase) one can not achieve the growth slower than the logarithm of the frequency. Though the conjecture is still not confirmed, the author obtained first nontrivial results. Новые информационные технологии. Тезисы докладов XVIII международной студенческой конференции-школы-семинара М.: МИЭМ, 2010. Обоснование адиабатического предела для гиперболических уравнений Гинзбурга-Ландау Пальвелев Р., Сергеев А. Г. Труды Математического института им. В.А. Стеклова РАН. 2012. Т. 277. С. 199-214. Метод параметрикса для диффузий и цепей Маркова Конаков В. Д. STI. WP BRP. Издательство попечительского совета механико-математического факультета МГУ, 2012. № 2012. Hypercommutative operad as a homotopy quotient of BV Khoroshkin A., Markaryan N. S., Shadrin S. arxiv.org. math. Cornell University, 2012. No. 1206.3749. We give an explicit formula for a quasi-isomorphism between the operads Hycomm (the homology of the moduli space of stable genus 0 curves) and BV/Δ (the homotopy quotient of Batalin-Vilkovisky operad by the BV-operator). In other words we derive an equivalence of Hycomm-algebras and BV-algebras enhanced with a homotopy that trivializes the BV-operator. These formulas are given in terms of the Givental graphs, and are proved in two different ways. 
One proof uses the Givental group action, and the other proof goes through a chain of explicit formulas on resolutions of Hycomm and BV. The second approach gives, in particular, a homological explanation of the Givental group action on Hycomm-algebras.

Is the function field of a reductive Lie algebra purely transcendental over the field of invariants for the adjoint action? Colliot-Thélène J., Kunyavskiĭ B., Vladimir L. Popov et al. Compositio Mathematica. 2011. Vol. 147. No. 2. P. 428-466.
Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively, k(g), be the field of k-rational functions on G, respectively, g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers. As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself.

Cross-sections, quotients, and representation rings of semisimple algebraic groups. V. L. Popov. Transformation Groups. 2011. Vol. 16. No. 3. P. 827-856.
Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny Ĝ → G is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map T ⇢ G/T, where T is a maximal torus of G and W the Weyl group.

Introduction to Mathematical Statistics. Ivchenko G. I., Medvedev Yu. I. Moscow: LKI, 2010.

New Information Technologies. Abstracts of the XIX International Student Conference-School-Seminar
CommonCrawl
For the following questions answer them individually.

In a survey of political preference, 78% of those asked were in favor of at least one of the proposals: I, II and III. 50% of those asked favored proposal I, 30% favored proposal II, and 20% favored proposal III. If 5% of those asked favored all three of the proposals, what percentage of those asked favored more than one of the 3 proposals?
Let the distribution of votes over the three proposals be represented by the seven regions a, b, c, d, e, f, g of a Venn diagram, with f denoting those who favored all three proposals. From the information given, we know that
a+b+c+d+e+f+g = 78 --- (1)
a+b+e+f = 50 --- (2)
b+c+f+g = 30 --- (3)
e+f+g+d = 20 --- (4)
and f = 5 --- (5)
We need to find b+e+g+f = ?
In the above equations, (2)+(3)+(4) - (1) implies (a+b+e+f)+(b+c+f+g)+(e+f+g+d) - (a+b+c+d+e+f+g) = 50+30+20-78 = 22. Or, b+e+g+2f = 22. Since f = 5, it follows that b+e+g+f = 17.

For two positive integers a and b, define the function h(a,b) as the greatest common factor (G.C.F.) of a, b. Let A be a set of n positive integers. G(A), the GCF of the elements of set A, is computed by repeatedly using the function h. The minimum number of times h is required to be used to compute G is:
1/2 n (n - 1)
Let p and q be any two elements of the set A. For the computation of the GCF of the elements of the set A, we can replace both p and q by just GCF(p,q) and the result is unchanged. So, for every application of the function h, we are reducing the number of elements of the set A by 1. (In this case two numbers p and q are replaced by one number GCF(p,q).) Extending this idea, the minimum number of times the function h should be called is n-1.

The figure below shows two concentric circles with centre O. PQRS is a square, inscribed in the outer circle. It also circumscribes the inner circle, touching it at points B, C, D and A. What is the ratio of the perimeter of the outer circle to that of polygon ABCD?
$$\frac{\pi}{4}$$ $$\frac{3\pi}{2}$$ $$\pi$$
By symmetry, it is safe to assume that the polygon ABCD is a square. So, AB = PO. The perimeter of the inner square = 4 AB. The perimeter of the outer circle = $$2 \pi \times AB$$. So, ratio = $$\frac{2 \pi \times AB}{4AB} = \frac{\pi}{2}$$

Three labeled boxes containing red and white cricket balls are all mislabeled. It is known that one of the boxes contains only white balls and one only red balls. The third contains a mixture of red and white balls. You are required to correctly label the boxes with the labels red, white and red and white by picking a sample of one ball from only one box. What is the label on the box you should sample?
Not possible to determine from a sample of one ball
All of them can be mislabeled in 2 ways:
Case 1: Red box - white label; White box - red and white label; Red and white box - red label.
Case 2: Red box - red and white label; White box - red label; Red and white box - white label.
So, we would try the box with the red and white label: if it has a white ball, labelling of the boxes is done as per case 1; if it has a red ball, labelling is done as per case 2. Note: It's not a good idea to try the white label, as if we get a red ball, we can't make out if we are picking from the red box or the red and white box. Similarly, if we try the box with the red label and we get a white ball, again we can't make out if it is coming from the white box or the red and white box.

If $$n^2 = 123456787654321$$, what is $$n$$?
Observe the pattern given below. $$11^2 = 121$$ $$111^2 = 12321$$ $$1111^2 = 1234321$$ and so on.
So, $$11111111^2 = 123456787654321$$

Abraham, Border, Charlie, Dennis and Elmer and their respective wives recently dined together and were seated at a circular table. The seats were so arranged that men and women alternated and each woman was three places distant from her husband. Mrs. Charlie sat to the immediate left of Mr. Abraham. Mrs. Elmer sat two places to the right of Mrs. Border. Who sat to the right of Mr. Abraham?
Mrs. Dennis; Mrs. Elmer; Mrs. Border; Mrs. Border or Mrs. Dennis
Mrs. Abraham can't be sitting next to him, since wives sit three places away from their husbands as per the seating arrangement. Mrs. Charlie is sitting to the left of Mr. Abraham, so she can't be sitting to his right. Mrs. Elmer is sitting two places to the right of Mrs. Border (and not Mrs. Charlie), so she can't be sitting right next to Mr. Abraham. Mrs. Border and Mrs. Dennis are the remaining two wives and each is equally likely to be sitting to the right of Mr. Abraham.

Navjivan Express from Ahmedabad to Chennai leaves Ahmedabad at 6:30 am and travels at 50 km per hour towards Baroda situated 100 km away. At 7:00 am Howrah - Ahmedabad Express leaves Baroda towards Ahmedabad and travels at 40 km per hour. At 7:30 Mr. Shah, the traffic controller at Baroda, realises that both the trains are running on the same track. How much time does he have to avert a head-on collision between the two trains?
The distance between Ahmedabad and Baroda is 100 km. Navjivan Express starts at 6:30 am at 50 km/hr and Howrah Express starts at 7:00 am at 40 km/hr. The distance covered by Navjivan Express in 30 minutes (by 7 am) is 25 km. So, at 7 am, the distance between the two trains is 75 km and they are travelling towards each other at a relative speed of 50+40 = 90 km/hr. So, the time taken for them to meet is 75/90*60 = 50 minutes. Since Mr. Shah realizes the problem after thirty minutes, the time left to avoid collision is 50-30 = 20 minutes.

There is a circle of radius 1 cm. Each member of a sequence of regular polygons S1(n), n = 4,5,6,..., where n is the number of sides of the polygon, is circumscribing the circle; and each member of the sequence of regular polygons S2(n), n = 4,5,6,..., where n is the number of sides of the polygon, is inscribed in the circle. Let L1(n) and L2(n) denote the perimeters of the corresponding polygons of S1(n) and S2(n). Then $$\frac{L1(13)+2\pi }{L2(17)}$$ is
greater than $$\frac{\pi}{4}$$ and less than 1; greater than 1 and less than 2; greater than 2; less than $$\frac{\pi}{4}$$
The perimeter of the circle is equal to 2$$\pi$$. The perimeter of the polygon circumscribing the circle is always greater than the perimeter of the circle => L1(13) > 2$$\pi$$. The perimeter of the polygon inscribed in the circle is always less than the perimeter of the circle => L2(17) < 2$$\pi$$. => $$\frac{L1(13)+2\pi }{L2(17)}$$ > 2

There is a square field with each side 500 metres long. It has a compound wall along its perimeter. At one of its corners, a triangular area of the field is to be cordoned off by erecting a straight line fence. The compound wall and the fence will form its borders. If the length of the fence is 100 metres, what is the maximum area in square metres that can be cordoned off?
Let EF be the fence. As the field is in the shape of a square, the straight-line fence that is put up will cordon off a right-angled triangle. The area of a right-angled triangle is maximum when the sides that contain the right angle are equal. Let the side be a.
$$a^2 + a^2 = 100^2 \Rightarrow a = 50\sqrt{2}$$
Area = $$\frac{1}{2}\times 50\sqrt{2}\times 50\sqrt{2}$$ = 2500 sq m.

DIRECTIONS for the following two questions: These questions are based on the situation given below:
Ten coins are distributed among four people P, Q, R, S such that one of them gets one coin, another gets two coins, the third gets three coins and the fourth gets four coins. It is known that Q gets more coins than P, and S gets fewer coins than R.
If the number of coins distributed to Q is twice the number distributed to P, then which one of the following is necessarily true?
R gets an even number of coins. R gets an odd number of coins. S gets an even number of coins. S gets an odd number of coins.
From the given information, Q > P and R > S. Since Q gets twice as many coins as P, either Q = 2 and P = 1, or Q = 4 and P = 2. In the first case R = 4 and S = 3, and in the second case R = 3 and S = 1. In both instances S gets an odd number of coins.
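A quick way to double-check the case analysis above is to enumerate the possible shares directly. The sketch below is only an illustration added to this write-up (it is not part of the original solution, and the variable names are invented); it brute-forces the assignments of 1, 2, 3 and 4 coins to P, Q, R and S under the stated constraints.

```python
from itertools import permutations

# Brute-force the coin problem: P, Q, R, S receive 1, 2, 3, 4 coins in some order,
# subject to Q > P, S < R and Q = 2P; then check the parity of S's share.
valid = []
for p, q, r, s in permutations([1, 2, 3, 4]):
    if q > p and s < r and q == 2 * p:
        valid.append((p, q, r, s))

print(valid)                                    # [(1, 2, 4, 3), (2, 4, 3, 1)]
print(all(s % 2 == 1 for _, _, _, s in valid))  # True: S always gets an odd number
```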
CommonCrawl
big list – Open mathematical questions for which we really, really have no idea what the answer is
By moting1a. Mathematical. 0 Comments.

There is no shortage of open problems in mathematics. While a formal proof for any of them remains elusive, with the "yes/no" questions among them mathematicians are typically not working in both directions but rather have a pretty clear idea of what the answer should be. Famous conjectures such as Riemann and Collatz are supported by some very convincing heuristics, leading mathematicians to believe in their validity so strongly that they write papers based on the assumption that they are true. For other wide open problems such as $P$ vs. $NP$, one side ($P=NP$ in this case) is usually considered so unlikely to be true that almost nobody seriously works on it. Of course, whenever a "conjecture" is attached to an open question that already implies that one answer is preferred over the other – people don't conjecture $A$ and $\neg A$ simultaneously.
Are there any open mathematical questions with a yes/no answer for which we have no good reason to assume one or the other, for which we really have absolutely no idea what the true answer might be?

Maybe: are there infinitely many Fermat primes? From Wikipedia: "The following heuristic argument suggests there are only finitely many Fermat primes…" @DavidH: My money is on the "not" side.
What about the values of sufficiently large Ramsey numbers? See, e.g., the Erdős quote about how we're better off trying to destroy the omnicidal aliens who want to know $R(6,6)$ than trying to compute it for them…
This is outside of any expertise I have, so I'll put it as a comment rather than an answer. It is not known whether the Burnside group $B(2,5)$ is finite. Some groups $B(m,n)$ are, some aren't. I don't know whether the experts have a consensus on $B(2,5)$. For some details, see www-history.mcs.st-and.ac.uk/HistTopics/Burnside_problem.html

32 people think this answer is useful
I believe whether or not the Thompson group $F$ is amenable is such a question.
It's tempting to believe it's true, because it'd be such a beautiful theorem, but there's not much to support that – it's really unclear why the sum of the reciprocals diverging would have anything to do with arithmetic progressions. Still, there's no obvious examples of where it fails, so it's hard to make an argument against it either. Hilbert's 10th problem over over $\mathbb{Q}$/Mazur's conjecture. These are two open problems that point in opposite directions, and I think experts really aren't sure which way to guess. Hilbert's 10th problem over $\mathbb{Q}$ Is there an algorithm which, given a collection of polynomial equations with rational coefficients, do they have a rational solution? The problem is open. Here are heuristics each way. For "no". Individual diophantine equations are really hard. Think of how many mathematicians worked to prove $x^n+y^n=1$ has no solutions other than $(0,1)$ and $(1,0)$ for various values of $n$. Is it really plausible that all of their work could be reduced to running an algorithm? There is no such algorithm over $\mathbb{Z}$. (Matiyasevich-Robinson-Davis-Putnam) For "yes": There are powerful theorems and conjectures about diophantine equations having finitely many solutions. For example, Mordell's Conjecture (now Falting's Theorem) immediately tells us that there are finitely many rational points on $x^n+y^n=1$ for any given $n$. If the Bombieri-Lang conjecture were proved, which doesn't seem impossible, we'd have much more powerful tools. And while finite is not the same as zero, we have developed a lot of tools to find those finitely many solutions in many cases. See Bjorn Poonen's course notes for a survey. But here is the really frustrating thing. Suppose you believe that the answer is "no". Then you probably want to prove that you can encode the halting problem as a question about diophantine equations (this was how MDRP was proved), or else encode solving diophantine equations over $\mathbb{Z}$ into solving diophantine equations over $\mathbb{Q}$. In order to do this, you'd presumably write down some diophantine equation whose solutions looked like the states of a universal Turing machine, or looked like $\mathbb{Z}$. In any case, it would probably have infinitely many solutions, spread out discretely. And thus runs you into Mazur's conjecture Given any collection of polynomial equations over $\mathbb{Q}$ in $n$-variables, let $X(\mathbb{Q})$ be the set of their solutions and let $\overline{X(\mathbb{Q})}$ be the topological closure of $X(\mathbb{Q})$ in $\mathbb{R}^n$. Then $\overline{X(\mathbb{Q})}$ has finitely many connected components. So the general difficulty of diophantine equations leads one to imagine that the problem is unsolvable, but Mazur's conjecture blocks the most plausible route to proving it is unsolvable. Of course, one can imagine that diophantine equations over $\mathbb{Q}$ are unsolvable, and yet they can't encode a Turing machine. I think it is unclear whether most unsolvable problems are unsolvable because they are equivalent to the Halting problem, or whether that is (essentially) the only kind of problem we know how to prove is unsolvable. In the theory of dynamical systems, problems involve limit cycles in general are always very difficult. The second part of Hilbert's sixteenth problem is my personal "favorite". The upper bound for the number of limit cycles of planar polynomial vector fields of degree $n$ remains unsolved for any $n>1$. 
For example, can quadratic plane vector fields ($n=2$) have more than four limit cycles? It may be extremely tricky to find a quadratic system with five limit cycles, but we really have absolutely no idea. In 1950s, mathematicians claimed quadratics systems have maximal three limit cycles and had multiple other mathematicians conformed, but it was shown wrong when a quadratic system with four limit cycles was found. For details, you can check this article. The smooth Poincare conjecture in dimension 4 has already been mentioned, so I'll mention the smooth Schönflies Problem in that dimension. The question is whether there is a diffeomorphism of $S^4$ taking any smoothly embedded copy of $S^3$ in $S^4$ to the standard equatorial $S^3\subset S^4$. This is true in all other dimensions, but $4$ is such an unusual dimension that it's difficult to speculate what the answer is in this case. From a number theoretic perspective, there are a few famous problems related to ranks of elliptic curves, which a lot of modern research in the area is geared towards solving. For example, Manjul Bhargava recently received the Fields medal partly for his work on bounding average ranks of elliptic curves (and proving that the Birch and Swinnerton Dyer conjecture is true for ever-increasing percentages of elliptic curves). To describe some of the results: an elliptic curve over $\mathbb{Q}$ is a rational smooth genus 1 projective curve with a rational point, or in less scary terms, the set of solutions to an equation that looks like $$E(\mathbb{Q}) = \{(x,y) \in \mathbb{Q}^2: y^2 = x^3 + ax + b\}$$ where $a, b \in \mathbb{Q}$. It's a fact that any such set forms a finitely generated abelian group, so by the structure theorem for such objects the group of rational points is $$E(\mathbb{Q}) \cong \mathbb{Z}^r + \Delta,$$ where $\Delta$ is some finite group. Now, we have complete descriptions of what this group $\Delta$ can be – a Theorem of Mazur limits it to a small finite list of finite groups of size less than 12. However the values of $r$ are much more mysterious. We define the rank of $E$ to be this $r = r(E)$. Now, we know quite a lot about $r$ – for example, in "100%" of cases the rank is $0$ or $1$ (where here "100%" is used in the probablistic sense, not to mean that every elliptic curve has rank $0$ or $1$!). There is also the Birch and Swinnerton Dyer Conjecture (BSD), which is one of the very open problems that you mention that nobody has any idea how to prove, but which most people believe. It relates the rank of the elliptic curve to the order of vanishing of its $L$-function at 1. Perhaps the strongest heuristic for it is that it's been proved in certain special cases, as well as Bhargava's work. So much of modern number theory research goes towards BSD, and it's one of the famous Millenium problems. However, what we don't have much intuition with is: Question: Are the ranks of elliptic curves over $\mathbb{Q}$ bounded? That is, is there some $R$ such that for any elliptic curve $E/\mathbb{Q}$, we have $r(E) \leq R$? As of last year, it was very open – there were loose heuristics both ways. The largest rank we've found so far is a curve with rank at least 28, due to Elkies, which has been the record-holder for a long time now. As I mentioned before, Bhargava has proved the average rank is bounded by at least 1.5, and this was enough to win a Fields medal. 
However, having said all that, I think there has been some excitement recently with some stronger heuristics that lean towards the rank being bounded. I don't know enough about these heuristics to comment any further, but there's more information here: http://quomodocumque.wordpress.com/2014/07/20/are-ranks-bounded/ I don't think anyone has a clear idea whether there exist classical solutions to the Navier-Stokes equation. http://www.claymath.org/millenium-problems/navier%E2%80%93stokes-equation Most attempts have focused on trying to prove it is true. However Leray gave a suggestion for looking for a counterexample. It was later shown that his proposed counterexample would never work: J. Nečas, M. Růžička, and V. Šverák, On Leray's self-similar solutions of the Navier-Stokes equations, Acta Mathematica, 1996, Volume 176, Issue 2, pp 283-294. However the fact that counterexamples have been proposed does suggest that it is reasonable to think the conjecture is false. An easy-to-understand open problem involves the first counterexample to Euler's Sum of Powers conjecture: Q: Does $x_1^5+x_2^5+x_3^5+x_4^5+x_5^5=0$ have infinitely many primitive non-zero integer solutions? (Primitive being the $x_i$ have no common factor.) Only three are known so far and nobody has given a good heuristic argument that the list is finite, or if there are infinitely many. There are interesting congruential constraints on the $x_i$. More generally, Q: For odd $k>3$, does $x_1^k+x_2^k+\dots+x_{k}^k = 0$ have infinitely many primitive non-zero integer solutions? The Answer 10 From what I can tell, neither the existence nor non-existence of the Moore Graph of degree 57 and diameter 2 is strongly attested. Most of the work to date on the subject revolves around the various properties such a graph (should it exist) must or must not possess, but none of these seem to give a strong indication to lean one way or the other. Also, the respondents to a poll on this blog post from 2009 seem to be split pretty evenly. Do all compact smooth manifolds of dimension $\geq 5$ admit Einstein metrics? (An Einstein metric is a Riemannian metric with constant Ricci curvature.) A list of fundamental open problems in differential geometry and geometric analysis can be found at the end of Yau's excellent survey, Review of Geometry and Analysis. It was written in 2000, so is very current. Aside: Some problems which don't fit the bill (in that there's evidence one way or another or in that I hear the same guesses), but are interesting anyway are: Does the 6-sphere admit an integrable almost complex structure? Hopf Conjecture: Does $\mathbb{S}^2 \times \mathbb{S}^2$ admit a metric with positive sectional curvature? Chern Conjecture: Does every compact affine manifold have vanishing Euler characteristic? Does there exist a finitely presented, infinite torsion group? A torsion group is a group where every element has finite order. Burnside's problem (1902) asked if there exists a finitely generated, infinite torsion group. Such a group of unbounded exponent was constructed by Golod and Shafarevich in 1964, while Novikov and Adian did it for bounded exponent in 1968. Ol'shanskii constructed finitely generated infinite groups, all of whose proper, non-trivial subgroups are cyclic of order a fixed prime $p$ ("Tarski monster" groups). However, all these examples are finitely generated but not finitely presentable. The question of whether or not there exists a finitely presented example is still wide open. 
Apparently, Rips gave a possible method of constructing such a group, but Ol'shanskii and Sapir turned his handle to no avail (reference). It is worth mentioning that Efim Zelmanov was awarded a fields medal for a related problem, called the restricted Burnside problem. 9 people think this answer is useful I think a proper answer to this question are examples of questions where numerical evidence is extremely difficult to obtain. So for example, we don't know anything interesting about the Collatz conjecture but at least we know that it's true for a huge number of cases. As an example of something we don't know at all, consider $S_n$ the symmetric group, and define $s_i$ to be the adjacent transposition $(i,i+1)$. Then for a permutation $\pi\in S_n$, defined a reduced decomposition of $\pi$ to be a minimal length product of transpositions that gives you $\pi$. For example if $\pi=4321$ then $w=s_1s_2s_1s_3s_2s_1$ is a reduced decomposition of $\pi$. It's easy to see the minimal length of the reduced decomposition is the number of inversions in $\pi$. On the other hand, the question of how many distinct reduced decompositions $R(\pi)$ there are of $\pi$ is a ridiculously complicated question. We know the answer for permutations that have particular (lack of) patterns such as vexellary permutations, in particular the reverse permutation $(n,n-1,\cdots,1)$, which has $f_n$ reduced decompositions, where $f_n$ is the number of staircase shaped Young tableaux of shape $(n-1,n-2,\cdots,1)$. For any other non-trivial permutation there is essentially nothing known. For $n=7$, the number of reduced decompositions for the reverse permutation exceeds the number of atoms in the known universe. A similar difficulty arises for pretty much any non-trivial permutation. One can't even obtain an order of magnitude estimate. The first proof that many people learn is that there are infinitely many primes. (If not the first, then it's often second to the fact that $\sqrt 2$ is irrational). A natural generalization of this was considered by Dirichlet, who showed that as long as the arithmetic progression $a, a+d, a+2d, a+3d, …$ doesn't have a trivial reason for not having many primes, then in fact it contains infinitely many primes. This is known as Dirichlet's Theorem on Primes in Arithmetic Progressions. Remarkably, if there are infinitely many primes, then it is also known that the sequence has asymptotically $1/\varphi(d)$ of all primes, where $\varphi(d)$ is the number of numbers up to $d$ that are relatively prime to $d$. In other words, every nontrivial arithmetic progression has the exact same percentage of primes, a sort of equidistribution theorem. The next natural generalization is to consider higher polynomials, such as quadratic polynomials. (For the moment, call a polynomial quadratic if it's of the form $ax^2 + bx + c$ with $a \neq 0$). Is the analogue true? Can we predict distribution? In fact, we have not found a single quadratic polynomial that takes infinitely many primes (nor any polynomial of degree > 1). We have not even been successful showing that $x^2 + 1$ takes infinitely many primes, nor do we have any idea how. Going a bit deeper, it is possible to conjecture densities using the circle method or its variants, even for higher degree polynomials. But we have no idea how to prove them. In short, is $x^2 + 1$ prime infinitely often? The existence of projective finite planes. All the known examples have order prime power. 
Quote: The existence of finite projective planes of other orders is an open question. The only general restriction known on the order is the Bruck-Ryser-Chowla theorem that if the order $N$ is congruent to 1 or 2 $\text{mod}$ 4, it must be the sum of two squares. This rules out $N = 6$. The next case $N = 10$ has been ruled out by massive computer calculations. Nothing more is known; in particular, the question of whether there exists a finite projective plane of order $N = 12$ is still open. Existence of rectangular cuboid with all edges, all faces' diagonals, and the main diagonal being integers. Feasibility of reformulating all of math in only well-defined ultrafinitistic terms From the point of view of physics there is something strange about the way math is used. By the Church–Turing–Deutsch principle all physical processes have (quantum) computable descriptions, but the way we do math invokes non-computable concepts such as the uncountable reals. What happens if we use math in practice is that the uncomputability of any concepts will always stay hidden in the intermediary parts, they will never arise in the final results. This suggests that you don't need to invoke uncomputable concepts in the first place, but so far there has not been a lot of progress made by the advocates of ultrafinitism. The problem of asymptotic behavior of the maximal cardinality of a cap sets in $\mathbb{Z}/3\mathbb{Z}^r$ as $r$ to infinity gives rise to the following yes/no question that is open. A cap set, here, is a set with no three points on an affine line. This is equivalent to the existence of $x,y,z$ such that $x+y+z = 0$, or the existence of a 3-term arithmetic progression (see a related answer). Is the maximal cardinality of a cap sets in $\mathbb{Z}/3\mathbb{Z}^r$ a $O((3- \delta)^{r })$ for some $\delta > 0$? See a blog post of Terry Tao from a couple of years ago where he expresses an opinion on the matter but also acknowledges the dissent of a good friend of his. There is some "natural" axiom that added to ZFC decides CH? Interesting discussion in Introduction to Set Theory, Third Edition, Revised and Expanded. The Higman Conjecture concerns the number of conjugacy classes of $UT_n(\mathbb{F}_q)$, the group of unipotent upper-triangular matrices with entries in a finite field with $q$ elements. The conjecture is that for a fixed $n$ the number of conjugacy classes of $UT_n(\mathbb{F}_q)$ is given by a polynomial in $q$. This has been proven up to $n=13$, but beyond that it's unknown. The difficulty might be related to the fact that $UT_n(\mathbb{F}_q)$ has wild representation type. I know of a number of failed attempts at proof, and it seems that most of the people thinking about this conjecture believe it to be true. At the same time, there is a collection of subgroups of $UT_n(\mathbb{F}_q)$, known as "pattern groups," for which an analogous conjecture is known to be false. Kaplansky's Zero-Divisor Conjecture Let $K$ be a field and $G$ a torsion free group, then is the group ring $KG$ a domain? All the research up until now have been affirmative. This problem has been dealt in the book "The algebraic structure of group rings" by D. Passman. It is one of the toughest and least approachable problem in the whole field of Algebra. Latest I know is that it has been proved for torsion free solvable groups. Still a long way to go. 
Another natural question after this (if the above is true) is: what happens if we replace $K$ by any domain $D$, say $\Bbb{Z}$?

What is the answer to the problem to which Graham's number is an upper bound? Quoting Wikipedia about the definition: Connect each pair of geometric vertices of an n-dimensional hypercube to obtain a complete graph on $2^n$ vertices. Colour each of the edges of this graph either red or blue. What is the smallest value of n for which every such colouring contains at least one single-coloured complete subgraph on four coplanar vertices?
Here are some details on why it is completely unknown: Graham and Rothschild proved that $6 \leq N \leq f(f(f(f(f(f(f(12)))))))$, where $f(x)=2 \uparrow^x 3$ and $\uparrow$ denotes up-arrow notation. Currently, the best bound known is: $13 \leq N < 2 \uparrow\uparrow\uparrow 6$. Mathematicians thought that the answer was $6$, until the lower bound of $11$ was proven. Now many have no idea where to expect $N$ to lie.

We really, really have no idea whether the Jacobian conjecture is true.

Tags: big-list, conjectures, open-problem, soft-question
CommonCrawl
Distribution and ecological risk assessment of trace metals in surface sediments from Akaki River catchment and Aba Samuel reservoir, Central Ethiopia
Alemnew Berhanu Kassegne1,2, Tarekegn Berhanu Esho3, Jonathan O. Okonkwo4 and Seyoum Leta Asfaw1
Environmental Systems Research 2018, 7:24. © The Author(s) 2018. Accepted: 16 November 2018

Due to fast urban expansion and increased industrial activities, large quantities of solid and liquid wastes contaminated by trace metals are released into the environment of Addis Ababa city, most often untreated. This study was conducted to investigate the spatial distribution, seasonal variations and ecological risk of selected trace metals (Cd, Cr, Cu, Fe, Mn, Pb, Ni and Zn) in the surface sediments from Akaki River catchment and Aba Samuel reservoir, Central Ethiopia. Twenty-two surface sediment samples were collected, digested using the Mehlich-3 procedure and analyzed quantitatively using an inductively coupled plasma optical emission spectrometer. The trace metals occurred in varying concentrations along the course of the sampling stations. The decreasing order of trace metal concentrations in the dry season was Mn > Fe > Pb > Cr > Zn > Ni > Cu > Cd, and in the rainy season it was Mn > Fe > Pb > Cr > Ni > Zn > Cu > Cd. Little Akaki River contained a higher load of trace metals than the other regions, which is due to the presence of most of the industrial establishments and commercial activities. Relatively lower levels of trace metals were recorded at Aba Samuel reservoir due to the lower residence time of the sediment (the reservoir was rehabilitated recently). Ecological risk assessment using USEPA sediment guidelines, the geo-accumulation index, the contamination factor and the pollution load index revealed widespread pollution by Cd and Pb. These were followed by Mn, Ni and Zn. The concentrations of Pb, Cd, Mn, Ni and Zn in sediments were relatively high and at levels that may have adverse biological effects on the surrounding biota. Therefore, regular monitoring of these pollutants in water, sediment and biota would be required.
Keywords: Greater Akaki River; Little Akaki River

Contamination of the aquatic environment by trace metals in excess of the natural loads has become a problem of increasing concern. Large quantities of trace metals are discharged into the environment due to anthropogenic activities such as urbanization, industrialization and extension of irrigation and other agricultural practices. The situation is particularly alarming in developing countries, where most rivers, lakes and reservoirs are receiving untreated wastes due to poor setup of environmental sustainability (Mwanamoki et al. 2014; Awoke et al. 2016). The most vulnerable river-reservoir systems are those crossing large cities and densely populated areas, as well as those near industrial establishments (Mwanamoki et al. 2014; Yousaf et al. 2016). Trace metals are among the conservative pollutants that are not subject to degradation processes and are permanent additions to aquatic ecosystems (Igwe and Abia 2006; El Nemr et al. 2016). As a result, higher levels of trace metals are found in soil, sediment and biota. Most of these trace metals are persistent, toxic and bioaccumulative, and they exert a risk for humans and ecosystems even when the exposure is low (Gao and Chen 2012; Diop et al. 2015; Tang et al. 2016).
Monitoring of trace metals in sediment is extremely important as it can serve as sources of information about the long term trends in geological and ecological conditions of the aquatic ecosystem and the corresponding catchment area (Mekonnen et al. 2012; Dhanakumar et al. 2015). The occurrence of high levels of these pollutants in sediments can be a good indicator of anthropogenic pollution, rather than natural enrichment of the sediment by geological weathering. Trace metals, originating from man-induced pollution and geological sources, have low solubility in water, and thus they get adsorbed on suspended particles and strongly accumulate in the sediments (Li et al. 2017). Therefore, sediments are reservoirs for trace metal contaminants and help to characterize the degree of environmental contamination and thus are suitable targets for pollution studies (Iqbal and Shah 2014; Liang et al. 2015). Addis Ababa, the capital city of Ethiopia and the headquarters of the African Union, with approximately 5 million population, is one of the fast expanding cities in the country. There are two major rivers draining the city from North to South. These are Greater Akaki River (GAR) and Little Akai River (LAR). Greater Akaki River and Little Akai River meet at Aba Samuel reservoir, 37 km South-West of Addis Ababa. Aba Samuel reservoir was built in 1939, for hydropower production. The reservoir was, however, abandoned for several years due huge pollution issues and siltation (Gizaw et al. 2004). It was rehabilitated recently and have come back to life in 2016. Since Addis Ababa is the country's commercial, manufacturing and cultural center, large quantities of solid, liquid and gaseous wastes are released into the environment of the city, primarily nearby water bodies most often untreated (Alemayehu 2001, 2006; Awoke et al. 2016; Aschale et al. 2017). The water bodies around Addis Ababa receive increasing amounts of unlicensed discharge of effluents from industrial and domestic wastes and the water quality is deteriorating (Akele et al. 2016). The primary sources of trace metals pollution in the river system include metal finishing industries, tannery operations, textile industries, domestic sources, agrochemicals and leachates from landfills and contaminated sites (Melaku et al. 2007). The two Akaki Rivers and Aba Samuel reservoir, which are the main focuses of this study, serve as dumping grounds and pollutant sinks from upstream Addis Ababa and surrounding catchment areas. Previous studies on trace metals in sediment are hardly representative and sufficient in the catchment area. There exists an information gap regarding the systematic study of occurrence, distribution, ecological risk and seasonal distribution of trace elements in sediment. The available literature has focused on trace metal levels in water, soil/sediment and vegetables on LAR and not including GAR, Aba Samuel reservoir and downstream areas (Itanna 2002; Arficho 2009; Prasse et al. 2012; Akele et al. 2016; Aschale et al. 2017; Woldetsadik et al. 2017). To the best of our knowledge, no such comprehensive work has been done on the level of trace metals in sediment from Akaki River catchment and Aba Samuel reservoir. Moreover, this study presented one of the earliest set of environmental monitoring data for the reservoir from the feeder Rivers following the restoration of the reservoir in 2016. 
Therefore, the objective of this work was to determine the occurrence, distribution, ecological risk and seasonal variation of trace metals (Cd, Cr, Cu, Fe, Mn, Ni, Pb and Zn) in surface sediments from Akaki River catchment and Aba Samuel reservoir. The Akaki catchment is located in central Ethiopia along the western margin of the main Ethiopian Rift Valley. The catchment is geographically bounded between 8°46′–9°14′N and 38°34′–39°04′E, covering an area of about 1500 km2 (Demlie and Wohnlich 2006). Addis Ababa, which lies within Akaki catchment, has a fast population growth, uncontrolled urbanization and industrialization, poor sanitation, uncontrolled waste disposal, which results in a serious deterioration of surface and ground water quality. As it is the country's commercial, manufacturing and cultural center, large quantities of solid, liquid and gaseous wastes are generated and released into the environment of the city, most often untreated (Alemayehu 2006). There are two major rivers draining into the city from North to South, namely Greater Akaki River (GAR) (locally known as Tiliku Akaki River) and Little Akai River (LAR) (locally known as Tinishu Akaki River). GAR and LAR meet at Aba Samuel reservoir, 37 km South-West of Addis Ababa. Aba Samuel reservoir was built in 1939. It was the first hydropower station in Ethiopia, but it was abandoned in 1970s, because of many years of lack of maintenance, siltation and pollution issues (Gizaw et al. 2004). It was rebuilt and revived in 2016. The local people in the Akaki River catchment and Aba Samuel reservoir use the water for irrigation, drinking water for cattle, washing clothes, waste disposal site and other domestic needs without information on the level of water quality parameters (Melaku et al. 2007). Therefore, this study has been conducted in some parts of GAR, LAR, Aba Samuel reservoir and downstream to the reservoir (Fig. 1). Map of the study area showing the sampling sites Sampling sites and sample collection Twenty-two (22) sediment samples were collected in August, 2016 and January, 2017 representing the rainy and dry seasons respectively. Composite samples were collected at the following sampling stations: GAR at Entoto Kidanemihiret Monastery (S1, control site 1), GAR at Tirunesh Beijing hospital (S2), GAR below Akaki town (S3), LAR above Geferesa reservoir (S4, control site 2), LAR at Lafto bridge (S5), LAR at Jugan Kebele, boundary between Addis Ababa and Oromia Special zone (S6), Aba Samuel reservoir below the confluence point of GAR and LAR (S7), Aba Samuel reservoir at the midpoint (S8), Aba Samuel reservoir above the Dam (S9), downstream about 50 m from the reservoir (S10) and downstream about 1000 m from the reservoir (S11). The distribution of the sampling points was chosen based on topography, the purpose of the study and anthropogenic interference. Approximately 500 g of the top few centimeters of the sediment were collected using a stainless steel Ekman bottom Grab sampler. Each sample was obtained by mixing four randomly collected sediment samples. Samples were placed in clean polyethylene bags, labeled, stored in cooler box and transported into the laboratory. In the laboratory, coarse particles, leaves or large material was removed. Subsequently, sediment samples were air dried at ambient temperature and powdered using ceramic coated grinder. 
The dried and powdered samples were then sub-sampled, passed through a stainless steel sieve (45 μm mesh size) and transferred to labeled double-cap polyethylene bottles until further treatment.

Sample digestion and instrumental analysis
For the determination of trace metals, 2 g sediment samples (< 45 μm) were digested using 20 ml of Mehlich 3 extractant [0.2 M CH3COOH, 0.25 M ammonium nitrate (NH4NO3), 0.015 M ammonium fluoride (NH4F), 0.013 M HNO3, and 0.001 M ethylenediaminetetraacetic acid (EDTA)] (Mehlich 1978). An inductively coupled plasma optical emission spectrometer, ICP-OES (Arcos FHS2, Germany), was used for the determination of trace metal concentrations in sediment samples. Argon gas (99.99%) was used as a plasma with a flow rate of 81 l/min. Calibration curves were prepared using 10, 20, 30, 40 and 50 mg/l of Fe and Mn; 0.04, 0.08, 0.12, 0.20, 0.40, 0.80, 1.20, 1.60, and 2.00 for Cu and Zn; 0.5, 1, 2, 3, 4, 5 for Cd, Ni and Pb; and 1, 2, 4, 6, 8, 10 for Cr. In all cases, standard purity was ≥99.8%. Quantification of the elements was recorded at 214.438, 267.716, 324.757, 262.567, 220.353, 257.611, 231.604 and 213.856 nm, which correspond to the most sensitive emission wavelengths of Cd, Cr, Cu, Fe, Pb, Mn, Ni, and Zn respectively. The calibration curves showed linearity (r > 0.995) of the detector response for the quantified elements. This indicates a good correlation between concentration and emission intensities of the detected elements and thus proper calibration of the instrument.

Quality control and quality assurance
All the glassware used was thoroughly washed with detergent, soaked in 10% HNO3 for 24 h and rinsed with de-ionized water. All reagents used were analytical grade. In order to validate and evaluate the accuracy of the method used, certified reference material (ISE-952) obtained from Wageningen University Environmental Sciences section, Netherlands, was employed. Blank analyses were carried out to check interference from the laboratory. Mean recovery rates of the 4 metals were: Zn, 100.52%; Fe, 106.69%; Mn, 118.52%; Cu, 74.14%. The limit of detection (LOD), based on three times the standard deviation (3σ) of the blank, and the limit of quantification (LOQ), based on ten times the standard deviation (10σ) of the blank, were calculated for each analyte ion for the ICP-OES. The results are summarized in Table 1. The LOD was found to be in the range of 0.07–1.06 mg/kg, whereas the LOQ ranged between 0.23 and 3.52 mg/kg. These ranges were found satisfactory for the determination of analyte ions in sediment samples.
Table 1 LOD and LOQ values of elements analyzed by ICP-OES

Assessment of sediment contamination
The excessive accumulation of trace metals in sediments poses a potential ecological risk to freshwater ecosystems (Olivares-Rieumont et al. 2005; Chen et al. 2007). Different pollution assessment methods were applied to evaluate the pollution degree and potential ecological risk posed by trace metals in sediment of the Akaki River catchment and Aba Samuel reservoir. To this end, USEPA sediment guidelines, the geo-accumulation index (Igeo), the contamination factor and the pollution load index were used (Wang et al. 2014).
Geo-accumulation index (Igeo): the geo-accumulation index (Igeo) is a geochemical criterion used to assess heavy metal accumulation in surface sediment studies (Muller 1981; Singh 2001; Aschale et al. 2017). It is expressed as
$$I_{geo} = \log_{2}\left[\frac{C_{n}}{1.5\,B_{n}}\right]$$
where $C_{n}$ is the measured total concentration of the element n in the sediment and $B_{n}$ is the average concentration (background value) of element n in shale. The constant 1.5 is introduced to include possible variations of the background values due to lithogenic effects in sediments (Loska et al. 2004). Thus, the background concentrations (mg/kg) of 0.3 for Cd, 90.0 for Cr, 45.0 for Cu, 46,700.0 for Fe, 20.0 for Pb, 850.0 for Mn, 68.0 for Ni and 95.0 for Zn are used in this study (Turekian and Wedepohl 1961). The background values were used to assess the degree of contamination and to understand the distribution of elements of anthropogenic origin in the study areas. According to Muller (1981), the corresponding relationships between Igeo and the pollution level are given as follows: unpolluted (Igeo ≤ 0), unpolluted to moderately polluted (0 < Igeo ≤ 1), moderately polluted (1 < Igeo ≤ 2), moderately to heavily polluted (2 < Igeo ≤ 3), heavily polluted (3 < Igeo ≤ 4), heavily to extremely polluted (4 < Igeo ≤ 5) and extremely polluted (Igeo > 5).

Contamination factor
The assessment of sediment contamination was also carried out using the contamination factor (CF). The CF is a single-element index and is represented by the following equation
$$CF = \frac{C_{o}}{C_{n}}$$
where $C_{o}$ is the mean content of the metal from at least five sampling sites and $C_{n}$ is the background value of the individual metal. The CF may indicate low contamination (CF < 1), moderate contamination (1 < CF < 3), considerable contamination (3 < CF < 6) and very high contamination (CF > 6) (Hakanson 1980).

Pollution load index (PLI)
The pollution load index (PLI) was examined to assess the overall pollution status of a sampling site. The index was determined by calculating the geometrical mean of the contamination factors of all the trace elements at the particular sampling site (Usero et al. 1997; Chakravarty and Patgiri 2009). The PLI is computed by the formula:
$$PLI = \left( CF_{1} \times CF_{2} \times CF_{3} \times \cdots \times CF_{n} \right)^{1/n}$$
where n is the number of metals investigated and CF is the contamination factor. A PLI value > 1 indicates pollution, whereas a PLI value < 1 indicates no pollution (Chakravarty and Patgiri 2009).

Analysis of variance (ANOVA) was applied to assess significant differences in trace element concentrations at the various sampling sites. Multivariate analysis of the element concentrations was performed through the cluster analysis technique. It was performed to classify elements from different sources on the basis of their concentration similarities using dendrograms, and to identify relatively homogeneous groups of variables with similar properties. Pearson's correlation coefficient was used to determine the association and possible sources of trace metals. Statistical analyses of the results were carried out using Origin Pro (version 9.4, 2017) and Microsoft Excel 2007.

Concentrations of trace metals in sediment samples
The average concentrations along with standard deviations of the 8 selected trace metals in sediment samples from the Akaki River catchment and Aba Samuel reservoir, Ethiopia, in the two seasons are presented in Table 2. Based on the elemental concentrations, the pattern in sediment was Mn > Fe > Pb > Cr > Zn > Ni > Cu > Cd in the dry season and Mn > Fe > Pb > Cr > Ni > Zn > Cu > Cd in the rainy season. In both seasons, a similar pattern was observed.
The concentration (mg/kg) ranges of trace metals in the dry season were 2.1–2.9 for Cd, 16.2–43.7 for Cr, 1.6–15.3 for Cu, 406.4–844.8 for Fe, 124.4–256.4 for Pb, 335.5–1319.2 for Mn, 15.6–36.2 for Ni and 4–110 for Zn. Similarly, in the rainy season the concentrations (mg/kg) were in the range of 2.5–3.1 for Cd, 18.3–29.4 for Cr, 2.1–6.2 for Cu, 415.2–1442 for Fe, 101.4–133.7 for Pb, 385.1–1833.4 for Mn, 14.6–24 for Ni and 4.8–38.2 for Zn. In both seasons, the minimum concentrations were observed for the known toxic elements (such as Cd and Cu), while the highest concentrations were observed for Mn and Fe. Iron and manganese pollution of the catchment area possibly arises from effluents from iron and steel manufacturing industries established within the catchment area of Akaki River (Melaku et al. 2007). The highest concentrations of Mn and Fe observed could also be related to geological sources in addition to anthropogenic inputs (Alemayehu 2006). The geology of the Addis Ababa area is characterized by basaltic volcanic rocks with minor amounts of Quaternary alluvial sediments (Demlie et al. 2008). The rocks underlying the city and its environs were altered by intensive hydrothermal activity, resulting in the characteristic reddish color of the residual soils (Gizaw 2002). Kaolin deposits found in many parts of the city are particularly good evidence of hydrothermal activity on lava flows. Alemayehu (2006) indicated that rock and soil outcrops of the Addis Ababa area are anomalously rich in trace metals derived from hydrothermal activity, which are related to geologic sources. From the hydrogeological point of view, the major rock types forming a reservoir of groundwater in the Addis Ababa area are considered to be the volcanic rocks consisting of basalts, trachytes, rhyolites, scoriae and trachy-basalts. Studies indicated that the main aquifers in the Addis Ababa area include shallow aquifers, deep aquifers and thermal aquifers (located at depths greater than 300 m) (Alemayehu 2006; Demlie et al. 2007).
Table 2 Mean concentrations of trace metals (mean ± SD, mg/kg dry weight) in sediment samples at the eleven sampling stations (GAR at Entoto Kidanemihiret Monastery; GAR at Tirunesh Beijing Hospital; GAR below Akaki town; LAR above Gefersa reservoir; LAR at Lafto Bridge; LAR at Jugan kebele; below the confluence point of GAR and LAR; Aba Samuel reservoir at the midpoint; Aba Samuel reservoir above the Dam; Aba Samuel reservoir below the Dam; Aba Samuel reservoir 1000 m downstream) in the dry and rainy seasons
Table 3 compares the results obtained from Akaki River Catchment and Aba Samuel reservoir and those from other freshwater ecosystems to understand the extent of trace metal pollution of the study area. A comparison between a study in LAR (Aschale et al. 2016) and this study indicates that samples from the latter showed higher concentrations of Cd and Pb and lower average concentrations of Cr, Cu, Fe, Mn and Zn. The Pb contamination in the sediments of this study was relatively higher than values from all the other studies. The concentrations of Cd, Cr, and Ni were within the ranges observed in the other polluted sediments. The levels of Cu and Zn were generally lower than values for other sediments. Overall, this comparison indicated that there was high accumulation of trace metals in the sediments of Akaki River and Aba Samuel reservoir and that it requires special care and management interventions such as proper waste collection, treatment and disposal.
Table 3 Average trace metal contents (mg/kg) in sediment from Akaki River catchment and Aba Samuel reservoir compared with aquatic environments from Ethiopia and other parts of the world: Aba Samuel reservoir, Ethiopiaa (this study); GAR, Ethiopiaa (this study); LAR, Ethiopiaa (this study); LAR, Ethiopia (Aschale et al. 2016); Lake Awassa, Ethiopia (Yohannes et al. 2013); Lake Ziway, Ethiopia (Mekonnen et al. 2015); Awash River Basin, Ethiopia (Dirbaba et al. 2018); Lake Victoria, Tanzania (Kishe and Machiwa 2003); Buriganga River, Bangladesh (Mohiuddin et al. 2015); Tembi River, Iran (Shanbehzadeh et al. 2014); Lijiang River, China (Xu et al. 2016); Ergene River, Turkey (Halli et al. 2014). aAverage of three sampling stations in the dry season was taken.

Spatial distribution and seasonal variations of trace metals in sediment
The spatial distribution of trace elements in the river and reservoir sediment depends on many factors, including the distance of the element sources to the reservoir, the chemical characteristics of the element and the hydrological conditions of the river and reservoir system (Zhang et al. 2013). The level of trace elements in surface sediments of Akaki River catchment and Aba Samuel reservoir is shown in Figs. 2 and 3. In order to better understand the distribution and seasonal variation of trace metals, the study area was divided into four regions: GAR, LAR, the reservoir and downstream of the reservoir. Overall, LAR is more contaminated than GAR (Melaku et al. 2007; Akele et al. 2016). Statistical analyses of the results (p < 0.05) indicated that there were no significant spatial variations of the trace metals among the sampling stations in both seasons except for Pb. The concentration of Pb varied significantly from the upstream to the downstream area in both seasons (p = 0.02). Dry season trace metal concentrations were slightly higher than rainy season concentrations for Pb, Cr, Zn, Ni, and Cu. This might be attributed to lower dilution in the dry season than in the rainy season. During the dry season, the highest concentrations of Cr, Pb, Mn, and Ni were observed at S6 and of Cu at S5 (Figs. 2 and 3). Both sampling sites lie along the Little Akaki River (LAR), which is in agreement with previous results (Melaku et al. 2007; Akele et al. 2016). The average levels of Mn and Fe were higher in the rainy than in the dry season, which might be attributed to anthropogenic and geologic inputs (Alemayehu 2006). It is presumed that pollutants from GAR and LAR finally end up at the Aba Samuel reservoir. However, the levels of most trace metals investigated were lower at the Aba Samuel reservoir (S7, S8 and S9) than in the upstream areas (Figs. 2 and 3). The relatively lower concentration of sediment-bound trace metals in the reservoir might have been due to less accumulation of metals in the sediment because the reservoir was rehabilitated in 2016. Furthermore, the natural processes that can attenuate the concentration of the chemicals/pollutants on their pathway (mixing, dilution, volatilization and biological degradation) might have contributed to the attenuation.
Fig. 2 Trace metals in sediment: a Cu and Cr; b Ni and Cd
Fig. 3 Trace metals in sediment: a Mn and Fe; b Zn and Pb

Correlation analysis
Pearson's correlation coefficients were computed to see if the elements were interrelated with each other in the sediment samples from the different sampling sites in both dry and rainy seasons. Examination of correlations also provides clues on the source(s) of pollution, distribution and similarity of behaviors of trace metals (Zhang et al.
Correlation analysis
Pearson's correlation coefficients were computed to assess whether the elements were interrelated with each other in the sediment samples from the different sampling sites in both the dry and rainy seasons. Examination of correlations also provides clues on the source(s) of pollution, distribution and similarity of behavior of trace metals (Zhang et al. 2013; Diop et al. 2015). Table 4 shows the correlation matrix of the determined elements. A significant positive correlation was observed for Pb with Cr (r = 0.85), Mn with Cr (r = 0.81), Mn with Fe (r = 0.73), Mn with Pb (r = 0.60), Ni with Cr (r = 0.97), Ni with Pb (r = 0.85) and Ni with Mn (r = 0.83) in the dry season. Similarly, a significant positive correlation was observed for Fe with Cd (r = 0.75), Fe with Cr (r = 0.67), Mn with Cd (r = 0.66), Mn with Cr (r = 0.66), Mn with Fe (r = 0.99), Ni with Cr (r = 0.81) and Ni with Fe (r = 0.61) in the rainy season. These significant positive correlations suggest that the elements might have a common origin. The concentration of Zn was not significantly correlated with any of the studied trace metals. Significant negative correlations were also found for Cd with Cr (r = −0.75), Cd with Pb (r = −0.72), Cd with Mn (r = −0.72) and Cd with Ni (r = −0.71) in the dry season, and for Pb with Fe (r = −0.78) and Mn with Pb (r = −0.82) in the rainy season.
Table 4: Correlation matrix among the different trace metals in sediment from the Akaki River catchment and Aba Samuel reservoir in the dry and rainy seasons; *correlation is significant at the p < 0.05 level (two-tailed). Only fragments of the matrix were recoverable from the extracted text, so the values are not reproduced here.
Cluster analysis
Hierarchical clustering using the group-average (average-linkage) method and Euclidean distances as the similarity measure was performed on the dataset. Altogether, 8 variables (Cd, Cr, Cu, Fe, Pb, Mn, Ni and Zn) from the 11 sampling sites in the dry and rainy seasons were subjected to the cluster analysis. The dendrogram derived from the cluster analysis is shown in Fig. 4. In both seasons, similar clusters of the elements were observed. When the dendrogram is cut at an imaginary linkage distance between 1000 and 2000, two major clusters remain: cluster 1 (Cd, Cu, Cr, Ni, Zn and Pb) and cluster 2 (Fe and Mn). From the dendrogram, there are two distinct source factors: one related to soil/geologic inputs (which may introduce Fe and Mn) and another related to a variety of activities in the catchment, including anthropogenic sources, which collectively contributed the remaining metals to the sediment. Elements belonging to the same cluster or group are likely to have originated from common sources (Faisal et al. 2014). The likely source factors for some of the trace metals are as follows. Chromium (Cr) contamination of the study area might have originated from one or more industries, including electroplating and tannery industries, paints and inks, wood preservatives, textiles and refractories. The highest concentration of Cr was observed at S6, where the majority of tannery industries are located on the bank of the Little Akaki River. The reason for the elevated concentration of Cr at S4 (the control site) is not clear. Ni pollution in the study area might arise from sources such as domestic wastes, municipal sewage, electroplating, coal and oil combustion, pigments and batteries (Aschale et al. 2017). Zn pollution in the study area might arise from expected sources such as textile works and metal/iron and steel works. In addition to geological sources, anthropogenic Pb pollution in the study area may arise from activities such as industrial discharge from smelters, paints and ceramics, vehicular emissions, runoff from contaminated land areas and sewage effluent.
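As a minimal illustration of the two multivariate analyses described above (a Pearson correlation matrix among the metals and group-average hierarchical clustering with Euclidean distances), the following sketch assumes the measured concentrations are available as an 11-stations-by-8-metals array; the array name and values are placeholders, not the study data:

```python
# Sketch only: 'conc' stands in for the 11 x 8 concentration matrix of one season.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

metals = ["Cd", "Cr", "Cu", "Fe", "Pb", "Mn", "Ni", "Zn"]
rng = np.random.default_rng(1)
conc = pd.DataFrame(rng.lognormal(3.0, 1.0, size=(11, 8)), columns=metals)  # placeholder

# Pearson correlation matrix among the metals (the kind of matrix shown in Table 4).
corr = conc.corr(method="pearson")
print(corr.round(2))

# Group-average (average-linkage) hierarchical clustering of the metals,
# using Euclidean distances between the metal concentration profiles.
dist = pdist(conc.T.values, metric="euclidean")
link = linkage(dist, method="average")
labels = fcluster(link, t=2, criterion="maxclust")  # cut into two major clusters
for metal, lab in zip(metals, labels):
    print(metal, "-> cluster", lab)
# scipy.cluster.hierarchy.dendrogram(link, labels=metals) would draw a Fig. 4-style dendrogram.
```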
Fig. 4 caption: Cluster analyses of the 8 trace elements. a Dry season; b rainy season.
Assessment of trace metal pollution
Assessment of sediment pollution using sediment contamination guidelines
After generating reliable data on the levels of trace metals in sediment, interpretive tools are required to relate the sediment chemistry information to risk. To this end, numerical sediment quality guidelines (SQGs) established on the basis of biological tests can be used (Macdonald et al. 1996). Most recent SQGs are derived from matching chemistry and toxicity data. The average concentrations of trace metals in surface sediments and the guideline values are presented in Table 5. Based on the guideline, the river system and reservoir were non-polluted with Cd and Cu, but non- to moderately polluted with Cr, Ni and Zn. All sampling sites were heavily polluted with Pb.
Table 5: Concentrations of trace metals in sediments and their comparison with sediment quality guidelines (SQGs), in mg/kg dry weight, listing for each metal the mean and range of this study and the SQG limits for non-polluted, moderately polluted and heavily polluted sediments; only fragments of the table were recoverable from the extracted text, so the values are not reproduced here.
The concentrations of Ni in the analyzed samples were within the same range as, or slightly higher than, the background values of the sediment quality guidelines. Only one site, S11 in the dry season, was moderately polluted with Zn. In both seasons, all the sampling stations were heavily polluted with Pb. The Federal Democratic Republic of Ethiopia (FDRE) has formulated three proclamations that are directly and/or indirectly related to the environment and pollution (Mekonnen et al. 2015). However, based on the results obtained in this study, the proclamations seem not to have been properly implemented. The river system and reservoir need immediate attention with respect to the trace metals present at higher concentrations. Unless control measures are put in place, the situation could worsen and affect biota in the Akaki River system, the Aba Samuel reservoir and the downstream Awash River, which is the most productive inland river in Ethiopia. The "effects range low" (ERL) and "effects range median" (ERM) values developed by the National Oceanic and Atmospheric Administration are other sediment toxicity guidelines for trace metals and other contaminants (Long et al. 1995; Macdonald et al. 1996). ERL and ERM values identify threshold concentrations that, if exceeded, are expected to have adverse ecological or biological effects (Mekonnen et al. 2015). Based on the ERL–ERM range, the level of Pb at S6 in LAR in the dry season could be toxic to bottom-dwelling aquatic organisms (Table 6), while the levels of Cr, Cu and Ni are below the ERL. Some of the results from the sampling sites lie between the ERL and ERM for Cd, Pb and Ni.
Table 6: Concentrations of trace metals in river and reservoir sediment and their comparison with the ERL–ERM ranges of the SQGs (mg/kg, dry weight), reporting for each metal the ERL–ERM range and the percentage of sampling sites below, between and above that range; only fragments of the table were recoverable, so the values are not reproduced here.
Geo-accumulation index, contamination factor and pollution load index
Geo-accumulation index
In this study, the calculated values of the geo-accumulation index (Igeo) are presented in Table 7. According to the Muller scale, the calculated Igeo values (Table 7) indicated that the sediments from the 11 sampling sites fall in class 0 and are thus uncontaminated with Cr, Cu, Fe, Ni and Zn. Mn concentrations represent unpolluted conditions at all stations except S6 (Igeo = 0.05) in the dry season and S2 (Igeo = 0.26) and S4 (Igeo = 0.52) in the rainy season. However, all the sediment samples were moderately to strongly contaminated with Cd. Similarly, the sediment samples were moderately to strongly contaminated with Pb.
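For reference, the geo-accumulation index discussed above is conventionally computed following Muller (1981, cited in the references) as Igeo = log2(Cn / (1.5 × Bn)), where Cn is the measured concentration and Bn a geochemical background value (e.g., average shale after Turekian and Wedepohl 1961, also in the references). A minimal sketch follows; the background values and the example concentration are illustrative placeholders, not the values actually used in this study:

```python
# Sketch only: Igeo = log2(Cn / (1.5 * Bn)) per Muller (1981); all inputs are placeholders.
import math

# Hypothetical background values Bn (mg/kg); the study's actual values are not shown here.
background = {"Cd": 0.3, "Cr": 90.0, "Cu": 45.0, "Fe": 47200.0,
              "Pb": 20.0, "Mn": 850.0, "Ni": 68.0, "Zn": 95.0}

def igeo(concentration_mg_kg, background_mg_kg):
    """Geo-accumulation index; class 0 (uncontaminated) corresponds to Igeo <= 0."""
    return math.log2(concentration_mg_kg / (1.5 * background_mg_kg))

def muller_class(value):
    """Map an Igeo value onto the 0-6 classes of the Muller scale."""
    for cls, upper in enumerate([0, 1, 2, 3, 4, 5]):  # upper bounds of classes 0..5
        if value <= upper:
            return cls
    return 6  # Igeo > 5

example = igeo(2.7, background["Cd"])  # e.g., a hypothetical Cd concentration of 2.7 mg/kg
print(f"Igeo = {example:.2f}, Muller class {muller_class(example)}")
```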
Table 7 caption: Geo-accumulation index (Igeo) values of trace elements in the sediment (values not reproduced here).
Contamination factor (CF) and pollution load index (PLI)
Pollution severity and its variation along the sites were determined using the pollution load index (PLI). This index is a quick tool for comparing the pollution status of different sampling locations. The contamination factor values (Table 8) for Cr, Cu, Fe and Ni were < 1 at all the sampling sites in both seasons, indicating low contamination. The CF of Zn represents low contamination at all the sampling sites except S11 (CF = 1.16) in the dry season. CF values for Cd in both the dry and rainy seasons were > 6 at all the sampling sites, indicating very high contamination. Cadmium is a highly toxic, non-essential element; therefore, even at low concentrations, Cd could be harmful to living organisms. The Cd in the study area may be attributed to the release of chemicals from sewage and industrial wastes from the nearby city of Addis Ababa. The CF value of Pb was > 6 at all the sampling sites except S1 (CF = 5.58), S2 (CF = 5.72) and S4 (CF = 5.07) in the rainy season, indicating very high contamination. The study area might be exposed to Pb pollution from various activities such as industrial discharge from smelters, paints and ceramics, vehicular emissions, runoff from contaminated land areas and sewage effluent. CF values for Mn at S2 (CF = 1.30) and S10 (CF = 1.25) in the dry season and at S2 (CF = 1.80) and S4 (CF = 2.16) in the rainy season suggest moderate contamination, while CF values < 1 at all other sites indicate low contamination. The relatively higher CF value of Mn in the rainy season is in agreement with the cluster analysis result, which may in turn suggest soil runoff/geologic inputs as the potential source of this trace metal. The values of PLI (Table 8) were found to be generally low (< 1) at all the studied stations. The highest values of PLI (0.59 at S5 and 0.58 at S11 in the dry season) imply an appreciable input of trace metals from anthropogenic sources.
Table 8 caption: Contamination factor (CF) and pollution load index (PLI) for trace elements in sediments (values not reproduced here).
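The two indices used above have simple closed forms: the contamination factor is the ratio of a measured concentration to a background concentration (cf. Hakanson 1980 in the references), and the pollution load index of a station is the geometric mean of the CFs of the n metals considered. A minimal sketch, reusing the hypothetical background values from the Igeo example above (station concentrations are likewise placeholders):

```python
# Sketch only: CF = C_metal / C_background; PLI = (CF1 * CF2 * ... * CFn) ** (1/n).
import math

# Hypothetical background values (mg/kg), as in the Igeo sketch above.
background = {"Cd": 0.3, "Cr": 90.0, "Cu": 45.0, "Fe": 47200.0,
              "Pb": 20.0, "Mn": 850.0, "Ni": 68.0, "Zn": 95.0}

def contamination_factor(conc_mg_kg, background_mg_kg):
    return conc_mg_kg / background_mg_kg

def pollution_load_index(cf_values):
    """Geometric mean of the contamination factors of one station."""
    return math.prod(cf_values) ** (1.0 / len(cf_values))

# Placeholder concentrations (mg/kg) for one hypothetical station.
station = {"Cd": 2.7, "Cr": 25.0, "Cu": 5.0, "Fe": 800.0,
           "Pb": 150.0, "Mn": 900.0, "Ni": 20.0, "Zn": 30.0}

cfs = {m: contamination_factor(c, background[m]) for m, c in station.items()}
pli = pollution_load_index(list(cfs.values()))
print({m: round(v, 2) for m, v in cfs.items()}, "PLI =", round(pli, 2))
```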
Conclusions
This study was aimed at generating up-to-date data on the spatial and seasonal variation and contamination levels of trace metals in surface sediments from the Akaki River catchment and Aba Samuel reservoir, Central Ethiopia. The decreasing order of trace metal concentrations in the dry season was Mn > Fe > Pb > Cr > Zn > Ni > Cu > Cd, and in the rainy season it was Mn > Fe > Pb > Cr > Ni > Zn > Cu > Cd. When comparing the sampling regions in the Akaki River catchment and Aba Samuel reservoir, the Little Akaki River carried a higher trace metal load than the other regions. From the results, it can be concluded that the catchment area has a high influx of trace metals as a result of uncontrolled urbanization, industrialization, poor sanitation and uncontrolled waste disposal from municipal, industrial and agricultural sources in the upstream city of Addis Ababa. However, relatively lower concentrations of sediment-bound trace metals were recorded in the reservoir, which might be due to less accumulation of metals in the sediment because the reservoir was rehabilitated in 2016. Ecological risk assessment using the USEPA guideline, Igeo, CF and PLI revealed widespread pollution by Cd and Pb, followed by Mn, Ni and Zn. Hence, the high levels of trace metals in the sediments probably have adverse effects on bottom-dwelling aquatic organisms as well as on the health of the people who depend on the water for various activities. Therefore, strict policy measures are required to decrease the degree of contamination, since some of the elements are known to be toxic to biota. Furthermore, regular monitoring of these pollutants in water, sediment and biota is recommended.
Abbreviations
ANOVA: analysis of variance; CF: contamination factor; ERL: effects range low; ERM: effects range median; GAR: Great Akaki River; ICP-OES: inductively coupled plasma optical emission spectrometer; Igeo: geo-accumulation index; LAR: Little Akaki River; LOD: limit of detection; LOQ: limit of quantification; PLI: pollution load index; SD: standard deviation; SQGs: sediment quality guidelines
All authors contributed at different stages of this study. ABK designed the study, collected and analyzed samples and interpreted the data; he also wrote the draft manuscript. SLA and TBE were involved in the design of the study, supervised its progress and provided comments on the manuscript. JOO supervised the work and provided comments on the manuscript. All authors read and approved the final manuscript.
The first author would like to thank Addis Ababa University for financial support.
The dataset and materials used for this manuscript are available and can be shared whenever necessary. The data were generated by the authors from field sample collection, processing and laboratory analysis.
Not applicable.
The authors gratefully acknowledge the Addis Ababa University Vice President Office for Research and Technology Transfer for financial support via a thematic research project.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Centre for Environmental Science, Addis Ababa University, P. O. Box 1176, Addis Ababa, Ethiopia
Department of Chemistry, Debre Berhan University, P. O. Box 445, Debre Berhan, Ethiopia
Central Research Laboratories, Addis Ababa Science and Technology University, P. O. Box 16417, Addis Ababa, Ethiopia
Department of Environmental, Water & Earth Sciences, Tshwane University of Technology, 175 Nelson Mandela Drive, Arcadia, Private Bag X680, Pretoria, South Africa
References
Akele M, Kelderman P, Koning C, Irvine K (2016) Trace metal distributions in the sediments of the Little Akaki River, Addis Ababa, Ethiopia. Environ Monit Assess 188:389
Alemayehu T (2001) The impact of uncontrolled waste disposal on surface water quality in Addis Ababa, Ethiopia. SINET 24:93–104
Alemayehu T (2006) Heavy metal concentration in the urban environment of Addis Ababa, Ethiopia. Soil Sediment Contam 15:591–602
Arficho DF (2009) Status, distribution, and phytoavailability of heavy metals and metalloids in soils irrigated with wastewater from Akaki River, Ethiopia: implications for environmental management of heavy metal/metalloid affected soils. Addis Ababa University, Addis Ababa
Aschale M, Sileshi Y, Kelly-Quinn M, Hailu D (2016) Evaluation of potentially toxic element pollution in the benthic sediments of the water bodies of the city of Addis Ababa, Ethiopia. J Environ Chem Eng 4:4173–4183
Aschale M, Sileshi Y, Kelly-Quinn M, Hailu D (2017) Pollution assessment of toxic and potentially toxic elements in agricultural soils of the city Addis Ababa, Ethiopia. Bull Environ Contam Toxicol 98:234–243
Awoke A, Beyene A, Kloos H, Goethals PL, Triest L (2016) River water pollution status and water policy scenario in Ethiopia: raising awareness for better implementation in developing countries. Environ Manag 58:694–706
Chakravarty M, Patgiri AD (2009) Metal pollution assessment in sediments of the Dikrong River, NE India. J Hum Ecol 27:63–67
Chen CW, Kao CM, Chen CF, Dong CD (2007) Distribution and accumulation of heavy metals in the sediments of Kaohsiung Harbor, Taiwan. Chemosphere 66:1431–1440
Demlie M, Wohnlich S (2006) Soil and groundwater pollution of an urban catchment by trace metals: case study of the Addis Ababa region, central Ethiopia. Environ Geol 51:421–431
Demlie M, Wohnlich S, Wisotzky F, Gizaw B (2007) Groundwater recharge, flow and hydrogeochemical evolution in a complex volcanic aquifer system, central Ethiopia. Hydrogeol J 15:1169–1181
Demlie M, Wohnlich S, Ayenew T (2008) Major ion hydrochemistry and environmental isotope signatures as a tool in assessing groundwater occurrence and its dynamics in a fractured volcanic aquifer system located within a heavily urbanized catchment, central Ethiopia. J Hydrol 353:175–188
Dhanakumar S, Solaraj G, Mohanraj R (2015) Heavy metal partitioning in sediments and bioaccumulation in commercial fish species of three major reservoirs of river Cauvery delta region, India. Ecotoxicol Environ Saf 113:145–151
Diop C, Dewaelé D, Cazier F, Diouf A, Ouddane B (2015) Assessment of trace metals contamination level, bioavailability and toxicity in sediments from Dakar coast and Saint Louis estuary in Senegal, West Africa. Chemosphere 138:980–987
Dirbaba NB, Xue Y, Wu H, Wang J (2018) Occurrences and ecotoxicological risk assessment of heavy metals in surface sediments from Awash River Basin, Ethiopia. Water 10:535
El Nemr A, El-Said GF, Ragab S, Khaled A, El-Sikaily A (2016) The distribution, contamination and risk assessment of heavy metals in sediment and shellfish from the Red Sea coast, Egypt. Chemosphere 165:369–380
Faisal B, Majumder RK, Uddin MJ, Abdul M (2014) Studies on heavy metals in industrial effluent, river and groundwater of Savar industrial area, Bangladesh by principal component analysis. Int J Geomatics Geosci 5:182–191
Gao X, Chen CTA (2012) Heavy metal pollution status in surface sediments of the coastal Bohai Bay. Water Res 46:1901–1911
Gizaw B (2002) Hydrochemical and environmental investigation of the Addis Ababa region, Ethiopia. Unpublished PhD thesis, Ludwig Maximilian University of Munich, Munich
Gizaw E, Legesse W, Haddis A, Deboch B, Birke W (2004) Assessment of factors contributing to eutrophication of Aba Samuel Water reservoir in Addis Ababa, Ethiopia. Ethiop J Health Sci 14:112–223
Hakanson L (1980) An ecological risk index for aquatic pollution control. A sedimentological approach. Water Res 14:975–1001
Halli M, Sari E, Kurt MA (2014) Assessment of arsenic and heavy metal pollution in surface sediments of the Ergene River, Turkey. Pol J Environ Stud 23:1581
Igwe J, Abia A (2006) A bioseparation process for removing heavy metals from waste water using biosorbents. Afr J Biotechnol 5:11
Iqbal J, Shah MH (2014) Occurrence, risk assessment, and source apportionment of heavy metals in surface sediments from Khanpur Lake, Pakistan. J Anal Sci Technol 5:28
Itanna F (2002) Metals in leafy vegetables grown in Addis Ababa and toxicological implications. Ethiop J Health Dev 16:295–302
Kishe M, Machiwa J (2003) Distribution of heavy metals in sediments of Mwanza Gulf of Lake Victoria, Tanzania. Environ Int 28:619–625
Li N, Tian Y, Zhang J, Zuo W, Zhan W, Zhang J (2017) Heavy metal contamination status and source apportionment in sediments of Songhua River Harbin region, Northeast China. Environ Sci Poll Res 24:3214–3225
Liang J, Liu J, Yuan X, Zeng G, Lai X, Li X, Wu H, Yuan Y, Li F (2015) Spatial and temporal variation of heavy metal risk and source in sediments of Dongting Lake wetland, mid-south China. J Environ Sci Health 50:100–108
Long ER, Macdonald DD, Smith SL, Calder FD (1995) Incidence of adverse biological effects within ranges of chemical concentrations in marine and estuarine sediments. Environ Manag 19:81–97
Loska K, Wiechuła D, Korus I (2004) Metal contamination of farming soils affected by industry. Environ Int 30:159–165
Macdonald DD, Carr RS, Calder FD, Long ER, Ingersoll CG (1996) Development and evaluation of sediment quality guidelines for Florida coastal waters. Ecotoxicology 5:253–278
Mehlich A (1978) New extractant for soil test evaluation of phosphorus, potassium, magnesium, calcium, sodium, manganese and zinc. Commun Soil Sci Plant Anal 9:477–492
Mekonnen KN, Ambushe AA, Chandravanshi BS, Abshiro MR, McCrindle RI, Panichev N (2012) Distribution of mercury in the sediments of some freshwater bodies in Ethiopia. Toxicol Environ Chem 94:1678–1687
Mekonnen KN, Ambushe AA, Chandravanshi BS, Redi-Abshiro M, McCrindle RI (2015) Occurrence, distribution, and ecological risk assessment of potentially toxic elements in surface sediments of Lake Awassa and Lake Ziway, Ethiopia. J Environ Sci Health 50:90–99
Melaku S, Wondimu T, Dams R, Moens L (2007) Pollution status of Tinishu Akaki River and its tributaries (Ethiopia) evaluated using physico-chemical parameters, major ions, and nutrients. Bull Chem Soc Ethiop 21:13–22
Mohiuddin K, Alam M, Ahmed I, Chowdhury A (2015) Heavy metal pollution load in sediment samples of the Buriganga river in Bangladesh. J Bangladesh Agril Univ 13:229–238
Muller G (1981) The heavy metal pollution of the sediments of Neckars and its tributary: a stocktaking. Chem Ztg 105:157–164
Mwanamoki PM, Devarajan N, Thevenon F, Birane N, de Alencastro LF, Grandjean D, Mpiana PT, Prabakar K, Mubedi JI, Kabele CG (2014) Trace metals and persistent organic pollutants in sediments from river-reservoir systems in Democratic Republic of Congo (DRC): spatial distribution and potential ecotoxicological effects. Chemosphere 111:485–492
Olivares-Rieumont S, de la Rosa D, Lima L, Graham DW, Katia D, Borroto J, Martínez F, Sánchez J (2005) Assessment of heavy metal levels in Almendares River sediments—Havana City, Cuba. Water Res 39:3945–3953
Prasse C, Zech W, Itanna F, Glaser B (2012) Contamination and source assessment of metals, polychlorinated biphenyls, and polycyclic aromatic hydrocarbons in urban soils from Addis Ababa, Ethiopia. Toxicol Environ Chem 94:1954–1979
Shanbehzadeh S, Vahid Dastjerdi M, Hassanzadeh A, Kiyanizadeh T (2014) Heavy metals in water and sediment: a case study of Tembi River. J Environ Public Health 2014:858720
Singh M (2001) Heavy metal pollution in freshly deposited sediments of the Yamuna River (the Ganges River tributary): a case study from Delhi and Agra urban centres, India. Environ Geol 40:664–671
Tang W, Shan B, Zhang H, Zhu X, Li S (2016) Heavy metal speciation, risk, and bioavailability in the sediments of rivers with different pollution sources and intensity. Environ Sci Poll Res 23:23630–23637
Turekian KK, Wedepohl KH (1961) Distribution of the elements in some major units of the earth's crust. Geol Soc Am Bull 72:175–192
Usero J, Gonzalez-Regalado E, Gracia I (1997) Trace metals in the bivalve molluscs Ruditapes decussatus and Ruditapes philippinarum from the Atlantic Coast of Southern Spain. Environ Int 23:291–298
Wang L, Wang Y, Zhang W, Xu C, An Z (2014) Multivariate statistical techniques for evaluating and identifying the environmental significance of heavy metal contamination in sediments of the Yangtze River, China. Environ Earth Sci 71:1183–1193
Woldetsadik D, Drechsel P, Keraita B, Itanna F, Gebrekidan H (2017) Heavy metal accumulation and health risk assessment in wastewater-irrigated urban vegetable farming sites of Addis Ababa, Ethiopia. Int J Food Contam 4:9
Xu D, Wang Y, Zhang R, Guo J, Zhang W, Yu K (2016) Distribution, speciation, environmental risk, and source identification of heavy metals in surface sediments from the karst aquatic environment of the Lijiang River, Southwest China. Environ Sci Poll Res 23:9122–9133
Yohannes YB, Ikenaka Y, Saengtienchai A, Watanabe KP, Nakayama SM, Ishizuka M (2013) Occurrence, distribution, and ecological risk assessment of DDTs and heavy metals in surface sediments from Lake Awassa—Ethiopian Rift Valley Lake. Environ Sci Poll Res 20:8663–8671
Yousaf B, Liu G, Wang R, Imtiaz M, Zia-Ur-rehman M, Munir MA, Niu Z (2016) Bioavailability evaluation, uptake of heavy metals and potential health risks via dietary exposure in urban-industrial areas. Environ Sci Poll Res 23:22443–22453
Zhang D, Zhang X, Tian L, Ye F, Huang X, Zeng Y, Fan M (2013) Seasonal and spatial dynamics of trace elements in water and sediment from Pearl River Estuary, South China. Environ Earth Sci 68:1053–1063
The Hong Kong Legal System
Stefan H. C. Lo, Kevin Kwok-yin Cheng, Wing Hong Chui
Expected online publication date: January 2020
This book provides an introduction to the legal system in Hong Kong. Understanding Hong Kong's legal system today requires both an understanding of the British origins of much of the laws and legal institutions as well as the uniquely Hong Kong developments in the application of the Basic Law under 'one country, two systems'. These features of the Hong Kong legal system are explored in this book, which takes into account developments in the two decades or so of the new legal framework in Hong Kong since the 1997 handover. In providing both an exposition of the legal institutions in Hong Kong and legal method under Hong Kong's legal system (including practical guidance and examples on case law, statutory interpretation and legal research), this book is ideal for first-year law students, students of other disciplines who study law and readers who have an interest in Hong Kong's unique legal system.
Equivalency of the diagnostic accuracy of the PHQ-8 and PHQ-9: a systematic review and individual participant data meta-analysis
Yin Wu, Brooke Levis, Kira E. Riehm, Nazanin Saadat, Alexander W. Levis, Marleine Azar, Danielle B. Rice, Jill Boruff, Pim Cuijpers, Simon Gilbody, John P.A. Ioannidis, Lorie A. Kloda, Dean McMillan, Scott B. Patten, Ian Shrier, Roy C. Ziegelstein, Dickens H. Akena, Bruce Arroll, Liat Ayalon, Hamid R. Baradaran, Murray Baron, Charles H. Bombardier, Peter Butterworth, Gregory Carter, Marcos H. Chagas, Juliana C. N. Chan, Rushina Cholera, Yeates Conwell, Janneke M. de Man-van Ginkel, Jesse R. Fann, Felix H. Fischer, Daniel Fung, Bizu Gelaye, Felicity Goodyear-Smith, Catherine G. Greeno, Brian J. Hall, Patricia A. Harrison, Martin Härter, Ulrich Hegerl, Leanne Hides, Stevan E. Hobfoll, Marie Hudson, Thomas Hyphantis, MD Inagaki, Nathalie Jetté, Mohammad E. Khamseh, Kim M. Kiely, Yunxin Kwan, Femke Lamers, Shen-Ing Liu, Manote Lotrakul, Sonia R. Loureiro, Bernd Löwe, Anthony McGuire, Sherina Mohd-Sidik, Tiago N. Munhoz, Kumiko Muramatsu, Flávia L. Osório, Vikram Patel, Brian W. Pence, Philippe Persoons, Angelo Picardi, Katrin Reuter, Alasdair G. Rooney, Iná S. Santos, Juwita Shaaban, Abbey Sidebottom, Adam Simning, MD Stafford, Sharon Sung, Pei Lin Lynnette Tan, Alyna Turner, Henk C. van Weert, Jennifer White, Mary A. Whooley, Kirsty Winkley, Mitsuhiko Yamada, Andrea Benedetti, Brett D. Thombs
Journal: Psychological Medicine, First View. Published online by Cambridge University Press: 12 July 2019, pp. 1-13
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) queries about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality.
The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed equivalency of total score correlations and the diagnostic accuracy to detect major depression of the PHQ-8 and PHQ-9. We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy. 16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01). PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar. Quantum electrodynamics experiments with colliding petawatt laser pulses HPL_EP HEDP and High Power Laser 2018 I. C. E. Turcu, B. Shen, D. Neely, G. Sarri, K. A. Tanaka, P. McKenna, S. P. D. Mangles, T.-P. Yu, W. Luo, X.-L. Zhu, Y. Yin Journal: High Power Laser Science and Engineering / Volume 7 / 2019 Published online by Cambridge University Press: 14 February 2019, e10 A new generation of high power laser facilities will provide laser pulses with extremely high powers of 10 petawatt (PW) and even 100 PW, capable of reaching intensities of $10^{23}~\text{W}/\text{cm}^{2}$ in the laser focus. These ultra-high intensities are nevertheless lower than the Schwinger intensity $I_{S}=2.3\times 10^{29}~\text{W}/\text{cm}^{2}$ at which the theory of quantum electrodynamics (QED) predicts that a large part of the energy of the laser photons will be transformed to hard Gamma-ray photons and even to matter, via electron–positron pair production. To enable the investigation of this physics at the intensities achievable with the next generation of high power laser facilities, an approach involving the interaction of two colliding PW laser pulses is being adopted. Theoretical simulations predict strong QED effects with colliding laser pulses of ${\geqslant}10~\text{PW}$ focused to intensities ${\geqslant}10^{22}~\text{W}/\text{cm}^{2}$ . Are reusable blood collection tube holders the culprit for nosocomial hepatitis C virus transmission? Dominic N. C. Tsang, Margaret Ip, Paul K. S. Chan, Patricia Tai Yin Ching, Hung Suet Lam, Wing Hong Seto Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 2 / February 2019 ID-Full Field Microscopy of Elastic and Inelastic Scattering with Transmission off-axis Fresnel Zone Plates F. Doring, F. Marschall, Z. Yin, B. Rosner, M. Beye, P. Miedema, K. Kubicek, L. Glaser, D. Raiser, J. Soltau, V.A. Guzenko, J. Viefhaus, J. Buck, M. Risch, S. Techert, C. David Adipose tissue uncoupling protein 1 levels and function are increased in a mouse model of developmental obesity induced by maternal exposure to high-fat diet E. Bytautiene Prewit, C. Porter, M. La Rosa, N. Bhattarai, H. Yin, P. Gamble, T. Kechichian, L. S. 
Sidossis Journal: Journal of Developmental Origins of Health and Disease / Volume 9 / Issue 4 / August 2018 With brown adipose tissue (BAT) becoming a possible therapeutic target to counteract obesity, the prenatal environment could represent a critical window to modify BAT function and browning of white AT. We investigated if levels of uncoupling protein 1 (UCP1) and UCP1-mediated thermogenesis are altered in offspring exposed to prenatal obesity. Female CD-1 mice were fed a high-fat (HF) or standard-fat (SF) diet for 3 months before breeding. After weaning, all pups were placed on SF. UCP1 mRNA and protein levels were quantified using quantitative real-time PCR and Western blot analysis, respectively, in brown (BAT), subcutaneous (SAT) and visceral (VAT) adipose tissues at 6 months of age. Total and UCP1-dependent mitochondrial respiration were determined by high-resolution respirometry. A Student's t-test and Mann–Whitney test were used (significance: P<0.05). UCP1 mRNA levels were not different between the HF and SF offspring. UCP1 protein levels, total mitochondrial respiration and UCP1-dependent respiration were significantly higher in BAT from HF males (P=0.02, P=0.04, P=0.005, respectively) and females (P=0.01, P=0.04, P=0.02, respectively). In SAT, the UCP1 protein was significantly lower in HF females (P=0.03), and the UCP1-dependent thermogenesis was significantly lower from HF males (P=0.04). In VAT, UCP1 protein levels and UCP1-dependent respiration were significantly lower only in HF females (P=0.03, P=0.04, respectively). There were no differences in total respiration in SAT and VAT. Prenatal exposure to maternal obesity leads to significant increases in UCP1 levels and function in BAT in offspring with little impact on UCP1 levels and function in SAT and VAT. Seed treatment with glycine betaine enhances tolerance of cotton to chilling stress C. Cheng, L. M. Pei, T. T. Yin, K. W. Zhang Journal: The Journal of Agricultural Science / Volume 156 / Issue 3 / April 2018 Chilling injury is an important natural stress that can threaten cotton production, especially at the sowing and seedling stages in early spring. It is therefore important for cotton production to improve chilling tolerance at these stages. The current work examines the potential for glycine betaine (GB) treatment of seeds to increase the chilling tolerance of cotton at the seedling stage. Germination under cold stress was increased significantly by GB treatment. Under low temperature, the leaves of seedlings from treated seeds exhibited a higher net photosynthetic rate (PN), higher antioxidant enzyme activity including superoxide dismutase, ascorbate peroxidase and catalase, lower hydrogen peroxide (H2O2) content and less damage to the cell membrane. Enzyme activity was correlated negatively with H2O2 content and degree of damage to the cell membrane but correlated positively with GB content. The experimental results suggested that although GB was only used to treat cotton seed, the beneficial effect caused by the preliminary treatment of GB could play a significant role during germination that persisted to at least the four-leaf seedling stage. Therefore, it is crucial that this method is employed in agricultural production to improve chilling resistance in the seedling stage by soaking the seeds in GB. 
Electrochemically Induced Phase Evolution of Lithium Vanadium Oxide: Complementary Insights Gained via Ex-Situ, In-Situ, and Operando Experiments and Density Functional Theory Jiefu Yin, Wenzao Li, Mikaela Dunkin, Esther S. Takeuchi, Kenneth J. Takeuchi, Amy C. Marschilok Journal: MRS Advances / Volume 3 / Issue 22 / 2018 Understanding the structural evolution of electrode material during electrochemical activity is important to elucidate the mechanism of (de)lithiation, and improve the electrochemical function based on the material properties. In this study, lithium vanadium oxide (LVO, LiV3O8) was investigated using ex-situ, in-situ, and operando experiments. Via a combination of in-situ X-ray diffraction (XRD) and density functional theory results, a reversible structural evolution during lithiation was revealed: from Li poor α phase (LiV3O8) to Li rich α phase (Li2.5V3O8) and finally β phase (Li4V3O8). In-situ and operando energy dispersive X-ray diffraction (EDXRD) provided tomographic information to visualize the spatial location of the phase evolution within the LVO electrode while inside a sealed lithium ion battery. A Collector for Interplanetary Dust Particles and Space Debris Xu Yin-Lin, Zhang He-Qi, Zhang Nan, Yu Min, Xie Ping, C. Y. Fan Journal: International Astronomical Union Colloquium / Volume 150 / 1996 We are constructing a collector for capturing Interplanetary Dust Particles (IDPs) and space debris on space shuttle. The unit consists of three pieces of thin polyester film, equally spaced 7 cm apart, and an aerogel disk of 3 cm thickness. For each particle captured in the aerogel disk, we determine its direction of impact and its speed, from which we can trace its trajectory. The purpose of the experiment is to study the compositions of IDPs from different origins. Very high expander processing of maize on animal performance, digestibility and product quality of finishing pigs and broilers R. Puntigam, K. Schedle, C. Schwarz, E. Wanzenböck, J. Eipper, E.-M. Lechner, L. Yin, M. Gierus Journal: animal / Volume 12 / Issue 7 / July 2018 The present study investigated the effect of hydrothermic maize processing and supplementation of amino acids (AA) in two experiments. In total, 60 barrows and 384 broilers were fed four diets including either unprocessed (T1), or hydrothermically processed maize, that is short- (T2), or long-term conditioned (LC) (T3), and subsequently expanded maize of the same batch. Assuming a higher metabolizable energy (ME) content after processing, the fourth diet (T4) contains maize processed as treatment T3, but AA were supplemented to maintain the ideal protein value. Performance, digestibility and product quality in both species were assessed. Results show that in pigs receiving T4 the average daily feed intake was lower compared with the other treatments, whereas no difference was observed in broilers. The T3 improved the feed conversion rate compared with T1 (P<0.10) for both species. In contrast, average daily gain (ADG) (1277 g/day for T2 and 1267 g/day for T3 v. 971 g/day for T1) was only altered in pigs. The hydrothermic maize processing increased the apparent total tract digestibility (ATTD) of dry matter, starch and ether extract after acid hydrolysis. This may be a consequence of higher ATTD of gross energy in the finishing phase for both animal species, suggesting a higher ME content in diets with processed maize. The higher ME content of diets with processed maize is supported also by measurements of product quality. 
Supplementation of AA in T4 enhanced the loin depth in pigs as well as the amount of breast meat in broilers. Further effects of processing maize on meat quality were the reduced yellowness and antioxidative capacity (P<0.10) for broilers, likely due to the heat damage of xanthophylls and tocopherols. Processing also increased springiness and chewiness (P<0.10) of the broilers breast meat, whereas the loin meat of pigs showed a decreased lightness and yellowness (P<0.10) in meat when hydrothermic processed maize was used (for T2, T3 and T4). LC processed maize (T3) showed the lowest springiness in pork, however the supplementation of AA in T4 did not show differences between the treatments. Shown results demonstrated positive effects of hydrothermic processing of maize on animal performance and digestibility in both species. However, effects on carcass characteristics and product quality differed. The negative effects on product quality could be partly compensated with the AA supplementation, whereas a change in meat colour and reduced antioxidative capacity was observed in all groups fed hydrothermic maize processing. Dietary supplementation with a nucleotide-rich yeast extract modulates gut immune response and microflora in weaned pigs in response to a sanitary challenge S. M. Waititu, F. Yin, R. Patterson, A. Yitbarek, J. C. Rodriguez-Lecompte, C. M. Nyachoti Journal: animal / Volume 11 / Issue 12 / December 2017 Published online by Cambridge University Press: 20 June 2017, pp. 2156-2164 An experiment was carried out to evaluate the short-term effect of supplementing a nucleotide-rich yeast extract (NRYE) on growth performance, gut structure, immunity and microflora of piglets raised under sanitary and unsanitary conditions. A total of 84, 21-day old piglets were used in this study; 42 piglets were raised in a room designated as the clean room that was washed once per week, whereas the other 42 piglets were raised in a room designated as the unclean room in which 7 kg of manure from the sow herd was spread on each pen floor on day 1 and 7 and the room was not washed throughout the experiment. The pigs were fed a corn–soybean meal-based diet without or with 0.1% NRYE. Each treatment had 7 replicate pens in each room, and each pen housed 3 pigs. Feed disappearance and BW were recorded on day 1 and 14. On day 14, one pig per pen was euthanized to collect ileum, mesenteric lymph nodes and spleen tissues, and cecum and colon digesta. Overall, NRYE supplementation did not affect growth performance in both clean and unclean conditions, improved kidney weight in both clean (P=0.0002) and unclean room (P<0.0001) and tended to improve the villus height/crypt depth ratio in the clean room (P=0.073). Supplementing NRYE was associated with upregulation of Ileal programmed cell death gene-1 (P=0.0003), interleukin (IL)-1β (P<0.0001), IL-6 (P=0.0003), IL-10 (P<0.0001) and tumor necrosis factor-α (TNF-α) (P<0.0001) in pigs raised in the unclean room. Supplementing the NRYE in pigs raised in the clean room suppressed growth of cecal Enterobacteriacea (P<0.0001) members and colonic Enterococcus spp. (P<0.019), improved proliferation of cecal Lactobacillus spp. (P<0.002) and colonic Clostridium cluster IV (P<0.011) and XVIa members (P<0.0002). Supplementing the NRYE in the unclean room improved proliferation of cecal Clostridium cluster IV (P<0.026) and suppressed proliferation of colonic Enterococcus spp. (P<0.037). 
In conclusion, supplementing the NRYE to piglets under unsanitary conditions improved ileal immune response by upregulating inflammatory cytokines, and positively modulated proliferation of beneficial gut bacteria and suppression of harmful ones in both clean and unclean rooms. Risk factors for the treatment outcome of retreated pulmonary tuberculosis patients in China: an optimized prediction model X.-M. WANG, S.-H. YIN, J. DU, M.-L. DU, P.-Y. WANG, J. WU, C. M. HORBINSKI, M.-J. WU, H.-Q. ZHENG, X.-Q. XU, W. SHU, Y.-J. ZHANG Journal: Epidemiology & Infection / Volume 145 / Issue 9 / July 2017 Retreatment of tuberculosis (TB) often fails in China, yet the risk factors associated with the failure remain unclear. To identify risk factors for the treatment failure of retreated pulmonary tuberculosis (PTB) patients, we analyzed the data of 395 retreated PTB patients who received retreatment between July 2009 and July 2011 in China. PTB patients were categorized into 'success' and 'failure' groups by their treatment outcome. Univariable and multivariable logistic regression were used to evaluate the association between treatment outcome and socio-demographic as well as clinical factors. We also created an optimized risk score model to evaluate the predictive values of these risk factors on treatment failure. Of 395 patients, 99 (25·1%) were diagnosed as retreatment failure. Our results showed that risk factors associated with treatment failure included drug resistance, low education level, low body mass index (<18·5), long duration of previous treatment (>6 months), standard treatment regimen, retreatment type, positive culture result after 2 months of treatment, and the place where the first medicine was taken. An Optimized Framingham risk model was then used to calculate the risk scores of these factors. Place where first medicine was taken (temporary living places) received a score of 6, which was highest among all the factors. The predicted probability of treatment failure increases as risk score increases. Ten out of 359 patients had a risk score >9, which corresponded to an estimated probability of treatment failure >70%. In conclusion, we have identified multiple clinical and socio-demographic factors that are associated with treatment failure of retreated PTB patients. We also created an optimized risk score model that was effective in predicting the retreatment failure. These results provide novel insights for the prognosis and improvement of treatment for retreated PTB patients. Influenza hospitalizations in Australian children J. LI-KIM-MOY, J. K. YIN, C. C. BLYTH, A. KESSON, R. BOOY, A. C. CHENG, K. MACARTNEY Journal: Epidemiology & Infection / Volume 145 / Issue 7 / May 2017 Australia's National Immunisation Program (NIP) provides free influenza vaccination for children at high risk of severe influenza; a pilot-funded programme for vaccine in all children aged 6 months to <5 years in one of eight states, has seen poor vaccine impact, related to recent vaccine safety concerns. This retrospective review examined influenza hospitalizations in children aged <16 years from three seasons (2011–2013) at two paediatric hospitals on opposite sides of the country. Comparisons of this cohort were made with state-based data on influenza-coded hospitalizations and national immunization register data on population-level immunization coverage. 
Of 740 hospitalizations, the majority were aged <5 years (476/740, 64%), and a substantial proportion (57%) involved healthy children, not currently funded for influenza vaccine. Intensive care unit admission occurred in 8·5%, and 1·5% of all children developed encephalitis. Use of antiviral therapy was uncommon (20·5%) and decreasing. Of those hospitalized, only 5·0% of at-risk children, who are currently eligible for free vaccine, and 0·7% of healthy children were vaccinated prior to hospitalization. This was consistent with low population-wide estimates of influenza vaccine uptake. It highlights the need to examine alternative strategies, such as universally funded paediatric influenza vaccination, to address disease burden in Australian children. Developing one-dimensional implosions for inertial confinement fusion science HEDP and HPL 2016 J. L. Kline, S. A. Yi, A. N. Simakov, R. E. Olson, D. C. Wilson, G. A. Kyrala, T. S. Perry, S. H. Batha, E. L. Dewald, J. E. Ralph, D. J. Strozzi, A. G. MacPhee, D. A. Callahan, D. Hinkel, O. A. Hurricane, R. J. Leeper, A. B. Zylstra, R. R. Peterson, B. M. Haines, L. Yin, P. A. Bradley, R. C. Shah, T. Braun, J. Biener, B. J. Kozioziemski, J. D. Sater, M. M. Biener, A. V. Hamza, A. Nikroo, L. F. Berzak Hopkins, D. Ho, S. LePape, N. B. Meezan, D. S. Montgomery, W. S. Daughton, E. C. Merritt, T. Cardenas, E. S. Dodd Published online by Cambridge University Press: 12 December 2016, e44 Experiments on the National Ignition Facility show that multi-dimensional effects currently dominate the implosion performance. Low mode implosion symmetry and hydrodynamic instabilities seeded by capsule mounting features appear to be two key limiting factors for implosion performance. One reason these factors have a large impact on the performance of inertial confinement fusion implosions is the high convergence required to achieve high fusion gains. To tackle these problems, a predictable implosion platform is needed meaning experiments must trade-off high gain for performance. LANL has adopted three main approaches to develop a one-dimensional (1D) implosion platform where 1D means measured yield over the 1D clean calculation. A high adiabat, low convergence platform is being developed using beryllium capsules enabling larger case-to-capsule ratios to improve symmetry. The second approach is liquid fuel layers using wetted foam targets. With liquid fuel layers, the implosion convergence can be controlled via the initial vapor pressure set by the target fielding temperature. The last method is double shell targets. For double shells, the smaller inner shell houses the DT fuel and the convergence of this cavity is relatively small compared to hot spot ignition. However, double shell targets have a different set of trade-off versus advantages. Details for each of these approaches are described. "Cosmic Windows" Sky Surveys J. J. Condon, W. D. Cotton, Q. F. Yin, T. M. Heckman, C. J. Lonsdale, H. E. Smith, C. D. Martin, D. Schiminovich, S. J. Oliver, H. J. A. Röttgering Far-infrared (FIR), ultraviolet (UV), and soft X-ray observations are easily degraded by dust and gas between the source and the telescope. They must be made from space, where they are still affected by the interstellar medium (ISM) of our Galaxy. Fortunately the ISM is quite patchy, with several "cosmic windows" covering ∼ 100 deg2 of sky having exceptionally low interstellar extinction and cirrus emission. 
Since the universe is nearly isotropic, these windows contain representative samples of cosmologically distant sources and will be the targets of deep multiwavelength studies including SWIRE, GALEX/DIS, and XMM-LSS. Overlapping optical and radio surveys provide essential source identifications, redshifts, morphologies, and continuum spectra. The prototype VLA survey (see http://www.cv.nrao.edu/sirtf_fls/) covers the 5 deg2 SIRTF First-Look Survey (FLS) and is being used to identify the expected FIR sources in advance. Most will be star-forming galaxies obeying the very tight far-infrared/radio correlation and thus continuum radio sources stronger than S ≈ 100 μJy at 1.4 GHz. Proposed VLA surveys covering the remaining "cosmic windows" will be useful for studying the evolution of obscured AGNs, clusters, and other uncommon objects. Impaired glucose tolerance in first-episode drug-naïve patients with schizophrenia: relationships with clinical phenotypes and cognitive deficits D. C. Chen, X. D. Du, G. Z. Yin, K. B. Yang, Y. Nie, N. Wang, Y. L. Li, M. H. Xiu, S. C. He, F. D. Yang, R. Y. Cho, T. R. Kosten, J. C. Soares, J. P. Zhao, X. Y. Zhang Schizophrenia patients have a higher prevalence of type 2 diabetes mellitus with impaired glucose tolerance (IGT) than normals. We examined the relationship between IGT and clinical phenotypes or cognitive deficits in first-episode, drug-naïve (FEDN) Han Chinese patients with schizophrenia. A total of 175 in-patients were compared with 31 healthy controls on anthropometric measures and fasting plasma levels of glucose, insulin and lipids. They were also compared using a 75 g oral glucose tolerance test and the homeostasis model assessment of insulin resistance (HOMA-IR). Neurocognitive functioning was assessed using the MATRICS Consensus Cognitive Battery (MCCB). Patient psychopathology was assessed using the Positive and Negative Syndrome Scale (PANSS). Of the patients, 24.5% had IGT compared with none of the controls, and they also had significantly higher levels of fasting blood glucose and 2-h glucose after an oral glucose load, and were more insulin resistant. Compared with those patients with normal glucose tolerance, the IGT patients were older, had a later age of onset, higher waist or hip circumference and body mass index, higher levels of low-density lipoprotein and triglycerides and higher insulin resistance. Furthermore, IGT patients had higher PANSS total and negative symptom subscale scores, but no greater cognitive impairment except on the emotional intelligence index of the MCCB. IGT occurs with greater frequency in FEDN schizophrenia, and shows association with demographic and anthropometric parameters, as well as with clinical symptoms but minimally with cognitive impairment during the early course of the disorder. Magnetohydrodynamic Simulation of the Evolution of Bipolar Magnetic Regions S. T. Wu, C. L. Yin, P. Mcintosh, E. Hildner Published online by Cambridge University Press: 12 April 2016, pp. 98-107 It has been recognized that the magnetic flux observed on the solar surface appears first in low latitudes, and then this flux is gradually dispersed by super granular convective motions and meridional circulation. Theoretically, the magnetic flux transport could be explained by the interactions between magnetic fields and plasma flows on the solar surface through the theory of magnetohydrodynamics. 
To understand this physical scenario, a quasi-three-dimensional, time-dependent, MHD model with differential rotation, meridional flow and effective diffusion as well as cyclonic turbulence effects is developed. Numerical experiments are presented for the study of Bipolar Magnetic Regions (BMRs). When the MHD effects are ignored, our model produced the classical results (Leighton, Astrophys. J., 146, 1547, 1964). The full model's numerical results demonstrate that the interaction between magnetic fields and plasma flow (i.e., MHD effects), observed together with differential rotation and meridional flow, gives rise to the observed complexity of the evolution of BMRs. Why do disk galaxies present a common gas-phase metallicity gradient? R. Chang, Shuhui Zhang, Shiyin Shen, Jun Yin, Jinliang Hou Journal: Proceedings of the International Astronomical Union / Volume 11 / Issue S321 / March 2016 Published online by Cambridge University Press: 21 March 2017, p. 134 CALIFA data show that isolated disk galaxies present a common gas-phase metallicity gradient, with a characteristic slope of -0.1dex/re between 0.3 and 2 disk effective radius re (Sanchez et al. 2014). Here we construct a simple model to investigate which processes regulate the formation and evolution. Lipopolysaccharide markedly changes glucose metabolism and mitochondrial function in the longissimus muscle of pigs H. Sun, Y. Huang, C. Yin, J. Guo, R. Zhao, X. Yang Most previous studies on the effects of lipopolysaccharide (LPS) in pigs focused on the body's immune response, and few reports paid attention to body metabolism changes. To better understand the glucose metabolism changes in skeletal muscle following LPS challenge and to clarify the possible mechanism, 12 growing pigs were employed. Animals were treated with either 2 ml of saline or 15 µg/kg BW LPS, and samples were collected 6 h later. The glycolysis status and mitochondrial function in the longissimus dorsi (LD) muscle of pigs were analyzed. The results showed that serum lactate content and NADH content in LD muscle significantly increased compared with the control group. Most glycolysis-related genes expression, as well as hexokinase, pyruvate kinase and lactic dehydrogenase activity, in LD muscle was significantly higher compared with the control group. Mitochondrial complexes I and IV significantly increased, while mitochondrial ATP concentration markedly decreased. Significantly increased calcium content in the mitochondria was observed, and endoplasm reticulum (ER) stress has been demonstrated in the present study. The results showed that LPS treatment markedly changes glucose metabolism and mitochondrial function in the LD muscle of pigs, and increased calcium content induced by ER stress was possibly involved. The results provide new clues for clarifying metabolic diseases in muscle induced by LPS. Scaling of ion energies in the relativistic-induced transparency regime D. Jung, B.J. Albright, L. Yin, D.C. Gautier, B. Dromey, R. Shah, S. Palaniyappan, S. Letzring, H.-C. Wu, T. Shimada, R.P. Johnson, D. Habs, M. Roth, J.C. Fernández, B.M. Hegelich Journal: Laser and Particle Beams / Volume 33 / Issue 4 / December 2015 Experimental data are presented showing maximum carbon C6+ ion energies obtained from nm-scaled targets in the relativistic transparent regime for laser intensities between 9 × 1019 and 2 × 1021 W/cm2. 
When combined with two-dimensional particle-in-cell simulations, these results show a steep linear scaling for carbon ions with the normalized laser amplitude $a_0$ ($a_0 \propto \sqrt{I}$). The results are in good agreement with a semi-analytic model that allows one to calculate the optimum thickness and the maximum ion energies as functions of $a_0$ and the laser pulse duration τλ for ion acceleration in the relativistic-induced transparency regime. Following our results, ion energies exceeding 100 MeV/amu may be accessible with currently available laser systems.
Research article | Published: 16 March 2016
Utilization of antihypertensive drugs in obesity-related hypertension: a retrospective observational study in a cohort of patients from Southern Italy
Mauro Cataldi (ORCID: orcid.org/0000-0001-7787-3406)1, Ornella di Geronimo2, Rossella Trio2, Antonella Scotti2, Andrea Memoli3, Domenico Capone1 & Bruna Guida2
BMC Pharmacology and Toxicology, volume 17, Article number: 9 (2016)
Although the pathophysiological mechanisms of arterial hypertension are different in obese and lean patients, hypertension guidelines do not include specific recommendations for obesity-related hypertension and, therefore, there is considerable uncertainty about which antihypertensive drugs should be used in this condition. Moreover, studies performed in the general population have suggested that some antihypertensive drugs may increase body weight, glycemia and LDL-cholesterol, but it is unclear how this impacts drug choice in clinical practice in the treatment of obese hypertensive patients. Therefore, in order to identify the current preferences of practitioners for obesity-related hypertension, in the present work we evaluated antihypertensive drug therapy in a cohort of 129 pharmacologically treated obese hypertensive patients (46 males and 83 females, aged 51.95 ± 10.1 years) who came to our observation for a nutritional consultation. The study design was retrospective and observational. Differences in the prevalence of use of the different antihypertensive drug classes among groups were evaluated with χ2 analysis. The threshold for statistical significance was set at p < 0.05. 41.1 % of the study sample was treated with one, 36.4 % with two and the remaining 22.5 % with three or more antihypertensive drugs. In patients under single-drug therapy, β-blockers, ACEIs and ARBs each accounted for about 25 % of prescriptions. The prevalence of use of β-blockers was about sixfold higher in females than in males. Diuretics were virtually never used in monotherapy regimens but were used in more than 60 % of patients on dual antihypertensive therapy and in all patients taking three or more drugs. There was no significant difference in the prevalence of use of any of the aforementioned drugs among patients with type I, II and III obesity or between patients with or without metabolic syndrome. Our data show that no first-choice protocol seems to be adopted in clinical practice for the treatment of obesity-related hypertension. Importantly, physicians do not seem to differentiate drug use according to the severity of obesity or to the presence of metabolic syndrome, or to avoid drugs known to detrimentally affect body weight and metabolic profile in the general population.
Overweight and obesity are major risk factors for arterial hypertension [1]. Large epidemiological studies showed that the prevalence of hypertension increases with body weight and is almost doubled in frank obesity [1]. Specifically, about 34 % of normal-weight patients are hypertensive, whereas this percentage rises up to 60 % in overweight and exceeds 70 % in obese patients [2]. Moreover, the majority of the hypertensive patients seen by general practitioners are overweight or obese [2]. Prospective studies also showed that, in non-hypertensive subjects, overweight increases the chance of later developing new arterial hypertension [3].
Mounting evidence supports the idea that different pathogenetic mechanisms are responsible for obesity-related hypertension and for hypertension of lean subjects [1, 4]. Specifically, the main determinant of hypertension in lean people is peripheral vasoconstriction, whereas obesity-related hypertension depends on sympathetic nervous system hyperactivation and on the consequent increase in cardiac output and renin and aldosterone release [1, 4]. The mechanism responsible for sympathetic hyperactivation in obesity seems to be related to the release from adipose tissue of substances such as adipokines, inflammatory cytokines and free fatty acids that may activate autonomic neurotransmission either directly or indirectly, by affecting insulin sensitivity [1, 4–6]. Moreover, angiotensin-II (Ang II) and aldosterone, which raise blood pressure and promote Na+ retention, are both synthesized in adipose tissue [1, 7]. Nonalcoholic fatty liver disease (NAFLD) [8, 9], which often coexists with obesity, also has a significant role both in activating the renin-angiotensin-aldosterone (RAA) system and in causing insulin resistance. NAFLD may actually represent an independent cardiovascular risk factor [10] that, according to current guidelines, can be corrected by lifestyle and dietetic treatment [11]. There is still considerable uncertainty about the best pharmacological approach to treat obesity-related hypertension, and major guidelines do not expressly address this point [12–14]. Because of the above-mentioned pathophysiological differences between lean and obesity-related hypertension, it has been suggested that drugs targeting the pathogenetic mechanism of obesity-related hypertension should be preferred in this condition [1, 15]. Specifically, drugs targeting the RAA system could be a rational choice because of Ang II and aldosterone release from the adipose tissue [16]. β-blockers could also be an option because they counteract the sympathetic overactivation occurring in this condition [1]. However, it has been strongly suggested that when prescribing drug therapy in obesity-related hypertension, the effect of treatment on body weight and metabolic profile should be carefully considered. Indeed, a note of caution has been raised on the use of β-blockers and thiazide diuretics because of the possible detrimental effect of these drugs on body weight and metabolic control [17–19]. The scenario is even more complicated when multiple antihypertensive drugs are required, as very often happens in patients with obesity-related hypertension because of the poor responsiveness of this disease to single-drug therapy [1, 20]. The detrimental effect on metabolism and body weight of selected antihypertensive drugs is, indeed, greatly increased when they are used in combination as, for instance, in the case of thiazide diuretics and β-blockers [21]. In the absence of guideline directions it is unclear how, in clinical practice, these safety concerns influence the choice of antihypertensive therapy for obese patients and whether, because of these concerns, different drugs are used in people with different degrees of obesity. Therefore, in the present paper, we performed a retrospective study on a cohort of pharmacologically treated obese patients who came to our observation for a nutritional consultation, with the aim of identifying which antihypertensive drugs were most often used in obesity-related hypertension in a real clinical context. This was a retrospective study.
The study sample was composed of 129 obese hypertensive patients (BMI ≥ 30) who came to our observation at the Physiology Nutrition Unit of the Federico II University of Naples for dietary advice. Only patients with uncomplicated arterial hypertension were included in the study, whereas those with angina, arrhythmias or heart failure were excluded. Because of the retrospective design of the study, ethical approval was waived according to current Italian legislation (Agenzia Italiana del Farmaco, Determinazione 20 Marzo 2008, Gazzetta Ufficiale della Repubblica Italiana n° 76, 31-3-2008). From the medical records of these patients we retrieved information on age, sex, systolic and diastolic blood pressure and the complete medical history, including the list of all the drugs taken at the time of evaluation. In addition, we collected anthropometric and body composition data that are routinely recorded during patient evaluation at our unit, including height, body weight (BW), waist circumference (WC), total (TBW%) and extracellular (ECW%) water and fat (FM%), fat free (FFM%) and muscle mass (MM% of FFM). Body composition was assessed by bioelectrical impedance analysis using a tetrapolar, 50 kHz bioelectrical impedance analyzer (BIA 101 RJL, Akern Bioresearch, Firenze, Italy) [22]. Visceral adiposity was estimated by measuring the visceral adiposity index (VAI), a validated indicator of visceral fat mass [23], using the following equations: $$ FemaleVAI=\left(\frac{WC}{36.58+\left(1.89\times BMI\right)}\right)\times \left(\frac{TG}{0.81}\right)\times \left(\frac{1.52}{HDL}\right) $$ $$ MaleVAI=\left(\frac{WC}{39.68+\left(1.88\times BMI\right)}\right)\times \left(\frac{TG}{1.03}\right)\times \left(\frac{1.31}{HDL}\right) $$ Patients were classified in two groups according to whether their VAI values were above or below the cut-off values that Amato et al. [24] identified as cardiovascular risk discriminants in Caucasians. Because these values differ across age groups, we first stratified patients according to their age, then assigned each patient of each age group either to the high or to the low cardiovascular risk VAI group, and finally pooled people of different ages into a group with VAI below the cutoff value and a group with VAI above it. Blood chemistry measurements, including glycemia, total, HDL- and LDL-cholesterol, serum albumin, total protein and transaminases, were also obtained. Statistical analysis was performed with the IBM Statistical Package for Social Science (SPSS) Advanced Statistics software (release 20.0) (Armonk, New York, USA). Continuous data were examined for normality with the Shapiro-Wilk test. Normally distributed data are reported as mean ± standard deviation, whereas the median with the 25–75 percentiles is shown for non-normally distributed and categorical data. Patients were classified based on the number of antihypertensive drugs that they were treated with (one, two and three or more) and on the pharmacological class these drugs belonged to (ACEIs, ARBs, Ca2+-channel antagonists, β-blockers and diuretics). Student's t test and the Mann-Whitney U test were used for two-group comparisons of normally and non-normally distributed data, respectively. Differences in the prevalence of the different antihypertensive drug classes in males and females were evaluated with χ2 analysis. The threshold for statistical significance was set at p < 0.05.
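For illustration, the sex-specific VAI formulas quoted above and the age-stratified classification can be written down in a few lines of code. The sketch below is not the authors' code; in particular, the age bands and cutoff values are hypothetical placeholders, since the study takes its cutoffs from Amato et al. [24] and does not reproduce them in the text.

```python
# Illustrative sketch: computing the visceral adiposity index (VAI) from the
# formulas above (TG and HDL in mmol/L) and flagging patients whose VAI exceeds
# an age-dependent cutoff.  The cutoff table is a made-up placeholder, NOT the
# published values from Amato et al.

def vai(sex, wc_cm, bmi, tg_mmol_l, hdl_mmol_l):
    """Visceral adiposity index for one patient."""
    if sex == "F":
        return (wc_cm / (36.58 + 1.89 * bmi)) * (tg_mmol_l / 0.81) * (1.52 / hdl_mmol_l)
    if sex == "M":
        return (wc_cm / (39.68 + 1.88 * bmi)) * (tg_mmol_l / 1.03) * (1.31 / hdl_mmol_l)
    raise ValueError("sex must be 'M' or 'F'")

# Hypothetical age-stratified cutoffs (placeholders only).
CUTOFF_BY_AGE_BAND = {(0, 30): 2.5, (30, 42): 2.2, (42, 52): 1.9, (52, 66): 2.0, (66, 200): 2.0}

def high_risk(age, vai_value):
    """True if the patient's VAI is above the cutoff for his/her age band."""
    for (lo, hi), cutoff in CUTOFF_BY_AGE_BAND.items():
        if lo <= age < hi:
            return vai_value > cutoff
    return False

print(vai("F", wc_cm=105, bmi=34.2, tg_mmol_l=1.7, hdl_mmol_l=1.2))
```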
Characteristics of the study sample

The study sample consisted of 129 patients (aged 51.95 ± 10.1 years) with obesity-related hypertension, all pharmacologically treated. 46 patients were males and the remaining 83 were females (36 in premenopausal and 47 in postmenopausal status). Table 1 reports the mean anthropometric, body composition and biochemical data of the study population. 59 patients had class I, 34 class II and 36 class III obesity. 41.1 % of the study sample received a single antihypertensive drug, 36.4 % a combination of two antihypertensive drugs and the remaining 22.5 % three or more antihypertensive drugs. No significant difference was observed in the age of the patients treated with one, two and three or more drugs (Table 2). 16 patients were also treated with lipid-lowering agents, 4 with antidiabetic drugs and 5 with both antidiabetic and lipid-lowering drugs.

Table 1 Anthropometric and metabolic data of the whole patient cohort and of patients taking only antihypertensive drugs
Table 2 Prevalence of use of different antihypertensive drug classes in patients treated with one, two or three or more antihypertensive drugs

Antihypertensive drugs used in monotherapy, dual therapy and in multiple drug therapy

Table 2 reports the prevalence of use of different antihypertensive drug classes in patients treated with one, two and three or more antihypertensive drugs. In patients on monotherapy, no single class was prevalent over the others, and β-blockers, ACEIs and ARBs each accounted for about 25 % of prescriptions. Ca2+ channel blockers (CCBs) were used in about 13 % and diuretics in about 2 % of patients. A similar pattern of drug use was observed also when patients treated with lipid-lowering or hypoglycemic drugs were excluded from the analysis. When the two sexes were separately examined, a strong gender-related difference emerged only in the prevalence of use of β-blockers, which was about sixfold higher in females than in males. The most remarkable difference that we noticed when we compared patients on monotherapy with those treated with two or more drugs was a markedly higher use of diuretics. These drugs were virtually absent in monotherapy regimens but were used in more than 60 % of patients on dual antihypertensive therapy and in all patients taking three or more drugs. The prevalence of use of CCBs was not significantly different in patients on mono- or dual therapy, whereas it significantly increased in patients treated with three or more drugs. In this group, about 35 % of patients took CCBs, with a strong difference between sexes. Indeed, more than 60 % of males and less than 10 % of females were treated with these drugs. ACEIs and ARBs were used by about 40 % and 50 % of patients under dual therapy, respectively, with a significant sex difference in ACEI prescription, which was more prevalent in males than in females (Table 2). Remarkably, in patients treated with three or more drugs, ARBs were used more often than ACEIs.

Antihypertensive drug utilization in patients with different degrees of obesity

To establish whether different drugs are prescribed in patients with different degrees of obesity, we stratified our patients according to their BMI and compared the prevalence of use of β-blockers, ACEIs, ARBs, CCBs, diuretics and α1 adrenoceptor blockers. As shown in Table 3, we did not find any significant difference in the prevalence of use of any of the aforementioned drugs among patients with class I, II and III obesity.
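A contingency-table χ2 test of the kind described in the Methods can be reproduced with standard statistical libraries. The sketch below uses invented counts for illustration only; they are not the study's data (the actual frequencies are in Tables 2–3).

```python
# Illustrative sketch of the chi-square analysis used to compare the prevalence
# of a drug class across obesity classes.  The counts are hypothetical.
from scipy.stats import chi2_contingency

# Rows: taking / not taking beta-blockers; columns: obesity class I, II, III.
table = [[18, 10, 11],   # hypothetical users
         [41, 24, 25]]   # hypothetical non-users

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")  # significant if p < 0.05
```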
Table 3 Prevalence of use of different antihypertensive drug classes in patients with obesity of class I, II and III

Because BMI does not reflect only the amount of fat in the body but is also influenced by lean tissue mass, it may not be the best parameter to quantify the impact of obesity on arterial blood pressure. Recent evidence suggests that specific age-related cut-off values of the visceral adiposity index, a parameter that faithfully represents metabolically active visceral fat [23], could identify people with high cardiovascular risk [24]. Therefore, we compared the use of the different classes of antihypertensive drugs in patients above and below this cut-off value. The results, reported in Table 4, did not show any significant difference, with the only exception of ARB use, which was higher in patients with VAI values above the cutoff. However, this difference was significant only if the whole patient population was considered and not if patients receiving only antihypertensive drugs were evaluated (Table 4).

Table 4 Prevalence of use of different antihypertensive drug classes in patients below and above the high cardiovascular risk VAI cutoff value

Antihypertensive drugs used in patients with or without metabolic syndrome

67 (51.9 %) of the 129 patients of the whole population and 49 (47.1 %) of the 104 treated only with antihypertensive drugs had metabolic syndrome according to the ATP III criteria [25]. There was no significant difference in the prevalence of use of β-blockers, ACEIs, ARBs, CCBs, diuretics and α1 adrenoceptor blockers in patients with or without this syndrome, either when we considered the whole patient population or when we considered only the patients treated with antihypertensive drugs and no antidiabetic or lipid-lowering drugs (Table 5).

Table 5 Prevalence of use of different antihypertensive drug classes in patients with or without metabolic syndrome

In the present study we retrospectively evaluated the utilization of antihypertensive drugs in a cohort of obese hypertensive patients to establish whether, in obesity-related hypertension, practitioners preferentially use drugs known not to negatively affect metabolism or body weight. The main finding of our study was that, instead, our obese patients were treated with drugs belonging to all the main antihypertensive drug classes, including those expected to increase body weight or worsen metabolic profile. The analysis of patients on monotherapy showed that no single antihypertensive drug was used as first choice, with the patients almost equally distributed among those taking β-blockers, ACEIs and ARBs, whereas the prevalence of CCB use was only slightly lower. The finding that about 25 % of patients on monotherapy were treated with β-blockers was unexpected. Indeed, although these drugs may counteract the sympathetic hyperactivity that is responsible for obesity-related hypertension, concerns have been raised on their tolerability in this clinical condition because current evidence suggests that they could increase body weight and worsen metabolic status [1, 17]. Interestingly, almost all of the patients taking β-blockers as single-drug therapy were women. A possible explanation of this finding is that β-blockers may cause erectile dysfunction and, therefore, are not well accepted by male patients [26]. Thiazide diuretics were virtually never used as single drugs in our population, although they are considered first-choice drugs in current guidelines [12, 13].
This suggests that practitioners had the perception that thiazide diuretics should preferably not be used in obesity and that they modified their prescriptions accordingly. Specifically, published evidence that these diuretics may worsen metabolic profile and cause impotence was probably responsible for keeping their use low in our group of obese patients [18, 26, 27]. While almost never used in single-drug therapy, diuretics were perceived as important drugs in multiple-drug therapy, as all the patients of our cohort who were treated with three or more drugs took diuretics in various combinations with drugs acting on the RAA system and CCBs. An interesting finding was that patients treated with three or more drugs took ACEIs less often and ARBs more often than those on monotherapy. This was not unexpected considering that ARBs are often used when patients stop responding to ACEIs because of Ang II escape. Therefore we can hypothesize that patients treated with multiple drugs switched from ACEIs to ARBs sometime before coming to our observation because of acquired drug resistance. We did not find any significant difference in the prevalence of use of any drug class when comparing patients with obesity class I, II or III. Moreover, there was no significant difference between patients with or without metabolic syndrome. Likewise, no difference was observed when comparing patients with high or low values of VAI, with the only exception of a higher prevalence of ARB use in patients with high VAI. However, this difference was significant only when the whole patient population, also including those taking lipid-lowering or antidiabetic drugs, was considered. Therefore, it probably does not reflect different drug choice related to hypertension per se but could be dependent on the presence of a concomitant disease such as diabetes. Indeed, ARBs (and ACEIs) could represent first-choice drugs in diabetes, especially in the presence of renal damage [12]. Collectively, our results seem to suggest that, although concerns have been raised on the use of some antihypertensive drugs because of their effect on body weight and metabolism [28–31], the degree of obesity or the presence of its metabolic complications did not influence drug choice in our patients. These data could also suggest that, in our patients, the class of antihypertensive drugs used was not a major determinant either of BMI or of metabolic control. However, our study was not specifically designed to address this question, and further randomized prospective studies will be needed to address this point, also considering that available information in the literature is very limited. Specifically, a few studies showed that drugs acting on the RAA system and, in particular, ARBs could improve metabolic status in patients with metabolic syndrome [32], decreasing visceral fat accumulation [33] and improving insulin sensitivity and lipid profile [34], whereas visceral adiposity increases the risk of developing adverse metabolic effects upon treatment with β-blockers or thiazide diuretics [35]. Moreover, an important limitation of these studies was the short duration of drug exposure, which was limited to several weeks. The issue of establishing whether adiposity or metabolic status is affected by specific antihypertensive drug classes in obesity is potentially clinically relevant because, by interfering with these parameters, drug therapy could positively or negatively influence long-term prognosis in this condition.
In this respect, it is worth mentioning the recent evidence reported by the Blood Pressure Lowering Treatment Trialists' Collaboration [36] showing that in obesity-related hypertension the outcome, measured as a composite of major cardiovascular events including stroke, coronary heart disease, heart failure and cardiovascular death, is independent of the class of antihypertensive drugs taken by the patients. Although our retrospective study was performed on medical records from a single institution, we believe that it is actually representative of a larger population of obese patients. Indeed, our unit is not a primary center for the treatment of hypertension but a nutritional consultation service, with a catchment area extending across the larger Naples metropolitan region. Therefore, the data on antihypertensive drug treatment that we analyzed in the present study do not reflect the therapeutic choices of physicians working in a single center but those of primary care physicians or of cardiologists taking care of the patients in many other institutions that sent us their patients only for nutritional advice. A limitation of the study is that we cannot exclude a selection bias, because the patients that we examined were actually sent to our observation by other physicians to further improve their medical treatment. This implies that our study sample could, in theory, have been composed of patients benefiting from a higher-than-average standard of medical care. In conclusion, we showed that the pharmacological approach to the treatment of obesity-related hypertension is highly heterogeneous, as different drug classes are used either alone or in combination and no first-choice protocol seems to be adopted in clinical practice. Importantly, we found no evidence that physicians differentiate drug use according to the severity of obesity, to visceral fat accumulation or to the presence of metabolic syndrome. There is an urgent need for further data to provide informed directions that could help practitioners in choosing the right therapy for hypertensive obese patients. Specifically, well-designed randomized trials are needed to establish whether the detrimental effects of some antihypertensive drugs that were observed in the general population also occur in obese patients. ACEI: ACE inhibitor ARB: Angiotensin II receptor blocker Ang II: Angiotensin-II BW: Body weight CCB: Calcium channel blocker ECW%: Percent extracellular water FFM%: Percent fat free mass FM%: Percent fat mass MM%: Muscle mass as percent of fat free mass NAFLD: Nonalcoholic fatty liver disease RAA: Renin-angiotensin-aldosterone TBW%: Percent total body water VAI: Visceral adiposity index Jordan J, Yumuk V, Schlaich M, Nilsson PM, Zahorska-Markiewicz B, Grassi G, et al. Joint statement of the European Association for the Study of Obesity and the European Society of Hypertension: obesity and difficult to treat arterial hypertension. J Hypertens. 2012;30:1047–55. Bramlage P, Pittrow D, Wittchen HU, Kirch W, Boehler S, Lehnert H, et al. Hypertension in overweight and obese primary care patients is highly prevalent and poorly controlled. Am J Hypertens. 2004;17:904–10. Wilson PW, D'Agostino RB, Sullivan L, Parise H, Kannel WB. Overweight and obesity as determinants of cardiovascular risk: the Framingham experience. Arch Intern Med. 2002;162:1867–72. De Marco VG, Aroor AR, Sowers JR. The pathophysiology of hypertension in patients with obesity. Nat Rev Endocrinol. 2014;10:364–76. Haynes WG. Role of leptin in obesity-related hypertension. Exp Physiol.
2005;90:683–8. Landsberg L. Insulin-mediated sympathetic stimulation: role in the pathogenesis of obesity-related hypertension (or, how insulin affects blood pressure, and why). J Hypertens. 2001;19:523–8. Briones AM, Nguyen Dinh Cat A, Callera GE, Yogi A, Burger D, He Y, et al. Adipocytes produce aldosterone through calcineurin-dependent signaling pathways: implications in diabetes mellitus-associated obesity and vascular dysfunction. Hypertension. 2012;59:1069–78. Tarantino G. Should nonalcoholic fatty liver disease be regarded as a hepatic illness only? World J Gastroenterol. 2007;13:4669–72. Tarantino G, Finelli C. What about non-alcoholic fatty liver disease as a new criterion to define metabolic syndrome? World J Gastroenterol. 2013;19:3375–84. Huh JH, Ahn SV, Koh SB, Choi E, Kim JY, Sung KC, et al. A prospective study of fatty liver index and incident hypertension: the KoGES-ARIRANG study. PLoS One. 2015;10:e0143560. Chalasani N, Younossi Z, Lavine JE, Diehl AM, Brunt EM, Cusi K, et al. The diagnosis and management of non-alcoholic fatty liver disease: practice guideline by the American Gastroenterological Association, American Association for the Study of Liver Diseases, and American College of Gastroenterology. Gastroenterology. 2012;142:1592–609. James PA, Oparil S, Carter BL, Cushman WC, Dennison-Himmelfarb C, Handler J, et al. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA. 2014;311:507–20. ESH/ESC Task Force for the Management of Arterial Hypertension. Practice guidelines for the management of arterial hypertension of the European Society of Hypertension (ESH) and the European Society of Cardiology (ESC): ESH/ESC Task Force for the Management of Arterial Hypertension. J Hypertens. 2013;31:1925–38. Hypertension. Clinical management of primary hypertension in adults. NICE clinical guideline 127. Available from http://www.nice.org.uk/guidance/cg127/chapter/1-Guidance Weber MA, Jamerson K, Bakris GL, Weir MR, Zappe D, Zhang Y, et al. Effects of body size and hypertension treatments on cardiovascular event rates: subanalysis of the ACCOMPLISH randomised controlled trial. Lancet. 2013;381:537–45. Jansen PM, Danser JA, Spiering W, van den Meiracker AH. Drug mechanisms to help in managing resistant hypertension in obesity. Curr Hypertens Rep. 2010;12:220–5. Pischon T, Sharma AM. Use of beta-blockers in obesity hypertension: potential role of weight gain. Obes Rev. 2001;2:275–80. Elliott WJ, Meyer PM. Incident diabetes in clinical trials of antihypertensive drugs: a network meta-analysis. Lancet. 2007;369:201–7. Stump CS, Hamilton MT, Sowers JR. Effect of antihypertensive agents on the development of type 2 diabetes mellitus. Mayo Clin Proc. 2006;81:1637–38. Lloyd-Jones DM, Evans JC, Larson MG, O'Donnell CJ, Roccella EJ, Levy D. Differential control of systolic and diastolic blood pressure : factors associated with lack of blood pressure control in the community. Hypertension. 2000;36:594–9. Cooper-DeHoff RM, Wen S, Beitelshees AL, Zineh I, Gums JG, Turner ST, et al. Impact of abdominal obesity on incidence of adverse metabolic effects associated with antihypertensive medications. Hypertension. 2010;55:61–8. Guida B, Cataldi M, Maresca ID, Germanò R, Trio R, Nastasi AM, et al. Dietary intake as a link between obesity, systemic inflammation, and the assumption of multiple cardiovascular and antidiabetic drugs in renal transplant recipients. Biomed Res Int. 
2013;2013:363728. Amato MC, Giordano C, Galia M, Criscimanna A, Vitabile S, Midiri M, et al. Visceral Adiposity Index: a reliable indicator of visceral fat function associated with cardiometabolic risk. Diabetes Care. 2010;33:920–2. Amato MC, Giordano C, Pitrone M, Galluzzo A. Cut-off points of the visceral adiposity index (VAI) identifying a visceral adipose dysfunction associated with cardiometabolic risk in a Caucasian Sicilian population. Lipids Health Dis. 2011;10:183. Grundy SM, Brewer Jr HB, Cleeman JI, Smith Jr SC, Lenfant C, American Heart Association, et al. Definition of metabolic syndrome: report of the National Heart, Lung, and Blood Institute/American Heart Association conference on scientific issues related to definition. Circulation. 2004;109:433–8. Keene LC, Davies PH. Drug-related erectile dysfunction. Adverse Drug React Toxicol Rev. 1999;18:5–24. Grimm C, Köberlein J, Wiosna W, Kresimon J, Kiencke P, Rychlik R. New-onset diabetes and antihypertensive treatment. GMS Health Technol Assess. 2010;6:Doc03. Leslie WS, Hankey CR, Lean ME. Weight gain as an adverse effect of some commonly prescribed drugs: a systematic review. QJM. 2007;100:395–404. Dentali F, Sharma AM, Douketis JD. Management of hypertension in overweight and obese patients: a practical guide for clinicians. Curr Hypertens Rep. 2005;7:330–6. Doggrell SA. Clinical evidence for drug treatments in obesity-associated hypertensive patients--a discussion paper. Methods Find Exp Clin Pharmacol. 2005;27:119–25. Wofford MR, Smith G, Minor DS. The treatment of hypertension in obese patients. Curr Hypertens Rep. 2008;10:143–50. Putnam K, Shoemaker R, Yiannikouris F, Cassis LA. The renin-angiotensin system: a target of and contributor to dyslipidemias, altered glucose homeostasis, and hypertension of the metabolic syndrome. Am J Physiol Heart Circ Physiol. 2012;302:H1219–30. Chujo D, Yagi K, Asano A, Muramoto H, Sakai S, Ohnishi A, et al. Telmisartan treatment decreases visceral fat accumulation and improves serum levels of adiponectin and vascular inflammation markers in Japanese hypertensive patients. Hypertens Res. 2007;30:1205–10. Jordan J, Engeli S, Boschmann M, Weidinger G, Luft FC, Sharma AM, et al. Hemodynamic and metabolic responses to valsartan and atenolol in obese hypertensive patients. J Hypertens. 2005;23:2313–8. Blood Pressure Lowering Treatment Trialists' Collaboration, Ying A, Arima H, Czernichow S, Woodward M, Huxley R, et al. Effects of blood pressure lowering on cardiovascular risk according to baseline body-mass index: a meta-analysis of randomised trials. Lancet. 2015;385:867–74. Division of Pharmacology, Department of Neuroscience, Reproductive and Odontostomatologic Sciences, Federico II University of Naples, Via Pansini n°5, Naples, 80131, Italy: Mauro Cataldi & Domenico Capone. Division of Physiology, Department of Clinical Medicine and Surgery, Federico II University of Naples, Naples, Italy: Ornella di Geronimo, Rossella Trio, Antonella Scotti & Bruna Guida. Division of Nephrology, Department of Public Health, Federico II University of Naples, Naples, Italy: Andrea Memoli. Correspondence to Mauro Cataldi. The authors have no competing interests to declare. OdG, RT and AS collected the data; MC, AM, DC and BG analyzed the data; MC and BG designed the study and wrote the paper.
All authors read, contributed to, and approved the final version of the manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
What started the US Dust Bowl of the 1930s, and could it happen again?

Most of us know about the Dust Bowl: the huge storms of dirt and dust that swept across America in the 1930s. But what I'm wondering is... What actually triggered the start of the Dust Bowl? Is it very likely that it will occur again? I've often heard that it was the farmers plowing up too much soil, but that doesn't make any sense to me because we're definitely plowing up more soil now than we were then.

climate soil dust-bowl

Azzie Rogers

You know, this is actually one of the most interesting questions I've seen on the site so far. +1 – hichris123♦ Apr 21 '14 at 23:56
i agree, favorited. – Neo Apr 21 '14 at 23:59
I'm unsure if storms is an applicable tag, perhaps drought or desertification would also be appropriate? – Siv Apr 25 '14 at 13:58
@Siv Good idea. Editing... – Azzie Rogers Apr 25 '14 at 16:08
The Grapes of Wrath is a good read. – SoilSciGuy May 7 '14 at 0:59

The Dustbowl occurred during the 1930s because of a combination of man-induced drought and the (mis)use of relatively new farming practices. In the 1920s, the spread of automotive (and tractor) technologies made it possible to "plow up" the Great Plains. This resulted in the loss of a lot of natural moisture and the creation of drought conditions in lands that had been marginal to begin with regarding the adequacy of the water supply. Even by the 1930s we still had not learned our lesson. There were conservation techniques using the new technologies, but we weren't using them. For example, the tractors would still plow up and down slopes (the "easy" way), making it easy for topsoil to run down hills and into the rivers. More to the point, the newly dry and loosened topsoil would then go into the air and create a "Dust Bowl" when the winds kicked up. There was a real fear that the agricultural Midwest (basically the Plains states) would turn into a desert. It was only after an additional half a decade of bitter trial and error (and a new generation of better educated farmers) that things changed. By the mid 1930s, it was a relatively new technique called contour farming, which consisted of using motor vehicles to plow horizontally around hills instead of vertically, that saved the day. Also, farmers started planting trees as "windbreaks" in strategic locations. IMHO, it could happen again, probably in a developing country like China, which is trying to "catch up" to the United States, and has been prone to adopting our bad habits of an earlier era. Bottom line, the Dust Bowl was (largely) a man-made, not a natural phenomenon. And the operative principle was attributed to Confucius: "Men are the same everywhere. Only their habits are different."

Thanks! After you mentioned it happening again in countries like China, I looked it up and found this: earth-policy.org/plan_b_updates/2012/update110 Quoting from the article, "Today two new dust bowls are forming: one in northern China and southern Mongolia and the other in Africa south of the Sahara. ...the main culprit in Asia and Africa is overgrazing. Although arid or semiarid grasslands are typically better suited for grazing livestock than for farming, once they are overstocked their protective grass covering deteriorates and they face erosion all the same."
– Azzie Rogers Apr 22 '14 at 1:22
You might also mention the practice of planting large swaths of trees to provide a wind break to mitigate winds stirring up the dust that was used in the 1930s. – casey♦ Apr 22 '14 at 2:03
there are some interesting counter measures in Africa and China ("Great Green Wall") en.wikipedia.org/wiki/Great_Green_Wall – GeoEki Apr 25 '14 at 16:16
Small issue, but the drought wasn't man-made. It just came. Also, I agree that poor farming habits and motorized farming caused the loose topsoil which caused the dust, but the salvation was a combination of farming practices (including crop rotation and fallow fields), legislation, and good old-fashioned rain. – Richard Apr 30 '14 at 19:35
Along the lines of counter measures - about 17 minutes into this PBS special, they talk about how one man spent years turning a desert back into farm land. Using trees as wind breaks, also using termite mounds and moss as methods to keep water in the soil. It's quite inspirational. The whole show is good, but the part 17 minutes in is amazing. video.pbs.org/video/2365431678 – userLTK Apr 26 '15 at 21:30

The Dust Bowl was caused by farmers tilling up too much soil. However, there were also other factors. Weather also played a key role. Because there were several years of drought, and farmers had plowed up so much ground, when high winds came along the soil was blown off the ground, and the rest is history. This became somewhat of a vicious circle: no crops grew and so the dust could not be controlled. As far as it happening again... Yes, I suppose it could and on a large scale, but run for cover. Because there would be some other very serious issues. Its occurrence would likely mean that there was little to no water available for irrigation. If that were the case, then I can not imagine how people would be responding to it.

L.B.

There are two books I love to recommend on this. The Worst Hard Time is a history book with a lot of first-person reports and a summary of what people were thinking before the dust bowl -- what the optimism looked like. The saddest bit I remember was that there were responsible ranchers who didn't over-plow, who had intact prairie -- until their neighbor's dust buried them and killed their grass and herds. The other is on a huger scale, Dirt: The Erosion of Civilizations. Big erosive events are recorded in sediment fans (often in the ocean at river-mouths). Analyzing these, Montgomery argues that there have been six cases of organization->ag->regional market->over-farming->erosion->collapse, and that we're plausibly in the seventh.

cphlewis

Climate modelling by NASA suggests a change in the Jet Stream was partially responsible for the 1930s drought in the US. Cooler than normal tropical Pacific Ocean temperatures and warmer than normal tropical Atlantic Ocean temperatures created ideal drought conditions due to the unstable sea surface temperatures. The result was dry air and high temperatures in the Midwest from about 1931 to 1939. Sea surface temperatures create shifts in weather patterns. One way is by changing the patterns in the jet stream. In the 1930s, the jet stream was weakened, causing the normally moisture-rich air from the Gulf of Mexico to become drier. Low-level winds further reduced the normal supply of moisture from the Gulf of Mexico and reduced rainfall throughout the US Midwest.
Other conditions at the time included: a high-pressure system in winter sat over the west coast of the United States and turned away wet weather, a pattern similar to that which occurred in the winter of 2013-14. Second, the spring of 1934 saw dust storms, caused by poor land management practices, suppress rainfall.

Fred

Regarding the "and could it happen again" part of your question; possibly, due to climate change. As others on here have noted, the original dust bowl was a combination of crappy soil conservation practices and a prolonged drought. Our soil conservation practices have improved a lot, but we still plant water-intensive crops like monoculture corn in places where it historically never would have grown. For instance, in the southern Great Plains where it's really dry and native vegetation usually consists of low shrubs and short grasses. We're able to grow crops in these areas now because of center-pivot irrigation technology (started in the 1940s) and groundwater pumping. But as our global climate system continues to warm and our groundwater resources become increasingly scarce, it's unlikely that we will be able to continue these practices in the future. Some authors think that a sudden, sustained drought in these areas, consistent with what was observed in the dust bowl, could lead to problems in the Great Plains similar to those observed in the 1930s. Just finished reading this paper in Nature Plants on this very topic: https://www.nature.com/articles/nplants2016193

Kyle Taylor
A question about formalized theories that may be both consistent and w-inconsistent

Let T be a first order set theory formalized in the language L(ZF) of ZF, which has "membership" and "=" as its only atomic predicates. For each positive integer n, let P(n) be the sentence which expresses that "there exists a set having exactly n elements". P(n) can be formalized in L(ZF) and is an axiom of T for each positive integer n. Note that L(ZF) does not need to contain any terms that are constants for this to be possible. Now let Q be the sentence stating that "there exists a finite set (in Tarski's sense of finite) which can be mapped onto every non-empty set". Q is also an axiom of T that can be formalized in L(ZF) and should be consistent with every finite collection of sentences of the form P(n). However, with infinitely many axioms it would seem appropriate to call T at least a w-inconsistent theory. But could T still be consistent? The answer is not clear to me since it would depend upon what other axioms T has. T must have some other axioms (such as the axiom of pairing) in order to define the mapping of one set onto another as the sentence Q describes. We must be careful that these additional axioms are not inconsistent with Q. We should probably not want a power set axiom, for example. But, with a proper choice of the new axioms could we end up with a consistent T? If so, T would be an example of a consistent but w-inconsistent formalized theory whose language need contain no constants and no formulae of the form N(x) intended to be interpreted as "x is a positive integer".

set-theory

Garabed Gulbenkian

Tarski finiteness is somewhat of an ambiguous term. One definition is equivalent to finiteness as we think about it, and the other one requires the use of the axiom of choice in order to be equivalent to finiteness. – Asaf Karagila Mar 11 '13 at 21:17

If we replace your axiom $Q$ by the stronger-seeming assertion $Q^+$ that asserts: "there is a largest set, which contains all other sets as subsets, but which is Tarski finite", then the corresponding theory $T$, asserting every $P(n)$ and also $Q^+$, is consistent, but not $\omega$-consistent. To see this, simply consider the ultrapower model $M=\Pi \langle V_m,{\in}\rangle/U$, where $U$ is a nonprincipal ultrafilter on $\mathbb{N}$ and $V_m$ consists of the sets of rank less than $m$ in the von Neumann hierarchy, defined by iterating the power set via $V_0=\emptyset$ and $V_{m+1}=P(V_m)$. Each statement $P(n)$ is true in all but finitely many $V_m$, and hence true in $M$. Similarly, each $V_{m+1}$ satisfies $Q^+$, since the object $V_m$ is largest in $V_{m+1}$ and Tarski finite there, and so $Q^+$ also is true in $M$. Alternatively, one could consider $V_N$ for nonstandard $N$ inside a nonstandard model of finite set theory $\text{ZFC}^{\neg\infty}$. These models furthermore satisfy some nice axioms, such as extensionality, union, foundation, axiom of choice, collection, replacement, separation, pairing restricted to sets of non-maximal rank, power set restricted to sets of non-maximal rank and induction on the von Neumann natural numbers. And so one can place all these axioms also into $T$. This theory, however, is clearly not $\omega$-consistent.

Joel David Hamkins

Thanks, Joel, for a very complete answer. Your axiom Q+ accomplishes everything that axiom Q does, while avoiding the problem of some additional axioms being possibly needed to define mappings.
Nothing new would seem to be needed to express that a set is finite in the sense of Tarski. So, if the only axioms of T are axiom Q+ and an infinite collection of sentences of the form P(n), T may be one of the simplest possible examples of a formalized theory that is both consistent and w-inconsistent. – Garabed Gulbenkian Mar 13 '13 at 18:51
Stochastic Partial Differential Equations: Analysis and Computations June 2014 , Volume 2, Issue 2, pp 233–261 | Cite as Determining white noise forcing from Eulerian observations in the Navier-Stokes equation Viet Ha Hoang Kody J. H. Law Andrew M. Stuart The Bayesian approach to inverse problems is of paramount importance in quantifying uncertainty about the input to, and the state of, a system of interest given noisy observations. Herein we consider the forward problem of the forced 2D Navier-Stokes equation. The inverse problem is to make inference concerning the forcing, and possibly the initial condition, given noisy observations of the velocity field. We place a prior on the forcing which is in the form of a spatially-correlated and temporally-white Gaussian process, and formulate the inverse problem for the posterior distribution. Given appropriate spatial regularity conditions, we show that the solution is a continuous function of the forcing. Hence, for appropriately chosen spatial regularity in the prior, the posterior distribution on the forcing is absolutely continuous with respect to the prior and is hence well-defined. Furthermore, it may then be shown that the posterior distribution is a continuous function of the data. We complement these theoretical results with numerical simulations showing the feasibility of computing the posterior distribution, and illustrating its properties. Bayesian inversion Navier-Stokes equation White noise forcing The Bayesian approach to inverse problems has grown in popularity significantly over the last decade, driven by algorithmic innovation and steadily increasing computer power [10]. Recently there have been systematic developments of the theory of Bayesian inversion on function spaces [3, 11, 12, 13, 14, 18] and this has led to new sampling algorithms which perform well under mesh-refinement [2, 15, 21]. In this paper we add to this growing interest in the Bayesian formulation of inversion, in the context of a specific PDE inverse problem, motivated by geophysical applications such as data assimilation in the atmosphere and ocean sciences, and demonstrate that fully Bayesian probing of the posterior distribution is feasible. The primary goal of this paper is to demonstrate that the Bayesian formulation of inversion for the forced Navier-Stokes equation, introduced in [3], can be extended to the case of white noise forcing. The paper [3] assumed an Ornstein-Uhlenbeck structure in time for the forcing, and hence did not include the white noise case. It is technically demanding to extend to the case of white noise forcing, but it is also of practical interest. This practical importance stems from the fact that the Bayesian formulation of problems with white noise forcing corresponds to a statistical version of the continuous time weak constraint 4DVAR methodology [22]. The 4DVAR approach to data assimilation currently gives the most accurate global short term weather forecasts available [16] and this is arguably the case because, unlike ensemble filters which form the major competitor, 4DVAR has a rigorous statistical interpretation as a maximum a posteriori (or MAP) estimator—the point which maximizes the posterior probability. It is therefore of interest to seek to embed our understanding of such methods in a broader Bayesian context. 
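Although the excerpt does not spell out a particular sampler, function-space MCMC methods of the kind cited in the introduction (for example, preconditioned Crank-Nicolson) are commonly used to probe posteriors of this type. The sketch below is a generic finite-dimensional illustration, not the authors' code: the prior covariance factor `sqrt_C`, the negative log-likelihood `phi`, and the step parameter `beta` are all assumptions of the sketch.

```python
# Generic sketch of a preconditioned Crank-Nicolson (pCN) MCMC loop for a
# Bayesian inverse problem with a centred Gaussian prior N(0, C), discretized
# to n dimensions.  Included only to illustrate the kind of dimension-robust
# sampler referred to above; it is not the algorithm used in the paper.
import numpy as np

def pcn_sample(phi, sqrt_C, n_steps=10_000, beta=0.2, rng=None):
    """phi(u): negative log-likelihood; sqrt_C @ xi draws from the N(0, C) prior."""
    rng = rng or np.random.default_rng(0)
    n = sqrt_C.shape[0]
    u = sqrt_C @ rng.standard_normal(n)            # start from a prior draw
    samples = []
    for _ in range(n_steps):
        v = np.sqrt(1 - beta**2) * u + beta * (sqrt_C @ rng.standard_normal(n))
        if np.log(rng.uniform()) < phi(u) - phi(v):  # accept with prob min(1, exp(phi(u)-phi(v)))
            u = v
        samples.append(u.copy())
    return np.array(samples)
```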
To explain the connection between our work and the 4DVAR methodology and, in particular, to explain the terminology used in the data assimilation community, it is instructive to consider the finite dimensional differential equation $$\begin{aligned} \frac{du}{dt}=f(u)+\xi , \quad u(0)=u_0 \end{aligned}$$ on \(\mathbb R^n\). Assume that we are given noisy observations \(\{y_j\}\) of the solution \(u_j=u(t_j)\) at times \(t_j=jh\) so that $$\begin{aligned} y_j=u_j+\eta _j, \quad j=1,\ldots , J, \end{aligned}$$ for some noises \(\eta _j.\) An important inverse problem is to determine the initial condition \(u_0\) and forcing \(\xi \) which best fit the data. If we view the solution \(u_j\) as a function of the initial condition and forcing, then a natural regularized least squares problem is to determine \(u_0\) and \(\xi \) to minimize $$\begin{aligned} I(u_0,\xi )=\sum _{j=1}^J \left| \varGamma ^{-\frac{1}{2}}\left( y_j-u_j(u_0,\xi ) \right) \right| ^2+|\varSigma ^{-\frac{1}{2}}u_0|^2+\Vert \mathsf Q^{-\frac{1}{2}}\xi \Vert ^2 \end{aligned}$$ where \(|\cdot |, \Vert \cdot \Vert \) denote the \(\mathbb R^n-\)Euclidean and \(L^2\bigl ([0,T];\mathbb R^n\bigr )\) norms respectively, \(\varGamma , \varSigma \) denote covariance matrices and \(\mathsf Q\) a covariance operator. This is a continuous time analogue of the 4DVAR or variational methodology, as described in the book of Bennett [1]. In numerical weather prediction the method is known as weak constraint 4DVAR, and as 4DVAR if \(\xi \) is set to zero (so that the ODE $$\begin{aligned} \frac{du}{dt}=f(u), \quad u(0)=u_0 \end{aligned}$$ is satisfied as a hard constraint), the term \(\Vert \mathsf Q^{-\frac{1}{2}}\xi \Vert ^2\) dropped, and the minimization is over \(u_0\) only. As explained in [10], these minimization problems can be viewed as probability maximizers for the posterior distribution of a Bayesian formulation of the inverse problem—the so-called MAP estimators. In this interpretation the prior measures on \(u_0\) and \(\xi \) are centred Gaussians with covariances \(\varSigma \) and \(\mathsf Q\) respectively. Making this connection opens up the possibility of performing rigorous statistical inversion, and thereby estimating uncertainty in predictions made. The ODEs arising in atmosphere and ocean science applications are of very high dimension due to discretizations of PDEs. It is therefore conceptually important to carry through the program in the previous paragraph, and in particular Bayesian formulation of the inversion, for PDEs; the paper [5] explains how to define MAP estimators for measures on Hilbert spaces and the connection to variational problems. The Navier-Stokes equation in 2D provides a useful canonical example of a PDE of direct relevance to the atmosphere and ocean sciences. When the prior covariance operator \(\mathsf Q\) is chosen to be that associated to an Ornstein-Uhleneck operator in time, the Bayesian formulation for the 2D Navier-Stokes equation has been carried out in [3]. Our goal in this paper is to extend to the more technically demanding case where \(\mathsf Q\) is the covariance operator associated with a white noise in time, with spatial correlation \(Q\). 
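A minimal sketch may make the objective $I(u_0,\xi)$ concrete. The code below evaluates a forward-Euler discretization of the weak-constraint functional for a finite-dimensional ODE; the choice of time-stepping, the function `f`, and the covariance matrices passed in are illustrative assumptions, not prescriptions from the paper.

```python
# Hedged sketch of the weak-constraint objective I(u0, xi) above, for
# du/dt = f(u) + xi discretized by forward Euler with step h.
import numpy as np

def weak_constraint_objective(u0, xi, f, y_obs, obs_every, h,
                              Gamma_inv, Sigma_inv, Q_inv):
    """xi has shape (n_steps, dim); y_obs[j] observes the state after (j+1)*obs_every steps."""
    u, J, obs_idx = u0.copy(), 0.0, 0
    for k in range(xi.shape[0]):
        u = u + h * (f(u) + xi[k])                    # forced Euler step
        if (k + 1) % obs_every == 0:
            r = y_obs[obs_idx] - u                    # data misfit at t_j
            J += r @ Gamma_inv @ r
            obs_idx += 1
    J += u0 @ Sigma_inv @ u0                          # penalty on the initial condition
    J += h * np.einsum('ki,ij,kj->', xi, Q_inv, xi)   # L^2-in-time penalty on the forcing
    return J
```

Setting the forcing penalty aside and minimizing over `u0` only recovers the hard-constraint (strong) 4DVAR functional described in the text.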
We will thus use the prior model \(\xi dt={dW}\) where \(W\) is a \(Q-\)Wiener process in an appropriate Hilbert space, and consider inference with respect to \(W\) and \(u_0.\) In the finite dimensional setting the differences between the case of coloured and white noise forcing, with respect to the inverse problem, are much less substantial and the interested reader may consult [7] for details. The key tools required in applying the function space Bayesian approach in [18] are the proof of continuity of the forward map from the function space of the unknowns to the data space, together with estimates of the dependence of the forward map upon its point of application, sufficient to show certain integrability properties with respect to the prior. This program is carried out for the 2D Navier-Stokes equation with Ornstein-Uhlenbeck priors on the forcing in the paper [3]. However to use priors which are white in time adds further complications since it is necessary to study the stochastically forced 2D Navier-Stokes equation and to establish continuity of the solution with respect to small changes in the Brownian motion \(W\) which defines the stochastic forcing. We do this by employing the solution concept introduced by Flandoli in [6], and using probabilistic estimates on the solution derived by Mattingly in [17]. In Sect. 2 we describe the relevant theory of the forward problem, employing the setting of Flandoli. In Sect. 3 we build on this theory, using the estimates of Mattingly to verify the conditions in [18], resulting in a well-posed Bayesian inverse problem for which the posterior is Lipschitz in the data with respect to Hellinger metric. Section 4 extends this to include making inference about the initial condition as well as the forcing. Finally, in Sect. 5, we present numerical results which demonstrate feasibility of sampling from the posterior on white noise forces, and demonstrate the properties of the posterior distribution. 2 Forward problem In this section we study the forward problem of the Navier-Stokes equation driven by white noise. Section 2.1 describes the forward problem, the Navier-Stokes equation, and rewrites it as an ordinary differential equation in a Hilbert space. In Sect. 2.2 we define the functional setting used throughout the paper. Section 2.3 highlights the solution concept that we use, leading in Sect. 2.4 to proof of the key fact that the solution of the Navier-Stokes equation is continuous as a function of the rough driving of interest and the initial condition. All our theoretical results in this paper are derived in the case of Dirichlet (no flow) boundary conditions. They may be extended to the problem on the periodic torus \(\mathbb {T}^d\), but we present the more complex Dirichlet case only for brevity. Let \(D\in \mathbb {R}^2\) be a bounded domain with smooth boundary. We consider in \(D\) the Navier-Stokes equation $$\begin{aligned} \partial _t u-\nu \Delta u+u\cdot \nabla u&= f-\nabla p,\quad (x,t) \in D \times (0,\infty ) \nonumber \\ \nabla \cdot u&= 0,\quad (x,t) \in D \times (0,\infty )\nonumber \\ u&= 0,\quad (x,t)\in \partial D \times (0,\infty ),\nonumber \\ u&= u_0,\quad (x,t) \in D \times \{0\}. \end{aligned}$$ We assume that the initial condition \(u_0\) and the forcing \(f(\cdot ,t)\) are divergence-free. We will in particular work with Eq. (3) below, obtained by projecting (1) into the space of divergence free functions—the Leray projector [19]. 
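Since the prior on the forcing is a $Q$-Wiener process, it may help to recall how such a path can be simulated by a truncated Karhunen-Loève expansion, $W(t)=\sum_k \sqrt{q_k}\,\beta_k(t)e_k$ with independent scalar Brownian motions $\beta_k$. The sketch below is illustrative only; in particular the eigenvalue decay $q_k=k^{-2}$ is a placeholder and not the covariance used in the paper's numerics.

```python
# Sketch: simulating a truncated Q-Wiener process on a uniform time grid.
# Each column k holds the coefficient sqrt(q_k) * beta_k(t_i) of W(t_i) in the
# basis e_k; increments over a step dt are independent N(0, q_k * dt).
import numpy as np

def q_wiener_path(n_modes=64, n_steps=1000, T=1.0, seed=0):
    """Return an array of shape (n_steps + 1, n_modes) of basis coefficients of W."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    q = np.arange(1, n_modes + 1, dtype=float) ** -2.0          # placeholder eigenvalues of Q
    dW = np.sqrt(q * dt) * rng.standard_normal((n_steps, n_modes))
    W = np.vstack([np.zeros(n_modes), np.cumsum(dW, axis=0)])   # enforces W(0) = 0
    return W
```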
We denote by \(\mathsf {V}\) the space of all divergence-free smooth functions from \(D\) to \(\mathbb {R}^2\) with compact support, by \(\mathbb {H}\) the closure of \(\mathsf {V}\) in \((L^2(D))^2\), and by \(\mathbb {H}^1\) the closure of \(\mathsf {V}\) in \((H^1(D))^2\). Let \(\mathbb {H}^2=(H^2(D))^2\bigcap \mathbb {H}^1\). The initial condition \(u_0\) is assumed to be in \(\mathbb {H}\). We define the linear Stokes' operator \(A:\mathbb {H}^2\rightarrow \mathbb {H}\) by \(A u=-\Delta u\) noting that the assumption of compact support means that Dirichlet boundary condition is imposed on the Stokes' operator \(A\). Since \(A\) is selfadjoint, \(A\) possesses eigenvalues \(0<\lambda _1\le \lambda _2\le \cdots \) with the corresponding eigenvectors \(e_1, e_2, \ldots \in \mathbb {H}^2\). We denote by \(\langle \cdot ,\cdot \rangle \) the inner product in \(\mathbb {H}\), extended to the dual pairing on \(\mathbb {H}^{-1} \times \mathbb {H}^1\). We then define the bilinear form \(B: \mathbb {H}^1\times \mathbb {H}^1\rightarrow \mathbb {H}^{-1}\) $$\begin{aligned} \langle B(u,v),z\rangle =\int \limits _Dz(x)\cdot (u(x)\cdot \nabla )v(x)dx \end{aligned}$$ which must hold for all \(z \in \mathbb {H}^1.\) From the incompressibility condition we have, for all \(z \in \mathbb {H}^1\), $$\begin{aligned} \langle B(u,v),z\rangle =-\langle B(u,z),v\rangle . \end{aligned}$$ By projecting problem (1) into \(\mathbb {H}\) we may write it as an ordinary differential equation in the form $$\begin{aligned} du(t)=-\nu A udt-B(u,u)dt+dW(t),\quad u(0)=u_0\in \mathbb {H}, \end{aligned}$$ where \(dW(t)\) is the projection of the forcing \(f(x,t)dt\) into \(\mathbb {H}\). We will define the solution of this equation pathwise, for suitable \(W\), not necessarily differentiable in time. 2.2 Function spaces For any \(s\ge 0\) we define \(\mathbb {H}^s\subset \mathbb {H}\) to be the Hilbert space of functions \(u=\sum _{k=1}^\infty u_ke_k\in \mathbb {H}\) such that $$\begin{aligned} \sum _{k=1}^\infty \lambda _k^{s}u_k^2<\infty ; \end{aligned}$$ we note that the \(\mathbb {H}^j\) for \(j \in \{0,1,2\}\) coincide with the preceding definitions of these spaces. The space \(\mathbb {H}^s\) is endowed with the inner product $$\begin{aligned} \langle u,v\rangle _{\mathbb {H}^s}=\sum _{k=1}^\infty \lambda _k^{s}u_kv_k, \end{aligned}$$ for \(u=\sum _{k=1}^\infty u_ke_k\), \(v=\sum _{k=1}^\infty v_ke_k\) in \(\mathbb {H}\). We denote by \(\mathbb {V}\) the particular choice \(s=\frac{1}{2}+\epsilon \), namely \(\mathbb {H}^{\frac{1}{2}+\epsilon }\), for given \(\epsilon >0\). In what follows we will be particularly interested in continuity of the mapping from the forcing \(W\) into linear functionals of the solution of (3). To this end it is helpful to define the Banach space \({\mathbb X}:=C([0,T];\mathbb {V})\) with the norm $$\begin{aligned} \Vert W\Vert _{{\mathbb X}}=\sup _{t\in (0,T)}\Vert W(t)\Vert _{\mathbb {V}}. \end{aligned}$$ 2.3 Solution concept In what follows we define a solution concept for Eq. (3) for each forcing function \(W\) which is continuous, but not necessarily differentiable, in time. 
We always assume that \(W(0)=0.\) Following Flandoli [6], for each \(W\in {\mathbb X}\) we define the weak solution \(u(\cdot ;W)\in C([0,T];\mathbb {H})\bigcap L^2([0,T];\mathbb {H}^{1/2})\) of (3) as a function that satisfies $$\begin{aligned} \langle u(t),\phi \rangle +{\nu }\int \limits _0^t\langle u(s),A\phi \rangle ds-\int \limits _0^t\langle {B\bigl (u(s),\phi \bigr ),u(s)}\rangle dx= \langle u_0,\phi \rangle +\langle W(t),\phi \rangle ,\nonumber \\ \end{aligned}$$ for all \(\phi \in \mathbb {H}^2\) and all \(t\in (0,T)\); note the integration by parts on the Stokes' operator and the use of (2) to derive this identity from (3). Note further that if \(u\) and \(W\) are sufficiently smooth, (4) is equivalent to (3). To employ this solution concept we first introduce the concept of a solution of the linear equation $$\begin{aligned} dz(t)=-\nu A zdt+dW(t),\quad z(0)=0\in \mathbb {H}\end{aligned}$$ where \(W\) is a deterministic continuous function obtaining values in \(\mathbb {X}\) but not necessarily differentiable. We define a weak solution of this equation as a function \(z\in C([0,T];\mathbb {H})\) such that $$\begin{aligned} \langle z(t),\phi \rangle +{\nu } \int \limits _0^t\langle z(s),A\phi \rangle ds=\langle W(t),\phi \rangle \end{aligned}$$ for all \(\phi \in \mathbb {H}^2\). Then for this function \(z(t)\) we consider the solution \(v\) of the equation $$\begin{aligned} dv(t)=-\nu A vdt-B(z+v,z+v)dt,\ \ v(0)=u_0\in \mathbb {H}. \end{aligned}$$ As we will show below, \(z(t)\) possesses sufficiently regularity so (7) possesses a weak solution \(v\). We then deduce that \(u=z+v\) is a weak solution of (3) in the sense of (4). When we wish to emphasize the dependence of \(u\) on \(W\) (and similarly for \(z\) and \(v\)) we write \(u(t;W).\) We will now show that the function \(z\) defined by $$\begin{aligned} z(t)&= \int \limits _0^t e^{-\nu A(t-s)}dW(s)\nonumber \\&= W(t)-\int \limits _0^ t\nu A e^{-\nu A(t-s)}W(s)ds \end{aligned}$$ satisfies the weak formula (6). Let \(w_k=\langle W, e_k \rangle \), that is $$\begin{aligned} W(t)\,{:=}\,\sum _{k=1}^\infty {w_k}(t)e_k\in {\mathbb X}. \end{aligned}$$ We then deduce from (8) that $$\begin{aligned} z(t;W)=W(t)-\sum _{k=1}^\infty \left( \int \limits _0^t w_k(s){\nu }\lambda _ke^{(t-s)(-{\nu }\lambda _k)}ds\right) e_k. \end{aligned}$$ We have the following regularity property for \(z\): Lemma 1 For each \(W \in {\mathbb X}\), the function \(z=z(\cdot ;W)\in C([0,T];\mathbb {H}^{1/2})\). We first show that for each \(t, z(t;W)\) as defined in (10) belongs to \(\mathbb {H}^{1/2}\). Fixing an integer \(M>0\), using inequality \(a^{1-\epsilon /2}e^{-a}<c\) for all \(a>0\) for an appropriate constant \(c\), we have $$\begin{aligned}&\sum _{k=1}^M\lambda _k^{1/2}\left( \int \limits _0^t\nu \lambda _ke^{(t-s)(-\nu \lambda _k)}w_k(s)ds\right) ^{2}\\&\quad \le \sum _{k=1}^M\lambda _k^{1/2}\left( \int \limits _0^t{c\over (t-s)^{1-\epsilon /2}}\lambda _k^{\epsilon /2}|w_k(s)|dx\right) ^{2}. 
\end{aligned}$$ Therefore, $$\begin{aligned} \left\| \sum _{k=1}^M\int \limits _0^t\nu \lambda _ke^{(t-s)(-\nu \lambda _k)}w_k(s)e_kds\right\| _{\mathbb {H}^{1/2}}&\le \left\| \sum _{k=1}^M\int \limits _0^t{c\over (t-s)^{1-\epsilon /2}}\lambda _k^{\epsilon /2}|w_k(s)|e_kds\right\| _{\mathbb {H}^{1/2}}\\&\le \int \limits _0^t{c\over (t-s)^{1-\epsilon /2}}\left\| \sum _{k=1}^M\lambda _k^{\epsilon /2}|w_k(s)|e_k\right\| _{\mathbb {H}^{1/2}}ds\\&\le \max _{s\in (0,T)}\Vert W(s)\Vert _{\mathbb {H}^{1/2+\epsilon }}\int \limits _0^t{c\over (t-s)^{1-\epsilon /2}}ds, \end{aligned}$$ which is uniformly bounded for all \(M\). Therefore, $$\begin{aligned} \sum _{k=1}^{\infty }\left( \int \limits _0^tw_k(s)\nu \lambda _ke^{(t-s)(-\nu \lambda _k)}ds\right) e_k\in \mathbb {H}^{1/2}. \end{aligned}$$ It follows from (10) that, since \(W \in {\mathbb X}\), for each \(t\), \(z(t;W)\in \mathbb {H}^{1/2}\) as required. Furthermore, for all \(t\in (0,T)\) $$\begin{aligned} \Vert z(t;W)\Vert _{\mathbb {H}^{1/2}}\le c\Vert W\Vert _{{\mathbb X}}. \end{aligned}$$ Now we turn to the continuity in time. Arguing similarly, we have that $$\begin{aligned}&\left\| \sum _{k=M}^\infty \left( \int \limits _0^tw_k(s)\nu \lambda _ke^{(t-s)(-\nu \lambda _k)}ds\right) e_k\right\| _{\mathbb {H}^{1/2}}\\&\quad \le \int \limits _0^t{c\over (t-s)^{1-\epsilon /2}}\left\| \sum _{k=M}^\infty w_k(s)e_k\right\| _{\mathbb {H}^{1/2+\epsilon }}ds\\&\quad \le \left( \int \limits _0^t{c\over (t-s)^{(1-\epsilon /2)^p}}ds\right) ^{1/p}\left( \int \limits _0^t\left\| \sum _{k=M}^\infty w_k(s)e_k\right\| ^q_{\mathbb {H}^{1/2+\epsilon }}ds \right) ^{1/q}, \end{aligned}$$ for all \(p,q>0\) such that \(1/p+1/q=1\). From the Lebesgue dominated convergence theorem, $$\begin{aligned} \lim _{M\rightarrow \infty }\int \limits _0^{t}\left\| \sum _{k=M}^\infty w_k(s)e_k\right\| ^q_{\mathbb {H}^{1/2+\epsilon }}ds=0; \end{aligned}$$ and when \(p\) sufficiently close to 1, $$\begin{aligned} \int \limits _0^t{c\over (t-s)^{(1-\epsilon /2)p}}ds \end{aligned}$$ is finite. We then deduce that $$\begin{aligned} \lim _{M\rightarrow \infty }\left\| \sum _{k=M}^\infty \left( \int \limits _0^tw_k(s)\nu \lambda _ke^{(t-s)(-\nu \lambda _k)}ds\right) e_k\right\| _\mathbb {H}^{1/2}=0, \end{aligned}$$ uniformly for all \(t\). Fixing \(t\in (0,T)\) we show that $$\begin{aligned} \lim _{t'\rightarrow t}\Vert z(t;W)-z({t';W})\Vert _{\mathbb {H}^{1/2}}=0. \end{aligned}$$ $$\begin{aligned}&\Vert z(t;W)-z(t';W)\Vert _{\mathbb {H}^{1/2}}\le \Vert W(t)-W(t')\Vert _{\mathbb {H}^{1/2}}\\&\quad + \left\| \sum _{k=1}^{M-1}\left( \int \limits _0^tw_k(s)\nu \lambda _ke^{(t-s)(-\nu \lambda _k)}ds-\int \limits _0^{t'}w_k(s)\nu \lambda _ke^{(t'-s)(-\nu \lambda _k)}ds\right) e_k\right\| _{\mathbb {H}^{1/2}}\\&\quad +\left\| \sum _{k=M}^\infty \left( \int \limits _0^tw_k(s)\nu \lambda _ke^{(t-s)(-\nu \lambda _k)}ds\right) e_k\right\| _{\mathbb {H}^{1/2}}\\&\quad +\left\| \sum _{k=M}^\infty \left( \int \limits _0^{t'}w_k(s)\nu \lambda _ke^{(t'-s)(-\nu \lambda _k)}ds\right) e_k\right\| _{\mathbb {H}^{1/2}}. \end{aligned}$$ For \(\delta >0\), when \(M\) is sufficiently large, the argument above shows that $$\begin{aligned}&\left\| \sum _{k=M}^\infty \left( \int \limits _0^tw_k(s)\nu \lambda _ke^{(t-s)(-\nu \lambda _k)}ds\right) e_k\right\| _{\mathbb {H}^{1/2}}\\&+\left\| \sum _{k=M}^\infty \left( \int \limits _0^{t'}w_k(s)\nu \lambda _ke^{(t'-s)(-\nu \lambda _k)}ds\right) e_k\right\| _{\mathbb {H}^{1/2}}<\delta /3. 
\end{aligned}$$ Furthermore, when \(|t'-t|\) is sufficiently small, $$\begin{aligned} \left\| \sum _{k=1}^{M-1}\left( \int \limits _0^tw_k(s)\nu \lambda _ke^{(t-s)(-\nu \lambda _k)}ds-\int \limits _0^{t'}w_k(s)\nu \lambda _ke^{(t'-s)(-\nu \lambda _k)}ds\right) e_k\right\| _{\mathbb {H}^{1/2}}<\delta /3. \end{aligned}$$ Finally, since \(W \in {\mathbb X}\), for \(|t'-t|\) is sufficiently small we have $$\begin{aligned} \Vert W(t)-W(t')\Vert _{\mathbb {H}^{1/2}} <\delta /3 \end{aligned}$$ Thus when \(|t'-t|\) is sufficiently small, \(\Vert z(t;W)-z(t';W)\Vert _{\mathbb {H}^{1/2}}<\delta \). The conclusion follows.\(\square \) Having established regularity, we now show that \(z\) is indeed a weak solution of (5). For each \(\phi \in \mathbb {H}^2\), \(z(t)=z(t;W)\) satisfies (6). It is sufficient to show this for \(\phi =e_k\). We have $$\begin{aligned} \int \limits _0^t\langle z(s),Ae_k\rangle ds&= \int \limits _0^t\langle W(s),Ae_k\rangle ds-\int \limits _0^t\int \limits _0^s {w_k(\tau )\nu }\lambda _k^2e^{(s-\tau )(- {\nu }\lambda _k)}d\tau ds\\&= \lambda _k\int \limits _0^t {w_k}(s)ds-{\nu }\lambda _k^2\int \limits _0^t {w_k}(\tau )\Bigl (\int \limits _\tau ^te^{(s-\tau )(-{\nu }\lambda _k)} ds\Bigr )d\tau \\&= \lambda _k\int \limits _0^t{w_k}(s)ds-\lambda _k\int \limits _0^t{w_k}(\tau )d\tau +\lambda _k\int \limits _0^t{w_k}(\tau )e^{(t-\tau )(-{\nu }\lambda _k)}d\tau \\&= \lambda _k\int \limits _0^t{w_k}(\tau )e^{(t-\tau )(-{\nu }\lambda _k)}d\tau . \end{aligned}$$ On the other hand, $$\begin{aligned} \langle z(t),e_k\rangle =\langle W(t),e_k\rangle -{\nu } \lambda _k\int \limits _0^t{w_k}(s)e^{(t-s)(-\nu \lambda _k)}ds. \end{aligned}$$ The result then follows.\(\square \) We now turn to the following result, which concerns \(v\) and is established on page 416 of [6], given the properties of \(z(\cdot ;W)\) established in the preceding two lemmas. For each \(W\in {\mathbb X}\), problem (7) has a unique solution \(v\) in the function space \(C(0,T;\mathbb {H})\bigcap L^2(0,T;\mathbb {H}^1)\). We then have the following existence and uniqueness result for the Navier-Stokes Eq. (3), more precisely for the weak form (4), driven by rough additive forcing [6]: For each \(W\in {\mathbb X}\), problem (4) has a unique solution \(u\in C(0,T;\mathbb {H})\bigcap L^2(0,T;\mathbb {H}^{1/2})\) such that \(u-z\in L^2(0,T;\mathbb {H}^1)\). A solution \(u\) for (4) can be taken as $$\begin{aligned} u(t;W)=z(t;W)+v(t;W). \end{aligned}$$ From the regularity properties of \(z\) and \(v\) in Lemmas 1 and 3, we deduce that \(u\in C(0,T;\mathbb {H})\bigcap L^2(0,T;\mathbb {H}^{1/2})\). Assume that \(\bar{u}(t;W)\) is another solution of (4). Then \(\bar{v}(t;W)=\bar{u}(t;W)-z(t;W)\) is a solution in \(C(0,T;\mathbb {H})\bigcap L^2(0,T;\mathbb {H}^1)\) of (7). However, (7) has a unique solution in \(C(0,T;\mathbb {H})\bigcap L^2(0,T;\mathbb {H}^1)\). Thus \(\bar{v}=v\).\(\square \) 2.4 Continuity of the forward map The purpose of this subsection is to establish continuity of the forward map from \(W\) into the weak solution \(u\) of (3), as defined in (4), at time \(t>0.\) In fact we prove continuity of the forward map from \((u_0,W)\) into \(u\) and for this it is useful to define the space \({\mathcal H}=\mathbb {H}\times {\mathbb X}\) and denote the solution \(u\) by \(u(t;u_0,W)\). Theorem 1 For each \(t>0\), the solution \(u(t;\cdot ,\cdot )\) of (3) is a continuous map from \({\mathcal H}\) into \(\mathbb {H}\). First we fix the initial condition and just write \(u(t;W)\) for simplicity. We consider Eq. 
(3) with driving \(W \in \mathbb {X}\) given by (9) and by \(W' \in \mathbb {X}\) defined by $$\begin{aligned} W'(s)=\sum _{k=1}^\infty {w_{k}}'(s)e_{k}\in {\mathbb X}. \end{aligned}$$ We will prove that, for \(W,W'\) from a bounded set in \(\mathbb {X}\), there is \(c={c}(T)>0\), such that $$\begin{aligned} \sup _{t\in (0,T)}\Vert z(t;W)-z(t;W')\Vert _{\mathbb {H}^{1/2}}\le c \Vert W-W'\Vert _{{\mathbb X}} \end{aligned}$$ and, for each \(t \in (0,T)\), $$\begin{aligned} \Vert v(t;W)-v(t;W')\Vert _{\mathbb {H}}^2\le c\sup _{s\in (0,T)}\Vert z(s;W)-z(s;W')\Vert _{L^4(D)}^2. \end{aligned}$$ This suffices to prove the desired result since Sobolev embedding yields, from (14), $$\begin{aligned} \Vert v(t;W)-v(t;W')\Vert _{\mathbb {H}}^2\le c\sup _{s\in (0,T)}\Vert z(s;W)-z(s;W')\Vert _{\mathbb {H}^{\frac{1}{2}}}^2. \end{aligned}$$ Since \(u=z+v\) we deduce from (13) and (15) that \(u\) as a map from \({\mathbb X}\) to \(\mathbb {H}\) is continuous. To prove (13) we note that $$\begin{aligned}&\Vert z(t;W)-z(t;W')\Vert _{\mathbb {H}^{\frac{1}{2}}} \le \Vert W(t)-W'(t)\Vert _{\mathbb {H}^{\frac{1}{2}}}\nonumber \\&\quad + \left\| \int \limits _0^t \nu Ae^{-\nu A(t-s)}\left( W(s)-W'(s)\right) ds\right\| _{\mathbb {H}^{\frac{1}{2}}} \end{aligned}$$ $$\begin{aligned}&\sup _{t \in (0,T)}\Vert z(t;W)-z(t;W')\Vert _{\mathbb {H}^{\frac{1}{2}}} \le \Vert W-W'\Vert _{{\mathbb X}}\\&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad +\sup _{t \in (0,T)}\left\| \int \limits _0^t \nu Ae^{-\nu A(t-s)}\left( W(s)-W'(s)\right) ds\right\| _{\mathbb {H}^{\frac{1}{2}}}. \end{aligned}$$ Thus it suffices to consider the last term on the right hand side. We have $$\begin{aligned}&\left\| \int \limits _0^t Ae^{-\nu A(t-s)}\left( W(s)-W'(s)\right) ds\right\| _{\mathbb {H}^{\frac{1}{2}}}^2\\&\qquad =\left\| \sum _{k=1}^{\infty }\int \limits _0^t\lambda _ke^{(t-s)(-\nu \lambda _k)}(w'_k(s)-w_k(s))e_kds\right\| _{\mathbb {H}^{1/2}}^2\\&\qquad =\sum _{k=1}^\infty \lambda _k^{1/2}\left( \int \limits _0^t\lambda _ke^{(t-s)(-{\nu }\lambda _k)}(w_k'(s)-w_k(s))ds\right) ^{2}\\&\qquad \le \sum _{k=1}^\infty \lambda _k^{1/2}\left( \int \limits _0^t\lambda _ke^{(t-s)(-{\nu }\lambda _k)}|w_k'(s)-w_k(s)|ds\right) ^{2}\\&\qquad \le \sum _{k=1}^\infty \lambda _k^{1/2}\left( \int \limits _0^t{c\over (t-s)^{1-\epsilon /2}}\lambda _k^{\epsilon /2}|w_k'(s)-w_k(s)|ds\right) ^{2} \end{aligned}$$ where we have used the fact that \(a^{1-\epsilon /2}e^{-a}<c\) for all \(a>0\) for an appropriate constant \(c\). From this, we deduce that $$\begin{aligned}&\left\| \int \limits _0^t Ae^{-\nu A(t-s)}\bigl (W(s)-W'(s)\bigr )ds\right\| _{\mathbb {H}^{\frac{1}{2}}}\\&\qquad \le \left\| \sum _{k=1}^\infty \int \limits _0^t{c\over (t-s)^{1-\epsilon /2}}\lambda _k^{\epsilon /2}|w_k'(s)-w_k(s)|e_kds\right\| _{\mathbb {H}^{1/2}}\\&\qquad \le \int \limits _0^t{c\over (t-s)^{1-\epsilon /2}}\left\| \sum _{k=1}^\infty \lambda _k^{\epsilon /2}|w_k'(s)-w_k(s)|e_k\right\| _{\mathbb {H}^{1/2}}ds\\&\qquad \le \int \limits _0^t{c\over (t-s)^{1-\epsilon /2}}ds\sup _{s\in (0,T)}\left\| \sum _{k=1}^\infty \lambda _k^{\epsilon /2}|w_k'(s)-w_k(s)|e_k\right\| _{\mathbb {H}^{1/2}}\\&\qquad \le c\sup _{s\in (0,T)}\Vert W'(s)-W(s)\Vert _{\mathbb {V}}. \end{aligned}$$ Therefore (13) holds. We now prove (14). 
We will use the following estimate for the solution \(v\) of (7) which is proved in Flandoli [6], page 412, by means of a Gronwall argument: $$\begin{aligned} \sup _{s\in (0,T)} \Vert v(s)\Vert ^2_{\mathbb {H}}+\int \limits _0^T\Vert v(s)\Vert ^2_{\mathbb {H}^1}\le C\left( T,\sup _{s\in (0,T)}\Vert z(s)\Vert _{L^4(D)}\right) . \end{aligned}$$ We show that the map \(C([0,T];L^4(D))\ni z(\cdot ;W)\mapsto v(\cdot ;W)\in \mathbb {H}\) is continuous. For \(W\) and \(W'\) in \({\mathbb X}\), define \(v=v(t;W), v'=v(t;W'), z=z(t;W), z'=z(t;W'), e=v-v'\) and \(\delta =z-z'\). Then we have $$\begin{aligned} {de\over dt}+\nu Ae+ B\bigl (v+z,v+z\bigr )-B\bigl (v'+z',v'+z'\bigr )=0. \end{aligned}$$ From this, we have $$\begin{aligned}&{1\over 2}{d\Vert e\Vert _{\mathbb {H}}^2\over dt}+\nu \Vert e\Vert _{\mathbb {H}^1}^2=-\bigl \langle B\bigl (v+z,v+z\bigr ),e\bigr \rangle \\&\quad +\,\big \langle B\bigl (v'+z',v'+z'\bigr ),e\big \rangle . \end{aligned}$$ From (2) we obtain $$\begin{aligned}&{1\over 2}{d\Vert e\Vert _{\mathbb {H}}^2\over dt}+\nu \Vert e\Vert _{\mathbb {H}^1}^2= +\bigl \langle B\bigl (v+z,e\bigr ),v+z\bigr \rangle \\&\quad -\,\big \langle B\bigl (v'+z',e\bigr ),v'+z'\big \rangle =\bigl \langle B\bigl (v+z,e\bigr ),v+z-v'-z'\bigr \rangle \\&\quad -\,\big \langle B\bigl (v'+z'-v-z,e\bigr ),v'+z'\big \rangle =\bigl \langle B\bigl (v+z,e\bigr ),e+\delta \bigr \rangle \\&\quad +\,\big \langle B\bigl (e+\delta ,e\bigr ),v'+z'\big \rangle \le (\Vert e\Vert _{L^4(D)}+\Vert \delta \Vert _{L^4(D)}) (\Vert v\Vert _{L^4(D)}+\Vert z\Vert _{L^4(D)}\\&\quad +\,\Vert v'\Vert _{L^4(D)}+\Vert z'\Vert _{L^4(D)})\Vert e\Vert _{\mathbb {H}^1}. \end{aligned}$$ We now use the following interpolation inequality $$\begin{aligned} \Vert e\Vert _{L^4(D)}\le c_0\Vert e\Vert _{\mathbb {H}^1}^{1/2}\Vert e\Vert _{\mathbb {H}}^{1/2}, \end{aligned}$$ which holds for all two dimensional domains \(D\) with constant \(c_0\) depending only on \(D\); see Flandoli [6]. Using this we obtain $$\begin{aligned}&{1\over 2}{d\Vert e\Vert ^2_{\mathbb {H}}\over dt}+\nu \Vert e\Vert ^2_{\mathbb {H}^1}\le c_1\left( \Vert e\Vert _{\mathbb {H}^1}^{3/2}\Vert e\Vert _{\mathbb {H}}^{1/2}+\Vert \delta \Vert _{L^4(D)}\Vert e\Vert _{\mathbb {H}^1}\right) \\&\quad \cdot \left( \Vert v\Vert _{L^4(D)}+\Vert v'\Vert _{L^4(D)}+\Vert z\Vert _{L^4(D)}+\Vert z'\Vert _{L^4(D)}\right) \end{aligned}$$ for a positive constant \(c_1\). From the Young inequality, we have $$\begin{aligned}&\Vert e\Vert ^{3/2}_{\mathbb {H}^1}\Vert e\Vert _{\mathbb {H}}^{1/2}\left( \Vert v\Vert _{L^4(D)}+\Vert v'\Vert _{L^4(D)}+\Vert z\Vert _{L^4(D)}+\Vert z'\Vert _{L^4(D)}\right) \\&\quad \le \frac{3}{4}c_2^{4/3}\Vert e\Vert _{\mathbb {H}^1}^2+\frac{1}{4c_2^4}\Vert e\Vert _{\mathbb {H}}^2\left( \Vert v\Vert _{L^4(D)}+\Vert v'\Vert _{L^4(D)}+\Vert z\Vert _{L^4(D)}+\Vert z'\Vert _{L^4(D)}\right) ^4 \end{aligned}$$ $$\begin{aligned}&\Vert \delta \Vert _{L^4(D)}\Vert e\Vert _{\mathbb {H}^1}\left( \Vert v\Vert _{L^4(D)}+\Vert v'\Vert _{L^4(D)}+\Vert z\Vert _{L^4(D)}+\Vert z'\Vert _{L^4(D)}\right) \\&\qquad \le \frac{c_3^2}{2}\Vert e\Vert _{\mathbb {H}^1}^2+\frac{1}{2c_3^2}\Vert \delta \Vert ^2_{L^4(D)}\left( \Vert v\Vert _{L^4(D)}+\Vert v'\Vert _{L^4(D)}+\Vert z\Vert _{L^4(D)}+\Vert z'\Vert _{L^4(D)}\right) ^2 \end{aligned}$$ for all positive constants \(c_2\) and \(c_3\). 
Choosing \(c_2\) and \(c_3\) so that \(c_1(3c_2^{4/3}/4+c_3^2/2)=\nu \), we deduce that there is a positive constant \(c\) such that $$\begin{aligned} {1\over 2}{d\Vert e\Vert _{\mathbb {H}}^2\over dt}+\nu \Vert e\Vert _{\mathbb {H}^1}^2\le \nu \Vert e\Vert _{\mathbb {H}^1}^2 +c\Vert e\Vert _{\mathbb {H}}^2\cdot I_4 +c\Vert \delta \Vert _{L^4(D)}^2\cdot I_2 \end{aligned}$$ where we have defined $$\begin{aligned} I_2&=\Vert v\Vert _{L^4(D)}^2+\Vert v'\Vert _{L^4(D)}^2+\Vert z\Vert _{L^4(D)}^2+\Vert z'\Vert _{L^4(D)}^2\\ I_4&=\Vert v\Vert _{L^4(D)}^4+\Vert v'\Vert _{L^4(D)}^4+\Vert z\Vert _{L^4(D)}^4+\Vert z'\Vert _{L^4(D)}^4. \end{aligned}$$ From Gronwall's inequality, we have $$\begin{aligned} \Vert e(t)\Vert _{\mathbb {H}}^2\le c\int \limits _0^t\bigg (e^{\int \limits _s^t I_4(s')ds'} \bigg ) \Vert \delta (s)\Vert _{L^4(D)}^2 I_2(s) ds. \end{aligned}$$ Applying the interpolation inequality (18) to \(v(s';W)\), we have that $$\begin{aligned} \int \limits _0^T\Vert v(s';W)\Vert _{L^4(D)}^4 ds'\le c\sup _{s'\in (0,T)}\Vert v(s';W)\Vert _{\mathbb {H}}^2\int \limits _0^T\Vert v(s';W)\Vert _{\mathbb {H}^1}^2ds', \end{aligned}$$ which is bounded uniformly when \(W\) belongs to a bounded subset of \({\mathbb X}\) due to (16). Using this estimate, and a similar estimate on \(v'\), together with (11) and Sobolev embedding of \(\mathbb {H}^{\frac{1}{2}}\) into \(L^4(D)\), we deduce that $$\begin{aligned} \Vert e(t)\Vert _{\mathbb {H}}^2\le c\sup _{0\le s\le T}\Vert \delta (s)\Vert _{L^4(D)}^2. \end{aligned}$$ We now extend to include continuity with respect to the initial condition. We show that \(u(\cdot ,t;u_0,W)\) is a continuous map from \({\mathcal H}\) to \(\mathbb {H}\). For \(W\in {\mathbb X}\) and \(u_0\in \mathbb {H}\), we consider the following equation: $$\begin{aligned} \frac{dv}{dt}+A v+B(v+z,v+z)=0, \quad v(0)=u_0. \end{aligned}$$ We denote the solution by \(v(t)=v(t;u_0,W)\) to emphasize the dependence on initial condition and forcing which is important here. For \((u_0,W)\in {\mathcal H}\) and \((u_0',W')\in {\mathcal H}\), from (19) and Gronwall's inequality, we deduce that $$\begin{aligned}&\Vert v(t;u_0,W)-v(t;u_0',W')\Vert ^2_{\mathbb {H}}\le \Vert u_0-u_0'\Vert _{\mathbb {H}}^2e^{\int \limits _0^t I_4(s'))ds'}\\&\quad +\, c\int \limits _0^t\bigg (e^{\int _s^t I_4(s')ds'} \cdot \Vert z(s;W)-z(s;W')\Vert _{L^4(D)}^2. I_2(s)\bigg )ds. \end{aligned}$$ We then deduce that $$\begin{aligned} \Vert v(t;u_0,W)-v(t;u_0',W')\Vert ^2_{\mathbb {H}}&\le c\Vert u_0-u_0'\Vert ^2_{\mathbb {H}}+c\sup _{0\le s\le T}\Vert z(s;W)-z(s;W')\Vert _{L^4(D)}^2\\&\le c\Vert u_0-u_0'\Vert ^2_{\mathbb {H}} \quad +c\sup _{t\in (0,T)}\Vert W(t)-W'(t)\Vert _{\mathbb {H}^{1/2+\epsilon }}. \end{aligned}$$ This gives the desired continuity of the forward map.\(\square \) 3 Bayesian inverse problems with model error In this section we formulate the inverse problem of determining the forcing to Eq. (3) from knowledge of the velocity field; more specifically we formulate the Bayesian inverse problem of determining the driving Brownian motion \(W\) from noisy pointwise observations of the velocity field. Here we consider the initial condition to be fixed and hence denote the solution of (3) by \(u(t;W)\); extension to the inverse problem for the pair \((u_0,W)\) is given in the following section. We set-up the likelihood in Sect. 3.1. Then, in Sect. 
3.2, we describe the prior on the forcing which is a Gaussian white-in-time process with spatial correlations, and hence a spatially correlated Brownian motion prior on \(W\). This leads, in Sect. 3.3, to a well-defined posterior distribution, absolutely continuous with respect to the prior, and Lipschitz in the Hellinger metric with respect to the data. To prove these results we employ the framework for Bayesian inverse problems developed in Cotter et al. [3] and Stuart [18]. In particular, Corollary 2.1 of [3] and Theorem 6.31 of [18] show that, in order to demonstrate the absolute continuity of the posterior measure with respect to the prior, it suffices to show that the mapping \({\mathcal G}\) in (23) is continuous with respect to the topology of \(\mathbb {X}\) and to choose a prior with full mass on \(\mathbb {X}\). Furthermore we then employ the proofs of Theorem 2.5 of [3] and Theorem 4.2 of [18] to show the well-posedness of the posterior measure; indeed we show that the posterior is Lipschitz with respect to data, in the Hellinger metric. 3.1 Likelihood Fix a set of times \(t_j\in (0,T)\), \(j=1,\ldots , J.\) Let \(\ell \) be a collection of \(K\) bounded linear functionals on \(\mathbb {H}\). We assume that we observe, for each \(j\), \(\ell \bigl (u(\cdot ,t_j;W)\bigr )\) plus a draw from a centered \(K\) dimensional Gaussian noise \(\vartheta _j\) so that $$\begin{aligned} \delta _j=\ell \bigl (u(\cdot ,t_j;W)\bigr )+\vartheta _j \end{aligned}$$ is known to us. Concatenating the data we obtain $$\begin{aligned} \delta ={\mathcal G}(W)+\vartheta \end{aligned}$$ where \(\delta ,\vartheta \in \mathbb R^{JK}\) and \({\mathcal G}: {\mathbb X}\rightarrow \mathbb R^{JK}.\) The observational noise \(\vartheta \) is a draw from the \(JK\) dimensional Gaussian random variable with the covariance matrix \(\Sigma \). In the following we will define a prior measure \(\rho \) on \(W\) and then determine the conditional probability measure \(\rho ^\delta =\mathbb {P}(W|\delta )\). We will then show that \(\rho ^{\delta }\) is absolutely continuous with respect to \(\rho \) and that the Radon–Nikodym derivative between the measures is given by $$\begin{aligned} \frac{d{\rho }^{\delta }}{d\rho }\propto \exp \left( -\varPhi (W;\delta )\right) , \end{aligned}$$ $$\begin{aligned} \varPhi (W;\delta )={1\over 2}\left| \varSigma ^{-\frac{1}{2}}\left( \delta -{\mathcal G}(W)\right) \right| ^2. \end{aligned}$$ The right hand side of (24) is the likelihood of the data \(\delta \). 3.2 Prior We construct our prior on the time-integral of the forcing, namely \(W\). Let \(Q\) be a linear operator from the Hilbert space \(\mathbb {H}^{\frac{1}{2}+\epsilon }\) into itself with eigenvectors \(e_k\) and eigenvalues \(\sigma _k^2\) for \(k=1,2,\ldots \). We make the following assumption Assumption 1 There is an \(\epsilon >0\) such that the eigenvalues \(\{\sigma _k\}\) satisfy $$\begin{aligned} \sum _{k=1}^\infty \sigma _k^2\lambda _k^{1/2+\epsilon }<\infty \end{aligned}$$ where \(\lambda _k\) are the eigenvalues of the operator \(A\) defined in Sect. 2.1. $$\begin{aligned} \sum _{k=1}^\infty \langle Qe_k,e_k\rangle _{\mathbb {H}^{\frac{1}{2}+\epsilon }}=\sum _{k=1}^\infty \sigma _k^2\lambda _k^{\frac{1}{2}+\epsilon }<\infty , \end{aligned}$$ \(Q\) is a trace class operator in \(\mathbb {H}^{\frac{1}{2}+\epsilon }\). 
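Before completing the description of the prior, it may help to record how the likelihood just defined would be evaluated in practice: the data enter the posterior only through the misfit potential \(\varPhi\) of (25). The following is a minimal sketch of that potential, assuming the forward map \({\mathcal G}\) is available as a callable returning the concatenated \(JK\)-vector of observed functionals; the names `forward_map`, `Sigma`, and `misfit_potential` are illustrative and not taken from any code accompanying the paper.

```python
import numpy as np

def misfit_potential(forward_map, W, delta, Sigma):
    """Phi(W; delta) = 0.5 * | Sigma^{-1/2} (delta - G(W)) |^2, as in (25).

    forward_map : callable mapping a forcing path W to the observed linear
                  functionals of the velocity field, shape (J*K,)
    W           : forcing path, in whatever representation forward_map expects
    delta       : concatenated data vector, shape (J*K,)
    Sigma       : observational noise covariance, shape (J*K, J*K)
    """
    r = delta - forward_map(W)
    # r^T Sigma^{-1} r computed via a linear solve rather than an explicit inverse.
    return 0.5 * r @ np.linalg.solve(Sigma, r)
```

For uncorrelated noise with a common standard deviation \(\gamma\), as in the numerical experiments later, \(\Sigma=\gamma^2 I\) and the potential reduces to \(\Vert \delta -{\mathcal G}(W)\Vert ^2/(2\gamma ^2)\).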
We assume that our prior is the \(Q\)-Wiener process \(W\) with values in \(\mathbb {H}^{\frac{1}{2}+\epsilon }\) where \(W(s_1)-W(s_2)\) is Gaussian in \(\mathbb {H}^{\frac{1}{2}+\epsilon }\) with covariance \((s_1-s_2)Q\) and mean 0. This process can be written as $$\begin{aligned} W(t)=\sum _{k=1}^\infty \sigma _k e_k\omega _k(t), \end{aligned}$$ where \(\omega _k(t)\) are pair-wise independent Brownian motions (see Da Prato and Zabczyk [4], Proposition 4.1) and where the convergence of the infinite series is in the mean square norm with respect to the probability measure of the probability space that generates the randomness of \(W\). We define by \(\rho \) the measure generated by this \(Q\)-Wiener process on \({\mathbb X}\). Remark 1 We have constructed the solution to (3) for each deterministic continuous function \(W\in \mathbb {X}\). As we equip \(\mathbb {X}\) with the prior probability measure \(\rho \), we wish to employ the results from [6] concerning the solution of (3) when \(W\) is considered as a Brownian motion obtaining values in \(\mathbb {X}\). However, the solution of (3) is constructed in a slightly different way in [6] from that used in the preceding developments. We therefore show that under Assumption 1, \(\rho \) almost surely, solution \(u\) of (4) defined in (12) for each individual function \(W\) equals the unique progressively measurable solution in \(C([0,T];\mathbb {H})\bigcap L^2([0,T];\mathbb {H}^1)\) constructed in Flandoli [6] when the noise \(W\) is sufficiently spatially regular. This allows us to employ the existence of the second moment of \(\Vert u(\cdot ,t;W)\Vert _{\mathbb {H}}^2\), i.e.the finiteness of the energy \(\mathbb {E}^{\rho }[\Vert u(\cdot ,t;W)\Vert _{\mathbb {H}}^2]\), established in Mattingly [17], which we need later. For the infinite dimensional Brownian motion \(W\) defined in (26) where $$\begin{aligned} \sum _{k=1}^\infty \lambda _k^{2\beta _0-1/2}\sigma _k^2<\infty , \end{aligned}$$ for some \(\beta _0>0\), where we employ the same notation as in [6] for ease of exposition. Flandoli [6] employs the Ornstein-Uhlenbeck process $$\begin{aligned} z_\alpha (t)=\int \limits _{-\infty }^te^{-(\nu A+\alpha )(t-s)}dW(s) \end{aligned}$$ which, considered as the stochastic process, is a solution of the Ornstein-Uhlenbeck equation $$\begin{aligned} dz_\alpha (t)+Az_\alpha (t)dt+\alpha z_\alpha (t)dt=dW(t), \end{aligned}$$ where \(\alpha \) is a constant, in order to define a solution of (4). Note that if \(\beta _0>\frac{1}{2}\) then Assumption 1 is satisfied. With respect to the probability space \((\Omega ,{\mathcal F}_t,\mathbb {P})\), the expectation \(\mathbb {E}\Vert z_\alpha (t)\Vert ^2_{\mathbb {H}^{1/2+2\beta }}\) is finite for \(\beta <\beta _0\). Thus almost surely with respect to \((\Omega ,{\mathcal F}_t,\mathbb {P})\), \(z_\alpha (t)\) is sufficiently regular so that problem (7) with the initial condition \(v(0;W)=u_0-z_\alpha (0)\) is well posed. The stochastic solution to the problem (3) is defined as $$\begin{aligned} u(\cdot ,t;W)=z_\alpha (t;W)+v(t;W) \end{aligned}$$ which is shown to be independent of \(\alpha \) in [6]. When \(\beta _0>\frac{1}{2}\), \(\mathbb {E}\Vert z_\alpha (t)\Vert ^2_{\mathbb {H}^1}\) is finite so \(u(\cdot ,t;W)\in C([0,T];\mathbb {H})\bigcap L^2([0,T];\mathbb {H}^1)\). Flandoli [6] leaves open the question of the uniqueness of a generalized solution to (4) in \(C([0,T];\mathbb {H})\bigcap L^2([0,T];\mathbb {H}^{1/2})\). 
However, there is a unique solution in \(C([0,T];\mathbb {H})\bigcap L^2([0,T];\mathbb {H}^1)\). Almost surely with respect to the probability measure \(\rho \), solution \(u\) of (4) constructed in (12) equals the solution constructed by Flandoli [6] in (28). To see this, note that the stochastic integral $$\begin{aligned} \int \limits _0^t e^{-\nu A(t-s)}dW(s) \end{aligned}$$ can be written in the integration by parts form (10). Therefore, with respect to \(\rho \), $$\begin{aligned} \mathbb {E}^{\rho }\left[ \Vert z(t)\Vert _{L^2(0,T;\mathbb {H}^1)}^2\right]&= \int \limits _0^T\mathbb {E}^\rho \left[ \sum _{k=1}^\infty \lambda _k\sigma _k^2\left( \int \limits _0^te^{-\nu \lambda _k(t-s)}d\omega _k(s)\right) ^{\!\!\!\!2}\right] dt\\&= \int \limits _0^T\left( \sum _{k=1}^\infty \lambda _k\sigma _k^2\int \limits _0^te^{-2\nu \lambda _k(t-s)}ds\right) dt\\&= {1\over 2\nu }\int \limits _0^T\sum _{k=1}^\infty \sigma _k^2(1-e^{-2\nu \lambda _kt})dt \end{aligned}$$ which is finite. Therefore \(\rho \) almost surely, \(z(t)\in L^2(0,T;\mathbb {H}^1)\). Thus \(u(t;W)\in C(0,T;\mathbb {H})\bigcap L^2(0,T;\mathbb {H}^1)\). We can then argue that \(\rho \) almost surely, the solution \(u\) constructed in (12) equals Flandoli's solution in (28) which we denote by \(u_\alpha \) (even though it does not depend on \(\alpha \)) as follows. As \(u_\alpha \in C([0,T];H)\bigcap L^2([0,T];\mathbb {H}^1)\), \(v_\alpha (t;W)=u_\alpha (t;W)-z(t;W)\in C([0,T];H)\bigcap L^2([0,T];\mathbb {H}^1)\) and satisfies (7). As for each \(W\), (7) has a unique solution in \(C([0,T];H)\bigcap L^2([0,T];\mathbb {H}^1)\), so \(v_\alpha (t;W)=v(t;W)\). Thus almost surely, the Flandoli [6] solution equals the solution \(u\) in (12). This is also the argument to show that (3) has a unique solution in \(C([0,T];\mathbb {H})\bigcap L^2([0,T];\mathbb {H}^1)\). 3.3 Posterior The conditional probability measure \(\mathbb {P}(W|\delta )=\rho ^\delta \) is absolutely continuous with respect to the prior measure \(\rho \) with the Radon–Nikodym derivative being given by (24). Furthermore, when \(|\delta |<r\) and \(|\delta '|<r\) there is a constant \(c{=c(r)}\) so that $$\begin{aligned} d_\mathrm{Hell}(\rho ^\delta ,\rho ^{\delta '})\le c|\delta -\delta '|. \end{aligned}$$ Note that \(\rho (\mathbb {X})=1.\) It follows from Corollary 2.1 of Cotter et al. [3] and Theorem 6.31 of Stuart [18] that, in order to demonstrate that \(\rho ^{\delta } \ll \rho \), it suffices to show that the mapping \({\mathcal G}: {\mathbb X}\rightarrow \mathbb R^{JK}\) is continuous; then the Randon-Nikodym derivative (24) defines the density of \(\rho ^{\delta }\) with respect to \(\rho .\) As \(\ell \) is a collection of bounded continuous linear functionals on \(\mathbb {H}\), the continuity of \({\mathcal G}\) with respect to the topology of \(\mathbb {X}\) follows from Theorem 1. We now turn to the Lipschitz continuity of the posterior in the Hellinger metric. The method of proof is very similar to that developed in the proofs of Theorem 2.5 in [3] and Theorem 4.2 in [18]. We define $$\begin{aligned} Z(\delta )\,{:=}\,\int \limits _{{\mathbb X}}\exp (-\varPhi (W;\delta ))d\rho (W). \end{aligned}$$ Mattingly [17] shows that for each \(t\), the second moment \(\mathbb {E}^\rho (\Vert u(\cdot ,t;W)\Vert _{\mathbb {H}}^2)\) is finite. Fixing a large constant \(M\), the \(\rho \) measure of the set of paths \(W\) such that \(\max _{j=1,\ldots ,J}\Vert u(\cdot ,t_j;W)\Vert _{\mathbb {H}}\le M\) is larger than \(1-cJ/M^2>1/2\). 
For those paths \(W\) in this set we have, $$\begin{aligned} \varPhi (W;\delta )\le c(|\delta |+M). \end{aligned}$$ From this, we deduce that \(Z(\delta )>0\). Next, we have that $$\begin{aligned} |Z(\delta )-Z(\delta ')|&\le \int \limits _{{\mathbb X}}|\varPhi (W;\delta )-\varPhi (W;\delta ')|d\rho (W)\\&\le c\int \limits _{{\mathbb X}}(|\delta |+|\delta '|+2\sum _{j=1}^J|\ell (u(t_j;W))|_{\mathbb {R}^K})|\delta -\delta '|d\rho (W)\\&\le c|\delta -\delta '|. \end{aligned}$$ We then have $$\begin{aligned} 2d_\mathrm{Hell}(\rho ^\delta ,\rho ^{\delta '})^2&\le \int \limits _{{\mathbb X}}\left( \!Z(\delta )^{-1/2}\exp (-{1\over 2}\varPhi (W;\delta ))\!-\!Z(\delta ')^{-1/2}\exp (\!-\!{1\over 2}\varPhi (W';\delta '))\!\right) ^2 \\&\quad d\rho (W)\\&\le I_1+I_2, \end{aligned}$$ $$\begin{aligned} I_1={2\over Z(\delta )}\int \limits _{{\mathbb X}}\left( \exp (-{1\over 2}\varPhi (W;\delta ))-\exp (-{1\over 2}\varPhi (W;\delta '))\right) ^2d\rho (W), \end{aligned}$$ $$\begin{aligned} I_2=2|Z(\delta )^{-1/2}-Z(\delta ')^{-1/2}|^2\int \limits _{{\mathbb X}}\exp (-\varPhi (W;\delta '))d\rho (W). \end{aligned}$$ Using the facts that $$\begin{aligned} \Big |\exp (-{1\over 2}\varPhi (W;\delta ))-\exp (-{1\over 2}\varPhi (W;\delta '))\Big |\le \frac{1}{2}|\varPhi (W;\delta )-\varPhi (W;\delta ')| \end{aligned}$$ and that \(Z(\delta )>0\), we deduce that $$\begin{aligned} I_1&\le c\int \limits _\mathbb {X}|\varPhi (W;\delta )-\varPhi (W;\delta ')|^2d\rho (W)\\&\le c\int \limits _{\mathbb X}(|\delta |+|\delta '|+2\sum _{j=1}^J|\ell (u(t_j;W))|_{\mathbb {R}^K})^2|\delta -\delta '|^2d\rho (W)\\&\le c|\delta -\delta '|^2. \end{aligned}$$ Furthermore, $$\begin{aligned} |Z(\delta )^{-1/2}-Z(\delta ')^{-1/2}|^2\le c\max (Z(\delta )^{-3},Z(\delta ')^{-3})|Z(\delta )-Z(\delta ')|^2. \end{aligned}$$ From these inequalities it follows that \(d_\mathrm{Hell}(\rho ^\delta ,\rho ^{\delta '})\le c|\delta -\delta '|.\) \(\square \) 4 Inferring the initial condition In the previous section we discussed the problem of inferring the forcing from the velocity field. In practical applications it is also of interest to infer the initial condition, which corresponds to a Bayesian interpretation of 4DVAR, or the initial condition and the forcing, which corresponds to a Bayesian interpretation of weak constraint 4DVAR. Thus we consider the Bayesian inverse problem for inferring the initial condition \(u_0\) and the white noise forcing determined by the Brownian driver \(W\). Including the initial condition does not add any further technical difficulties as the dependence on the pathspace valued forcing is more subtle than the dependence on initial condition, and this dependence on the forcing is dealt with in the previous section. As a consequence we do not provide full details. Let \(\varrho \) be a Gaussian measure on the space \(\mathbb {H}\) and let \(\mu =\varrho \otimes \rho \) be the prior probability measure on the space \({\mathcal H}=\mathbb {H}\times {\mathbb X}\). We denote the solution \(u\) of (3) by \(u(x,t;u_0,W)\). We outline what is required to extend the analysis of the previous two sections to the case of inferring both initial condition and driving Brownian motion. We simplify the presentation by assuming observation at only one time \(t_0>0\) although this is easily relaxed. Given that at \(t_0\in (0,T)\), the noisy observation \(\delta \) of \(\ell (u(\cdot ,t_0;u_0,W)\) is given by $$\begin{aligned} \delta =\ell (u(\cdot ,t_0;u_0,W))+\vartheta \end{aligned}$$ where \(\vartheta \sim N(0,\Sigma )\). 
Letting $$\begin{aligned} \varPhi (u_0,W;\delta )={1\over 2}|\delta -\ell (u(\cdot ,t_0;u_0,W))|_\Sigma ^2, \end{aligned}$$ we aim to show that the conditional probability \(\mu ^\delta =\mathbb {P}(u_0,W|\delta )\) is given by $$\begin{aligned} {d\mu ^\delta \over d\mu }\propto \exp (-\varPhi (u_0,W;\delta )). \end{aligned}$$ We have the following result. The conditional probability measure \(\mu ^\delta =\mathbb {P}(u_0,W|\delta )\) is absolutely continuous with respect to the prior probability measure \(\mu \) with the Radon–Nikodym derivative give by (32). Further, when \(|\delta |<r\) and \(|\delta '|<r\) there is a constant \(c{=c(r)}\) such that $$\begin{aligned} d_{Hell}(\mu ^\delta ,\mu ^{\delta '})\le c|\delta -\delta '|. \end{aligned}$$ To establish the absolute continuity of posterior with respect to prior, together with the formula for the Radon–Nikodym derivative, the key issue is establishing continuity of the forward map with respect to initial condition and driving Brownian motion. This is established in Theorem 1. Since \(\mu ({\mathcal H})=1\) the first part of the theorem follows. For the Lipschitz dependency of the Hellinger distance of \(\mu ^\delta \) on \(\delta \), we use the result of Mattingly [17] which shows that, for each initial condition \(u_0\), $$\begin{aligned} \mathbb {E}^\rho (\Vert u(t;u_0,W)\Vert _{\mathbb {H}}^2)\le {{\mathcal E}_0\over 2\nu \lambda _1}+e^{-2\nu \lambda _1 t}\left( \Vert u_0\Vert _{\mathbb {H}}^2-{{\mathcal E}_0\over 2\nu \lambda _1}\right) , \end{aligned}$$ where \({\mathcal E}_0=\sum _{k=1}^\infty \sigma _k^2\). Therefore \(\mathbb {E}^\mu (\Vert u(t;u_0,W)\Vert _{\mathbb {H}}^2)\) is bounded. This enables us to establish positivity of the normalization constants and the remainder of the proof follows that given in Theorem 2.\(\square \) 5 Numerical results The purpose of this section is twofold: firstly to demonstrate that the Bayesian formulation of the inverse problem described in this paper forms the basis for practical numerical inversion; and secondly to study some properties of the posterior distribution on the white noise forcing, given observations of linear functionals of the velocity field. The numerical results move outside the strict remit of our theory in two directions. Firstly we work with periodic boundary conditions; this makes the computations fast, but simultaneously demonstrates the fact that the theory is readily extended from Dirichlet to other boundary conditions. Secondly we consider both (i) pointwise observations of the entire velocity field and (ii) observations found from the projection onto the lowest eigenfunctions of \(A\) noting that the second form of observations are bounded linear functionals on \(\mathbb {H}\), as required by our theory, whilst the first form of observations are not. To extend our theory to periodic boundary conditions requires generalization of the Flandoli [6] theory from the Dirichlet to the periodic setting, which is not a technically challenging generalization. However consideration of pointwise observation functionals requires the proof of continuity of \(u(t;\cdot , \cdot )\) as a mapping from \({\mathcal H}\) into \(\mathbb {H}^s\) spaces for \(s\) sufficiently large. Extension of the theory to include pointwise observation functionals would thus involve significant technical challenges, in particular to derive smoothing estimates for the semigroup underlying the Flandoli solution concept. 
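The two observation set-ups (i) and (ii) differ only in the linear functionals applied to the discrete velocity field. The sketch below illustrates both, assuming the discrete velocity is stored as its Fourier coefficients on a periodic wavenumber grid; the helper names, the grid layout, and the interpretation of \(|k|<4\) as a max-norm condition are assumptions made for illustration rather than details taken from the paper's code.

```python
import numpy as np

def observe_pointwise(u_hat):
    """(i) Point values of the velocity on the physical grid:
    inverse FFT of the Fourier coefficients, component by component."""
    return np.real(np.fft.ifft2(u_hat, axes=(-2, -1))).ravel()

def observe_low_modes(u_hat, kmax=4):
    """(ii) Projection onto the low Fourier modes {phi_k}, |k| < kmax:
    read off the corresponding Fourier coefficients directly."""
    n = u_hat.shape[-1]
    k = np.fft.fftfreq(n) * n                      # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    mask = np.maximum(np.abs(kx), np.abs(ky)) < kmax   # max-norm cut-off (assumption)
    coeffs = u_hat[..., mask].ravel()
    # Stack real and imaginary parts so the observation vector is real-valued.
    return np.concatenate([coeffs.real, coeffs.imag])
```

Both functions return bounded linear functionals of the discrete field; as noted above, only the second corresponds to bounded linear functionals on \(\mathbb {H}\) in the continuum limit.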
Our numerical results will show that the posterior distribution for (ii) differs very little from that for (i), which is an interesting fact in its own right. Full-field point wise observations. The truth (top), expected value (middle), and absolute distance between them (bottom) of the vorticity \(w(t;W)\), for \(t=0.01\) (left, relative \(L^2\) error \(e=0.0044\)) and \(t=0.1\) (right, \(e=0.0244\)) Full-field point wise observations. The trajectories \(u_{k}(t;W)\) (left) and \(W_k\) (right), with \(k=(0,1)\) (top), \(k=(0,4)\) (middle), and \(k=(0,8)\) (bottom). Shown are expected values and standard deviation intervals as well as true values. The right hand images also show the expected value and standard deviation of the prior, indicating the decreasing information content of the data for the increasing wave numbers Full-field point wise observations. The histograms of the posterior distribution in comparison to the prior distribution of \(W_k(t=0.05)\), for \(k=(0,1)\) (left), \(k=(0,4)\) (middle), and \(k=(0,8)\) (right). These plots again illustrate the decreasing information content of the data for the increasing wave numbers Observation of Fourier modes \(\{\phi _k\}_{|k|<4}\). The truth (top), expected value (middle), and absolute distance between them (bottom) of the vorticity \(w(t;W)\), for \(t=0.01\) (left, relative \(L^2\) error \(e=0.0044\)) and \(t=0.1\) (right, \(e=0.0249\)). Notice the similarity to the results of Fig. 1 Observation of Fourier modes \(\{\phi _k\}_{|k|<4}\). The trajectories \(u_{k}(t;W)\) (left) and \(W_k\) (right), with \(k=(0,1)\) (top), \(k=(0,4)\) (middle), and \(k=(0,8)\) (bottom). Shown are expected values and standard deviation intervals as well as true values. The right hand images also show the expected value and standard deviation of the prior, indicating the decreasing information content of the data for the increasing wave numbers Observation of Fourier modes \(\{\phi _k\}_{|k|<4}\). The histograms of the posterior distribution in comparison to the prior distribution of \(W_k(t=0.05)\), for \(k=(0,1)\) (left), \(k=(0,4)\) (middle), and \(k=(0,8)\) (right). These plots again illustrate the decreasing information content of the data for the increasing wave numbers. Notice the middle panel in which one notices the posterior on \(W_{0,4}(t=0.05)\) is much closer to the prior than in Fig. 3 In Sect. 5.1 we describe the numerical method used for the forward problem. In Sect. 5.2 we describe the inverse problem and the Metropolis-Hastings MCMC method used to probe the posterior. Section 5.3 describes the numerical results. 5.1 Forward problem: numerical discretization All our numerical results are computed using a viscosity of \(\nu =0.1\) and on the periodic domain. We work on the time interval \(t \in [0,0.1].\) We use \(M=32^2\) divergence free Fourier basis functions for a spectral Galerkin spatial approximation, and employ a time-step \(\delta t = 0.01\) in a Taylor time-approximation [9]. The number of basis functions and time-step lead to a fully-resolved numerical simulation at this value of \(\nu .\) 5.2 Inverse problem: metropolis hastings MCMC Recall the Stokes' operator \(A\). We consider the inverse problem of finding the driving Brownian motion. 
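Before describing the prior and the sampler, a minimal sketch of the mode-wise update exploited by such a spectral discretisation may be helpful. It advances only the linear Stokes part of (3) over one time step, treating the increment of \(W\) as constant across the step; this is not the Taylor scheme of [9] used for the reported results (which also treats the nonlinear term), and the variable names are illustrative.

```python
import numpy as np

def stokes_step(z, dW, lam, nu=0.1, dt=0.01):
    """Exact one-step update of dz_k = -nu*lam_k*z_k dt + dW_k for each mode k,
    with the Brownian increment dW_k spread uniformly over the step.

    z, dW, lam : arrays over the retained Fourier/eigenmodes
    """
    a = nu * lam * dt
    decay = np.exp(-a)
    return decay * z + dW * (1.0 - decay) / a
```

In the periodic setting the eigenvalues `lam` are proportional to \(|k|^2\) for wavenumber \(k\), so the update is a cheap componentwise operation; the nonlinear term \(B(u,u)\) would be added to this step separately.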
As a prior we take a centered Brownian motion in time with spatial covariance \(\pi ^4 A^{-2}\); thus the space-time covariance of the process is \(C_0 := \pi ^4 A^{-2} \otimes (-\triangle _t)^{-1}\), where \(\triangle _t\) is the Laplacian in time with fixed homogeneous Dirichlet condition at \(t=0\) and homogeneous Neumann condition at \(t=T\). It is straightforward to draw samples from this Gaussian measure, using the fact that \(A\) is diagonalized in the spectral basis. Note that if \(W \sim \rho \), then \(W \in C(0,T;\mathbb {H}^s)\) almost surely for all \(s<1\); in particular \(W \in {\mathbb X}\). Thus \(\rho ({\mathbb X})=1\) as required. The likelihood is defined (i) by making observations of the velocity field at every point on the \(32^2\) grid implied by the spectral method, at every time \(t=n\delta t\), \(n=1,\cdots 10\), or (ii) by making observations of the projection onto eigenfunctions \(\{\phi _k\}_{|k|<4}\) of \(A\). The observational noise standard deviation is taken to be \(\gamma = 1.6\) and all observational noises are uncorrelated. To sample from the posterior distribution we employ a Metropolis-Hastings MCMC method. Furthermore, to ensure mesh-independent convergence properties, we use a method which is well-defined in function space [2]. Metropolis-Hastings methods proceed by constructing a Markov kernel \({\mathcal P}\) which satisfies detailed balance with respect to the measure \(\rho ^{\delta }\) which we wish to sample: $$\begin{aligned} \rho ^{\delta }(du) {\mathcal P}(u,dv) = \rho ^{\delta }(dv) {\mathcal P}(v,du), \quad \forall ~ u,v \in {\mathbb X}. \end{aligned}$$ Integrating with respect to \(u\), one can see that detailed balance implies \(\rho ^{\delta }{\mathcal P}= \rho ^{\delta }\). Metropolis-Hastings methods [8, 20] prescribe an accept-reject move based on proposals from another Markov kernel \({\mathcal Q},\) in order to define a kernel \({\mathcal P}\) which satisfies detailed balance. If we define the measures $$\begin{aligned} \begin{array}{lll} \nu (du,dv) &{}=&{} {\mathcal Q}(u,dv) \rho ^\delta (du) \propto {\mathcal Q}(u,dv) \exp \Bigl (-\varPhi (u;\delta )\Bigr )\rho (du) \\ \nu ^\perp (du,dv) &{}=&{} {\mathcal Q}(v,du) \rho ^\delta (dv) \propto {\mathcal Q}(v,du) \exp \Bigl (-\varPhi (v;\delta )\Bigr )\rho (dv). \end{array} \end{aligned}$$ then, provided \(\nu ^\perp \ll \nu \), the Metropolis-Hastings method is defined as follows. Given current state \(u_n\), a proposal is drawn \(u^* \sim {\mathcal Q}(u_n,\cdot )\), and then accepted with probability $$\begin{aligned} \alpha (u_n,u^*) = \mathrm{min} \left\{ 1, \frac{d\nu ^\perp }{d\nu }(u_n,u^*) \right\} . \end{aligned}$$ The resulting chain is denoted by \({\mathcal P}.\) If the proposal \({\mathcal Q}\) preserves the prior, so that \(\rho {\mathcal Q}=\rho \), then a short calculation reveals that $$\begin{aligned} \alpha (u_n,u^*) = \mathrm{min} \left\{ 1, \exp \Bigl (\varPhi (u_n;\delta )-\varPhi (u^*;\delta )\Bigr ) \right\} ; \end{aligned}$$ thus the acceptance probability is determined by the change in the likelihood in moving from current to proposed state. We use the following pCN proposal [2] which is reversible with respect to the Gaussian prior \(N(0,C_0)\): $$\begin{aligned} {\mathcal Q}(u_n,\cdot ) = N\left( \sqrt{1-\beta ^2} u_n, \beta ^2 C_0\right) . \end{aligned}$$ This hence results in the acceptance probability (36). 
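Putting the pieces together, the sketch below draws prior samples of the forcing mode by mode (using the fact, noted above, that the spatial covariance is diagonal in the spectral basis) and runs the pCN chain with the acceptance probability (36). The discretisation of \(W\) as an array of mode coefficients on a uniform time grid, and all function and variable names, are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior(sigma, t_grid):
    """One draw of W ~ N(0, C0): independent Brownian motions in time for each
    spatial mode, scaled by the prior standard deviations sigma_k
    (e.g. sigma_k proportional to 1/lambda_k for the covariance pi^4 A^{-2}).
    Returns an array of shape (K, len(t_grid)); row k holds W_k on t_grid."""
    dt = np.diff(t_grid)
    incs = rng.standard_normal((len(sigma), len(dt))) * np.sqrt(dt)
    brownian = np.concatenate([np.zeros((len(sigma), 1)),
                               np.cumsum(incs, axis=1)], axis=1)
    return sigma[:, None] * brownian

def pcn(potential, sigma, t_grid, n_steps, beta=0.2):
    """Preconditioned Crank-Nicolson MCMC for rho^delta.
    Proposal (37): W* = sqrt(1 - beta^2) * W_n + beta * Xi, Xi ~ N(0, C0);
    acceptance (36): min(1, exp(Phi(W_n) - Phi(W*)))."""
    W = sample_prior(sigma, t_grid)
    phi = potential(W)
    samples, accepted = [], 0
    for _ in range(n_steps):
        Xi = sample_prior(sigma, t_grid)
        W_new = np.sqrt(1.0 - beta**2) * W + beta * Xi
        phi_new = potential(W_new)
        if np.log(rng.uniform()) < phi - phi_new:
            W, phi = W_new, phi_new
            accepted += 1
        samples.append(W.copy())
    return samples, accepted / n_steps
```

Because the proposal preserves the Gaussian prior, the prior covariance cancels from the acceptance ratio and only the change in the likelihood potential enters, exactly as in (36); the step size \(\beta\) trades acceptance rate against mixing.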
Variants on this algorithm, which propose differently in different Fourier components, are described in [15], and can make substantial speedups in the Markov chain convergence. However for the examples considered here the basic form of the method suffices. 5.3 Results and discussion The true driving Brownian motion \(W^\dagger \), underlying the data in the likelihood, is constructed as a draw from the prior \(\rho \). We then compute the corresponding true trajectory \(u^\dagger (t)=u(t;W^\dagger )\). We use the pCN scheme (36), (37) to sample \(W\) from the posterior distribution \(\rho ^\delta \). It is important to appreciate that the object of interest here is the posterior distribution on \(W\) itself which provides estimates of the forcing, given the noisy observations of the velocity field. This posterior distribution is not necessarily close to a Dirac measure on the truth; in fact we will show that some parameters required to define \(W\) are recovered accurately whilst others are not. We first consider the observation set-up (i) where pointwise observations of the entire velocity field are made. The true initial and final conditions are plotted in Fig. 1, top two panels, for the vorticity field \(w\); the middle two panels of Fig. 1 show the posterior mean of the same quantities and indicate that the data is fairly informative, since they closely resemble the truth; the bottom two panels of Fig. 1 show the absolute difference between the fields in the top and middle panels. The true trajectory, together with the posterior mean and one standard deviation interval around the mean, are plotted in Fig. 2, for the wavenumbers \((0,1)\), \((0,4)\), and \((0,8)\), and for both the driving Brownian motion \(W\) (right) and the velocity field \(u\) (left). This figure indicates that the data is very informative about the \((0,1)\) mode, but less so concerning the \((0,4)\) mode, and there is very little information in the \((0,8)\) mode. In particular for the \((0,8)\) mode the mean and standard deviation exhibit behaviour similar to that under the prior whereas for the \((0,1)\) mode they show considerable improvement over the prior in both position of the mean and width of standard deviations. The posterior on the \((0,4)\) mode has gleaned some information from the data as the mean has shifted considerably from the prior; the variance remains similar to that under the prior, however, so uncertainty in this mode has not been reduced. Figure 3 shows the histograms of the prior and posterior for the same 3 modes as in Fig. 2 at the center time \(t=0.05\). One can see here even more clearly that the data is very informative about the \((0,1)\) mode in the left panel, less so but somewhat about the \((0,4)\) mode in the center panel, and it is not informative at all about the \((0,8)\) mode in the right panel. Figures 4, 5, and 6 are the same as Figs. 1, 2, and 3 except for the case of (ii) observation of low Fourier modes. Notice that the difference in the spatial fields are difficult to distinguish by eye, and indeed the relative errors even agree to threshold \(10^{-3}\). However, we can see that now the unobserved \((0,4)\) mode in the center panels of Figs. 5 and 6 is not informed by the data and remains distributed approximately like the prior. VHH gratefully acknowledges the financial support of the AcRF Tier 1 grant RG69/10. AMS is grateful to EPSRC, ERC, ESA and ONR for financial support for this work. 
KJHL is grateful to the financial support of the ESA and is currently a member of the King Abdullah University of Science and Technology (KAUST) Strategic Research Initiative (SRI) Center for Uncertainty Quantification in Computational Science.
References
1. Bennett, A.F.: Inverse Modeling of the Ocean and Atmosphere. Cambridge University Press, Cambridge (2002)
2. Cotter, S., Roberts, G., Stuart, A., White, D.: MCMC methods for functions: modifying old algorithms to make them faster. Stat. Sci. 28(3), 424–446 (2013)
3. Cotter, S.L., Dashti, M., Robinson, J.C., Stuart, A.M.: Bayesian inverse problems for functions and applications to fluid mechanics. Inverse Probl. 25, 115008 (2009)
4. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (2008)
5. Dashti, M., Law, K.J.H., Stuart, A.M., Voss, J.: MAP estimators and posterior consistency in Bayesian nonparametric inverse problems. Inverse Probl. 29, 095017 (2013)
6. Flandoli, F.: Dissipative and invariant measures for stochastic Navier-Stokes equations. NoDEA 1, 403–423 (1994)
7. Hairer, M., Stuart, A.M., Voss, J.: Signal processing problems on function space: Bayesian formulation, stochastic PDEs and effective MCMC methods. In: Crisan, D., Rozovsky, B. (eds.) The Oxford Handbook of Nonlinear Filtering, pp. 833–873. Oxford University Press, Oxford (2011)
8. Hastings, W.K.: Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 97–109 (1970)
9. Jentzen, A., Kloeden, P.: Taylor expansions of solutions of stochastic partial differential equations with additive noise. Ann. Probab. 38(2), 532–569 (2010)
10. Kaipio, J., Somersalo, E.: Statistical and Computational Inverse Problems. Applied Mathematical Sciences, vol. 160. Springer, New York (2004)
11. Lasanen, S.: Discretizations of generalized random variables with applications to inverse problems. Ann. Acad. Sci. Fenn. Math. Diss., University of Oulu (2002)
12. Lasanen, S.: Measurements and infinite-dimensional statistical inverse theory. PAMM 7, 1080101–1080102 (2007)
13. Lasanen, S.: Non-Gaussian statistical inverse problems. Part I: posterior distributions. Inverse Probl. Imaging 6(2), 215–266 (2012)
14. Lasanen, S.: Non-Gaussian statistical inverse problems. Part II: posterior convergence for approximated unknowns. Inverse Probl. Imaging 6(2), 267–287 (2012)
15. Law, K.J.H.: Proposals which speed-up function space MCMC. J. Comput. Appl. Math. 262, 127–138 (2014)
16. Lorenc, A.C.: The potential of the ensemble Kalman filter for NWP: a comparison with 4D-Var. Quart. J. R. Meteorol. Soc. 129(595), 3183–3203 (2003)
17. Mattingly, J.C.: Ergodicity of 2D Navier-Stokes equations with random forcing and large viscosity. Commun. Math. Phys. 206, 273–288 (1999)
18. Stuart, A.M.: Inverse problems: a Bayesian perspective. Acta Numer. 19(1), 451–559 (2010)
19. Temam, R.: Navier-Stokes Equations. American Mathematical Society, New York (1984)
20. Tierney, L.: A note on Metropolis-Hastings kernels for general state spaces. Ann. Appl. Probab. 8(1), 1–9 (1998)
21. Vollmer, S.J.: Dimension-independent MCMC sampling for elliptic inverse problems with non-Gaussian priors. arXiv:1302.2213 (2013)
22. Zupanski, D.: A general weak constraint applicable to operational 4DVAR data assimilation systems. Monthly Weather Rev. 125(9), 2274–2292 (1997)
Author affiliations: 1. Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, Singapore; 2. Mathematics Institute, Warwick University, Coventry, UK; 3. Computer, Electrical and Mathematical Sciences & Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia.
Hoang, V.H., Law, K.J.H., Stuart, A.M.: Stoch PDE: Anal Comp (2014) 2: 233. https://doi.org/10.1007/s40072-014-0028-4. Received 19 March 2013; accepted 07 April 2014.
Algebraic model of non-Mendelian inheritance
Jianjun Paul Tian, Mathematics Department, College of William and Mary, Williamsburg, VA 23187, United States
Discrete & Continuous Dynamical Systems - S, December 2011, 4(6): 1577-1586. doi: 10.3934/dcdss.2011.4.1577
Received April 2009; Revised October 2009; Published December 2010
Evolution algebra theory is used to study non-Mendelian inheritance, particularly organelle heredity and the population genetics of Phytophthora infestans. We can not only explain a puzzling feature of the establishment of homoplasmy from a heteroplasmic cell population and the coexistence of mitochondrial triplasmy, but also predict all mechanisms that form the homoplasmy of cell populations; these are hypothetical mechanisms in current mitochondrial disease research. The algebras also provide a way to easily find different genetically dynamic patterns within the complexity of the progenies of Phytophthora infestans, which causes the late blight of potatoes and tomatoes. Certain suggestions to pathologists are made as well.
Keywords: non-Mendelian inheritance, genetics, genetic algebras, algebraic model.
Mathematics Subject Classification: Primary: 17D99, 17D92; Secondary: 92D1.
Citation: Jianjun Paul Tian. Algebraic model of non-Mendelian inheritance. Discrete & Continuous Dynamical Systems - S, 2011, 4(6): 1577-1586. doi: 10.3934/dcdss.2011.4.1577
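As a concrete, deliberately simplified illustration of the kind of model described above, the sketch below uses the standard defining relations of an evolution algebra — \(e_i e_i=\sum_j a_{ij}e_j\) and \(e_i e_j=0\) for \(i\ne j\) — for a two-type organelle population. The transition coefficients in the structure matrix are invented for illustration and are not taken from the paper; under these assumed coefficients, iterating the plenary power \(x\mapsto x\cdot x\) drives an initially heteroplasmic frequency vector toward homoplasmy.

```python
import numpy as np

# Structure matrix a[i, j]: coefficients in e_i * e_i = sum_j a[i, j] e_j.
# Illustrative two-allele organelle model (coefficients are assumptions):
# each type mostly reproduces itself, with a small conversion rate.
A = np.array([[0.9, 0.1],
              [0.0, 1.0]])

def multiply(x, y, A):
    """Product in the evolution algebra: only the e_i * e_i terms survive."""
    return (x * y) @ A

def plenary_iteration(x0, A, n):
    """Iterate x -> x * x (plenary powers), renormalising to keep frequencies."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n):
        x = multiply(x, x, A)
        x /= x.sum()
    return x

print(plenary_iteration([0.5, 0.5], A, 20))   # drifts toward homoplasmy of the second type
```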
Determination of the ED95 of intrathecal hyperbaric prilocaine with sufentanil for scheduled cesarean delivery: a dose-finding study based on the continual reassessment method P. Goffard1, Y. Vercruysse ORCID: orcid.org/0000-0001-5093-25681, R. Leloup1, J-F Fils2, S. Chevret3 & Y. Kapessidou1 BMC Anesthesiology volume 20, Article number: 293 (2020) Cite this article Scheduled cesarean section is routinely performed under spinal anesthesia using hyperbaric bupivacaine. The current study was undertaken to determine the clinically relevant 95% effective dose of intrathecal 2% hyperbaric prilocaine co-administered with sufentanil for scheduled cesarean section, using continual reassessment method. We conducted a dose-response, prospective, double-blinded study to determine the ED95 values of intrathecal hyperbaric prilocaine used with 2,5 mcg of sufentanil and 100 mcg of morphine for cesarean delivery. Each parturient enrolled in the study received an intrathecal dose of hyperbaric prilocaine determined by the CRM and the success or failure of the block was assessed as being the primary endpoint. The doses given for each cohort varied from 35 to 50 mg of HP, according to the CRM, with a final ED95 lying between 45 and 50 mg of Prilocaine after completion of the 10 cohorts. Few side effects were reported and patients were globally satisfied. The ED95 of intrathecal hyperbaric prilocaine with sufentanil 2.5 μg and morphine 100 μg for elective cesarean delivery was found to be between 45 and 50 mg. It may be an interesting alternative to other long-lasting local anesthetics in this context. The study was registered on January 30, 2017 – retrospectively registered – and results posted at the public database clinicaltrials.gov (NCT03036384). Scheduled cesarean section (CS) is routinely performed under spinal anesthesia using hyperbaric bupivacaine in combination with opioids [1,2,3]. Although efficient, its use is frequently associated with long-lasting motor block and adverse effects, mainly dose-dependent maternal hypotension [4, 5], increasing fetal risks [6, 7]. Considering the anesthetic efficacy, numerous studies have determined the dose-response relationship of the most commonly used intrathecal local anesthetics for caesarean section [8]. As well, it is currently admitted that the addition of intrathecal opioids enhances the potency of local anesthetics, while permitting a sparing effect [9, 10]. Nevertheless, nowadays, still remains the need to determine the "optimal" dose of local anesthetic for caesarean delivery, striking the balance between reliability and efficacy and adverse effects [11]. Hyperbaric prilocaine (HP) 2% is an intermediate-potency amide-type local anesthetic, providing short onset, intermediate duration of motor block and few side-effects [12, 13]. Several studies the last past years have shown its efficacy when applied for spinal anesthesia and have determined the appropriate doses for various ambulatory surgery procedures lasting up to 90 min [14,15,16]. First introduced for intrathecal use in 1965 [17], the former presentation of prilocaine was assessed in obstetrics for vaginal or cesarean delivery under continuous epidural anesthesia in 1968 [18, 19]. Good quality of anesthesia was reported with 1–2% formulations with no clinically relevant blood accumulation of prilocaine, although considerable doses had been administered via the continuous epidural mode [20]. 
Concerns regarding the stability of the solution related to production procedures [21] led to the withdrawal of prilocaine from the market in 1978, and no further investigations have been conducted in the obstetrics field ever since. The new 2% intrathecal hyperbaric formulation, commercialised in 2005, provides relevant advantages in terms of surgical anesthesia [22] and very low reported toxicity [23], thereby being an interesting alternative to long-lasting local anesthetics for cesarean section. Proposed doses for different surgical procedures vary largely, dictating the necessity for targeted studies. The current study was undertaken to determine the clinically relevant 95% effective dose (ED95) of intrathecal 2% hyperbaric prilocaine co-administered with sufentanil for scheduled cesarean section. The doses were obtained using the continual reassessment method (CRM) [24], which has the advantage of estimating the targeted percentile on the dose-finding curve directly, avoiding extrapolation that lacks precision [25,26,27]. We also assessed the clinical characteristics and side-effect profile associated with the prilocaine doses used. We conducted a dose-response, prospective, double-blinded study to determine the ED95 value of intrathecal hyperbaric prilocaine used with 2.5 mcg of sufentanil and 100 mcg of morphine for cesarean delivery. The study was approved by the institutional Medical Ethics Committee (President E. Stevens, Research Ethics Board number O.M.007; date of protocol approval 24 March 2016; protocol number NB076201627436). It was retrospectively registered on January 30, 2017, and results posted at the public database clinicaltrials.gov (NCT03036384). Study population and setting The present report was established according to the ROBUST criteria for Bayesian-based studies [28], the SPIRIT statement for interventional trials [29] and the CONSORT guidelines [30]. Healthy term parturients presenting to our hospital between 1st of April and 30th of November 2016 for elective cesarean delivery were enrolled in the study after signed written informed consent had been obtained. Inclusion criteria were age between 18 and 40, American Society of Anesthesiologists physical status (ASA) class I-II, body weight less than 100 kg, height between 155 and 175 cm, singleton pregnancy, and gestational age of more than 37 completed weeks. Exclusion criteria were active labor, ruptured membranes, three or more previous caesarean deliveries, diabetes or gestational diabetes, pregnancy-induced hypertension or preeclampsia, intrauterine growth retardation, placenta praevia, congenital anomaly, standard contraindications to neuraxial block, neurological impairment, and known allergy to local anesthetics. Study protocol All patients were premedicated with intravenous metoclopramide 10 mg, sodium citrate 30 ml and ranitidine 150 mg orally, 30 min before spinal anaesthesia. Upon arrival in the operating theatre, they slowly received 1000 ml of Ringer's lactate solution via peripheral intravenous access as regular fluid therapy, which is standard care in our institution. Continuous electrocardiography, pulse oximetry (SpO2) and non-invasive arterial blood pressure monitoring were applied throughout the whole study protocol. A combined spinal-epidural (CSE) was performed at the L3/L4 or L4/L5 interspace with the parturient in sitting position, under uterine and foetal heart rate monitoring.
Applying the midline approach, an 18G Tuohy needle (Vygon, Ecouen, France) was inserted into the epidural space using a loss-of-resistance-to-saline technique. The spinal component was performed under aseptic conditions with a needle-through-needle technique using a 27G Whitacre needle (Vygon, Ecouen, France), with the orifice oriented cephalad. Following observation of spontaneously flowing cerebrospinal fluid, the study solution of hyperbaric 2% prilocaine (Tachipri® Hyperbar, Nordic Pharma) at room temperature was injected over 20 s together with sufentanil 2.5 mcg and morphine 100 mcg. A multiple-orifice epidural catheter was then threaded 3 cm into the epidural space; an aspiration test was performed but no drug was injected. Immediately after the procedure, the parturient lay supine with a left lateral tilt to provide uterine displacement. A bladder catheter and an O2 face mask delivering 6 l/min of O2 were applied. Each parturient enrolled in the study received an intrathecal dose of HP determined by the CRM, and the success or failure of the block was assessed as the primary endpoint. Of note, the assessing anaesthesiologist remained blind to the administered dose. For the purpose of the study, a successful block was defined as a bilateral T4 sensory level [31] obtained within 15 min after intrathecal HP dose administration with no pain experienced upon incision and until the end of surgery. Otherwise, a failure was recorded and epidural supplementation with 5 ml bolus injections of 2% lidocaine with epinephrine 1/200,000 was administered every 5 min through the epidural catheter, in order to obtain a VAS score ≤ 3. Hypotension was defined as a 20% decrease in systolic blood pressure (SAP) compared to the baseline value recorded before spinal anaesthesia. When it occurred, titration of ephedrine 5 mg or phenylephrine 100 mcg was administered at the discretion of the attending anaesthesiologist in order to keep SAP over 90% of baseline. The surgical technique was uniform for all patients, including uterine exteriorization. Blinding To ensure proper blinding throughout the study, the same anaesthesiologist prepared the study dose according to the CRM and performed the combined spinal-epidural. Another investigator, blinded to the dose, assessed the success or failure of each intrathecal block, ensured the subsequent management of the patient and collected the data throughout the study protocol. Similarly, the parturient was not aware of the dose administered. Demographic variables recorded in the study were: age, weight, height, body mass index, gestational age, parity and number of previous caesarean deliveries. Regarding the newborn, weight and Apgar scores at 1, 5 and 10 min were recorded after delivery, as well as umbilical vessel pH and methemoglobinemia measured from percutaneous umbilical cord blood samples, using arterial blood analysis. The following surgical data were also collected: time from spinal anaesthesia to baby extraction, time from baby delivery to the end of surgery, the duration of surgery and total blood loss. Sensory level was assessed bilaterally by loss of cold sensation at the midclavicular line and recorded every 2.5 min after intrathecal dose administration of HP (T0) during the first 15 min, then every 5 min until the end of the procedure, and every hour in the post-anesthesia care unit (PACU) until the patient declared regaining full sensitivity, signifying complete resolution of the sensory block.
The time to achieve a T4 bilateral level, the maximum level obtained and the total duration of sensory block were also registered. The Bromage scale (1 = no motor block, 2 = hip blocked, 3 = hip and knee blocked; and 4 = hip, knee, and ankle blocked) was used to evaluate the motor block every 15 min after spinal anaesthesia (T0) and until the end of surgery. Patients' follow-up continued in the PACU every hour until complete recovery of motor block was observed (Bromage score = 1), and the total duration of motor blockade was recorded. Total recovery of both motor and sensory blocks allowed discharge to the care unit. Pain was assessed using a 10-cm horizontal visual analogue scale (VAS; 0–10 cm; 0: no pain and 10: worst imaginable pain) at skin incision, newborn delivery, uterine exteriorization, peritoneal and skin closure; in addition, at 5-min intervals throughout surgery and at 15-min intervals during the follow-up in the PACU. Thereafter, pain was evaluated every 4 h during the first postoperative day in the care unit. Maternal arterial blood pressure was recorded by non-invasive measurements at baseline, at 1-min intervals after drug dose administration during the first 15 min, then at 2.5-min intervals until the end of surgery and every 20 min in the post-anaesthesia care unit. The necessity of using vasopressors (ephedrine or phenylephrine) when hypotension occurred, as well as the total administered doses, was recorded. Heart rate and SpO2 were monitored continuously. Regarding side effects, the incidence (presence or absence) of nausea, vomiting and pruritus was recorded at 15-min intervals from intrathecal dose administration until the end of surgery and at the same time points as pain was assessed. During the postoperative period and until hospital discharge, all parturients were also questioned and examined for transient neurologic symptoms (TNS), urinary retention and dizziness. From a quality point of view, maternal satisfaction (yes or no) was assessed 1 h after surgery and in the care-unit ward. All collected data were registered anonymously, according to institutional ethics committee policy. Dose allocation To provide a valid estimation of the ED95 of 2% HP with sufentanil 2.5 mcg and morphine 100 mcg for caesarean section, the study design was based on the modified CRM [32]. It is an adaptive Bayesian method, designed to estimate the targeted percentile on the response curve among several dose levels, requiring a small sample of around 20–30 patients to reach valid conclusions. Originally designed for dose-toxicity finding in oncology trials, it was then extended to dose-failure studies in phase II trials, notably in anaesthesiology [24]. We set out to recruit 40 parturients, 4 per cohort, to receive spinal anaesthesia with different doses of 2% HP with sufentanil. The starting dose of 45 mg was determined using a priori estimates of the ED95 based on our previous experience. Subsequent doses were allocated based on the CRM power model (Fig. 1), with the operator remaining blind to the given doses. The results of each cohort were analysed by the statistical adviser (Mr J-F. Fils) in order to propose the next dose to allocate to the clinical investigator.
Continual Reassessment Method Dose-response statistical analysis Assuming a dose-failure relationship, with higher doses being more toxic and lower doses less efficacious, we want to find the ED95, that is, the dose defined as the 5th percentile of the dose–failure relationship, which is modelled through a power model as follows: $$ P(Y = 1 \mid x_i) = p_i^{\theta}, $$ where θ is the model parameter to be estimated, considered as a random variable with a unit exponential prior, xi is the dose administered to the ith patient and pi (i = 1, …, k) is the initial guess of the failure probability at the ith dose level. Six dose levels (= k) were chosen, specifically 30, 35, 40, 45, 50 and 55 mg, whose range was based on our previous experience. The initial guesses of the failure probabilities associated with the retained doses were given by clinicians as 0.5, 0.25, 0.10, 0.05, 0.02, and 0.01, corresponding a priori to the ED50, 75, 90, 95, 98 and 99 of HP with sufentanil, respectively. The CRM is conducted as follows: the first cohort of four patients is administered the initial candidate for the ED95, the dose level 45 mg. Then, depending on the responses observed for all patients in the cohort, Bayes' theorem is applied in order to provide the updated posterior distribution of the model parameter. Subsequently, the posterior mean estimate E(θ|y) is computed – that is, the mean of the distribution after taking into account the patients recruited so far in the trial – and is then used in the power model to give an updated probability of failure at each dose level. The dose allocated to the next cohort is the one with an updated posterior response closest to the target 0.95 (95%). The CRM allows stopping rules to be incorporated in advance, which is important for an ethically and statistically sound treatment of patients [27, 33, 34]. Our trial continued until one of the following stopping criteria was met: the planned number of 40 patients was reached; the estimated posterior probability of response was either too low or too high for all dose levels; or a reliable estimation of the ED95 was obtained, based on the predictive gains (mean and maximum) of further patients' inclusions on the response probability and on the width of its credibility interval being lower than 5%. Collected demographic, surgical and clinical data were expressed as mean ± standard deviation or absolute number, as appropriate. The dose-finding allocation and analysis of remaining data were performed using R software version 3.2.2 (R CRAN, Vienna, Austria). Demographics and surgery statistics All 40 parturients enrolled completed the study according to the protocol and were included in the analysis. Demographics and surgery duration are presented in Table 1. Table 1 Demographics and surgery characteristics The blocks were effective in 35 patients and ineffective in 5 patients. Figure 2 shows the sequence of administered HP doses. Figure 3 indicates that the updated probabilities of success associated with each of the 30, 35, 40, 45, 50, and 55 mg doses are 46, 71, 87, 93, 97, and 100%, respectively. The 95% credibility intervals were [33.10–60.50%] for dose 30, [55.25–84.40%] for dose 35, [73.70–95.43%] for dose 40, [85.00–98.19%] for dose 45, [89.66–99.47%] for dose 50 and [93.08–99.79%] for dose 55. Figure 4 depicts the estimated response probability evolution and its 95% credibility interval.
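A minimal sketch of this updating scheme is given below, assuming the power model and unit exponential prior described above. The dose grid, initial failure-probability guesses and 95% target are those stated in the text, while the cohort outcomes shown are purely hypothetical; the actual analysis was carried out in R by the study statisticians, so this is an illustration of the method rather than a reconstruction of the study code.

```python
import numpy as np
from scipy import integrate

doses = np.array([30, 35, 40, 45, 50, 55])                 # candidate HP doses (mg)
p_guess = np.array([0.50, 0.25, 0.10, 0.05, 0.02, 0.01])   # initial failure-probability guesses
target_failure = 0.05                                      # ED95 corresponds to a 5% failure rate

def posterior_mean_theta(dose_idx, failures):
    """Posterior mean of theta: power model p_i**theta, unit exponential prior on theta."""
    def integrand(theta, moment):
        p_fail = p_guess[dose_idx] ** theta                          # per-patient failure probability
        lik = np.prod(np.where(failures == 1, p_fail, 1.0 - p_fail)) # likelihood of observed outcomes
        return theta ** moment * lik * np.exp(-theta)                # exponential(1) prior
    num = integrate.quad(integrand, 0.0, 50.0, args=(1,))[0]
    den = integrate.quad(integrand, 0.0, 50.0, args=(0,))[0]
    return num / den

# Hypothetical outcomes for the first two cohorts (0 = success, 1 = failure):
dose_idx = np.array([3, 3, 3, 3, 2, 2, 2, 2])   # four patients at 45 mg, then four at 40 mg
failures = np.array([0, 0, 0, 1, 0, 0, 0, 0])

theta_hat = posterior_mean_theta(dose_idx, failures)
updated_failure = p_guess ** theta_hat                      # updated failure probability per dose level
next_dose = doses[np.argmin(np.abs(updated_failure - target_failure))]
print(np.round(1.0 - updated_failure, 3), "-> next cohort dose:", next_dose, "mg")
```

The next cohort is simply assigned the dose whose updated success probability is closest to the 95% target, exactly as described in the allocation rule above.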
Sequence of doses Probability of success and 95% credibility intervals Estimated response probability and 95% credibility interval for the proposed ED95 The doses given for each cohort varied from 35 to 50 mg of HP, according to the CRM, with a final ED95 lying between 45 and 50 mg of prilocaine after completion of the 10 cohorts (Table 2). Table 2 Evolution of ED95 after each cohort Secondary results of the ED95 Tables 3 to 7 and Figs. 5 and 6 present the data corresponding to the doses of 45 and 50 mg. Data were recorded only for successful blocks: 19 patients for the dose of 45 mg, 4 for 50 mg. Table 3 Quality of central block Evolution of sensory block Evolution of motor block Sensory and motor blocks Mean time to a T4 bilateral sensory block was approximately 12 min, with a duration of more than 2 h for the dose of 45 mg and over 3 h for the dose of 50 mg (Table 3). Figure 5 shows sensory levels at 1, 2 and 3 h post injection for the predefined doses of 45 and 50 mg. The sensory level at one hour post injection was above T5 for most of the patients and decreased rapidly each hour afterwards. At hour 3, the sensory block for the dose of 50 mg was at the lumbar level, while for the dose of 45 mg it was sacral for most of the patients. Figure 6 shows Bromage scores at 1, 2 and 3 h post injection for the same doses. All the patients had a Bromage score of 3 or 4 at one hour post injection. By the third hour, all the patients who received the dose of 45 mg were able to move freely. Blood pressure was stable for both doses (Table 4). Table 4 Hemodynamics Newborn parameters Table 5 presents newborn parameters. Apgar scores at 1 min were at least 9 for the majority of babies and 10 after 5 min. Table 5 Newborn parameters Regarding side effects, 17 of the 24 patients who received the dose of 45 or 50 mg needed vasopressors, 7 experienced dizziness, 3 had nausea and none showed TNS, pruritus or urinary retention. The majority of patients were satisfied, 20 out of 23. These data are shown in Table 6. Table 6 Adverse effects The ideal spinal anesthesia for elective cesarean section using the "optimal" local anesthetic dose should provide adequate surgical conditions throughout the procedure without consequent maternal or fetal adverse effects. It should provide a rapid onset of sensory and motor blocks (also interesting in a semi-emergency context) and rapid, predictable regression of motor block permitting early rehabilitation, while ensuring sufficient postoperative analgesia. These qualities, together with a low incidence of adverse effects, are undoubtedly the requirements of any anesthetist in daily practice. The primary aim of the current study was to determine the ED95 of 2% intrathecal hyperbaric prilocaine, combined with sufentanil 2.5 μg and morphine 100 μg, for elective cesarean section. Using the continual reassessment method, we estimated that the ED95 for successful anesthesia was between 45 and 50 mg, with most of the observed successes at the 45 mg dosage. The definition of a successful block differs widely amongst dose-finding studies that have investigated the potency of intrathecal local anesthetics for cesarean section [1, 10, 35,36,37]. In this study, we defined "success" as a bilateral T4 sensory level obtained within 15 min after intrathecal HP dose administration with no pain experienced upon incision and until the end of surgery. We made this choice for the following reasons.
Regarding the sensory level required for CS, we aligned our practice with the current recommendations suggesting a T4/T5 dermatome, rather than the bilateral T6 adopted by previous studies [1, 35]. We also considered that a 15 min delay to attain the sensory level was more appropriate than the 10 min previously reported, in order to avoid early failures attributable to the spread rather than to the dose itself [1]. In addition, since, to our knowledge, no study on intrathecal HP has previously reported the time to the T4 dermatome, we believed that a 15 min delay was consistent with the results reported for bupivacaine for CS, varying between 4 and 12 min [4, 37, 38]. Overall, surgical anesthesia was effective in 35 of 40 patients (87.50%) for the predefined assessed doses, which can be considered a high success rate compared with reported results for other local anesthetics [10, 39]. Interestingly, our results provide evidence that a dose of HP between 45 and 50 mg is sufficient to ensure surgical anesthesia to a T4 sensory level, which is in fact lower than the doses reported by previous dose-finding studies [12]. We believe that the adjuvant sufentanil may have contributed to reducing the dosage of prilocaine in our study. It is well acknowledged that opioids enhance the quality of anesthesia provided by local anesthetics for caesarean delivery [9, 10, 40]. Regarding the secondary results, studies investigating local anesthetics for CS differ widely in their methodology, including the drugs, doses and methods by which the characteristics of the blocks are assessed, hampering correct comparability [11]. In this study, the time to attain the T4 level was comparable to the one reported for levobupivacaine (the levorotatory enantiomer of bupivacaine) but longer than for the long-lasting hyperbaric bupivacaine [4, 10]. The duration of motor block was, however, shorter, as expected because of the intermediate potency of HP, and consistent with the short duration of surgery in our tertiary center. Importantly, no adverse hemodynamic effects were recorded in our study population, thus suggesting that prilocaine may offer an interesting perspective on the current dilemma for anesthetists that "dense-better anesthesia is associated with a higher incidence and severity of hypotension" [8]. In addition, no side effects were observed in the babies and no TNS was shown, while the majority of patients were globally satisfied with the whole procedure. Comparability with other local anesthetics being beyond the scope of the study, we are convinced that it will be of great interest to conduct prospective randomized studies comparing HP to other established drugs in this field. Such studies should be based on equipotent doses, which were concluded for bupivacaine to range between 11 and 13 mg [1, 35] and for ropivacaine, when used alone, to be close to 26 mg [41]. Although efficient, such dosages elicit hypotension, thereby carrying a high risk for mother and fetus [6, 7]. Several trials have reported the applicability of HP, since 2005, for short surgical procedures under spinal anesthesia. However, its use has not yet been reported in obstetrical anesthesia. Today's policies call for a generalization of enhanced recovery procedures. Hyperbaric bupivacaine, despite its advantage of a reliably good quality block, presents side effects that are a barrier to this enhanced recovery objective. Also, its ED95 has only been calculated from the ED50.
In fact, the statistical method most used in anesthesiology for the determination of a drug's ED95 is the up-and-down method (UDM). The principle is that each administered dose is determined by the success or failure of the previous one. If it was a success, the next dose is lower; in case of failure, the next one is higher, aiming at the ED50. The ED95 is then calculated from the dose/response curve. The major advantage is that small groups of patients are sufficient, but the estimation of the ED95 from the ED50 lacks precision. Another statistical design, the "3 + 3" method, is based on the same principle but uses cohorts of 3 patients for each dose, which gives more precise information for every single dose. Its disadvantage is the need to start with a low dose, which means treating patients with inefficient doses until the efficacy range is reached. Moreover, it does not provide any accurate estimate of the response rate, which is based on at most 6 patients. In this study, we used the CRM, which relies on Bayesian inference. This statistical approach has existed since the eighteenth century but has been used for dose estimation only since 1990. It is still rarely used in clinical research because it is unfamiliar and complex, requiring the active participation of a biostatistician to help the clinician. Citing Prof. H. Motulsky, the Bayesian approach "allows combining objective results with previous clinical intuition to calculate the probability of a patient being sick". For a dose/response clinical study, the clinician will use all a priori available information and complete the data a posteriori with further results to establish conclusions. The use of the CRM in this study showed several advantages over the UDM, the main one being that it does not aim at the ED50. Aiming directly at the ED95 leads to treating patients with efficient doses earlier, which is ethically important. The UDM uses logistic regression to estimate the ED95, whereas the CRM uses a one-parameter model to estimate the ED95 directly and more precisely. It uses all the information available to give each patient the lowest efficient dose. Its reliability is better, as it uses the information from every cohort to estimate the ED95, whereas the UDM uses only the previous patient's result. O'Quigley, who used the CRM for the first time in 1990 for phase I clinical trials in cancer, concluded that the CRM is superior to the UDM because it "learns" from information obtained at earlier points in the study. Consequently, it is less likely to treat patients at toxic doses, and more likely to treat patients at effective doses [25, 42]. Notably, it has been extended to phase II dose-finding clinical trials to estimate the minimal effective dose of a new drug [34]. The CRM avoids treating patients with toxic doses by setting limitation rules restraining the trespassing of upper and lower dose bounds. It also allows a more rapid variation of dose than the UDM. These rules have to be adapted to each study design. In ours, we followed advice from statisticians based on Zohar and Chevret's model [27]. While it is true that the complexity of the model restrains its use in clinical practice, requiring collaboration with a biostatistician, this collaboration proved interesting and stimulating, bringing the contribution of an external and different point of view. The choice of the sensory block assessment method may be considered another limitation of our study; however, a consensus on the best method is still awaited [31].
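To illustrate the difference in allocation logic, the up-and-down rule described above can be simulated in a few lines. The "true" success probabilities below are invented solely for illustration and do not correspond to any study data; the sketch only shows why the UDM oscillates around the ED50 rather than targeting the ED95.

```python
import numpy as np

rng = np.random.default_rng(0)
doses = np.array([30, 35, 40, 45, 50, 55])                     # candidate doses (mg)
true_success = np.array([0.50, 0.75, 0.90, 0.95, 0.98, 0.99])  # invented "true" dose-response curve

# Up-and-down rule: step down after a success, up after a failure (oscillates near the ED50).
idx, history = 2, []
for _ in range(40):
    success = rng.random() < true_success[idx]
    history.append((doses[idx], success))
    idx = max(idx - 1, 0) if success else min(idx + 1, len(doses) - 1)

ed50_estimate = np.mean([dose for dose, _ in history])          # crude ED50-type summary
print(history[:6], "... ED50 estimate ~", round(float(ed50_estimate), 1), "mg")
```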
In conclusion, the ED95 of intrathecal hyperbaric prilocaine with sufentanil 2.5 μg and morphine 100 μg for elective cesarean delivery was found to be between 45 and 50 mg. Taking into consideration the good quality of the provided sensory block, combined with early rehabilitation, hemodynamic tolerance and good neonatal outcomes, hyperbaric prilocaine may be an interesting alternative to other long-lasting local anesthetics in the context of scheduled cesarean delivery. Results of the current study are published at https://clinicaltrials.gov/ct2/show/results/NCT03036384. The complete datasets used and analyzed are available from the corresponding author upon request. They are deposited in the electronic database of the Anesthesiology department, University Hospital Saint Pierre. The CRM estimation data are deposited in the electronic database of our biostatistician. ASA: American Society of Anesthesiologists; CRM: Continual reassessment method; CS: Cesarean section; CSE: Combined spinal-epidural; ED50: 50% effective dose; ED95: 95% effective dose; HP: Hyperbaric prilocaine; PACU: Post-anesthesia care unit; SAP: Systolic blood pressure; SpO2: Peripheral oxygen saturation; TNS: Transient neurologic symptoms; UDM: Up-and-down method; VAS: Visual analogue scale. Ginosar Y, Mirikatani E, Drover DR, Cohen SE, Riley ET. ED50 and ED95 of intrathecal hyperbaric bupivacaine coadministered with opioids for cesarean delivery. Anesthesiology. 2004;100(3):676–82. https://doi.org/10.1097/00000542-200403000-00031. Jenkins JG, Khan MM. Anaesthesia for caesarean section: a survey in a UK region from 1992 to 2002. Anaesthesia. 2003;58(11):1114–8. https://doi.org/10.1046/j.1365-2044.2003.03446.x. Aiono-Le Tagaloa L, Butwick AJ, Carvalho B. A survey of perioperative and postoperative anesthetic practices for cesarean delivery. Anesthesiol Res Pract. 2009;2009:510642. https://doi.org/10.1155/2009/510642. Van De Velde M, Van Schoubroeck D, Jani J, et al. Combined spinal-epidural anesthesia for cesarean delivery: dose-dependent effects of hyperbaric bupivacaine on maternal hemodynamics. Anesth Analg. 2006;103(1):187–90. https://doi.org/10.1213/01.ane.0000220877.70380.6e. Van de Velde M. Low-dose spinal anesthesia for cesarean section to prevent spinal-induced hypotension. Curr Opin Anaesthesiol. 2019;32(3):268–70. https://doi.org/10.1097/ACO.0000000000000712. Reynolds F, Seed PT. Anaesthesia for caesarean section and neonatal acid-base status: a meta-analysis. Anaesthesia. 2005;60(7):636–53. https://doi.org/10.1111/j.1365-2044.2005.04223.x. Roberts SW, Leveno KJ, Sidawi JE, Lucas MJ, Kelly MA. Fetal acidemia associated with regional anesthesia for elective cesarean delivery. Obstet Gynecol. 1995;85(1):79–83. https://doi.org/10.1016/0029-7844(94)P4401-9. Benhamou D, Wong C. Neuraxial anesthesia for cesarean delivery: what criteria define the "optimal" technique? Anesth Analg. 2009;109(5):1370–3. https://doi.org/10.1213/ANE.0b013e3181b5b10c. Choi DH, Ahn HJ, Kim MH. Bupivacaine-sparing effect of fentanyl in spinal anesthesia for cesarean delivery. Reg Anesth Pain Med. 2000;25(3):240–5. https://doi.org/10.1097/00115550-200005000-00006. Bouvet L, Da-Col X, Chassard D, et al. ED50 and ED95 of intrathecal levobupivacaine with opioids for caesarean delivery. Br J Anaesth. 2011;106(2):215–20. https://doi.org/10.1093/bja/aeq296. Rucklidge MWM, Paech MJ. Limiting the dose of local anaesthetic for caesarean section under spinal anaesthesia - has the limbo bar been set too low? Anaesthesia. 2012;67(4):347–51. https://doi.org/10.1111/j.1365-2044.2012.07104.x. Manassero A, Fanelli A.
Prilocaine hydrochloride 2% hyperbaric solution for intrathecal injection: a clinical review. Local Reg Anesth. 2017;10:15–24. https://doi.org/10.2147/LRA.S112756. Förster JG, Rosenberg PH. Revival of old local anesthetics for spinal anesthesia in ambulatory surgery. Curr Opin Anaesthesiol. 2011;24(6):633–7. https://doi.org/10.1097/ACO.0b013e32834aca1b. Rattenberry W, Hertling A, Erskine R. Spinal anaesthesia for ambulatory surgery. BJA Educ. 2019;19(10):321–8. https://doi.org/10.1016/j.bjae.2019.06.001. Camponovo C, Fanelli A, Ghisi D, Cristina D, Fanelli G. A prospective, double-blinded, randomized, clinical trial comparing the efficacy of 40 mg and 60 mg hyperbaric 2% prilocaine versus 60 mg plain 2% prilocaine for intrathecal anesthesia in ambulatory surgery. Anesth Analg. 2010;111(2):568–72. https://doi.org/10.1213/ANE.0b013e3181e30bb8. Guntz E, Latrech B, Tsiberidis C, Gouwy J, Kapessidou Y. ED50 and ED90 of intrathecal hyperbaric 2% prilocaine in ambulatory knee arthroscopy. Can J Anaesth. 2014;61(9):801–7. https://doi.org/10.1007/s12630-014-0189-7. Crankshaw TP. Citanest (prilocaine) in spinal analgesia. Acta Anaesthesiol Scand Suppl. 1965;16:287–90. Poppers PJ, Finster M. The use of prilocaine hydrochloride (Citanest) for epidural analgesia in obstetrics. Anesthesiology. 1968;29(6):1134–8. Hehre FW. Continuous lumbar peridural anesthesia in obstetrics. V. Double-blind comparison of 2 percent lidocaine and 2 percent prilocaine. Anesth Analg. 1969;48(2):177–80. https://doi.org/10.1213/00000539-196903000-00004. Lund P, Cwik J. Propitocaine (Citanest) and Methemoglobinemia. Anesthesiology. 1965;26:569–71. https://doi.org/10.1097/00000542-196507000-00020. Hillman KM. Spinal prilocaine. Anaesthesia. 1978;33(1):68–9. https://doi.org/10.1111/j.1365-2044.1978.tb08292.x. Boublik J, Gupta R, Bhar S, Atchabahian A. Prilocaine spinal anesthesia for ambulatory surgery: a review of the available studies. Anaesth Crit Care Pain Med. 2016;35(6):417–21. https://doi.org/10.1016/j.accpm.2016.03.005. Tucker GT, Mather LE. Clinical pharmacokinetics of local Anaesthetics. Clin Pharmacokinet. 1979;4:241–78. Kant A, Gupta PK, Zohar S, Chevret S, Hopkins PM. Application of the continual reassessment method to dose-finding studies in regional anesthesia. Anesthesiology. 2013;119(1):29–35. https://doi.org/10.1097/ALN.0b013e31829764cf. Garrett-Mayer E. The continual reassessment method for dose-finding studies: a tutorial. Clin Trials. 2006;3(1):57–71. https://doi.org/10.1191/1740774506cn134oa. Motulsky HJ, Dramaix-Wilmet M. Biostatistique une approche intuitive. Louvain-la-Neuve, Belgique: De Boeck; 2013. Zohar S, Chevret S. The continual reassessment method: comparison of Bayesian stopping rules for dose-ranging studies. Stat Med. 2001;20(19):2827–43. https://doi.org/10.1002/sim.920. Sung L, Hayden J, Greenberg ML, et al. Seven items were identified for inclusion when reporting a Bayesian analysis of a clinical study. J Clin Epidemiol. 2005;58(3):261–8. https://doi.org/10.1016/J.JCLINEPI.2004.08.010. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013:200–7. https://doi.org/10.7326/0003-4819-158-3-201302050-00583. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340(7748):698–702. https://doi.org/10.1136/bmj.c332. Russell IF. A comparison of cold, pinprick and touch for assessing the level of spinal block at caesarean section. 
Int J Obstet Anesth. 2004;13(3):146–52. https://doi.org/10.1016/j.ijoa.2003.12.007. Chevret S. Statistical methods for dose-finding experiments. Chichester, UK: John Wiley & Sons, Ltd; 2006. Zohar S, Resche-Rigon M, Chevret S. Using the continual reassessment method to estimate the minimum effective dose in phase II dose-finding studies: a case study. Clin Trials. 2013;10(3):414–21. https://doi.org/10.1177/1740774511411593. Resche-Rigon M, Zohar S, Chevret S. Adaptive designs for dose-finding in non-cancer phase II trials: influence of early unexpected outcomes. Clin Trials. 2008;5(6):595–606. https://doi.org/10.1177/1740774508098788. Carvalho B, Durbin M, Drover DR, et al. The ED 50 and ED 95 of Intrathecal isobaric bupivacaine with opioids for cesarean delivery. Anesthesiology. 2005;103:606–18. Gautier P, De Kock M, Huberty L, et al. Comparison of the effects of intrathecal ropivacaine, levobupivacaine, and bupivacaine for caesarean section. Br J Anaesth. 2003;91(5):684–9. https://doi.org/10.1093/bja/aeg251. Maes S, Laubach M, Poelaert J. Randomised controlled trial of spinal anaesthesia with bupivacaine or 2-chloroprocaine during caesarean section. Acta Anaesthesiol Scand. 2016;60(5):642–9. https://doi.org/10.1111/aas.12665. Wang LZ, Zhang YF, Hu XX, Chang XY. A randomized comparison of onset of anesthesia between spinal bupivacaine 5 mg with immediate epidural 2% lidocaine 5 mL and bupivacaine 10 mg for cesarean delivery. Int J Obstet Anesth. 2014;23(1):40–4. https://doi.org/10.1016/j.ijoa.2013.08.009. Zheng D, Wu G, Qin P, et al. Hyperbaric spinal anesthesia with ropivacaine coadministered with sufentanil for cesarean delivery: a dose-response study. Int J Clin Exp Med. 2015;8(4):5739–45. Ben-David B, Miller G, Gavriel R, Gurevitch A. Low-dose bupivacaine-fentanyl spinal anesthesia for cesarean delivery. Reg Anesth Pain Med. 2000;25(3):235–9. Khaw KS, Ngan Kee WD, Wong EL, Liu JY, Chung R. Spinal Ropivacaine for cesarean section a dose-finding study. Anesthesiology. 2001;95:1346–50. O'Quigley J, Pepe M, Fisher L. Continual reassessment method: a practical design for phase 1 clinical trials in cancer. Biometrics. 1990;46(1):33–48. The authors thank all staff members of the Obstetrics Department of the CHU Saint-Pierre University Hospital for their cooperation in this study. We also thank Pace NL (PhD, Department of Anesthesiology, University of Utah, Salt Lake City, Utah) and Stylianou MP (MD, Office of Biostatistics Research, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda) for their statistical advice. We thank Pascal S (Research Department, University Hospital Saint Pierre, Université Libre de Bruxelles, Brussels, Belgium) for her help encoding the protocol and the results on clinicaltrials.gov. No funding for the current study had been received from agencies in the public, commercial, or not-for-profit sectors. Support was provided solely by Anesthesiology department's sources. Department of Anesthesiology, University Hospital Saint Pierre, Université Libre de Bruxelles, CHU Saint-Pierre, Rue Haute 322, 1000, Brussels, Belgium P. Goffard, Y. Vercruysse, R. Leloup & Y. Kapessidou Ars Statistica S.P.R.L, Nivelles, Belgium J-F Fils Service de Biostatistique et Information Médicale, Hôpital Saint-Louis, Paris, France S. Chevret P. Goffard Y. Vercruysse R. Leloup Y. Kapessidou Study conception: GP, VY. Study design: GP, VY, FJF, CS, KY. Participant recruitment: GP, VY, LR. Data collection: VY, GP. Data analysis: FJF, VY, GP, KY. 
Writing up the first draft: VY, GP, FJF. Revision of drafts: VY, GP, FJF, KY Final approval: GP, VY, LR, FJF, CS, KY GP and VY are both first authors. They equally designed and conducted the study, analyzed the data, and wrote the manuscript. All authors read and approved the manuscript. Correspondence to Y. Vercruysse. The study was approved by the institutional Medical Ethics Committee (President E. Stevens, Research Ethics Board number O.M.007; date of protocol approval 24 of March 2016; protocol number NB076201627436). Patients were enrolled in the study after signed written informed consent had been obtained. GP was invited by Sintetica SA as a speaker for a lecture at the Euroanaesthesia 2019 congress in Vienna, Austria, entitled "Hyperbaric prilocaine for intermediate and short duration procedures". Expenses relating to speaking engagements were refunded by the society. Goffard, P., Vercruysse, Y., Leloup, R. et al. Determination of the ED95 of intrathecal hyperbaric prilocaine with sufentanil for scheduled cesarean delivery: a dose-finding study based on the continual reassessment method. BMC Anesthesiol 20, 293 (2020). https://doi.org/10.1186/s12871-020-01199-0 Sufentanil Perioperative medicine and outcome
What is the worst case of the randomized incremental Delaunay triangulation algorithm? I know that the expected worst-case runtime of the randomized incremental Delaunay triangulation algorithm (as given in Computational Geometry) is $\mathcal O(n \log n)$. There is an exercise which implies the worst-case runtime is $\Omega(n^2)$. I've tried to construct an example where this actually is the case but haven't been successful so far. One of those tries was to arrange and order the point set in a manner such that, when adding a point $p_r$ in step $r$, about $r-1$ edges are created. Another approach might involve the point-location structure: try to arrange the points such that the path taken in the point-location structure for locating a point $p_r$ in step $r$ is as long as possible. Still, I'm unsure which of these two approaches is correct (if at all) and would be glad for some hints. ds.algorithms randomized-algorithms delaunay-triangulation worst-case Tedil $\begingroup$ Try putting all the points on the curve $y = x^r$ for some well-chosen $r$. $\endgroup$ – Peter Shor Jul 27 '12 at 15:04 The first approach can be formalized as follows. Let $P$ be an arbitrary set of $n$ points on the positive branch of the parabola $y=x^2$; that is, $$ P = \{ (t_1, t_1^2), (t_2, t_2^2), \dots, (t_n, t_n^2) \} $$ for some positive real numbers $t_1, t_2, \dots, t_n$. Without loss of generality, assume these points are indexed in increasing order: $0 < t_1 < t_2 < \cdots < t_n$. Claim: In the Delaunay triangulation of $P$, the leftmost point $(t_1, t_1^2)$ is a neighbor of every other point in $P$. This claim implies that adding a new point $(t_0, t_0^2)$ to $P$ with $0 < t_0 < t_1$ adds $n$ new edges to the Delaunay triangulation. Thus, inductively, if we incrementally construct the Delaunay triangulation of $P$ by inserting the points in right-to-left order, the total number of Delaunay edges created is $\Omega(n^2)$. We can prove the claim as follows. For any real values $0<a<b<c$, let $C(a,b,c)$ denote the unique circle through the points $(a,a^2), (b,b^2), (c,c^2)$. Lemma: $C(a,b,c)$ does not contain any point $(t,t^2)$ where $a<t<b$ or $c<t$. Proof: Recall that four points $(a,b), (c,d), (e,f), (g,h)$ are cocircular if and only if $$ \begin{vmatrix} 1 & a & b & a^2 + b^2 \\ 1 & c & d & c^2 + d^2 \\ 1 & e & f & e^2 + f^2 \\ 1 & g & h & g^2 + h^2 \\ \end{vmatrix} = 0 $$ Thus, a point $(t,t^2)$ lies on the circle $C(a,b,c)$ if and only if $$ \begin{vmatrix} 1 & a & a^2 & a^2 + a^4 \\ 1 & b & b^2 & b^2 + b^4 \\ 1 & c & c^2 & c^2 + c^4 \\ 1 & t & t^2 & t^2 + t^4 \\ \end{vmatrix} = 0 $$ It's not hard (for example, ask Wolfram Alpha) to expand and factor the $4\times4$ determinant into the following form: $$ (a-b)(a-c)(b-c)(a-t)(b-t)(c-t)(a+b+c+t) = 0 \tag{$*$} $$ Thus, $(t,t^2)$ lies on $C(a,b,c)$ if and only if $t=a$, $t=b$, $t=c$, or $t=-a-b-c < 0$. Moreover, because $0<a<b<c$, these four roots are distinct, which implies that the parabola actually crosses $C(a,b,c)$ at those four points. It follows that $(t,t^2)$ lies inside $C(a,b,c)$ if and only if $-a-b-c<t<a$ or $b<t<c$.$\qquad\Box$ $\begingroup$ Thank you, even though I actually only wanted a hint (without the proof) ;) $\endgroup$ – Tedil Jul 27 '12 at 17:29
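A quick numerical sanity check of the claim: the sketch below builds the Delaunay triangulation of points on the parabola $y = x^2$ with SciPy and verifies that the leftmost point is adjacent to every other point, which is exactly the property that forces the right-to-left insertion order to create $\Omega(n^2)$ edges. The particular values of $t$ are arbitrary; any increasing positive sequence should do.

```python
import numpy as np
from scipy.spatial import Delaunay

# Points on the parabola y = x^2, indexed from left (smallest t) to right.
t = np.linspace(1.0, 3.0, 25)
pts = np.column_stack([t, t ** 2])

tri = Delaunay(pts)
indptr, indices = tri.vertex_neighbor_vertices
neighbors_of_leftmost = set(indices[indptr[0]:indptr[1]])

# The claim predicts the leftmost point is adjacent to all n-1 other points,
# so inserting a new point to its left creates n new Delaunay edges.
print(len(neighbors_of_leftmost) == len(pts) - 1)   # expected: True
```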
CFD investigation of flow through a centrifugal compressor diffuser with splitter blades M. G. Khalafallah1, H. S. Saleh1, S. M. Ali1 & H. M. Abdelkhalek ORCID: orcid.org/0000-0003-3629-20871 The aerodynamic losses in centrifugal compressors are mainly associated with the separated flow on the suction sides of impeller and diffuser vanes. The overall performance of such compressors can be improved by adding splitter vanes. The present work examines the effect of varying the geometrical location of the splitter vanes in the diffuser on the overall performance of a high-speed centrifugal compressor stage of a small gas turbine. To increase the pressure recovery through the diffuser, two radial sets of vanes are used. The first set of vanes (diffuser-1) is equipped with splitter vanes, placed mid-distance between the main vanes, while the vanes of the second set (diffuser-2) are conventional vanes. Flow through the compressor was simulated using the ANSYS 19 workbench program. Flow characteristics and compressor performance were obtained and analyzed for different circumferential positions of the splitter vanes relative to the main vanes of diffuser-1. The study covered seven positions of the splitter vanes, including the original design of the diffuser where the splitter vanes were located at mid-distance between the main vanes. The analysis shows that, at design conditions, selecting the position of the splitter vanes to be nearer to the pressure side of the main vanes improves the stage performance. In the present study, locating the splitters at 33% of the angular distance between the main vanes leads to the best performance, and a significant improvement in the overall stage performance is recorded. The pressure recovery coefficient is raised by about 17%, the pressure ratio is increased by about 1.13%, and the stage efficiency is increased by about 2.01%, compared to the original splitter position. The performance improvement is related to the suppression of the flow separation and to a more uniform flow. Conversely, moving the splitter still closer to the main blade decreases the pressure recovery coefficient by about 2% relative to the 33% position, although it remains about 15% higher than for the original position, and only a limited improvement in the compressor performance is noticed. Moving the splitter away from the main blade degrades the static pressure recovery of the diffuser by about 2–7% compared with the original position. So, for the investigated compressor, the best circumferential position of the splitter blade, which provides the best stage performance in our parametric analysis, is not necessarily at the mid-angular distance between the diffuser's main blades; it is achieved by moving the splitter to about 33% of the angular distance, where the reduction in losses from the suppressed flow separation prevails and the additional friction losses from the splitter surfaces are less critical. Centrifugal compressors are used in a variety of applications in contemporary industry, including aerospace, oil, chemicals, metallurgy, gas fields, automobile engines, air separation, and others. In a centrifugal compressor, the fluid first enters the impeller, where energy is transferred to the gas. The impeller creates a complicated flow field, with significant variations in velocity and flow angles in both the circumferential and axial directions [1,2,3].
At the impeller outlet, a jet-wake structure is often observed, which is quite similar to the mixing of parallel flows. It generally denotes the impeller outlet's non-uniform discharge flow caused by tip leakage flow, high curvature, boundary layer formation, centrifugal force, and Coriolis force. So, the flow entering the diffuser is unsteady and distorted, with a large quantity of kinetic energy to be converted into a static pressure rise, especially for high-speed compressors. The flow field in the diffuser is further affected by the pressure non-uniformity generated by the volute in the off-design state [4]. As a result, the diffuser is sandwiched between two extremely complex flow components, both of which have an impact on its flow field and performance. The diffuser's design might thus have a negative impact on the compressor's overall efficiency. So, it is critical for the designer to understand the impact of various factors on the flow through the compressor in order to design an efficient compressor. Nearly 30–40% of the total input work to the centrifugal compressor is transferred to the kinetic energy of the flow at the impeller's exit. To attain a high level of efficiency, as much kinetic energy as feasible must be converted into a static pressure increase. Two different methods can be used to convert the kinetic energy into a pressure increase in the diffuser: increasing the flow area, whereby the velocity is reduced and the static pressure is increased, or changing the mean flow path radius with guide vanes, which reduces the radial and tangential velocities of the flow while increasing the static pressure. As shown in Fig. 1, the diffuser can be vaneless or vaned. Because of the extended logarithmic spiral flow path, which produces large friction losses, the former allows for a broader working range but has poorer efficiency and pressure recovery. Because the required quantity of diffusion is dependent on the diffuser outlet radius, this flow path cannot be easily shortened. Inserting diffuser vanes is the most frequent method for shortening the flow path. The mean flow path radius is changed by these vanes, which reduces both the radial and tangential velocities of the flow, causing an increase in the static pressure. The compressor stage efficiency and pressure rise are thereby improved, but the working range is reduced. Splitter vanes are typically employed to improve flow guidance and decrease the effects of flow separation in radial vaned diffusers. Vaneless and vaned diffuser The main objective of the present work is to study the effect of the relative location of splitter vanes, with respect to the diffuser main vanes, on the flow characteristics through the diffuser and on the performance of the compressor stage. The pressure loading on the blades, which is the pressure differential between the suction and pressure sides of the impeller blade, and the inlet Mach number are two of the most important limiting variables in centrifugal compressor design. This pressure loading, and therefore the pressure gradient on the suction side at the vane exit, must be considered in the compressor's design since it affects the compressor's performance. If this pressure gradient exceeds a specific value on the suction side near the vane exit, the fluid flow separates from the suction side at that point, resulting in severe energy loss. From the impeller's leading edge to its trailing edge, the blade pressure loading rises.
As a result, near the impeller's exit, flow separation becomes more likely [5]. Increasing the number of blades is an obvious way to improve flow guidance and reduce blade loading, but this increase leads to more friction loss and a larger blockage coefficient. Hence, there is a pressing need for splitter blades, which are shorter blades inserted between full-length blades, to reduce the friction and blockage effects, consequently reducing flow separation in the impeller and diffuser, avoiding choking of the flow in the throat of radial impellers and diffusers, and achieving a high pressure ratio [6]. Several studies have been conducted to examine the suitability of utilizing splitter vanes in the impeller blade passage and to address their advantages. To save time and effort, using a design analysis tool like CFD is the cornerstone of such numerical studies, and analysis of the complicated flow in a centrifugal compressor has clearly demonstrated the importance of this technique [7]. By using splitters in the impeller's design, it has been observed that their primary effect is to reduce both the load on the main blades and the jet/wake effect at the rotor exit. Fradin [8] conducted a series of experiments on the flow fields of two centrifugal rotors, one with splitters and the other without. The flow field was transonic in both situations, and the study found that when splitters were employed, the flow field at the rotor (impeller) outlet was more uniform and homogeneous. Gui et al. [9] tested two centrifugal fans in the incompressible flow regime: one with no splitter and the other using splitters of varied geometry. They looked carefully at how splitter length, stagger angle, and circumferential location affected the fans' performance. The results showed that by using splitters, the load and velocity gradients on the main blades are reduced, but the splitters also contribute extra losses that are highly dependent on their profile shape. When the splitter is situated closer to the suction side of the main blade, the pressure recovery coefficient rises. Also, by lengthening the splitter, the pressure recovery coefficient can be increased with no influence on the efficiency. Many studies have also been carried out to look at the impact of impeller-diffuser interaction. These studies discovered that there is an optimum radial distance between the impeller blades' trailing edge and the diffuser blades' leading edge, which is one of the key factors for achieving peak performance [10, 11]. On the contrary, limited research has been done on diffusers to address the suitability of using splitter vanes in the blade passage of radial diffusers. The design of vaned diffusers (VDs) is generally based on engineering knowledge and experimental data. However, because a solid design method has not yet been created, these characteristics pose a constraint when a VD must be used, for example, in centrifugal compressors of small-scale turbochargers. In recent years, the coupling of computational fluid dynamics (CFD) codes with optimization approaches has gained popularity in turbomachinery and has proven to be effective in overcoming the difficulty of constructing a VD based on empirical correlations. Nevertheless, there are few studies on this issue in the literature and even fewer works on optimizing vaned diffusers for turbocharger centrifugal compressor performance. Teipel et al.
[12] theoretically examined the flow field in radial diffusers of a high-pressure ratio centrifugal compressor used in small gas turbine units. The tested compressor is equipped with a 19-blade radial diffuser. In addition, infinitely thin splitter vanes were placed along the center line of each diffuser channel to divide each channel into two equal sub-passages. The splitter's leading edge is mounted near the throat of the main diffuser channel. The calculations of the inviscid transonic flow fields were done using a time-marching technique, and the purpose of this study was to illustrate how the splitter vanes affect both the key features of the flow pattern and the diffuser's overall performance. The fraction of mass flow in the two splitter channels was the concern of Oana et al. in their study [13]. Splitters are usually placed at the mid-angular distance between the main blades, but they modified the splitter incidence angle at its circumferential location to maintain an equal mass flow rate between the two channels. At a certain pressure ratio, this led to an improvement in the impeller's overall efficiency. Abdul Nassar et al. [14] modeled a centrifugal compressor with 18 backswept main blades to be used in a turbocharger. First, they studied the effect of different tip clearances on the compressor performance at different speeds. It was observed that by increasing the tip clearance, the compressor performance deteriorates, in the form of decreased efficiency and pressure ratio and a flow turning to a jet-wake pattern at the impeller's exit. Then, they replaced 9 full blades with 9 splitters, varied the splitters' circumferential position at different spacings from the suction side of the main blade as well as their length ratio, and studied the effect of these variations on the overall compressor performance. They concluded that the optimum splitter length ratio is 0.5, and that repositioning the splitter at different circumferential positions did not have any effect on the flow structure or the overall compressor performance. Malik et al. [15] used a centrifugal compressor of type DDA 404-III with a vaneless diffuser; the original impeller was designed with 15 backswept full blades and 15 splitters. The impeller was modified by adding multiple splitters as follows: a big splitter close to the pressure surface and a small splitter close to the suction surface. The modified impeller thus has a total of 33 blades (11 main blades, 11 big splitters, and 11 small splitters). The flow conditions and impeller definition, such as backsweep, thickness, and theta angle, were kept the same as for the original impeller, to show only the effect of adding multi-splitters on the overall compressor performance. It was observed that the compressor performance was improved, in the form of an increase in efficiency of about 2%, an increase in pressure ratio of 10%, and a decrease in the relative Mach number at the impeller's inlet. Madhwesh et al. [16] performed a numerical analysis of the effect of adding splitters and varying their geometric location on the overall performance of a centrifugal fan stage. The impeller used is composed of 12 backward-swept blades and was supplied with splitters, which had the same aerofoil shape as the main blades. The splitters were provided on the leading edge of the impeller and on the trailing edge of the impeller, at different circumferential positions between the impeller's main blades.
Six configurations were formed by these variations and compared with the original design. It was found that the overall performance of the centrifugal fan was improved by providing the splitters on the leading edge, especially at the mid-pitch between the main blades, in the form of an increased static pressure recovery coefficient, while locating the splitters on the trailing edge had an adverse effect on the fan performance due to the formation of a large recirculation zone as well as flow instability. Xiao et al. [17] numerically investigated the improvement of a high-pressure ratio centrifugal compressor's performance by applying splitters to a 19-blade vaned diffuser. The applied splitters were located at the mid-pitch between the diffuser's main blades and had the same specifications as the main blade except for their length, as shown in Fig. 2. When the splitter was applied in the vaned region, by increasing its length ratio from 0.10 to 0.70 of the main blade, it was observed that the flow separation was successfully suppressed and there was a significant improvement in the diffuser pressure recovery coefficient and the whole stage efficiency. When the splitter was located in the semi-vaneless region, by further increasing its length ratio from 0.8 to 0.95, the compressor performance deteriorated due to the resulting higher shock strength and thus the shock loss. Hence, the diffuser pressure recovery was decreased by 0.03, and the whole stage efficiency dropped by 0.7%. Schematic of the diffuser design Yagnesh Sharma et al. [7] used a centrifugal fan supplied with a vaned diffuser to perform an extensive numerical analysis of the effect of positioning splitter vanes at discrete circumferential locations on the trailing and leading edges of the impeller and the diffuser. The impeller used was equipped with 13 2-D backswept blades, and the diffuser used consisted of 13 blades. The added splitters had the same aerofoil shape as the main blade in the impeller or the diffuser but a length ratio of 0.25 of the main blade. The analysis showed that the overall stage performance was improved to a larger extent by applying the splitters on the trailing edge of the diffuser, due to streamlining of the flow in the whole flow passage and elimination of the formation of a rotating stall. There is a marginal improvement in performance when the splitters are applied, at the mid-pitch between the main blades, on the trailing edge of the impeller and the leading edge of the diffuser, while the stage performance deteriorated when the splitter was located on the trailing edge, near the suction side of the impeller's main blade, due to the formation of a recirculation zone and flow instability. It can be noted from the above literature survey that a CFD analysis of the effect of splitter blades in diffusers on the performance of high-pressure ratio centrifugal compressors, as well as their effect on impeller-diffuser interaction, has not yet been adequately investigated. Hence, to fill this gap, the current study is devoted to numerically examining several splitter design choices in a vaned diffuser of a high-pressure ratio centrifugal compressor. The basic configuration of the splitters is extracted from the main vane. To ensure appropriate stage matching, the choke mass flow rate of the diffuser is maintained constant.
Diffusers with different splitter angular locations have been investigated to analyze their effect on the fluid flow in the radial diffuser passage and on the overall performance of the centrifugal compressor. The objectives of this study are as follows: to achieve a further understanding of the effect of varying the geometrical location of the splitter vanes on the overall performance of a high-speed centrifugal compressor stage of a small gas turbine; to optimize the circumferential position of the splitter in the vaned diffuser; and to improve the overall centrifugal compressor performance, in the form of increasing the pressure ratio and the efficiency of the compressor. This study is organized as follows: Section 1 introduces the basic compressor theory, including the flow passage through its components, commonly observed flow phenomena, sources of loss generation, and types of diffusers; after that, a literature survey reviews the existing experimental and numerical works on the centrifugal compressor to address the benefits of using splitter vanes in the blade passage of the impeller. Section 2 presents a brief description of the objectives, design, and setting of this study. Section 3 presents the main dimensions and grid generation of the centrifugal compressor components using the "ANSYS Turbo-Grid" program. Section 4 presents a brief introduction to the turbomachinery modeling techniques of the "ANSYS CFX Pre 19" program, which solves the 3D steady compressible Reynolds-averaged Navier-Stokes (RANS) equations using a finite-volume method to discretize the equations, followed by the numerical conditions. Section 5 examines the accuracy of the numerical model by comparing the numerical results with experimental data at different speeds and mass flow rates. Section 6 presents and discusses in detail the simulation results on the effect of varying the geometrical location of the splitter vanes on the overall performance of the high-speed centrifugal compressor stage. Section 7 summarizes the conclusions of the study. Computational domain setup The present work was carried out on a centrifugal compressor stage designed for a pressure ratio of 1.9 and a mass flow rate of 5.4 kg/s. This is the last stage, following the nine-stage axial flow compressor, of a high-pressure ratio compressor. The radial bladed impeller is followed by two separate radial diffusers, where the first diffuser is provided with splitter blades and the second diffuser guides the flow to exit radially. The main dimensions and grid structure of the stage's components are shown in Table 1. Table 1 Main dimensions and grid structure of centrifugal compressor components Because of the periodic structure of the geometry of the compressor's components, only a single channel holding the inlet duct, a full blade of the impeller, 2 vanes (main + splitter) of the vaned diffuser-1, and a full blade of diffuser-2 is modeled. The powerful tool "ANSYS Turbo-Grid" is used to automate the generation of high-quality hexahedral meshes for the blade passages in the rotating impeller and the diffusers [18], while preserving the underlying geometry, as seen in Fig. 3. Grid structure of the computational domain for the rotor and diffusers The grid convergence study was done for the final model. Initially, a coarse grid of 481,130 elements is used to plot the characteristic curves at design speed. Subsequently, the grid is increased to 1,021,734 elements and then 2,015,034 elements. Grid resolution is the main factor determining the result accuracy.
Boundary conditions
At the end of the meshing stage, the mesh is exported to the physics definition stage. The baseline of this stage is the specification of boundary conditions (inlet, outlet, walls) and of the flow and thermal variables on the borders of the CFD model. The type of fluid material and, if the fluid zone is rotating, its axis of rotation must be specified carefully to define the fluid zone. Problems with moving domains, or moving parts of domains, can be handled in CFX by applying a rotating fluid zone; the rotating frame option was therefore utilized to simulate the flow in the compressor. Because the compressor rotor blades sweep the domain periodically, the flow is unsteady in an inertial frame. However, in the absence of stators, the computations may be performed in a domain that moves with the rotating part, in which case the flow is considered steady relative to the rotating frame. The experimental total pressure and total temperature are specified at the inlet boundary, and a mass flow rate of 5.4 kg/s is imposed at the outlet boundary of the computational domain. A no-slip condition and zero heat transfer are set at the walls, and turbulence is simulated with an SST model using a turbulent length scale of 0.25 m (describing the size of the large energy-containing eddies in the turbulent flow) and a turbulence intensity of 5%. Using a finite-volume technique, "ANSYS CFX Pre 19" solves the 3D steady compressible Reynolds-averaged Navier-Stokes (RANS) equations by dividing each region of interest into small sub-regions called control volumes. The working fluid is air, treated as an ideal gas with a molar mass of 28.96 kg/kmol. The thermodynamic properties are obtained from the ideal gas equation of state, together with a four-coefficient zero-pressure polynomial for the specific heat capacity. Convergence of the steady-state simulations was set to root mean square residuals of 1 × 10⁻⁵ for all equations and was controlled by a physical timescale equal to 1/ω, where ω is the angular velocity in rad/s. All blades and end walls are treated as smooth walls. The values of the boundary conditions and other input parameters are listed in Table 3. A single blade passage per row is considered by applying periodic boundary conditions and a mixing plane approach between the rotating impeller and the stationary vaned diffuser.
This approach performs a circumferential averaging of the fluxes through bands on the interface, and steady-state solutions are then obtained in each reference frame; transient interaction effects are therefore not accounted for. The inlet and outlet boundary conditions are summarized in Table 3 (Inlet and outlet boundary conditions).
Validation of the model
To examine the accuracy of the numerical model, the whole flow through the compressor is simulated for different rotational speeds. The model is validated by comparing the compressor pressure ratio and outlet total temperature from the numerical results with experimental data at different speeds and corrected mass flow rates (\( {\dot{m}}_{\mathrm{corr}} \)) [19], as seen in Figs. 5 and 6 (Pressure ratio validation; Outlet total temperature validation), where
$$ {\dot{m}}_{\mathrm{corr}}=\dot{m}\,\frac{\sqrt{T_{o\,in}/T_{\mathrm{ref}}}}{p_{o\,in}/p_{\mathrm{ref}}} $$
and \(p_{o\,in}\) and \(T_{o\,in}\) are the total pressure and temperature at the compressor inlet, respectively, while \(p_{\mathrm{ref}}\) and \(T_{\mathrm{ref}}\) are the reference pressure and temperature. According to these diagrams, a maximum disagreement between experiment and simulation at the maximum efficiency point of 2% in pressure ratio at speed ratio NH = 90% and of 4% in outlet total temperature at NH = 80% is observed. Discrepancies between numerical and experimental results can be attributed to the degree of agreement between the numerical setup and the experimental configuration, the accuracy of the measurements, and the measurement method: the experimental measurements give the pressure and temperature distribution along the shroud, whereas the calculations give simple mass averages of the CFD results.
The blockage of the flow from the impeller to the first set of vaned diffusers is one of the most important aspects to account for in the vaned diffuser; overcoming this blockage reduces the total pressure losses and consequently improves the diffuser efficiency [20]. In the present work, as mentioned before, the flow through the compressor stage with a splitter-vaned diffuser is investigated to determine the effect of the circumferential location of the splitter vanes on diffuser performance. The reference case is the diffuser with thirty splitter vanes that have the same angle distribution along the chord direction and the same thickness distribution along the relative chord length as the main blades (Table 1). The trailing edge of the splitter vanes is located at the same radius as that of the main vane, and its circumferential position is at the middle of the passage bounded by two adjacent main vanes (Fig. 7, Blade-to-blade view of the impeller and diffusers). Six different cases were studied and compared at the nominal operating conditions of the compressor. The splitter vanes are shifted (by one degree for each case) in the circumferential direction: three cases (A1, A2, A3) toward the pressure side of the adjacent main blades and three cases (A4, A5, A6) toward the suction side, as shown in Fig. 8 (Geometric configuration of the splitter vanes and relative locations of the investigated cases). A0 designates the reference case where the splitter is located mid-distance between two main blades; the angular offsets of cases A0–A6 are sketched below.
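As a small illustrative sketch (an assumption-laden aside, not from the paper), the circumferential position of each case can be expressed as a fraction of the vane pitch. Assuming diffuser-1 has 30 main vanes to match its 30 splitters, the pitch is 12°, the reference splitter sits at mid-pitch, and case A2 ends up at roughly 33% of the pitch from the main blade's pressure side, the position later reported as optimal:

```python
# Sketch: express each splitter configuration as a fraction of the vane pitch,
# measured from the pressure side of the adjacent main blade.
# Assumption (not stated explicitly here): 30 main vanes, i.e. a 12 deg pitch,
# matching the 30 splitter vanes of the reference diffuser.

N_MAIN_VANES = 30                    # assumed
PITCH_DEG = 360.0 / N_MAIN_VANES     # 12 deg between adjacent main vanes

# Shift of the splitter relative to the mid-passage position (positive = toward
# the pressure side of the adjacent main blade), one degree per case.
shifts_deg = {"A0": 0, "A1": 1, "A2": 2, "A3": 3, "A4": -1, "A5": -2, "A6": -3}

for case, shift in shifts_deg.items():
    distance_from_ps = PITCH_DEG / 2.0 - shift    # angular distance to the pressure side
    fraction = distance_from_ps / PITCH_DEG       # fraction of the full pitch
    print(f"{case}: {distance_from_ps:.0f} deg from the pressure side "
          f"({100 * fraction:.0f}% of the pitch)")
# A0 -> 50% (mid-passage); A2 -> ~33%, the position reported as optimal.
```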
The overall performance of the diffusers is measured by the total pressure loss and static pressure recovery coefficients. The pressure recovery coefficient, which describes the gain in static pressure obtained by converting the inlet dynamic pressure, is defined as the ratio of the static pressure rise to the diffuser inlet dynamic pressure,
$$ C_{pr}=\frac{p_{out}-p_{in}}{p_{t\,in}-p_{in}}, $$
while the total pressure loss coefficient is defined as
$$ K_{\mathrm{pl}}=\frac{p_{t\,in}-p_{t\,out}}{p_{t\,in}-p_{in}}, $$
where \(p_{t\,in}\) and \(p_{in}\) are the total and static pressure at the diffuser inlet and \(p_{t\,out}\) and \(p_{out}\) are the total and static pressure at the diffuser outlet. Both coefficients depend to a great extent on the averaging procedure used to determine the static and total pressures. In the CFX post-processing, mass averaging is used for the total pressures and area averaging for the static pressures,
$$ \text{Area average}=\frac{\int \phi \, dA}{A}, \qquad \text{Mass average}=\frac{\int \phi \, d\dot{m}}{\dot{m}}, $$
where \(\phi \) is an arbitrary scalar property of the flow; a short computational sketch of these definitions is given below.
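A minimal sketch of how the two coefficients follow from the averaged pressures (the numbers are illustrative placeholders, not results of the study):

```python
# Sketch: diffuser performance coefficients from averaged pressures.
# p_t denotes total pressure, p denotes static pressure; "in"/"out" refer to the
# diffuser inlet and outlet planes. The numbers below are illustrative only.

def pressure_recovery(p_in, p_out, pt_in):
    """Static pressure recovery coefficient C_pr = (p_out - p_in) / (p_t,in - p_in)."""
    return (p_out - p_in) / (pt_in - p_in)

def total_pressure_loss(p_in, pt_in, pt_out):
    """Total pressure loss coefficient K_pl = (p_t,in - p_t,out) / (p_t,in - p_in)."""
    return (pt_in - pt_out) / (pt_in - p_in)

# Hypothetical mass-averaged total pressures and area-averaged static pressures [kPa]
pt_in, p_in = 180.0, 130.0
pt_out, p_out = 172.0, 162.0

print(f"C_pr = {pressure_recovery(p_in, p_out, pt_in):.3f}")
print(f"K_pl = {total_pressure_loss(p_in, pt_in, pt_out):.3f}")
# C_pr + K_pl < 1 here because part of the inlet dynamic pressure leaves the
# diffuser as residual dynamic pressure rather than being recovered or lost.
```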
Figures 9 and 10 show the total pressure loss and static pressure recovery coefficients for the different splitter vane configurations used in the present analysis at the exit of diffuser-1 and diffuser-2, respectively (Fig. 9: Total pressure loss coefficient and static pressure recovery coefficient at diffuser-1 exit for various configurations; Fig. 10: Total pressure loss coefficient and static pressure recovery coefficient for diffusers 1 and 2). It is clearly observed that configuration A2 gives the best overall performance, as the static pressure recovery coefficient through the diffusers is about 17% better than for the original configuration. The other configurations also show a reasonable improvement of the stage static pressure recovery, by about 4–8% compared with configuration A0. At the exit of diffuser-1, however, configurations A4, A5, and A6 reduce the static pressure recovery by about 2–7% compared with the original configuration A0, although Cpr recovers at the end of the whole stage and exceeds A0 by about 4–13%. The physical reasoning for these observations can be deduced by carefully analyzing the total pressure and velocity contours at 50% span of diffusers 1 and 2 (Figs. 11, 12, and 13) and the entropy contours at different chord lengths of diffusers 1 and 2 (Figs. 14, 15, 16, and 17) obtained for the configurations mentioned above. For a detailed discussion, configuration A2 and one of the suction-side cases (A4–A6, cf. Fig. 13) are selected to represent the best and the worst performance, respectively, compared with the original configuration A0. (Fig. 11: Total pressure contours and velocity vectors at 50% span for configuration A0; Figs. 14–17: Entropy distribution at 30, 60, and 90% chord of diffuser-1 for the original configuration and for configuration A2.)
It is well known that the area ratio of the diffuser is one of the parameters that must be considered during diffuser design [21]. The ratio of the cross-sectional area of the diffuser outlet to that of the diffuser throat plays a significant role in pressure recovery. According to the CFX calculations, the pressure recovery factor for diffuser-1 is 0.641, whereas the design area ratio predicts a Cpr value of 0.721. The basic reason for this discrepancy is flow separation, which occurs when the boundary layers on the walls break away and cause an unfavorable reduction in performance. This break-away is also referred to as stall and creates backflow in the diffusing region.
The CFD calculations have shown that the presence of a splitter in diffuser-1 in the original configuration does not provide the best compressor performance. Because the flow passage is narrowed, the wetted area increases and the flow accelerates in the diffuser passage, which induces an adverse pressure gradient and increases the surface friction loss. This is seen clearly in Fig. 11: at 50% span, for the original splitter position, a high pressure difference (100–170 kPa) between the pressure side of the main blade and the suction side of the splitter along the diffuser-1 domain leads to flow separation (1) on the pressure side of the main blade from 30% chord length to its trailing edge and (2) on the pressure side of the splitter from its leading edge to about 50% chord length. Comparing Figs. 11 and 12 reveals that reducing the circumferential width of the flow passage, as in configuration A2, increases the friction loss on the blade surfaces relative to the original diffuser A0, but the pressure difference between the blade surfaces is reduced, the velocity difference is likewise reduced, and the flow velocity in the radial direction increases. As a result, the flow becomes more homogeneous: the separation region on the pressure side of the main blade disappears completely, while the separation attached to the pressure side of the splitter blade is reduced and shifted downstream. The increased radial velocity at the outlet of diffuser-1 leads to a rapid pressure drop in the upstream portion of the diffuser-2 blade. This pressure drop is followed by a severe increase in pressure gradient in the downstream portion of the diffuser-2 blade, which overcomes the boundary layer and the viscous shear on the suction side of the blade and leads to the formation of vortices in the rear part of diffuser-2, as shown by the velocity vectors in Fig. 12. What happens in diffuser-2 does not seriously affect the overall Cpr at the diffuser-2 exit, which is still the highest of all configurations. As a result of these observations, the static pressure of the modified diffuser A2 is raised, with the static pressure recovery coefficient (Cpr) about 17% higher than for configuration A0; the pressure ratio is raised by about 2%, and the efficiency of the whole stage by about 2.01%, compared with the original design A0. Moving the splitter further toward the pressure side of the diffuser-1 main blade, configuration A3, the pressure recovery coefficient decreases by about 2% relative to configuration A2 but remains higher than A0 by about 15%. On the other side, locating the splitter at different angular distances toward the suction side of the main blade of the neighboring passage, as in configurations A4, A5, and A6, induces a large pressure gradient, reaching 200 kPa, between the pressure side of the main blade and the suction side of the splitter along the flow passage of diffuser-1, as shown in Fig. 13. The distortion of the through-flow pattern and the formation of secondary flow are attributed to the presence of swirling flow inside diffuser-1, with high-intensity vortices and flow instability leading to the detachment of the boundary layer on the pressure side of the blades.
Separation was observed on the pressure side of the main blade near the 25% chord position and grew downstream; the separated fluid then moved to the middle of the passage at the 50% chord position and occupied half of the passage cross section, as shown by the velocity vectors in Fig. 13. All of this reduces the static pressure recovery coefficient (Cpr) at the exit of diffuser-1 by about 7% compared with the original configuration A0. Once the circumferential width of the flow passage is increased, the resulting velocity difference also increases, which reduces the flow velocity in the radial direction. A marginal pressure drop in the upstream portion of the diffuser-2 blade is then noticed, followed by a gradual increase in pressure gradient in the downstream portion due to back pressure from the volute. This small pressure difference prevents the formation of vortices and improves the flow in diffuser-2. As a result, the static pressure recovery coefficient for the whole stage is increased by about 4–13%, as shown in Fig. 10, and the stage efficiency and pressure ratio are increased by about 0.17% and 0.2%, respectively, compared with configuration A0. The above observations are supported by analyzing the contours of static entropy along the streamwise direction of diffusers 1 and 2, as shown in Figs. 14, 15, 16, and 17. At 30% chord length of diffuser-1, elevated entropy areas accumulate close to the solid surfaces, i.e., the blade and shroud surfaces, especially the shroud of the main blade. Continuing toward 60% chord length, the elevated entropy area accumulates along the whole span of the pressure side of the splitter, as well as on the pressure side of the main blade, and it shows a quick diffusion toward the center of the flow path, i.e., between the pressure side of the main blade and the suction side of the splitter. Moving toward the trailing edge of the diffuser-1 blades, at 90% chord, the entropy close to the shroud of diffuser-1 is reduced and the elevated entropy accumulated on the pressure side of the splitter propagates toward the neighboring flow channel, i.e., between the pressure side of the splitter and the suction side of the neighboring main blade, as shown in Fig. 15. Moving further on, at 25% chord length of the diffuser-2 blade, the elevated entropy contour area extends along the shroud of the diffuser. At 50% chord length of the diffuser-2 blade, the elevated entropy contours diffuse toward mid-span on the suction surface of the blade. At the trailing edge of the diffuser-2 blade, the entropy intensity on the suction surface is reduced. For configuration A2, the effect of the modified splitter position on suppressing the flow separation is illustrated in Fig. 16, where the static entropy contours superimposed with isolines at different cross-flow planes of the diffusers are presented. The region of elevated entropy is decreased compared with the preliminary design along the pitch of diffusers 1 and 2. A significant performance improvement is observed in the modified diffuser and in the whole compressor stage. Not only is the elevated entropy area reduced, but a more uniform fluid flow is also observed along the diffuser-1 domain. The modification of the splitter blades also reduces the friction loss, as the velocity and the Mach number on the blade surfaces are lower than in the original configuration.
On the contrary, when the splitter is located far from the pressure side of the main blade in the same passage and close to the suction side of the main blade of the neighboring passage, configurations A4, A5, and A6, it is noticed that at 30% chord of diffuser-1 the elevated entropy area becomes more intense and accumulates close to the solid surfaces of the main blade, i.e., the blade, shroud, and hub surfaces. Continuing toward 60% chord, the high-entropy area close to the pressure side of the main blade diffuses quickly toward the center of the passage, and a new high-entropy area accumulates on the pressure side of the splitter. The entropy loss is directly proportional to the surface pressure distribution and the dissipation coefficient; thus, the more severe entropy loss appears to occur on the main blade pressure side because of the relatively higher surface pressure and the much larger extent of the turbulent boundary layer. Moving toward the trailing edge at 90% chord, the elevated entropy close to the pressure side of the main blade is reduced, while it increases at the hub corner on the splitter pressure side, as shown in Fig. 17. As a result of all the above observations, the stage performance, in the form of stage efficiency and pressure ratio as a function of the splitter position in diffuser-1, is summarized in Table 4 (Centrifugal compressor performance according to splitter location).
In general, providing splitter vanes in the diffuser at judiciously chosen locations tends to improve the performance of the centrifugal compressor in terms of higher static pressure recovery coefficients and reduced total pressure loss coefficients. There is, however, a best circumferential position of the splitter blade in the diffuser, and this position is not necessarily at the middle of the circumferential distance between the diffuser's main blades. From the present work, it was found that: the overall centrifugal compressor performance shows a significant improvement after positioning the splitter at 33% of the angular distance from the main blade's pressure side; the static pressure recovery coefficient at the exit of the whole compressor stage is then increased by 17%, the pressure ratio by 1.13%, and the stage efficiency by 2.01% (absolute) compared with the original configuration. Moving the splitter closer to the pressure side of the diffuser-1 main blade than 33% of the angular distance, the pressure recovery coefficient decreases by about 2% relative to that configuration, but it remains higher than for the original splitter position by about 15%. Locating the splitter farther from the pressure side of the main blade, i.e., closer to the suction side of the main blade of the neighboring passage, reduces the static pressure recovery coefficient of the diffuser by about 4–7% compared with the original splitter position. Finally, moving the splitter in the vaned diffuser to 33% of the angular distance achieves the best compressor performance: the reduction of loss from the suppressed flow separation prevails, the additional friction losses from the splitter surfaces are less critical, and the compressor is able to operate over a wider range owing to the decreased choke margin.
Abbreviations
A0: Original configuration; A(1, 2, …): Modified configurations; CFD: Computational fluid dynamics; Cpr: Static pressure recovery coefficient; GIL: Grid independence limit; Kpl: Total pressure loss coefficient; NH: High-pressure compressor speed; p: Static pressure; pref: Reference pressure = 101.325 kPa; pt: Total pressure; po in: Total pressure at the compressor inlet; PS: Blade pressure side; RANS: Reynolds-averaged Navier-Stokes equations; SS: Blade suction side; SST: Shear stress transport; S1: Diffuser-1; To in: Total temperature at the compressor inlet; Tref: Reference temperature = 288.15 K; Vaned diffuser; ω: Angular velocity (rad/s); φ: Arbitrary scalar property of the flow
References
1. Dean R, Senoo Y (1960) Rotating wakes in vaneless diffusers. Journal of Basic Engineering 82(3):254–263. https://doi.org/10.1115/1.3662659
2. Eckardt D (1975) Instantaneous measurements in the jet-wake discharge flow of a centrifugal compressor impeller. Journal of Engineering for Power 97(3):337–346. https://doi.org/10.1115/1.3445999
3. Krain H (1981) A study on centrifugal impeller and diffuser flow. Journal of Engineering for Power 103(4):688–697. https://doi.org/10.1115/1.3230791
4. Hillewaert K, Van den Braembussche RA (1999) Numerical simulation of impeller-volute interaction in centrifugal compressors. Journal of Turbomachinery 121(3):603–608. https://doi.org/10.1115/1.2841358
5. Millour V (1988) 3D flow computations in a centrifugal compressor with splitter blade including viscous effect simulation. 16th Congress, International Council of the Aeronautical Sciences 1:842–847
6. Drtina P, Dalbert P, Schachenmann A (1993) Optimization of a diffuser with splitter by numerical simulation. ASME paper 93-GT-110
7. Yagnesh Sharma N, Vasudeva Karanth K (2009) Numerical analysis of a centrifugal fan for improved performance using splitter vanes. Int J Mech Aero Ind Mechatron Manuf Eng 3(12):1520–1526
8. Fradin C (1987) Investigation of the three-dimensional flow near the exit of two backswept transonic centrifugal impellers. Proceedings of the Eighth International Symposium on Air Breathing Engines, pp 149–155
9. Gui L, Gu C, Chang H (1989) Influences of splitter blades on the centrifugal fan performances. ASME paper 89-GT-33
10. Clements W, Artt D (1989) The influence of diffuser vane leading edge geometry on the performance of a centrifugal compressor. ASME paper 89-GT-163
11. Gottfried D, Fleeter S (2002) Impeller blade unsteady aerodynamic response to vaned diffuser potential fields. AIAA Journal of Propulsion and Power 18(2):472–480. https://doi.org/10.2514/2.5958
12. Teipel I, Wiedermann A (1987) Computation of flow fields in centrifugal compressor diffusers with splitter vanes. The International Gas Turbine Congress (2), pp II-311–317
13. Oana M, Kawamoto O, Ohtani H, Yamamoto Y (2002) Approach to high performance transonic centrifugal compressor. AIAA paper 3536
14. Nassar A, Nagpurwala QH, Bramahanada K (2006) Design and CFD investigation of centrifugal compressor for turbocharger and parametric study on splitter blades. SAS Technology, Vol V, No 2
15. Malik A, Zheng Q, Zaidi A, Fawzy H (2018) Performance enhancement of centrifugal compressor with addition of splitter blade close to pressure surface. Journal of Applied Fluid Mechanics 11(4):919–928. https://doi.org/10.29252/jafm.11.04.28658
16. Madhwesh N, Vasudeva Karanth K, Yagnesh Sharma N (2011) Impeller treatment for a centrifugal fan using splitter vanes – a CFD approach. Proceedings of the World Congress on Engineering, Vol III, WCE 2011, July 6–8, London, U.K.
17. He X, Zheng X, Wei J, Zeng H (2016) Investigation of vaned diffuser splitters on the performance and flow control of high pressure ratio centrifugal compressors. ASME Turbomachinery Technical Conference and Exposition, GT2016, Seoul, South Korea
18. ANSYS, Inc. (2019) ANSYS-CFX Solver Theory Guide, Release 19.0. Canonsburg, Pennsylvania, U.S.A.
19. Metwally M (2007) Investigation of fluid power system controller at start/stop conditions gas turbine engine. Ph.D. thesis, Military Technical College, Cairo
20. Cumpsty NA (1989) Compressor aerodynamics handbook. Longman Scientific and Technical, Essex, England, ISBN 0-582-01364-X
21. Japikse D (1996) Centrifugal compressor design and performance. Concepts ETI, Wilder, Vermont
Funding: The authors declare that they received no funding.
Author information: Mechanical Power Department, Faculty of Engineering, Cairo University, Giza, Egypt — M. G. Khalafallah, H. S. Saleh, S. M. Ali & H. M. Abdelkhalek.
Author contributions: MGK revised the analyzed results and performed the final modifications and revision of the manuscript into its final form. HSS revised the paper and made the primary modifications to the manuscript. SMA performed the simulation and analyzed the results. HMA acquired the data, prepared the primary fittings for the simulation, collected the results, and wrote the paper. All authors read and approved the final manuscript.
Correspondence to H. M. Abdelkhalek.
Citation: Khalafallah MG, Saleh HS, Ali SM, Abdelkhalek HM. CFD investigation of flow through a centrifugal compressor diffuser with splitter blades. J. Eng. Appl. Sci. 68, 43 (2021). https://doi.org/10.1186/s44147-021-00040-w
Keywords: Centrifugal compressor; Splitter blades; Numerical simulation; Flow separation
Gravity gradient bias in the EPF experiment
Regular Article. Gyula Tóth (ORCID: orcid.org/0000-0002-0280-9060). The European Physical Journal Plus, volume 135, Article number: 222 (2020).
The Eötvös, Pekár and Fekete (EPF) equivalence test was aimed at checking the composition dependence of the gravitational force. It was designed to be free from any bias related to the possibly time-varying ambient gravity field. Interestingly, in 1986 Fischbach and his colleagues found a systematic composition dependence in the EPF data. This discovery induced an intense debate, and even though the effect was unreproducible, the cause behind it is still unknown. We have now found, however, that time-varying gravity gradients may have biased the results in spite of the experimenters' protocol to avoid it. Moreover, even in a constant but inhomogeneous gravity field, the gravitational force was necessarily different on the EPF samples, which had different shapes. These issues are serious: we demonstrate that time-varying ambient gravity alone can fully reproduce the EPF results. Hence, we propose a remeasurement with careful control of the gravity gradient bias. We also discuss other, more precise equivalence tests in connection with the above bias.
The Eötvös, Pekár and Fekete (EPF) equivalence test of the gravitational and inertial masses of a body was an outstanding achievement in physics at the beginning of the twentieth century [1]. They improved on the precision of previous tests by more than three orders of magnitude and were the first to use a torsion balance to test the equivalence principle. They found no violation at the level of accuracy of 1/100,000,000 [2]. In 1986 Fischbach and his coworkers reanalyzed the EPF data and found a surprising composition dependence in terms of the baryon number-to-mass ratios of the samples [3, 4]. They hypothesized a composition-dependent fifth force. A series of novel gravitational experiments followed, most notably from the Eöt-Wash group [5,6,7,8,9,10,11,12], and found no evidence of such a fifth force. The original hypothesis, lacking experimental support, was abandoned. Valid questions remained, however, about the EPF experiment. This experiment was quite different from the subsequent, more precise equivalence tests, and the EPF correlation, in spite of every effort, has not yet been explained in terms of conventional or unconventional physics [4, 13, 14]. This situation motivated us to investigate the role of gravity gradients in the EPF test. We were also motivated by our experience with the torsion balance in a nonlinear gravity field [15], and we asked whether any such effect might be visible in the results of the EPF experiment.
(Fig. 1 caption: A In the EPF experiment the lower mass of the balance was replaced with different samples. Sample geometry variation changed the coupling with the ambient gravity field and led to a variation of the gravitational torque. This caused the balance to move into a new equilibrium position, even when the equivalence principle was not violated and the ambient gravity field was unchanged. B The gravity field might change during the experiment.)
In this paper we present arguments that the EPF results might have been seriously biased by a classical systematic effect related to the ambient gravity field. We found, by using source mass modeling of the ambient gravity field, that a gravity gradient bias is enough to fully reproduce the EPF results without any fifth force effect. The essential aspects of our analysis are summarized in Fig. 1.
In Sect. 2 we overview relevant details of the EPF measurements and show the origin of the gravity gradient bias. In Sect. 3 we introduce the multipole formalism and use it to model the interaction between the balance and the ambient gravity field; we then evaluate and discuss the gravity gradient bias in the EPF experiment with our model. In Sect. 4 we construct a simple ambient gravity field model and present numerical results with this model. In Sect. 5 other, more precise equivalence tests are examined in connection with the gravity gradient bias. Finally, we point out some important conclusions.
EPF measurement principle and origin of gravity gradient bias
The purpose of the EPF experiment was to compare the gravitational acceleration G due to the Earth on different materials or samples [1]. The main idea was to compare the horizontal component of the gravitational force acting on a mass m with the horizontal component of the centrifugal force mC (see Fig. 2, Principle of the EPF experiment). Eötvös assumed that the centrifugal force mC is composition independent; hence, if the gravitational force depends on material composition, the imbalance of the horizontal forces can be detected with his torsion balance. If the angle \(\varepsilon \) is the direction difference between the gravity force mg (sum of gravitational and centrifugal forces) and the gravitational force mG, then \(mG\sin \varepsilon \) is the horizontal component of the gravitational force. At geodetic latitude \(\varphi \) the horizontal component of the centrifugal force is \(mC\sin \varphi \). EPF introduced the Eötvös parameter \(\eta \) to characterize a possible composition dependence of the gravitational force through the formula \((1+\eta )mG\), assuming \(\eta =0\) for a selected reference material. This parameter is the ratio of the horizontal component of the differential acceleration of the upper and lower masses of the balance to the horizontal component of the gravitational acceleration [16]. EPF worked with 10 pairs of samples. The effect on the samples below the arm was compared to that on the fixed upper mass by means of the Eötvös parameter \(\eta \). The results of the EPF tests were finally described in terms of the variation of the Eötvös parameter \({\Delta }\eta \) between different pairs of samples. If there was no bias, a nonzero Eötvös parameter variation indicated an equivalence principle violation.
Since the small force to be detected is in the north–south direction, the arm of the balance must be set to the east–west direction for maximum effect. (For direction reference we will consistently use the position of the lower mass of the balance.) When the lower mass first lies to the east, the net torque on the balance is \(-(\eta _a - \eta _b) mGl\sin \varepsilon = -{\Delta }\eta mGl\sin \varepsilon \). Here l is the half arm length and \(\eta _a\), \(\eta _b\) are the Eötvös parameters belonging to the lower and upper masses, respectively. (A positive torque is one that causes a positive rotation from north to east.) Next, when the balance is rotated by 180\(^\circ \), the net torque on the balance changes to \({\Delta }\eta mGl\sin \varepsilon \). The difference of the torques is thus \(-2{\Delta }\eta mGl\sin \varepsilon \). The azimuth of the lower mass after rotation will change by \(180^\circ +v_1\), i.e., the change will not be exactly \(180^\circ \) in the case of a nonzero \({\Delta }\eta \). This differential rotation \(v_1\) is proportional to the difference of the torques, with the constant of proportionality being the reciprocal of the torsion constant \(\tau \) of the fiber.
If \(v_1\) is measured, the difference of \(\eta \) can be calculated as
$$\begin{aligned} {\Delta }\eta = -\frac{\tau v_1}{2mlG\sin \varepsilon }. \end{aligned}$$
Unfortunately, this simple formula becomes more complicated because of the spatial variation of the gravitational force. We use a local north-east-down reference frame: the x-axis points north, y east and z down. In this frame only the x-component of the gravitational force, \(g_x\), exerts a torque on the balance in the east–west position. In linear approximation its variation with height is \(mg_{x}(z) = mg_{xz}\, z\), where \(g_{xz}\) is the vertical gradient of \(g_x\). Hence, the differential rotation due to gravitational gradients is
$$\begin{aligned} v_2 = - \frac{2}{\tau } mlh\,g_{xz}, \end{aligned}$$
where h is the vertical distance between the upper and lower masses. Equation (2) makes it clear that at least two important requirements must be met to avoid a gravity gradient bias in the EPF experiment. Both come from the requirement that \(v_2\) should be kept strictly constant during the measurement. Otherwise, any change in the total differential rotation angle \(v = v_1 + v_2\) might, according to Eq. (1), be interpreted as a false violation of the equivalence principle. We note that Eötvös and his coworkers were fully aware of these requirements and, as we will immediately see, adapted their experimental protocol accordingly.
The first requirement was that the torsion constant \(\tau \), mass m, half arm length l and vertical mass distance h should either be the same for the sample pair or should be measured and used for correction. Since constancy of \(\tau \) cannot be assumed and its variation cannot be measured accurately, Eötvös and his coworkers used a clever idea to eliminate its effect. They made use of the fact that there is no torque due to the composition-dependent gravitational force on the balance in the north–south direction, but there is a gradient effect causing a differential rotation w very similar to Eq. (2),
$$\begin{aligned} w = \frac{2}{\tau } mlh\,g_{yz}, \end{aligned}$$
due to the vertical gravity gradient \(g_{yz}\). (We note that Eqs. (2) and (3) differ in sign since forces \(mg_x\) and \(mg_y\) of positive sign cause opposite torques.) The ratio v/w is free of the critical parameter \(\tau \), and \({\Delta }\eta \) can still be calculated from the change of this ratio across different samples if the gravity gradients were unchanged. This was their Method 2. The second requirement was that the ambient gravity gradient \(g_{xz}\) (and in Method 2 also \(g_{yz}\)) must be unchanged during the experiment. To get rid of this requirement and still avoid bias, EPF took simultaneous measurements with a pair of samples using a double balance. Hence, any possible change of the gradients had the same effect on the sample pair; by differencing v/w across the two balances at the same time these effects disappeared. After the first set of measurements they measured a second set with the samples exchanged between the two balances to cancel any false effect coming from the slightly different parameters and orientation of the individual balances of the double balance. This was their most advanced Method 3. It must be noted, however, that the output of their experiment, \({\Delta }\eta \) for the measured sample pairs, contained results obtained with both methods.
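As a small numerical illustration (not part of the original analysis) of why the v/w ratio used in Method 2 is insensitive to the fiber's torsion constant, Eqs. (2) and (3) can be evaluated for two different values of τ with all other parameters fixed:

```python
# Sketch: the gradient-induced deflections v2 and w of Eqs. (2) and (3) both scale
# as 1/tau, so their ratio depends only on the ambient gradients g_xz and g_yz.
# All numerical values below are illustrative placeholders, not EPF data.

def v2(tau, m, l, h, g_xz):
    return -2.0 / tau * m * l * h * g_xz   # Eq. (2)

def w(tau, m, l, h, g_yz):
    return 2.0 / tau * m * l * h * g_yz    # Eq. (3)

m, l, h = 0.030, 0.20, 0.20        # kg, m, m (placeholders)
g_xz, g_yz = 3.0e-9, 5.0e-9        # 1/s^2, placeholder vertical gradients

for tau in (1.0e-8, 2.5e-8):       # two different torsion constants [N m / rad]
    ratio = v2(tau, m, l, h, g_xz) / w(tau, m, l, h, g_yz)
    print(f"tau = {tau:.1e}: v2/w = {ratio:+.4f}")
# Both lines print the same ratio, -g_xz/g_yz = -0.6, which is why Method 2
# works with the ratio rather than with v alone.
```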
Now we consider the origin of a gravity gradient bias that was not recognized by EPF. Equation (2) is valid both for point masses and for homogeneous circular cylinders if l and h refer to their centers of mass; the latter can easily be verified by integration. But what would the result be if the vertical variation of \(g_x\) were not strictly linear, i.e., not well described by the formula \(g_{x}(z) = g_{xz}\, z\)? The next possibility would be to use the quadratic approximation \(g_{x}(z) = g_{xz}\, z + g_{xzz}\,z^2\). For the cylindrical samples used by EPF the total gravitational force must be calculated by integration of \(g_{x}(z)\),
$$\begin{aligned} v_2 = -\frac{2}{\tau }\int _{z_1}^{z_2} m_z l g_x(z) \hbox {d}z, \end{aligned}$$
where \(z_1\), \(z_2\) denote the heights of the upper and lower faces of the cylindrical sample and \(m_z\) is the mass of an infinitesimal cross section. After a short calculation, for a sample with height \(H = z_2 - z_1\) we get
$$\begin{aligned} v_2 = -\frac{2}{\tau } m l \left( hg_{xz} + \left( h^2 + \frac{H^2}{12} \right) g_{xzz} \right) . \end{aligned}$$
Equation (5) clearly points to a new source of gravity gradient bias in the EPF experiment: a sample height dependence. If the sample height varies from H to \(H'\) and \(g_{xzz}\) is nonzero, there is a gravity gradient bias
$$\begin{aligned} {\Delta }\eta _{{\mathrm{bias}}} = -\frac{g_{xzz}}{12G\sin \varepsilon } (H^2 - H'^2) \end{aligned}$$
in the Eötvös parameter, and a false violation of the equivalence principle is detected. Since EPF used samples with very different heights in their experiment, there is room for this bias. For example, the height of the Pt cylinder was 6 cm, that of the magnalium (Mg–Al alloy) cylinder was 11.9 cm, and their snakewood cylinder was 24 cm long. (We remark that Eq. (5) is valid only for thin cylinders. A better approximation will be shown later that contains a term proportional to \(H^2/12 - R^2/4\). This expression depends on the radius R of the cylinder as well, but our conclusion on the origin of the bias is the same.)
Let us estimate the magnitude of the bias. According to Eq. (6) it is proportional to \(g_{xzz}\), the coefficient of the quadratic term in the height dependence of \(g_x\). Close to surfaces with density jumps there are strong nonlinearities of \(g_{x}\). Since the original field books, notes and any possible sketches of the EPF measurements are unavailable at the moment, we can only guess which masses were originally close to the balances at the measurement site(s). Our calculations with mass models showed that \(g_{xzz}\) may be as large as 0.2–\(3\,\mathrm {nGal/cm}^2\) within 1 m of a strong density jump (floor, walls, etc.). We recently measured \(g_{xzz}=0.07\,\mathrm {nGal/cm}^2\) in an underground tunnel with an improved Pekár G-2B balance [17]. Hence, the gravity gradient bias in \({\Delta }\eta \) may reach \(2 \cdot 10^{-9} - 8\cdot 10^{-8}\) and has a strong dependence on the local structure of the ambient gravity field and on the sample shape. Compare this bias with the magnitudes of \({\Delta }\eta = \pm 1-6\cdot 10^{-9}\) reported by EPF [1]. We have shown in this section the origin and expected magnitude of the ambient gravity gradient bias.
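A minimal numerical sketch of Eq. (6) with the sample heights quoted above. The paper does not quote a value of G sin ε at this point, so the sketch assumes it is close to the horizontal component of the centrifugal acceleration at Budapest's latitude, about 1.7 Gal; this assumption affects only the overall scale:

```python
# Sketch: order-of-magnitude check of the height-dependent bias of Eq. (6),
#   d_eta_bias = -g_xzz * (H^2 - H'^2) / (12 * G * sin(eps)).
# Sample heights are those quoted in the text; G*sin(eps) ~ C*sin(phi) ~ 1.7 Gal
# at Budapest's latitude is an assumption made for this illustration only.

G_SIN_EPS = 1.7           # Gal (cm/s^2), assumed horizontal gravitational component

samples_cm = {"Pt": 6.0, "magnalium": 11.9, "snakewood": 24.0}   # cylinder heights

def eta_bias(g_xzz_gal_per_cm2, H_cm, H_prime_cm):
    """Gravity gradient bias of Eq. (6); g_xzz in Gal/cm^2, heights in cm."""
    return -g_xzz_gal_per_cm2 * (H_cm**2 - H_prime_cm**2) / (12.0 * G_SIN_EPS)

for g_xzz in (0.07e-9, 0.2e-9, 3.0e-9):      # nGal/cm^2 expressed in Gal/cm^2
    bias = eta_bias(g_xzz, samples_cm["snakewood"], samples_cm["Pt"])
    print(f"g_xzz = {g_xzz:.1e} Gal/cm^2 -> |bias| = {abs(bias):.1e}")
# With g_xzz = 3 nGal/cm^2 and the 24 cm vs 6 cm sample pair the bias is ~8e-8,
# consistent with the upper end of the range quoted in the text.
```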
Next we further formulate and discuss the bias using multipoles.
Multipole formulation and discussion of possible gravity gradient effects
Multipoles [18] proved to be useful for describing the gravitational interaction between the masses of the torsion balance and the outside masses that produce the ambient gravity field [9]. The ambient field is characterized by \(Q_{lm}\) multipole fields; with this characterization the gravitational torque on the balance is
$$\begin{aligned} T_g = -\frac{\partial W}{\partial \phi } = -4\pi iG \sum _{l=2}^\infty \frac{1}{2l+1} \sum _{m=-l}^l m \; q_{lm} Q_{lm} e^{-im\phi } . \end{aligned}$$
Here W is the gravitational potential energy, G is the universal constant of gravitation, \(q_{lm}\) are the multipole moments of the balance calculated in a body-fixed frame, and \(\phi \) is the azimuth of the balance's arm. The azimuth is measured from the x-axis and is positive toward the y-axis. No torque is produced by the \(Q_{11}\) multipole field, because the arm hangs freely on the torsion fiber; hence, the sum starts from \(l = 2\). Our goal is to find the Eötvös parameter variation \({\Delta }\eta \) in terms of multipole moments and ambient multipole fields. This quantity is expressed for Method 2 by
$$\begin{aligned} {\Delta }\eta =c\left( \frac{v}{w} - \frac{v'}{w'} \right) \end{aligned}$$
where v and w denote the deflection differences of the balance arm in the E–W and N–S directions expressed in scale units, and by
$$\begin{aligned} {\Delta }\eta =\frac{c}{2} \left[ \left( \frac{v_1}{w_1} - \frac{v_2'}{w_2'} \right) + \left( \frac{v_2}{w_2} - \frac{v_1'}{w_1'} \right) \right] \end{aligned}$$
for Method 3, where primes indicate a different sample and subscripts denote the individual balances of the double balance [1]. The proportionality constant is
$$\begin{aligned} c= \frac{w\tau }{4LM_a l_a C\sin \varphi }, \end{aligned}$$
where L is the distance to the scale in scale units, \(M_a\) is the mass of the sample, \(l_a\) is the length of the balance arm, \(\tau \) is the torsion constant of the fiber, and \(C \sin \varphi \) is the horizontal component of the centrifugal acceleration. Small variations of the Eötvös parameter can also be caused by azimuthal differences of the arm; we omitted these since we were interested in gravity gradient effects. Next, we expressed the ratio of v and w in terms of gravitational torque differences, which in turn are related to the field multipoles according to Eq. (7). For negative m we utilized the relation \(Q_{l,-m} = (-1)^m Q^*_{lm}\), where the star denotes complex conjugation. We assumed a symmetrical mass distribution of the balance with respect to the plane of the arm's axis and the fiber. In this case all \(q_{lm}\) are real, and by keeping terms up to \(l \le 4\),
$$\begin{aligned} \frac{v}{w} = -\frac{\hbox {Re}(p)}{\hbox {Im}(p)}, \end{aligned}$$
$$\begin{aligned} p = q_{21}Q_{21} + \frac{5}{7}\; q_{31}Q_{31} - \frac{5}{7}\; q_{33}Q^*_{33} + \frac{5}{9}\; q_{41}Q_{41} - \frac{5}{9}\; q_{43}Q^*_{43}. \end{aligned}$$
To summarize, Eqs. (8)–(12) formulate the effect of gravity gradients on the output of the EPF experiment in terms of multipole moments and fields for \(l \le 4\). Now we discuss the possible sample- and gravity field-related variation of the v/w ratio, for any such change can bias the Eötvös parameter. First consider the shape dependence of the \(q_{lm}\). Multipole moments for a mass distribution with density \(\rho ({r})\) over a domain \({\varOmega }\) are defined as
$$\begin{aligned} q_{lm} = \int _{{\varOmega }} \rho ({r}) r^l Y_{lm}^*({\hat{r}}) \hbox {d}^3r \end{aligned}$$
where \(Y_{lm}\) is a spherical harmonic, the star denotes complex conjugation, and \({\hat{r}}\) is the unit vector along r. The samples used in the EPF experiment were all cylinders suspended vertically.
For homogeneous vertical cylinders placed at the origin, the expression due to [19] was utilized and the resulting multipoles were translated using the technique described by [18]. We found the following low-degree multipole moments (\(l = 2,3,4\)) of vertical cylinders to be shape dependent:
$$\begin{aligned} q_{20} = \frac{1}{24} \sqrt{\frac{5}{\pi }} M (H^2 -3R^2-6x^2-6y^2+12z^2) \end{aligned}$$
$$\begin{aligned} q_{31} = \frac{1}{8} \sqrt{\frac{21}{\pi }} M (x-iy)(H^2 -3R^2-3x^2-3y^2+12z^2) \end{aligned}$$
$$\begin{aligned} q_{41} = -\frac{3}{8} \sqrt{\frac{5}{\pi }} M z (x-iy)(H^2 -3R^2-3x^2-3y^2+4z^2) \end{aligned}$$
where M is the mass of the cylinder, R its radius, H its height, and x, y, z are the coordinates of the center of mass of the cylinder. If the center of mass lies in the xz plane, these coefficients are real. These multipole moments indeed include a shape dependence proportional to \(H^2/12-R^2/4\). What is relevant for the EPF experiment is the shape dependence of \(q_{31}\) and \(q_{41}\), because these also appear in Eq. (12). The largest gravity field-related bias is expected from the lowest-degree multipole fields. Also, the largest shape effect is expected to come from the \(q_{31}Q_{31}\) term, because Eq. (7) converges as \((r/R)^l\), where r is a typical dimension of the torsion balance and R is a characteristic distance from the pendulum to the closest source [9]. Hence, for the moment we keep only the first two terms in Eq. (12) to discuss the shape- and gravity field-dependent bias. This means that we assume a two-component gravity field with nonzero field multipoles \(Q_{21}\) and \(Q_{31}\). To quantify the bias, the total derivative of v/w was calculated in terms of infinitesimal increments \(\hbox {d}q_{31}\), \(\hbox {d}Q_{21}\) and \(\hbox {d}Q_{31}\) of the multipole moments and fields:
$$\begin{aligned} \hbox {d}\left( \frac{v}{w}\right) = -\frac{1}{\hbox {Im}(p)}\left( \frac{5}{7} \hbox {d}q_{31} Q^+_{31} + q_{21}\hbox {d}Q^+_{21} + \frac{5}{7} q_{31} \hbox {d}Q^+_{31}\right) , \end{aligned}$$
where we used the abbreviations \(\hbox {d}Q^+_{21}=\hbox {Re}(\hbox {d}Q_{21})+v/w\,\hbox {Im}(\hbox {d}Q_{21})\), \(\hbox {d}Q^+_{31}=\hbox {Re}(\hbox {d}Q_{31})+v/w\,\hbox {Im}(\hbox {d}Q_{31})\), \(Q^+_{31}=\hbox {Re}(Q_{31})+v/w\,\hbox {Im}(Q_{31})\) and \(p = q_{21}Q_{21} + \frac{5}{7}\; q_{31}Q_{31}\). The change of v/w described by Eq. (17) is directly related to the bias of \({\Delta }\eta \) through Eqs. (8) and (9). The first term on the right-hand side of Eq. (17) gives a shape bias that depends on \(\hbox {d}q_{31}\), while the next two terms give a gravity field bias related to \(\hbox {d}Q^*_{21}\) and \(\hbox {d}Q^*_{31}\). For Method 2 (see Eq. 8) all three terms may bias the Eötvös parameter, since the v/w and \(v'/w'\) values were calculated from successive measurement sets with different samples in the EPF experiment. There must have been a considerable time delay between these sets. Although [1] disclosed no details on the timing of their measurements, they did give the number of observations in each set. From this it can be estimated that at least one week, or even a couple of weeks, must have passed between subsequent sets of measurements, during which the constancy of the gravity field cannot safely be assumed. And, of course, there is a sample shape-dependent bias due to \(\hbox {d}q_{31}\) irrespective of any ambient gravity field change. For Method 3 (see Eq. 9) EPF made simultaneous measurements with a double balance.
Values of v/w and \(v'/w'\) were calculated from measurements taken at almost the same time; therefore, the bias terms containing \(\hbox {d}Q^*_{21}\) and \(\hbox {d}Q^*_{31}\) must have been nearly the same for both v/w and \(v'/w'\) even in a changing gravity field. Their differences, according to Eq. (9), must have canceled, giving no effect. The first term in Eq. (17) may still have biased the results, because \(q_{31}\) was not the same for the samples, i.e., \(\hbox {d}q_{31}\) was nonzero in the experiment. Clearly this bias also depends on the multipole field \(Q^*_{31}\). This field might have changed a little, because in this method there was another set of measurements with the samples exchanged between the two balances. So even in this method there was a slight dependence of the Eötvös parameter on the time variation of the ambient gravity field due to the nonzero shape effect (nonzero \(\hbox {d}q_{31}\)). If we keep the first term in Eq. (17) and introduce the relative change \(\hbox {d}q_{31}/q_{31}\),
$$\begin{aligned} \hbox {d}\left( \frac{v}{w}\right) = -\frac{5}{7}\frac{Q^*_{31}}{\hbox {Im}(Q_{21})+5/7 \hbox {Im}(Q_{31})\,q_{31}/q_{21}} \cdot \frac{\hbox {d}q_{31}}{q_{31}}, \end{aligned}$$
we see a linear dependence of the bias for Method 3 (or for Method 2 in an unchanging gravity field) on \(\hbox {d}q_{31}/q_{31}\) when constancy of the \(q_{31}/q_{21}\) ratio and of the field multipoles \(Q_{21}\), \(Q_{31}\) is assumed across different sample pairs. This important relation will be used later for checking the EPF results, since no knowledge of the field multipoles is required for using it. Finally we remark that if we neglect terms in p of degree higher than two (this was the analysis done by EPF), any false gravity gradient violation signal must come from the change of \(Q_{21}\), that is, from the second term of Eq. (17). This bias was avoided by EPF through their Method 3. We have seen that if higher-degree field multipoles are not negligible, even Method 3 results may be biased. The origin of this bias is the change of the coupling to the gravity field as a function of sample geometry (see Fig. 1).
Calculation and discussion of possible gravity gradient bias in the EPF experiment
Before starting the interpretation of the results of the EPF experiment, it is useful to show the results themselves. In Fig. 3 we plotted the recalculated Eötvös parameter differences \({\Delta }\eta \) and their standard deviations for the 10 measured sample pairs. The recalculation was based on the v/w ratios and their standard errors published in the original paper [1]. (Fig. 3 caption: Recalculated Eötvös parameter differences \({\Delta }\eta \) with standard errors for the 10 sample pairs measured by EPF, in the order these results were published in their paper [1]. Recalculation was based on the published v/w ratios and their standard errors. Shading indicates measurements with Method 3.)
Equation (17) shows clearly that both multipole moments and field multipoles are required to calculate the Eötvös parameter bias. It was relatively straightforward to calculate the multipole moments of the balances used by EPF from the parameters published in their paper [1]. We used closed expressions for the inner multipole moments [18, 19] of cylinders in upright and horizontal positions. The balance beam in the EPF experiment was a hollow brass cylinder of 0.5 cm diameter, and the upper cylindrical Pt mass of 30 g was inserted into one end of the hollow cylinder [20].
Missing parameters of the beam geometry were determined from the mass moments given by [1]. We emphasize that a complete mass model of the balance (with the different probe masses, the balance beam and the upper mass) was used for all calculations unless indicated otherwise. For reference, the calculated multipole moments of the complete balance as well as its \(q_{31}/q_{21}\) ratios are shown in Table 1 (Calculated multipole moments and \(q_{31}/q_{21}\) ratios of the complete balance, including the arm, for each measured sample of the EPF experiment).
It is explained in Sect. 3 that a linear dependence of the bias on \(\hbox {d}q_{31}/q_{31}\) is expected for Method 3 (or for Method 2 in an unchanging gravity field) for a constant \(q_{31}/q_{21}\) ratio and constant field multipoles \(Q_{21}\), \(Q_{31}\) across different sample pairs. From Table 1 it can be seen that \(q_{31}/q_{21}\) varies by less than 2% across the samples for Method 3 and by up to 16% for Method 2. In a steady-state ambient gravity field, an approximately linear correlation with the Eötvös parameter variation \({\Delta }\eta \) is thus expected. Figure 4 shows the correlation with the computed relative changes \({\Delta } q_{31}/q_{31}\) between the samples of the same sample pair; for Method 3, the average of the two slightly different \({\Delta } q_{31}/q_{31}\) values was used. (Fig. 4 caption: Variation of the Eötvös parameter \({\Delta }\eta \) as a function of the relative multipole change \({\Delta } q_{31}/q_{31}\) of the balance for the sample pairs. If a constant \(q_{31}/q_{21}\) ratio of the sample pairs is assumed, a linear dependence is expected in an unchanging ambient gravity field for Method 2 and, even in a changing ambient gravity field, for Method 3. An approximate linear dependence is clearly seen for the Method 3 EPF results, which strongly supports a gravity field related bias in the experiment.)
The distribution and size of the masses close to the EPF measurement sites were basically unknown to us. As mentioned, neither sketches nor field notes were available that might have helped us in this respect. Hence, one may only guess at the amount of gravity gradient bias in the EPF experiment. Quite the opposite of this situation, in modern rotating torsion balance tests multipole fields were carefully measured and compensated for [9, 12, 21] to minimize any possible false gravity gradient violation signal. (Fig. 5 caption: Point mass model of the ambient gravity field. The first part contains two point masses \(M_1\) and \(M_1\) symmetrically placed around the origin; it models the \(Q_{21}\) field multipole and has a nonzero \(Q_{43}\). Three point masses \(2M_2\), \(M_2\) and \(M_2\) form the second part to model the \(Q_{31}\) field multipole (see Table 2). The parameter \(\alpha \) is \(5/2 \root 8 \of {4/5}\).)
We wanted, however, to demonstrate the effect of a changing ambient gravity field on the output of the EPF experiment even in the case of missing field multipoles. Therefore, we constructed a simple source mass model of the ambient gravity field (Fig. 5). We admit that this model is quite artificial; despite this, we think it still serves its purpose of demonstrating how sensitive the EPF experiment was to the ambient gravity field owing to the sample geometry bias. Our mass model consisted of 5 point masses, with 4 independent model parameters; details of the model are found in Table 2 (Details of the ambient gravity field model of low-order (\(l \le 4\)) \(Q_{lm}\) field multipoles). The v/w ratio computed from the model must conform to the EPF measurements.
Hence, the azimuths \({\varPhi }_1\) were constrained to yield the measured v/w ratios. We assumed the point mass sources of the \(Q_{31}\) field multipole to lie at a characteristic distance of 20 m, because about 20 m from the measurement site there was a strong concrete tower, as reported by [2]. More precise values of the Eötvös parameter variations \({\Delta }\eta \) and their standard deviations for the 10 sample pairs were recalculated from the original data [1]. The parameter space of the model was searched for optimum solutions by differential evolution [22]. The optimality criterion was that the sum of weighted squared differences between the modeled \({\Delta }\eta _{{\mathrm{model}}}\) and the measured \({\Delta }\eta \) should be minimal; weights were assigned from the standard deviations of the results. We considered two extreme cases. In Case 1, no variation of the model was allowed, i.e., the point masses were fixed for all measurement epochs; the upper subfigure of Fig. 6 shows the correlation between this model and the original EPF measurement in terms of the variations of the Eötvös parameter \({\Delta }\eta \). In Case 2, reasonable changes of the mass model parameters were allowed between different measurement epochs; the lower subfigure of Fig. 6 shows that in this way a perfect match between the model and the original EPF measurement could be achieved in terms of the \({\Delta }\eta \) Eötvös parameter variation.
(Fig. 6 caption: These two figures show two extreme cases of the correlation of our ambient gravity field bias model with the EPF experiment in terms of modeled and measured \({\Delta }\eta \) Eötvös parameter differences. The upper figure shows the results of Case 1, in which no variation of the ambient gravity field was allowed; consequently, the modeled \({\Delta }\eta _{{\mathrm{model}}}\) was due to varying sample geometry alone. The lower figure shows perfect correlation in Case 2. This fit was achieved by allowing small variations of the ambient gravity field model between measurements that were not taken at the same time (see Fig. 7). Although it is unreasonable to require a perfect fit, it demonstrates clearly that the original EPF measurements can be interpreted fully as a false gravity gradient effect. \(R^2\) and \(R^2_{adj}\) both denote the coefficient of determination, the latter adjusted for the number of terms in the model. F is the F test statistic for overall significance, and p(F) is the p value of the F test.)
(Fig. 7 caption: Parameter values of the ambient gravity field model required to reproduce the EPF results exactly (see the lower subfigure of Fig. 6). Shading indicates measurements with Method 3. For each sample pair measured with Method 2 there are two models, since these measurements were not taken at the same time. The ambient gravity field model is a simple 5-point mass model composed of a 2-point \(Q_{21}\), \(Q_{43}\) field and of another 3-point \(Q_{31}\) field. Parameters \(d_1\) and \({\varPhi }_1\), resp. \(d_2\) and \({\varPhi }_2\), are the horizontal distance and azimuth belonging to the 2-point, resp. 3-point, mass models. \(M_2/M_1\) is the mass ratio of the two models. 84% of the relative parameter changes are below \(\pm 5\%\), and the maximum is 17.9%. These changes seem reasonable, since EPF reported on the construction of a nearby building during the observations.)
Figure 7 presents the model parameters required for the perfect fit in Case 2. This fit was achieved with a 2.6% average absolute variation of the mass model's parameters. Additionally, 84% of the relative parameter changes were below \(\pm 5\%\).
The maximum parameter variation was 17.9%, and the three largest variations were found in the mass ratio parameter \(M_2/M_1\). Figure 4 indicates no linear dependence for the results obtained by Method 2; by contrast, the results by Method 3 showed an approximate linear dependence. No dependence for Method 2 was expected, since time variation of the ambient gravity field between the two measurement sets may well have hidden the effect of sample geometry. We checked this assumption with a simple calculation using the same mass model whose parameters are shown in Fig. 7. The gravity field was changing according to the mass model, but the shape of the probe masses remained constant within any given pair of Method 2 measurements. The results in Table 3 show that, indeed, for this particular mass model the variations of the Eötvös parameter are of the same order of magnitude as, or even larger than, those obtained by EPF. We mention that EPF reported on the construction of a nearby building and the excavation of a deep pit during the observations [20]. This construction work must have caused a significant local gravity field variation. On the other hand, the Method 3 results were virtually insensitive to this time variation, so the sample geometry effect became clearly visible. We think these results strongly support our hypothesis of a sample-geometry-dependent ambient gravity field bias. We argue that in the case of purely random effects there would be no such clear distinction between the two methods in the results of the EPF experiment.

Table 3 Effect of the time-varying gravity field on Eötvös parameter differences calculated according to Method 2 of EPF

The simple source mass model has demonstrated that a particular ambient gravitational field can even reproduce the measured effects. Interpretation of the results obtained by source mass modeling confirmed the possible role of time variation of the ambient gravity field during the EPF experiment. When no time variation was allowed, we found moderate correlation between the modeled and measured Eötvös parameter differences \({\Delta }\eta \). Even for this fit, quite unrealistic model parameters (a too large \(M_2/M_1\) ratio and a too small \(d_2\)) were required. On the other hand, when time variation of the source mass model was allowed, we obtained reasonable results. Although the assumption of a perfect fit without any statistical fluctuation is obviously unrealistic, Fig. 7 shows that both the magnitude of the calculated mass model parameters and the range of their variations are feasible.

Examination of other equivalence tests in connection with gravity gradient bias

In view of the above results for the EPF experiment, the question is whether other, more precise equivalence tests might also be affected by a similar gravity gradient bias. Example calculations were made to assess the magnitude of such effects. In this section we present results of calculations of gravity gradient effects for three more precise tests made by Dicke et al. [23], Braginsky [24] and the Eöt-Wash group [12]. We also remark in this context on the space equivalence experiment MICROSCOPE [25]. For a recent overview of weak equivalence principle tests the reader is referred to [26]. In Sect. 3 we have shown that the EPF bias is due either to differences in proof mass geometry, to time variation of the ambient gravity, or to both. First let us discuss the bias due to proof mass geometry in the context of the above more precise tests.
Compared to the EPF experiment, these tests used better proof mass geometry that was designed to reduce spurious ambient gravity field effects. Dicke and his collaborators [23], for example, reduced field gradient effects by (1) making the torsion balance small, with a moment arm of only 3.3 cm, (2) giving the balance an approximate threefold symmetry axis, and (3) operating the instrument remotely. Braginsky [24] reduced the effects of variable local gradients by constructing a balance in the form of an eight-pointed star with equal masses at the points. The rotating pendulum designed by Wagner et al. [12], with fourfold azimuthal and up–down symmetries, reduced systematic effects by minimizing coupling to gravity gradients and by allowing four different orientations of the pendulum with respect to the turntable rotor. Another difference of the Dicke and Braginsky tests with respect to the EPF test was that their balances were fixed in the N–S direction, because rotation of the balance was not necessary to test for the effect of the solar gravity field. (While EPF made some tests with a fixed balance for the solar gravity, their results were of inferior accuracy with respect to those in the Earth's gravity field.) With a fixed balance there is only a constant torque due to shape effects, which cannot bias the amplitude of the expected 24-hour solar equivalence violation signal. Wagner et al. [12] did rotate the balance to measure in the Earth's gravity field, but—in contrast with the EPF experiment—they very carefully considered and controlled coupling to external gravity gradients (multipole fields) due to shape effects. In the following we present example calculations of the bias due to time variation of the ambient gravity. Our intent is just to show the expected magnitude of certain environmental effects. Dicke and his collaborators [23] estimated and reported the effects of time-varying gravitational disturbances due to anthropogenic causes, precipitation, atmospheric masses and imperfections of the triangular torsion balance. Their balance was mounted and remotely controlled inside an instrument pit 12 ft deep by 8 ft square on a poured concrete floor resting on rock. Figure 11 of their paper [23] shows that the instrument pit was surrounded by soil; hence, the balance might have been affected by soil moisture effects. These effects were not mentioned in their paper [23]. To calculate the torque due to soil moisture variations we used the multipole formalism introduced in Sect. 3, specifically Eq. (7). To use this equation, the \(q_{lm}\) multipole moments and \(Q_{lm}\) multipole fields are required. Assuming perfect geometry of the balance, i.e., identical moment arms, the \(q_{lm}\) multipole moments of the Dicke Al–Au balance were calculated up to degree and order 10. If the water content increased by 50% next to the pit in the pores of a rectangular volume of soil 60 cm \(\times \) 70 cm \(\times \) 1.5 cm, assuming a normal porosity of 40%, the density change was \( 0.2 \,\hbox {g/cm}^3\), and an extra mass of 1.3 kg appeared inside that volume of soil (see the arithmetic check sketched below). Using the corrected formula Eq. (19) from [19] for the multipole moments of a cuboid with mass M, height H, sides a, b and orientation \(\phi \), the \(Q_{lm}\) multipole fields of the rectangular mass anomaly were obtained up to the same degree and order 10 with the analytic method described in [18]. With these data we calculated the maximum torque on the balance, \(T_g = 6.4 \cdot 10^{-10}\,\hbox {dyn}\,\hbox {cm}\).
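As a quick sanity check of the soil-moisture figures quoted above, the density change and extra mass follow from the stated porosity and slab volume alone; the snippet below simply redoes that arithmetic (all numbers are those given in the text, not additional data).

```python
# Sanity check of the soil-moisture mass estimate for the Dicke instrument pit.
porosity = 0.40            # assumed "normal" porosity, as stated in the text
saturation_increase = 0.50 # water content in the pores rises by 50%
rho_water = 1.0            # g/cm^3

# Density change of the soil: fraction of the pore space newly filled with water.
delta_rho = porosity * saturation_increase * rho_water   # g/cm^3
print(delta_rho)                 # 0.2 g/cm^3, as quoted

# Rectangular soil volume next to the pit: 60 cm x 70 cm x 1.5 cm.
volume_cm3 = 60 * 70 * 1.5
extra_mass_kg = delta_rho * volume_cm3 / 1000.0
print(round(extra_mass_kg, 2))   # ~1.26 kg, i.e. the ~1.3 kg used for the torque estimate
```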
If, however, a 1% imperfection in one of the arms of the balance was assumed, the maximum torque reached \(7.9 \cdot 10^{-10}\,\hbox {dyn}\,\hbox {cm}\). Both torques are higher than the one that corresponds to the standard deviation of the Eötvös parameter \(\eta = 1 \cdot 10^{-11}\) reported by Dicke et al., since the torque \(T_g\) on the balance that corresponds to this value is \(T_g = 5.9 \cdot 10^{-10}\,\hbox {dyn}\,\hbox {cm}\). Such mass changes and the corresponding torque would, of course, need a 24-h periodicity to be falsely interpreted as an equivalence violation. But our simple calculations show the sensitivity of the apparatus used by Dicke et al. [23] to possible time-varying environmental gravitational disturbances.
$$\begin{aligned} q_{lm} = {} & M\sqrt{\frac{(2l+1)(l+m)!\,(l-m)!}{4\pi }}\, e^{-im\phi } \\ & \times \sum _{k=0}^{(l-m)/2} \frac{(-1)^{k}\,H^{l-2k-m}}{(m+k)!\,k!\,2^{l+2k+m}\,(l-m-2k+1)!\,(2k+m+2)} \\ & \times \sum _{p=0}^{m/2} (-1)^p \binom{m}{2p}\sum _{j=0}^{k} \binom{k}{j} \\ & \times \frac{(-1)^{m/2}a^{2k+m-2j-2p}b^{2j+2p}+b^{2k+m-2j-2p}a^{2j+2p}}{2j+2p+1}\,, \qquad \text {for both } m \text { and } l \text { even, and } m\ge 0\,. \end{aligned}$$
(A direct transcription of this formula into code is sketched below.) The experiment by Braginsky et al. [24] was performed in a basement room of Moscow State University that was thermally insulated very carefully. In the absence of further details, the size of the room and the moisture control of its walls can only be guessed. Again assuming perfect geometry of the balance, i.e., identical arms, the \(q_{lm}\) multipole moments were calculated up to degree and order 10. Assuming a 0.8% moisture change in a 35-cm-thick wall of size 2.5 m \(\times \) 5 m at a distance of 5 m from the balance, the field multipoles were again calculated using Eq. (19). In this case the maximum torque was only \(T_g = 7.2 \cdot 10^{-15}\,\hbox {dyn}\,\hbox {cm}\). This is negligible considering the standard deviation of the Eötvös parameter \(\eta = 0.9 \cdot 10^{-12}\) reported by the Braginsky team, since the torque \(T_g\) on the balance that corresponds to this value is \(T_g = 7.5 \cdot 10^{-13}\,\hbox {dyn}\,\hbox {cm}\). The relative machining tolerance of the balance was reported in [27] as \(4 \cdot 10^{-4}\). If one of the arms of the balance is therefore assumed to be changed by this amount (for a 10-cm arm, by \(40\,\mu \hbox {m}\)), the quadrupole moment \(q_{22}\) becomes nonzero. With this slight imperfection the maximum torque on the balance increased by two orders of magnitude to \(T_g = 8.1 \cdot 10^{-13}\,\hbox {dyn}\,\hbox {cm}\), assuming the same moisture change as above. This torque is slightly larger than the one that corresponds to the reported precision of the experiment. The Eöt-Wash group at the University of Washington, Seattle, performed a series of equivalence tests with uniformly rotating balances; the last and most precise of their tests was reported in [28, 29]. The \(Q_{21}\), \(Q_{31}\) and \(Q_{41}\) moments of the environmental gravity gradient fields were measured with a specially designed gradiometer pendulum. Daily variations of \(Q_{21}\), \(Q_{31}\) were also monitored for about 10 days each. To avoid a false equivalence-principle-violating signal from ambient gravity gradient field multipoles due to \(m=1\) moments, compensating masses near the apparatus reduced the \(Q_{21}\) and \(Q_{31}\) moments, as well as the \(Q_{22}\) moment. The \(Q_{41}\) field was shown to be negligible.
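For readers who want to experiment with the cuboid moments, the displayed formula can be transcribed directly into code. This is a minimal sketch assuming the equation exactly as printed (l and m both even, m ≥ 0); the function name, argument order and the example numbers are our own choices for illustration, not taken from the original papers.

```python
import math, cmath

def q_lm_cuboid(M, H, a, b, phi, l, m):
    """Multipole moment q_lm of a homogeneous cuboid, transcribing the
    displayed formula term by term (valid for l and m both even, m >= 0)."""
    if l % 2 or m % 2 or m < 0:
        raise ValueError("printed formula assumes l and m even, m >= 0")
    prefactor = M * math.sqrt((2*l + 1) * math.factorial(l + m)
                              * math.factorial(l - m) / (4.0 * math.pi))
    prefactor *= cmath.exp(-1j * m * phi)
    total = 0.0
    for k in range((l - m) // 2 + 1):
        c_k = ((-1)**k * H**(l - 2*k - m)
               / (math.factorial(m + k) * math.factorial(k) * 2**(l + 2*k + m)
                  * math.factorial(l - m - 2*k + 1) * (2*k + m + 2)))
        inner = 0.0
        for p in range(m // 2 + 1):
            for j in range(k + 1):
                numerator = ((-1)**(m // 2) * a**(2*k + m - 2*j - 2*p) * b**(2*j + 2*p)
                             + b**(2*k + m - 2*j - 2*p) * a**(2*j + 2*p))
                inner += ((-1)**p * math.comb(m, 2*p) * math.comb(k, j)
                          * numerator / (2*j + 2*p + 1))
        total += c_k * inner
    return prefactor * total

# Example call with placeholder dimensions (arbitrary units):
print(q_lm_cuboid(M=1.0, H=0.35, a=2.5, b=5.0, phi=0.0, l=2, m=2))
```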
These compensations were, however, limited: the observed seasonal variations of \(\sim 1\%\), due to changes in the moisture content of the soil behind the laboratory and adjustments to the water level in Lake Washington, prevented them from fully compensating for environmental gravity gradients. The best standard deviation of the Eötvös parameter obtained in the Earth's gravity field was \(\eta = 1.2 \cdot 10^{-13}\) for the Be–Al configuration, and the corresponding torque on the balance was estimated as \(T_g = 2.6 \cdot 10^{-12}\,\hbox {dyn}\,\hbox {cm}\). The multipole moments of the balances in the Be–Ti and Be–Al configurations were taken from Table 5.4 in [29]. We assumed a 0.8% moisture change in a 35-cm-thick wall of size 1 m \(\times \) 1 m at a distance of 0.8 m from the balance and calculated the multipole fields of this source as above. The maximum torque was \(T_g = 1.1 \cdot 10^{-13}\,\hbox {dyn}\,\hbox {cm}\) for the Be–Al and \(2.1 \cdot 10^{-13}\,\hbox {dyn}\,\hbox {cm}\) for the Be–Ti configuration. Both values are less than 10% of the value corresponding to the best standard deviation. The MICROSCOPE space mission implemented a new approach to test the weak equivalence principle [25]. Nongravitational forces acting on the satellite are counteracted by thrusters, making it possible to compare the accelerations of two concentric hollow cylindrical test masses of different compositions. The test masses' shape has been designed to reduce the local self-gravity gradients by making them approximate gravitational monopoles [30]. For ideal gravitational monopoles no perturbing gravitational source, however close, could induce a differential acceleration between the test masses [31]. Therefore, this smart design almost completely eliminates the bias that may result from the interaction of the test masses with a possibly time-varying environmental gravity field. We think that our findings shed fresh light on the EPF data. We have demonstrated that the gravity-field-related bias discussed in the present paper must be taken into account in any future attempt to explain the EPF results. We thus suspect that the EPF results were affected by this gravity-gradient-related systematic error, although its precise magnitude remains unknown. Therefore, we propose a remeasurement to provide experimental verification of the gravity gradient effect by using an original torsion balance. Such an effect is not excluded by the much more precise equivalence tests, since it can only be measured with an Eötvös torsion balance. We mention in this regard that a remeasurement of the EPF equivalence test is under way this year in Budapest. An original Pekár G-2B balance with computer-assisted automatic rotation and reading was recently installed in a deep tunnel 30 m below ground. We hope this remeasurement, with careful control of the gravity gradient effect, may provide further details on the EPF bias and on the EPF experiment itself. This experimental validation could also exclude or confirm the existence of an effect that could be related to the rotation of the Earth [14], another possible reason for the apparent systematic deviation from the equivalence principle observed in the EPF experiment. Considering equivalence tests that followed the EPF experiment, our example calculations have shown that even if the balance is not rotated, environmental gravity effects may reach the sensitivity of some of the experiments. This was the case for the tests made by Dicke, Braginsky and their collaborators.
Even slight imperfections may increase the sensitivity of the balances to relatively small variations of environmental gravity by orders of magnitude. By contrast, thanks to the careful consideration of ambient gravity effects and/or to smart design, more recent equivalence tests by the Eöt-Wash group or by the MICROSCOPE mission are well controlled in terms of the gravity gradient bias.

References
[1] R.V. Eötvös, D. Pekár, E. Fekete, Annalen der Physik 373, 11 (1922). https://doi.org/10.1002/andp.19223730903
[2] L. Bod, E. Fischbach, G. Marx, M. Náray-Ziegler, Acta Physica Hungarica 69(3–4), 335 (1991). https://doi.org/10.1007/BF03156102
[3] E. Fischbach, D. Sudarsky, A. Szafer, C. Talmadge, S.H. Aronson, Phys. Rev. Lett. 56, 3 (1986). https://doi.org/10.1103/PhysRevLett.56.3
[4] E. Fischbach, D. Sudarsky, A. Szafer, C. Talmadge, S.H. Aronson, Ann. Phys. 182(1), 1 (1988). https://doi.org/10.1016/0003-4916(88)90294-1
[5] C.W. Stubbs, E.G. Adelberger, F.J. Raab, J.H. Gundlach, B.R. Heckel, K.D. McMurry, H.E. Swanson, R. Watanabe, Phys. Rev. Lett. 58(11), 1070 (1987). https://doi.org/10.1103/PhysRevLett.58.1070
[6] C.W. Stubbs, E.G. Adelberger, B.R. Heckel, W.F. Rogers, H.E. Swanson, R. Watanabe, J.H. Gundlach, F.J. Raab, Phys. Rev. Lett. 62(6), 609 (1989). https://doi.org/10.1103/PhysRevLett.62.609
[7] E.G. Adelberger, C.W. Stubbs, B.R. Heckel, Y. Su, H.E. Swanson, G. Smith, J.H. Gundlach, W.F. Rogers, Phys. Rev. D 42(10), 3267 (1990). https://doi.org/10.1103/PhysRevD.42.3267
[8] P.G. Nelson, D.M. Graham, R.D. Newman, Phys. Rev. D 42, 963 (1990). https://doi.org/10.1103/PhysRevD.42.963
[9] Y. Su, B.R. Heckel, E.G. Adelberger, J.H. Gundlach, M. Harris, G.L. Smith, H.E. Swanson, Phys. Rev. D 50(6), 3614 (1994). https://doi.org/10.1103/PhysRevD.50.3614
[10] J.H. Gundlach, G.L. Smith, E.G. Adelberger, B.R. Heckel, H.E. Swanson, Phys. Rev. Lett. 78(13), 2523 (1997). https://doi.org/10.1103/PhysRevLett.78.2523
[11] G.L. Smith, C.D. Hoyle, J.H. Gundlach, E.G. Adelberger, B.R. Heckel, H.E. Swanson, Phys. Rev. D 61(2), 022001 (1999). https://doi.org/10.1103/PhysRevD.61.022001
[12] T.A. Wagner, S. Schlamminger, J.H. Gundlach, E.G. Adelberger, Class. Quantum Grav. 29(18), 184002 (2012). https://doi.org/10.1088/0264-9381/29/18/184002
[13] A. Franklin, E. Fischbach, The Rise and Fall of the Fifth Force: Discovery, Pursuit, and Justification in Modern Physics (Springer, Berlin, 2016)
[14] E. Fischbach, arXiv e-prints arXiv:1901.11163 (2019)
[15] G. Csapó, S. Laky, C. Égető, Z. Ultmann, G. Tóth, L. Völgyesi, Period. Polytech. Civ. Eng. 53(2), 75 (2009). https://doi.org/10.3311/pp.ci.2009-2.03
[16] E.G. Adelberger, J.H. Gundlach, B.R. Heckel, S. Hoedl, S. Schlamminger, Prog. Part. Nucl. Phys. 62, 102 (2009). https://doi.org/10.1016/j.ppnp.2008.08.002
[17] Z. Szabó, Acta Geodaetica et Geophysica 51(2), 273 (2016). https://doi.org/10.1007/s40328-015-0126-4
[18] C. D'Urso, E.G. Adelberger, Phys. Rev. D 55(12), 7970 (1997). https://doi.org/10.1103/PhysRevD.55.7970
[19] J. Stirling, S. Schlamminger, arXiv e-prints arXiv:1707.01577 (2017)
[20] P. Selényi (ed.), Roland Eötvös Gesammelte Arbeiten (Akademiai Kiado, Budapest, 1953)
[21] J.H. Xu, C.G. Shao, J. Luo, Q. Liu, L. Zhu, H.H. Zhao, Chin. Phys. B 26(8), 080401 (2017)
[22] R. Storn, K. Price, J. Global Optim. 11(4), 341 (1997). https://doi.org/10.1023/A:1008202821328
[23] P. Roll, R. Krotkov, R. Dicke, Ann. Phys. 26(3), 442 (1964). https://doi.org/10.1016/0003-4916(64)90259-3
[24] V.B. Braginsky, V.I. Panov, Sov. J. Exp. Theor. Phys. 34, 463 (1972)
[25] P. Touboul, G. Métris, M. Rodrigues, Y. André, Q. Baghi, J. Bergé, D. Boulanger, S. Bremer, P. Carle, R. Chhun, B. Christophe, V. Cipolla, T. Damour, P. Danto, H. Dittus, P. Fayet, B. Foulon, C. Gageant, P.Y. Guidotti, D. Hagedorn, E. Hardy, P.A. Huynh, H. Inchauspe, P. Kayser, S. Lala, C. Lämmerzahl, V. Lebat, P. Leseur, F.M.C. Liorzou, M. List, F. Löffler, I. Panet, B. Pouilloux, P. Prieur, A. Rebray, S. Reynaud, B. Rievers, A. Robert, H. Selig, L. Serron, T. Sumner, N. Tanguy, P. Visser, Phys. Rev. Lett. 119, 231101 (2017). https://doi.org/10.1103/PhysRevLett.119.231101
[26] A.M. Nobili, A. Anselmi, Physics Letters A 382(33), 2205 (2018). https://doi.org/10.1016/j.physleta.2017.09.027. Special Issue in memory of Professor V.B. Braginsky
[27] V.B. Braginskii, A.B. Manukin, Measurement of Weak Forces in Physics Experiments (University of Chicago Press, Chicago, 1977)
[28] T.A. Wagner, S. Schlamminger, J.H. Gundlach, E.G. Adelberger, Class. Quantum Gravity 29(18), 184002 (2012). https://doi.org/10.1088/0264-9381/29/18/184002
[29] T.A. Wagner, Rotating torsion balance tests of the weak equivalence principle. Ph.D. thesis, University of Washington (2014). Also available at http://hdl.handle.net/1773/27559
[30] A. Connes, T. Damour, P. Fayet, Nucl. Phys. B 490(1), 391 (1997). https://doi.org/10.1016/S0550-3213(97)00041-2
[31] N.A. Lockerbie, Class. Quantum Gravity 17(20), 4195 (2000). https://doi.org/10.1088/0264-9381/17/20/304

Open access funding provided by Budapest University of Technology and Economics (BME). The author thanks colleagues at the Wigner Research Centre for Physics for discussions, especially Péter Ván for calling our attention to the EPF experiment, Lajos Völgyesi and György Szondy for their encouragement and support, and József Cserti for helping to fix an error in the calculation of the ambient gravity field model. We acknowledge that the EPF remeasurements are scheduled as part of the Eötvös year in 2019, held on the occasion of the 100th anniversary of Loránd Eötvös's death. We are also grateful for the constructive comments of the referees.

Gyula Tóth, Department of Geodesy and Surveying, Budapest University of Technology and Economics, Műegyetem rkp. 3, Budapest, 1111, Hungary. Correspondence to Gyula Tóth.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Tóth, G. Gravity gradient bias in the EPF experiment. Eur. Phys. J. Plus 135, 222 (2020). Accepted: 20 January 2020. DOI: https://doi.org/10.1140/epjp/s13360-020-00242-w
What are the effects of slicing the Earth in half with a particle beam?

An exact copy of our world is sliced in half by a particle beam. The beam has a diameter of 40 meters and moves at a speed of 0.999c. The beam slices the earth from top to bottom in 600 milliseconds, or 0.6 of a second. What are the implications of this? I'm assuming that the Earth itself would still remain as a singular planet and not split into two separate bodies due to gravitational binding, but I would like to know some specifics of the other effects. How large would the earthquakes be? What would the Earth's atmosphere be like after the split? How large would the tidal waves be? Would the damage done last for months, years or decades? Would there even be human civilization left?

Tags: science-based, physics, environment, gravity, earth – asked by Chris (edited by Ender Look)

Comments are not for extended discussion; this conversation has been moved to chat. – James♦ Jul 16 '18 at 6:43
Why limit it to "particle beam"? Same effects from a photon-only beam, or a really big circular saw. The biggest uncertainty is the "time on target" required to make the slice. – Carl Witthoft Jul 16 '18 at 14:35
None of the answers so far appear to have addressed the whole picture. We need to consider the energy density of the 40 m wide beam of particles moving at 0.999c that would be required to sweep the cut clear of extraneous matter at Earth's maximum thickness, then calculate the leakage from the sides of the beam as the earth's matter was turned into a plasma and swept away. My feeling is that the energy/particle density required to do this would mean that the 'particle beam' would be rather more like a vast chainsaw blade, and energy transfer would blow the earth apart. – Monty Wild♦ Jul 18 '18 at 8:59

All the people who said "We all die!" are correct with good answers, but for most people it won't be the earthquakes that kill them -- they'll already be dead. The leakage from the radiation beam will be large enough to fry pretty much everything. As it cuts through the atmosphere -- even before it hits the ground -- it will scatter a huge amount of energy, since in order for the beam to cut through 8000 miles of rock it needs a huge² or even huger³ amount of energy. (This is enough energy to vaporize and disperse a disk of rock 8000 miles in diameter and 40 meters thick in half a second. Note that since it's cutting through the vaporized rock, it needs to push the vapor out of the way so it can keep cutting.) This xkcd What If on a similar scenario has a lot of hints about what would happen. A small percentage of the radiation will scatter off the air, heating the air to center-of-the-Sun incandescence which will radiate in all directions. This heating will push the air up and out and all that will interact further, creating a shock wave, but more importantly dumping enough energy into the atmosphere to fry things at a long distance (around the curve of the Earth) by reflection off the air. (Also, the "Earthlight" reflected off the Moon will fry that whole side of the planet.) The huge pulse of vaporized rock produces a shock wave that propagates up through the rock. It's supersonic for a considerable distance, but I can't compute whether it's supersonic all the way to the surface at the point perpendicular to the disk where the rock in between is thickest.
In the meantime, it will take a couple seconds for the gap between the halves of the Earth to close, but the shock of it closing will only travel at the speed of sound in rock, so it will take up to an hour (the speed of a P-wave is 1–8 km/s) for the shock to get everywhere on Earth. Most people/plants/things get fried first. So the highly energetic flash kills nearly everyone, then the supersonic blast wave from the cut, from all the vaporized rock, blows everyone into the air at high speed and at high acceleration, and this kills everyone who survived the radiation. When the debris falls back to fill the 40 meter gap, no one is around to be killed by it.

Added: One of the other answers notes that the effect would be like a huge detonation in the slice, and a couple of them do nice back-of-the-envelope calculations of the size. But these calculations are lower bounds to the actual magnitude, since they assume that most of the beam's energy would be deposited where it was intended, and that is certainly not true -- as noted above, the boiling plasma would scatter the beam and mean that the actual energy of the beam far exceeds what's needed to vaporize the slice under ideal circumstances. So, a much bigger bang with some incredibly complex plasma physics going on. We can say a few things. First, near the surface the energy would be more than enough to produce craters -- to blow matter on either side of the slice up and out. Since the slice is a line rather than a point impact, one result would be a furrow -- pretty much the effect you'd get by simultaneously exploding a chain of millions of deeply buried nuclear bombs. But it wouldn't be just a single BANG!, since there would be a continuous outflux of vaporized rock from deeper down. So maybe more like a chain of Mt St Helens eruptions with a long, linear caldera? Finally, the effect on the far side of the Earth would be much worse, since there the beam would come blasting up from below, and super-heated, super-high-pressure rock gas from many cubic miles of rock would burst out, throwing the adjacent rock high -- probably some out to escape velocity -- and continuing the furrow, probably even deeper than on the entrance side. 90 degrees away from the slice you'd start with the radiation, then the blast wave through the air and through the rock. As I noted, I expect the effect of the outgoing wave from deep below would be far more devastating than the effect when the material from the two halves drops together. Under ideal conditions the energy needed to vaporize a 40 meter slice is not enough to disrupt the Earth, since it's not enough to melt the Earth, and total disruption requires more than that. Would the inefficiency caused by the boiling plasma raise the energy requirements high enough that there would be enough energy to melt things? I don't know and I don't think we can easily model it. But it's certain that a lot of rock would be thrown into the sky and, while some would go into orbit or escape, most would fall back over the next few days as a horrendous meteor shower. It would be interesting to watch. From Mars. With good radiation protection. – Mark Olson

What are your huge² and huger³ footnotes referring to? – Orphevs Jul 14 '18 at 0:06
That's huge squared and huger cubed! I.e., really really big and really, really, really, really humongous. – Mark Olson Jul 14 '18 at 0:33
They're quite confusing. I was wondering where your footnotes were on first reading.
– jpmc26 Jul 14 '18 at 10:26
Seemed obvious to me they were tongue-in-cheek mathematical references and exponents, and not footnotes. Footnotes are usually noted with super case and square brackets. – YetAnotherRandomUser Jul 14 '18 at 12:12
@YetAnotherRandomUser: Depends on what you consider "usually". I've seen that footnote notation on the web, but I've never seen it in professional publications. There it is always either raised or in square brackets, but never both at the same time. – celtschk Jul 15 '18 at 5:09

Oh. Oh my.. this is Not Good, but not for the reason you might immediately think. The reason is that the particle beam, while doing its slicing, will have to turn the material it hits into plasma in order to get it to move out of the way. That isn't immediately a problem. The problem is that the plasma is, from the point of view of the particle beam, pretty opaque. The only way the particle beam can get to the material below that plasma is to literally blast it out of the way. You may spot a problem here: quite a lot of that plasma is inside the planet. Not only that, but the numbers you've given (specifically the 0.6 seconds) mean that the beam must be delivering a truly staggering amount of energy in order to 'slice the world in half', as the material on the far side of the earth can't be 'sliced' unless the stuff between it and the beam has first been moved out of the way. Now, I can't work out the amount of energy needed, because this kind of physics is hard enough without insane constraints like 'the two halves of the planet must not be touching', as Randall Munroe noted in the XKCD what-if where a hypothetically much less powerful beam turned the surface of the moon into a super efficient rocket engine, but I can say with some certainty that if your beam is powerful enough to blast its way through the thickest part of the Earth then it's powerful enough to blow the side of the Earth nearest to it into a cloud of very, very hot gas. It really wouldn't surprise me if you needed to exceed the gravitational binding energy of the Earth in order to do it, but even if you don't, then an awful lot of the planet has just been very, very forcibly blasted out of the way. If you just mean 'this beam immediately removes all the matter it touches' then the other answers have you covered. If, however, you genuinely mean a particle beam of sufficient power to do this then... erm... you no longer have a planet. – Joe Bloggs

I LOVE this answer! It would take, what, a Dyson Sphere around our own sun to power the beam and it would flash burn the atmosphere into non-existence taking all life and probably the top 10 meters of soil along with it before it finished its cut? Fabulous! – JBH Jul 13 '18 at 21:11
"no longer have a planet" - wouldn't gravity cause the remains to collapse back into a planet again? – JBentley Jul 14 '18 at 13:49
@JBentley: not if the beam delivers over the gravitational binding energy of the planet, which is what I think it would have to do. Even if it doesn't, the bits collapsing back won't be collapsing into a planet as much as a ball of molten rock, and at an absolute minimum the bits along the cut will have to reach escape velocity. – Joe Bloggs Jul 14 '18 at 16:51
@JBH As it happens, exactly this has been covered in another what-if.
:P – Siguza Jul 15 '18 at 21:52
@JBentley, even if only the binding energy of the slice was delivered, the catastrophic effects to the atmosphere would be such that nobody would care what's left, even the attackers (unless they were hunting minerals, I suppose). To paraphrase from the movie "A Christmas Story," it's a planet in the academic sense that it's round and once supported life. – JBH Jul 15 '18 at 22:31

TL;DR --> Nobody will survive

I won't talk about the laser because it's unimportant; we would already be dead, so it doesn't matter. Earth has $5.97237\times10^{24}\text{ kg}$ of mass, so if we split it in half it will have the same mass but split into two different bodies, $2\times(2.986185\times10^{24}\text{ kg})$. I won't calculate the mass loss due to the laser because 1) I don't know how to do that, and 2) it's the same, because that matter wouldn't be disintegrated, so its mass would still be added to the body's mass. You say a separation of $40\text{ m}$, and our gravity is $9.81\text{ m/s}^2$. If we split the Earth in half, the gravity will also be halved for each body, as the mass is. So each body is falling toward the other body with an acceleration of $4.905\text{ m/s}^2$. Also, the distance is half, because both bodies are falling toward each other. Equations for a falling body: We can calculate the time to collision: $$\text{t} = \sqrt{\frac{2\times{d}}{\text{g}}}$$ $$\text{t} = \sqrt{\frac{2\times{20\,\text{m}}}{4.905\text{ m/s}^2}}$$ $$\text{t} = 2.855\text{ s}$$ Each body will take 2.855 seconds to impact the other half. We can calculate the final speed before collision: $$\text{v}_\text{i} = \sqrt{2\times\text{g}\times\text{d}}$$ $$\text{v}_\text{i} = \sqrt{2\times4.905\text{ m/s}^2\times20\text{ m}}$$ $$\text{v}_\text{i} = 14\text{ m/s}$$ Each body will fall with a final speed of 14 m/s into the other body. And with this speed we can calculate the collision impact: $$\text{E} = \frac{\text{M} \times \text{V}^2}{2}$$ $$\text{E} = \frac{2.986185 \times 10^{24} \text{ kg} \times 4.905^2 \text{ m/s}^2}{2} $$ $$\text{E} = 7.1844\times10^{25}\text{ N} = 35.92\text{ YJ}$$ At the point of collision, each body will produce around 36 yottajoules of energy! But remember that the impact is double, because each body is falling into the other one: 72 YJ of impact. TNT equivalent: in other words, the energy of the collision will be $17.17\text{ Yg of TNT}$ or: $$17,171,295,308,227\text{ Gigatons} = 17,171,295,308,227,772\text{ Megatons}$$ $$89,433,829\times\text{the asteroid that killed the dinosaurs}$$ $$799\text{ tons of matter-antimatter annihilation}$$ No one will survive. Also, this collision will crack the Earth into more pieces (bound by gravity).

Note: Due to some comments I've received, I warn you that this is only a bare and simple idea of what would happen. If you are looking for a deeper analysis you should be aware of the following: calculate the gravity with a proper integral, because gravity changes with distance from the core. Calculate the decompression of the core and the explosion it will release. Know that the impact waves would travel at around the speed of sound. Calculate the inelastic collision of the huge Earth's core and its effect on the several collisions it would release (because it will bounce). Determine the temperature of the plasma from the laser, its expansion and explosion, and whether that would slow down the Earth's collision.

P(momentum)=MxV. F(force)=MxA. Not that it matters much, 10^25 is a big number.
How much of that force is absorbed into the liquid core (inelastic collision) that represents 99.75% of the planetary radius? Hitting a punching bag with force that would break your fingers against brick won't break your fingers. (I won't downvote your answer, either 😉). – JBH Jul 13 '18 at 20:50
@JBH !! Still processing your comment, I'll have to do some research! :( – Ender Look Jul 13 '18 at 21:02
You've turned v^2 (m^2/s^2) into (m/s^2). That should be energy, not force. 'g' is almost irrelevant here (and it wouldn't be 9.8 m/s^2 anyway, it would be almost zero near the core). The earth shouldn't be modeled as a rigid object at those pressures. – BowlOfRed Jul 14 '18 at 0:44
@BowlOfRed, ok, I've just turned newtons into joules. I think gravity will still have an effect, because the planet is divided into 2 halves, so each half of the core will attract the other part with gravity. If the body wasn't split into 2, the gravity would be 0 because it would be only one core pulling from everywhere at the same time, not two halves... no? Or at least that's what I think. – Ender Look Jul 14 '18 at 1:58
9.81 m/s² is the gravitational acceleration near Earth's surface. I'm pretty sure it's wrong to simply divide it by 2. You'd need a more complex integral over a hemisphere to calculate the correct value. If your model isn't very accurate, you shouldn't keep so many significant digits (e.g. 4.905 m/s²) because you cannot even be sure that the order of magnitude is correct. – Eric Duminil Jul 15 '18 at 12:21

It's clear from the other answers this will not be good. But just how not good? One aspect that hasn't been calculated is what sort of energy it takes to vaporize a 40 m slice of the Earth in 0.6 seconds (why 40 m and why 0.6 seconds?). How much material is in this slice? We can get a rough answer by calculating the ratio of its volume to the Earth's volume. The Earth's volume is about $10^{12}\ km^3$. The volume of the slice is...
$$\text{slice volume} = \text{area} \times \text{height}$$
$$\text{slice volume} = \pi r^2 \times \text{height}$$
Plugging in the Earth's mean radius of 6,371 km we get about $5\times10^6\ km^3$. That gives us a ratio of about $5\times10^{-7}$. The mass of the Earth is about $6\times10^{24}\ kg$, so our slice has a mass of about $3\times10^{18}\ kg$. That's roughly the size of a small moon, say Hyperion.

You just detonated a small moon in the middle of the Earth

When you turn a solid like rock into gas, it expands. When you do it rapidly, that tends to cause a lot of pressure. When you do it so rapidly that the pressure wave is supersonic, we call this a detonation and the things which do it "high explosives". Those burn at about 5 to 10 km/s. You just detonated a small moon at 21,000 km/s. I'm not even sure what that means, so let's look at the energies involved. I'm not sure how much energy it would take to cleanly slice through the Earth, but we can get some idea just by looking at the energy necessary to raise that slice by 1 kelvin. The Earth is roughly...
32% iron
30% oxygen
15% silicon
14% magnesium
9% other stuff
Each of these has a specific heat: how much energy it takes to raise 1 kg by 1 kelvin. If we multiply each of their specific heats by their ratios, we get a rough idea of the specific heat of the Earth: about $670 \frac{J}{kg\,K}$. Multiply that by the mass of the slice, $3\times10^{18}\ kg$, and we get $2 \times 10^{21} \frac{J}{K}$. To get everything to vaporize, let's say we want to raise their temperature by 1000 K.
I don't actually know what it would take, but I'm going to guess it's more than 100 K and less than 10,000. So that's $2 \times 10^{24}\ J$. That's roughly the energy the Earth gets from the Sun in six months. Or about four dinosaur-killing meteors. But wait, there's more! Phase transitions also cost energy. A LOT of energy. Solid to liquid is the specific heat of fusion. Liquid to gas is the specific heat of vaporization. Using the same technique, plugging in the ratios, we get a specific heat of fusion of $400,000 \frac{J}{kg}$ and a specific heat of vaporization of $4,700,000 \frac{J}{kg}$. Its mass of $3\times10^{18}\ kg$ means an extra $1.2\times10^{24}\ J$ going from solid to liquid, and $1.4 \times 10^{25}\ J$ from liquid to gas. Adding that all together brings us to $1.7 \times 10^{25}\ J$, which is the energy of a small solar flare. We're cooked. We can estimate how fast this will shove the two halves of the Earth apart using the kinetic energy equation: $e = \frac{1}{2} m v^2$. We know the energy of the explosion and the mass of the Earth, so solving for velocity we get $\sqrt{\frac{2e}{m}}$. Plugging in our numbers we get about 2.4 m/s or a pokey 8 kph. At least the Earth won't be blown apart. – Schwern

That's a nice back-of-the-envelope analysis. But it gives an extreme lower limit because to cut anywhere nearly as quickly as specified, you need to move all that rock vapor out of the way -- thousands of miles out of the way through a 40 meter slot! -- in less than half a second. That's going to require enormous heating, far greater than that needed to simply vaporize the rock. – Mark Olson Jul 14 '18 at 0:46
@MarkOlson Thank you. I saw Joe Bloggs covered the "getting out of the way" problem. I'm coming at it from another direction. I thought the energies would be higher. – Schwern Jul 14 '18 at 1:02
I'm not following what the dollar signs mean. Do dollars convert to all those different forms of mass and energy? If so, can I do a reverse conversion to make money? – YetAnotherRandomUser Jul 14 '18 at 12:51
@YetAnotherRandomUser Those indicate a formula to be rendered with MathJax. If you're seeing dollar signs, your browser must not be rendering it. Perhaps you have Javascript off? – Schwern Jul 14 '18 at 16:01
+1 for plugging the numbers. They're smaller than I expected for just 'let's turn this slice of rock to vapour' but roughly the right order of magnitude. In my head I was hovering somewhere around 25 zeroes. Of course, it's delivering at least that much energy evenly across the disk in the timescale that's the killer.. – Joe Bloggs Jul 14 '18 at 23:14

Do you mean the beam is powerful enough to take the 40 m slice out of the entire planet? Then the two halves will snap together due to gravity. This will result in world-wide earthquakes, and months of intense volcanic activity. The atmosphere will stay (mostly), but will be filled with volcanic gases and dust. Most life will die, from earthquakes, poisonous gases, or starvation. I suspect some primitive lifeforms will survive in the oceans.

Edit for supporting evidence: Tectonic plates typically move at 1-10 cm per year. The Fukushima earthquake was caused by (at most) a 30 m shift of tectonic plates in a single spot. Here, we will have a 40 m shift instantly and everywhere. Here is what happened to the dinosaurs: Wiki.
OK, I might have been overly dramatic, and am willing to downgrade my dire predictions to the destruction of 75% of life, with a few human survivors scrambling for dwindling food and resources among the ruins of civilization. Ender Look did the right kind of math, though, and shows that this will be a lot worse than the dinosaur-killing meteor. – Bald Bear (edited by JBH)

While this is a short answer, it is quite literally the truth. Also, I'm not the downvoter. P.S. These effects would be somewhat different depending on how fast it gets cut. – Hosch250 Jul 13 '18 at 19:18
I don't think you are being overly dramatic, all of us will die. – Ender Look Jul 13 '18 at 20:04
@EnderLook, with everybody dead, there will be nobody to put up a drama. – L.Dutch♦ Jul 13 '18 at 20:55

EDIT: Several answers have demonstrated beyond a shadow of a doubt that the earth will look like popcorn that's been in a microwave too long. But! I'll leave this for future reference because it was fun to write.

The Earth's crust isn't holding things in; gravity is holding things down and causing pressure as a result. Nothing ultra dramatic would occur. You wouldn't get the two pieces slipping sideways or suddenly flying apart or one spinning against the other in an odd way. Remember, gravity is holding everything together. Newton teaches that things basically stay the way they are unless a force acts on them. At the moment, there are only two forces:
Gravity, which pulls the two pieces together. The liquid core and mantle re-combine and serve as bandages (coagulation) for the cuts near the surface. A bit of volcanism, some wishy-washy with the oceans, but after a few billion dollars worth of insurance claims, life goes on.
Pressure has a bit part in this. The magma wants out, but it can't overcompensate for gravity, and since the vast majority of the planet is liquid, the beam will have the effect of passing a knife through a bowl of water. The lovely seam your beam creates on the surface will have some scabby bulges due to the localized pressure release, but that's it.
But, what happens to the mass touched by the beam? Here's where there might be something interesting. Your beam has superheated the mass it touches into plasma, which wants more space than simple mass. You still won't have planetary disaster, but there could be some cool side effects. For one, you're creating bubbles of plasma in the magma layers. Those might find their way to existing volcanic vents and wreak a bit of havoc with the natives, so to speak. But near the surface the plasma might serve to blow the upper seam apart. Nope, no pulling the planet apart, but suddenly that scabby seam might be something more dramatic. The exploding plasma would send shockwaves through the atmosphere (probably breaking every window on the planet). That's going to cause a ton of death and destruction. The new seam is much larger now, too, meaning some weather patterns and ocean currents will change. So, taking "everything" (I'm sure I haven't taken everything into account) into account, the 24 hours after your beam hits would be ugly, really ugly. The clean-up would be massive. We'd need to find some new fishing areas.... But we'd survive it, IMO. – JBH

I have to formally disagree with you. If half of the Earth falls into the other half over a distance of 20 metres until the collision, the acceleration of the bodies will make a huge collision.
The speed isn't very fast, but if you multiply by the Earth's mass... (I won't downvote your answer). – Ender Look Jul 13 '18 at 20:09
Estimates put the energy of the Chicxulub Impact Event that wiped out the dinosaurs at 1.15 × 10^20 joules. -- To vaporize a 40 meter wide, 6,371 km in diameter disk takes 2.3 x 10^26 joules. -- In short: rocks fall, everyone dies. – Ghedipunk Jul 13 '18 at 20:56
@Ghedipunk, rocks aren't falling (?). The release of magma pressure would be faster than the gravity collision (otherwise geysers and volcanoes wouldn't work), so the space is being filled long before the two halves recombine. It's not the impact that could threaten people, it's the potential shockwave into the atmosphere and dust. This would be fun to model. – JBH Jul 13 '18 at 20:59
@JBH, but filling the space between the halves won't last much time, isn't that plasma? When the plasma cools down (or is pushed away by the crushing pressure) both halves will still collide, no? – Ender Look Jul 13 '18 at 21:01
I didn't say that the impact is what would threaten people. I said that the energy being put into our planet is 200,000 times the energy that caused the last global mass extinction event. "But we'd survive it, IMO" is an "opinion" that is not supported by fact. – Ghedipunk Jul 13 '18 at 21:02

I find myself thinking things will go a bit differently than most of the posters are imagining. We are going to dump vast amounts of energy into that slice of Earth. The general answer seems to be that the plasma will absorb enough energy to be pushed out of the slice, thus expanding as a disk. However, remember that he specified a particle beam, not a laser. Particle beams carry momentum. Imagine what happens as the beam digs in--anything trying to come back up the hole is going to be stuffed back into it by the energy of the particle beam. It can't escape that way. The energy will build until it finds a new way out (and remember that while it's building no cutting is going on, and we have a very short time limit--the buildup must be exceedingly fast.) I see only one way out--push the two halves apart. Of course it won't go smoothly, but against that sort of energy a bit of deformation of a planet is nothing. The plasma must spread enough to get thin enough to let the beam through. That's pretty thin--the pieces go flying apart fast. I think the two halves are going to be thrown apart at well above escape velocity. I also think the thinner parts around the edge are going to be blown off. (And that's assuming they don't simply vaporize from the energy of all that plasma.)

Practical issues with the whole cut-a-planet-with-a-beam idea. Your fundamental problem is that if you dig the trench by sequential applications of a beam with shallow penetration, the time scale is dominated by how long it takes the already heated material to clear out of the way, and that scale is long enough that the hole keeps collapsing on you.1 Instead you have to vaporize the whole cut in one rapid go. So you need a penetrating beam, by which I mean one that will deposit energy all the way through a diameter of the Earth. And do it without excessive difference in the power deposited on the near and far side. That leaves out all beams that interact by the strong or electromagnetic interaction, because the distance scale for those beams is too short (even extremely high energy muons are lucky to go a few kilometers in rock-like materials).
So, your beam is going to be neutrinos or something exotic and not yet discovered (but with interaction cross-sections comparable to or smaller than those of neutrinos). The good news is that you actually want a fairly low energy neutrino beam (neutrino interaction cross-sections scale linearly in energy over a wide range), but the bad news is that a beam that will still be depositing useful amounts of energy after passing through a whole diameter of the planet will waste a large fraction (even most) of its energy by over-penetration.2 And then we get to that energy cost. The current way of making neutrino beams is vastly inefficient, and there are no better proposals on the horizon. So you have a vast energy cost for doing the damage, a significant loss to over-penetration (at least a factor of two), and a high multiplicative factor for losses in beam generation. All together you are looking at an energy cost at least ten times the energy applied to the event, which is already above the scale needed for moderately relativistic interstellar colonization. And you have to do all that on a time-scale of around a few hundredths of a second or so.3

1 The mantle is viscoelastic and rather thick and slow to react, but that is its behavior under pressure. When mantle rock is brought quickly to the surface it forms a low viscosity lava that flows fast and smoothly. The mantle is going to keep pouring into the trench almost like water.
2 And because your beam is highly collimated you'll need to worry about what it does across distances at least the scale of the solar system. And remember that you are sweeping the beam, so it is a fan-shaped danger region.
3 No point in going a lot faster because the beam propagation time is about $0.04\,\mathrm{s}$, but you don't want to go much slower because that gives the remaining structure time to react, and the whole effect is ruined if the edge where you started has stuck itself back together before you get done at the other side. – dmckee

According to this forum post (which in turn references some book), the gravitational force required to pull two hemispheres apart is $$ F = \dfrac{3GM^2}{4r^2}$$ with $G\approx6.67\times10^{-11}\ \mathrm{m^3/s^{2}/kg}$ the gravitational constant, $M$ the mass of a hemisphere, such that $2M\approx 5.97\times 10^{24}\ \mathrm{kg}$ is the mass of Earth, and finally $r$ the radius of Earth, $6371\ \mathrm{km}$. To make my life a little bit easier, I will assume that the mass of Earth does not change by removing that 40 m slice, and that the gravitational attraction does not change due to the 40 m separation (and remains constant as the separation reduces to zero). As it is already established that a particle beam is a bad way of slicing the Earth in half, we shall simply fire our handwavium gun, which removes the 40 meter slice of Earth without depositing additional energy on Earth, and deposits it on Mars. The moment the slice is removed, the hemispheres will accelerate towards each other, with acceleration $a=F/M$. Filling in some numbers (Wolfram Alpha: http://www.wolframalpha.com/input/?i=3G(earth+mass%2F2)%2F(4*(earth+radius)%5E2)), we can see that $a=3.67\ \mathrm{m/s^2}$. Each hemisphere will move 20 meters before colliding with the other half. The velocity attained at the end of this is $v=\sqrt{2ad}$ with $d=20\ \mathrm{m}$, which gives a final velocity of $12.12\ \mathrm{m/s}$ (a typical car accident, but on a planetary scale).
The total energy that each hemisphere has gained by that time is $E=\dfrac{1}{2}Mv^2$, which gives about $4.39\times10^{26}\ \mathrm{J}$ per hemisphere, or a total energy of $8.777\times10^{26}\ \mathrm{J}$. How much is this, really? Let's turn to Wikipedia. Here, we can find that it's about 1000 Chicxulub craters, or the equivalent of 100 years of solar energy that the Earth normally receives. Without doubt, this will destroy most if not all multicellular life. Furthermore, the massive amount of material deposited on Mars will likely definitively end the already struggling Opportunity's mission, which most of us would agree would be the real victim of this plan (although it may be a good way to prevent this scenario).
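As a quick check of the figures in this last answer, the chain from the hemisphere-attraction formula to the collision energy can be redone in a few lines. This is a sketch under the same simplifying assumptions made in the answer (constant attraction over the 40 m gap, each hemisphere taken as half the Earth's mass); note that if the hemisphere mass is used consistently in $E=\tfrac{1}{2}Mv^2$, the per-hemisphere energy evaluates to about $2.2\times10^{26}$ J, so the total comes out near $4.4\times10^{26}$ J rather than the $8.8\times10^{26}$ J quoted.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
M = M_earth / 2.0      # mass of one hemisphere, kg
r = 6.371e6            # Earth radius, m
d = 20.0               # distance each hemisphere falls, m (half of the 40 m gap)

F = 3 * G * M**2 / (4 * r**2)   # attraction between the hemispheres (formula quoted above)
a = F / M                        # acceleration of each hemisphere
v = math.sqrt(2 * a * d)         # speed of each hemisphere at contact
t = math.sqrt(2 * d / a)         # time to contact
E_each = 0.5 * M * v**2          # kinetic energy of one hemisphere at contact
E_total = 2 * E_each

print(f"a = {a:.2f} m/s^2, v = {v:.2f} m/s, t = {t:.2f} s")
print(f"E per hemisphere = {E_each:.2e} J, total = {E_total:.2e} J")
# -> a ~ 3.68 m/s^2 and v ~ 12.1 m/s, matching the answer;
#    E_total ~ 4.4e26 J under these assumptions.
```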
Enhanced performance of GaN-based visible flip-chip mini-LEDs with highly reflective full-angle distributed Bragg reflectors

Lang Shi,1 Xiaoyu Zhao,1 Peng Du,1 Yingce Liu,2 Qimeng Lv,3 and Shengjun Zhou1,*
1Center for Photonics and Semiconductors, School of Power and Mechanical Engineering, Wuhan University, Wuhan 430072, China
2Xiamen Changelight Co. Ltd., Xiamen 361013, China
3Hubei San'an Optoelectronics Co. Ltd., Ezhou 436030, China
*Corresponding author: [email protected] (Shengjun Zhou, https://orcid.org/0000-0002-9004-049X)

Lang Shi, Xiaoyu Zhao, Peng Du, Yingce Liu, Qimeng Lv, and Shengjun Zhou, "Enhanced performance of GaN-based visible flip-chip mini-LEDs with highly reflective full-angle distributed Bragg reflectors," Opt. Express 29, 42276-42286 (2021). https://doi.org/10.1364/OE.446122
Original Manuscript: October 15, 2021; Revised Manuscript: November 25, 2021; Manuscript Accepted: November 25, 2021

High-efficiency GaN-based visible flip-chip miniaturized light-emitting diodes (FC mini-LEDs) are desirable for developing white LED-backlit liquid crystal displays. Here, we propose a full-angle Ti3O5/SiO2 distributed Bragg reflector (DBR) for blue and green FC mini-LEDs to enhance the device performance. The proposed full-angle Ti3O5/SiO2 DBR is composed of different single-DBR stacks optimized for central wavelengths in the blue, green, and red light wavelength regions, resulting in a wider reflective bandwidth and less angular dependence. Furthermore, we demonstrate two types of GaN-based FC mini-LEDs with indium-tin oxide (ITO)/DBR and Ag/TiW p-type ohmic contacts. Experimental results show that the reflectivity of the full-angle DBR is higher than that of Ag/TiW in the light wavelength range of 420 to 580 nm as the incident angle of light increases from 0° to 60°. As a result, the light output powers (LOPs) of blue and green FC mini-LEDs with ITO/DBR are enhanced by 7.7% and 7.3% in comparison to blue and green FC mini-LEDs with Ag/TiW under an injection current of 10 mA. In addition, compared with the FC mini-LED with Ag/TiW, the light intensity of the FC mini-LED with ITO/DBR is improved in the side direction, which is beneficial for light mixing in the backlight system of liquid crystal displays (LCDs). © 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

Miniaturized LEDs (mini-LEDs), whose size is between 100 µm and 150 µm, have been considered promising candidates for next-generation displays, owing to their advantages of high brightness, high color saturation, power saving, and long lifetime [1–6]. Mini-LEDs have significant potential as local dimming backlight units in liquid crystal displays (LCDs) to realize high dynamic range (HDR) [7–11].
The backlight white LEDs could be realized by combining the colors of red, green, and blue mini-LEDs, which enables a higher color gamut (90% BT2020) and a high contrast ratio (above 10000:1) of LCDs [12–15]. However, to realize the HDR requirement of LCDs, mini-LEDs should be more efficient. Various methods applicable in conventional LEDs for improved efficiency could be applied to mini-LEDs, such as flip-chip technology [12,16–21], chip geometry shaping [22–24], and patterned sapphire substrate [25,26]. Among these methods, flip-chip technology is widely used in mini-LEDs because of its unique advantages in light extraction efficiency (LEE), heat dissipation, and current spreading. In the flip-chip mini-LED (FC mini-LED) configuration, a highly reflective p-type ohmic contact, which could reflect downward photons back into the sapphire substrate, plays a critical role in obtaining better LEE and further improving the efficiency of FC mini-LEDs [27–30]. It is well known that silver (Ag) and indium-tin oxide (ITO) /distributed Bragg reflector (DBR) generally serve as reflective ohmic contacts in FC LEDs due to their high reflectivity [31–33]. Owing to better heat dissipation and current spreading, the performance of FC LED with Ag/TiW is superior to that of FC LED with ITO/DBR at high injection current [20]. Nevertheless, due to higher reflectance and alleviated self-heating issue, the performance of FC LED with ITO/DBR is better than that of FC LED with Ag/TiW at low injection current [33]. Hence, ITO/DBR p-type ohmic contact is a better choice for display application of FC mini-LEDs with low operation current. However, the drawbacks of conventional single-DBR stack are narrow reflective bandwidth and strong angular dependence, which hinder the further improvement in efficiency of FC mini-LEDs [34–37]. The angular dependence was compensated by combining two single-DBR stacks with different dielectric layer thicknesses into a double-DBR stack [38]. However, as the incident angle of light increases, the blueshift of reflective bandwidth of DBR stack optimized for long central wavelength could not completely compensate the blueshift of reflective bandwidth of DBR stack optimized for short central wavelength. Consequently, the reflectivity of double-DBR stack decreases sharply in the light wavelength range from the long central wavelength to the short central wavelength as the incident angle of light increases. The strong angular dependence of double-DBR stack still restricts the improvement in performance of FC mini-LEDs. Therefore, a full-angle DBR with wider reflective bandwidth to enhance performance of blue and green FC mini-LEDs is required. In this paper, we introduce a full-angle Ti3O5/SiO2 DBR for blue and green FC mini-LEDs. The full-angle DBR is constructed by combining DBR stacks with different thicknesses optimized for discrete central wavelengths in blue, green, and red light wavelength regions, which significantly alleviates the angular dependence and increases the reflective bandwidth. The ITO/DBR and Ag/TiW are constructed as highly reflective ohmic contacts for FC mini-LEDs. Compared with Ag/TiW, the ITO/DBR demonstrates higher reflectivity in the light wavelength range of 420 to 580 nm as the incident angle of light increases from 0° to 60°. As a result, improvements of ∼7.7% and ∼7.3% in LOPs of blue and green FC mini-LEDs with ITO/DBR are attained with respect to those of blue and green FC mini-LEDs with Ag/TiW.
Our study exhibits that the full-angle DBR provides a promising strategy for the development of high-efficiency blue and green FC mini-LEDs for display application. The GaN epitaxial layers of blue and green mini-LEDs are grown on c-plane PSS using metal-organic chemical vapor deposition (MOCVD) method. The GaN-based blue mini-LED consists of a 25 nm-thick low temperature GaN nucleation layer, a 3.0 µm-thick undoped GaN buffer layer, a 2.5 µm-thick Si-doped n-GaN layer, a 12 pair In0.16Ga0.84N (3 nm) /GaN (12 nm) multiple quantum wells (MQWs), a 40 nm-thick p-Al0.2Ga0.8N electron blocking layer, and a 112 nm-thick Mg-doped p-GaN layer. The epitaxial layers of green mini-LED are identical to that of blue mini-LED except for MQWs. The MQWs of green mini-LED consist of 12-pair In0.25Ga0.75N (3 nm) /GaN (12 nm). The LED wafer is subsequently annealed at 750 °C to activate Mg in the p-GaN layer. Figure 1 shows the schematic of fabrication process for FC mini-LED with ITO/DBR. The detailed fabrication process steps are described as follows: (a) n-via hole is defined by using inductively coupled plasma (ICP) etching based on BCl3/Cl2 mixture gas. (b) electron beam evaporation is employed to deposit 90-nm-thick ITO transparent conductive layer, followed by thermal anneal in N2 atmosphere at 540 °C for 20 min to strengthen p-ohmic contact. (c) Cr/Al/Ti/Pt/Au metal layer is deposited onto the n-GaN layer and ITO to serve as n- and p-electrodes by electron beam evaporation. (d) DBR consisting of 14-pair alternating Ti3O5/SiO2 with different thicknesses is sputtered by ion beam deposition, followed by the formation of p-electrode hole through DBR using ICP etching based on CHF3/Ar/O2 mixture gas. (e) Cr/Al/Ti/Pt/Ti/Pt/Au metallization is deposited onto n-via hole and p-electrode hole as n- and p-contact pads. Fig. 1. Schematic illustration of fabrication process for FC mini-LED with ITO/DBR ohmic contact. Download Full Size | PPT Slide | PDF Figure 2 shows the schematic of fabrication process for FC mini-LED with Ag/TiW. The process consists of following steps: (a) ICP etching based on BCl3/Cl2 mixture gas is employed to form n-via hole. (b) Ag (100 nm) /TiW (50 nm) /Pt (10 nm) /TiW (50 nm) /Pt (25 nm) /Ti (30 nm) /Pt (25 nm) /Ti (30 nm) /Pt (25 nm) /Ti (30 nm) /Pt (60 nm) /Ti (30 nm) stacks are deposited onto p-GaN as p-electrode. (c) Cr/Al/Ti/Pt/Au metallization layer is deposited into n-via hole as n-electrode. (d) DBR insulating layer consisting of 14-pair Ti3O5/SiO2 is deposited by ion beam deposition. The ICP etching based on CHF3/Ar/O2 mixture gas is used to form p-electrode hole through Ti3O5/SiO2 DBR; (e) Cr/Al/Ti/Pt/Ti/Pt/Au metal layer is deposited onto n-via hole and p-electrode hole as n- and p-contact pads. Fig. 2. Schematic illustration of fabrication process for FC mini-LED with Ag/TiW ohmic contact. Figures 3(a) and 3(b) show the schematic illustrations of FC mini-LEDs with ITO/DBR and Ag/TiW ohmic contacts, respectively. The dimension of FC mini-LEDs is 120 × 350 µm2. The plan-view images of FC mini-LEDs were taken by a field emission scanning electron microscope (SEM, TESCAN MIRA 3, UK). The transmission electron microscope (TEM) samples were prepared using focused ion beam (FIB, TESCAN GAIA3 XMH, Czech Republic) technique. The cross-sectional structures of FC mini-LEDs were analyzed using FIB combined with SEM. The analysis of structural characteristics for FC mini-LEDs was completed using TEM (JEM-F200, Japan) in combination with energy-dispersive x-ray (EDX) mapping spectroscopy. 
The reflectance spectra of ITO/DBR and Ag/TiW were measured using ultraviolet/visible/near infrared spectrophotometer with universal reflectance accessory (LAMDA 950, USA). The current versus voltage (I-V) characteristics were measured by using a semiconductor parameter analyzer (Keysight B2901A, USA). The light output power versus current (L-I) characteristics were measured by using a high-precision photometric colorimetric and electric test system (HAAS-2000, China). Fig. 3. Schematic illustrations of (a) FC mini-LED with ITO/DBR ohmic contact and (b) FC mini-LED with Ag/TiW ohmic contact. 3. Results and discussions The DBR is made by stacking high refractive dielectric layers (H) and low refractive dielectric layers (L) with quarter-wavelength thickness ($\lambda /4n$, where $\lambda $ is the central wavelength of reflectivity spectrum, n is the refractive index of the material) based on thin-film interference effect [39]. The layer thickness is determined by the following equation [40]: (1)$${n_H}{t_H} = {n_L}{t_L} = \lambda /4$$ where ${n_H}$ and ${n_L}$ are the index of high index layer and low index layer, respectively, while ${t_H}$ and ${t_L}$ are the thickness of high index layer and low index layer, respectively. We investigated angular dependence of single-DBR stack using the commercial software TFCalc. The refractive indices of Ti3O5 and SiO2 in the simulation are fixed at 2.37 and 1.46, respectively. Figure 4(a) shows the reflectance spectra of 14-pair Ti3O5 (47.75 nm) /SiO2 (79.07 nm) single-DBR stack optimized for central wavelength at 450 nm. The reflectance decreases in blue light wavelength region when incident angle of light exceeds 50°, as shown in Fig. 4(a). Figure 4(b) shows the reflectance spectra of 14-pair Ti3O5 (54.76 nm) /SiO2 (89.10 nm) single-DBR stack optimized for central wavelength at 520 nm. It is clearly seen in Fig. 4(b) that the reflectance decreases sharply in green light wavelength region when incident angle of light exceeds 40°. Figure 4(c) shows the reflectance spectra of 14-pair Ti3O5 (65.73 nm) /SiO2 (107.56 nm) single-DBR stack optimized for central wavelength at 620 nm. It is obvious that the 14-pair Ti3O5 (65.73 nm) /SiO2 (107.56 nm) single-DBR stack exhibits high reflectivity in green light wavelength region when incident angle exceeds 40°. However, when the incident angle is less than 40°, the reflectivity is low in green light wavelength region. Fig. 4. Reflectance spectra of (a) 14-pair Ti3O5 (47.75 nm) /SiO2 (79.07 nm) single-DBR stack, (b) 14-pair Ti3O5 (54.76 nm) /SiO2 (89.10 nm) single-DBR stack, (c) 14-pair Ti3O5 (65.73 nm) /SiO2 (107.56 nm) single-DBR stack, (d) double-DBR I, (e) double-DBR II, and (f) full-angle DBR versus incident angle of light. The blue and green rectangle regions show blue and green light wavelength regions, respectively. To alleviate the angular dependence, we combine 7-pair Ti3O5 (65.73 nm) /SiO2 (107.56 nm) single-DBR stack optimized for a central wavelength at 620 nm and 7-pair Ti3O5 (54.76 nm) /SiO2 (89.10 nm) single-DBR stack optimized for a central wavelength at 520 nm into double-DBR I. Figure 4(d) shows the reflectance spectra of double-DBR I versus incident angle of light. As the incident angle increases, the reflectivity of double-DBR I is high in green light wavelength region but low in blue light wavelength region. 
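As a quick numerical illustration of Eq. (1), the sketch below computes the quarter-wave layer thicknesses for the three design wavelengths above, using the fixed refractive indices assumed in the TFCalc simulation (Ti3O5 = 2.37, SiO2 = 1.46). This is a minimal sketch rather than the authors' design script; the small offsets from the thicknesses quoted in the text (e.g. 47.75 nm versus roughly 47.5 nm for Ti3O5 at 450 nm) presumably reflect material dispersion that fixed indices cannot capture.

```python
# Minimal sketch: quarter-wave layer thicknesses t = lambda / (4 n) for the
# single-DBR stacks discussed above, with the fixed indices used in the text.
N_TI3O5 = 2.37   # refractive index of Ti3O5 assumed in the simulation
N_SIO2 = 1.46    # refractive index of SiO2 assumed in the simulation

def quarter_wave_thickness(center_wavelength_nm: float, n: float) -> float:
    """Eq. (1): n_H * t_H = n_L * t_L = lambda / 4, solved for the thickness."""
    return center_wavelength_nm / (4.0 * n)

for lam in (450.0, 520.0, 620.0):   # blue, green, red design wavelengths (nm)
    t_h = quarter_wave_thickness(lam, N_TI3O5)
    t_l = quarter_wave_thickness(lam, N_SIO2)
    print(f"lambda = {lam:5.0f} nm -> Ti3O5 {t_h:5.1f} nm, SiO2 {t_l:5.1f} nm")
```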
To further broaden the reflective bandwidth, we combine 7-pair Ti3O5 (65.73 nm) /SiO2 (107.56 nm) single-DBR stack optimized for central wavelength at 620 nm and 7-pair Ti3O5 (47.75 nm) /SiO2 (79.07 nm) single-DBR stack optimized for central wavelength at 450 nm into double-DBR II. Figure 4(e) shows the reflectance spectra of double-DBR II versus incident angle of light. The double-DBR II exhibits high reflectivity in both blue and green light wavelength regions when the incident angle is less than 40°. However, it was seen in Fig. 4(e) that the reflectivity of double-DBR II decreases sharply in light wavelength region II as the incident angle of light increases. This indicates that as the incident angle of light increases, the blueshift of reflective bandwidth of DBR stack optimized for 620 nm could not completely compensate the blueshift of reflective bandwidth of DBR stack optimized for 450 nm due to the large gap between the short central wavelength (450 nm) and the long central wavelength (620 nm). In addition, the narrow reflective bandwidth of double-DBR stack leads to the decrease of reflectivity in light wavelength region I and region III, as shown in Fig. 4(e). The angular dependence of double-DBR stack results from narrow reflective bandwidth as well as large gap between long central wavelength and short central wavelength. The reflective bandwidth of DBR stack could be calculated by the following equation [41]: (2)$$\Delta \lambda \textrm{ = }\lambda \frac{2}{\pi }\arcsin \left( {\frac{{{n_H} - {n_L}}}{{{n_H} + {n_L}}}} \right)$$ where λ is the central wavelength of DBR stack, Δλ is reflective bandwidth of DBR stack. We consider increasing the number of DBR stacks with multiple central wavelengths in light wavelength region I, region II, and region III to alleviate the angular dependence. This strategy could decrease the gap between short central wavelength and long central wavelength, which suppresses the decrease of reflectivity in light wavelength region II as incident angle of light increases. On the other hand, increasing the number of DBR stacks with multiple central wavelengths in light wavelength region I and region III could further broaden reflective bandwidth. Hence, we combine single-DBR stacks, with discrete central wavelengths in light wavelength region I, region II, and region III, into full-angle DBR. The detailed thickness of each dielectric layer and central wavelength of each stack are shown in Table 1. Table 1. The detailed thickness of each dielectric layer and central wavelength of each stack Figure 4(f) shows the reflectance spectra of the full-angle DBR at different incident angle of light. In light wavelength region II, the reflectivity decrease of full-angle DBR is remarkably less than that of double-DBR II as incident angle of light increases. Moreover, in light wavelength region I and region III, the reflectivity of full-angle DBR is higher than that of double-DBR II. The results demonstrate that full-angle DBR has less angular dependence and wider reflective bandwidth compared with double-DBR stack. We measured the reflectance spectra of full-angle DBR and Ag/TiW at various incident angles by using ultraviolet/visible/near infrared spectrophotometer. Figure 5(a) shows the measured reflectance spectra of Ag/TiW and full-angle DBR at incident angles of 0°, 10°, 20°, 30°, 40°, 50°, and 60°. The electroluminescent (EL) spectra of blue and green mini-LEDs are also shown in Fig. 5(a). 
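Equation (2) can also be evaluated directly for the Ti3O5/SiO2 index contrast used in these simulations. The short sketch below is a rough check with the fixed indices assumed above, not the authors' code; it gives a single-stack stop band of roughly 15% of the central wavelength (about 69, 79, and 95 nm at 450, 520, and 620 nm), which illustrates why a single quarter-wave stack cannot cover both the blue and green regions at all angles and why several stacks with staggered central wavelengths are combined.

```python
# Minimal sketch: single-stack reflective bandwidth from Eq. (2),
# Delta_lambda = lambda * (2 / pi) * arcsin((n_H - n_L) / (n_H + n_L)).
import math

N_H, N_L = 2.37, 1.46   # Ti3O5 and SiO2 indices assumed in the simulation

def stop_band_width(center_wavelength_nm: float) -> float:
    """Normal-incidence stop-band width of a quarter-wave stack, Eq. (2)."""
    contrast = (N_H - N_L) / (N_H + N_L)
    return center_wavelength_nm * (2.0 / math.pi) * math.asin(contrast)

for lam in (450.0, 520.0, 620.0):
    width = stop_band_width(lam)
    print(f"lambda = {lam:5.0f} nm -> stop band ~ {width:4.1f} nm "
          f"({100.0 * width / lam:.1f}% of the center wavelength)")
```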
The peak wavelength of blue and green FC mini-LEDs is 465 nm and 520 nm, respectively. The reflectivity of full-angle DBR is much higher than that of Ag/TiW in the light wavelength range of 420 to 580 nm as the incident angle of light increase from 0° to 60°, as shown in Fig. 5(a), revealing that FC mini-LEDs with full-angle DBR can obtain higher LEE. Figures 5(b) and 5(c) show cross-sectional TEM images and EDX mapping spectroscopy of full-angle Ti3O5/SiO2 DBR and Ag/TiW/Pt/TiW/Pt/Ti/Pt/Ti/Pt/Ti/Pt/Ti, respectively. Fig. 5. (a) Measured reflectance spectra of Ag/TiW and full-angle DBR at incident angles of 0°, 10°, 20°, 30°, 40°, 50°, and 60° as well as the EL spectra of blue and green LEDs. (b) Cross-sectional TEM images of full-angle Ti3O5/SiO2 DBR and corresponding EDX mapping spectroscopy. (c) Cross-sectional TEM images of Ag/TiW/Pt/TiW/Pt/Ti/Pt/Ti/Pt/Ti/Pt/Ti and corresponding EDX mapping spectroscopy. Figure 6 shows the top-view and cross-sectional SEM images of the fabricated FC mini-LEDs (named as FC mini-LED I and FC mini-LED II). In FC mini-LED I, Ag-based metallic reflector (Ag/TiW/Pt/TiW/Pt/Ti/Pt/Ti/Pt/Ti/Pt/Ti) is employed as highly reflective p-ohmic contact. In FC mini-LED II, transparent ITO combined with full-angle Ti3O5/SiO2 DBR is used as highly reflective p-ohmic contact. Figures 6(b) and 6(c) show the cross-sectional SEM images of FC mini-LED I with Ag/TiW along A-A and B-B directions, as marked in Fig. 6(a). Figures 6(e) and 6(f) show the cross-sectional SEM images of FC mini-LED II with ITO/DBR along C-C and D-D directions, as marked in Fig. 6(d). Fig. 6. (a) Top-view SEM image of FC mini-LED I with Ag/TiW. Cross-sectional SEM images of FC mini-LED I with Ag/TiW milled by FIB along (b) A-A and (c) B-B directions. (d) Top-view SEM image of FC mini-LED II with ITO/DBR. Cross-sectional SEM images of FC mini-LED II with ITO/DBR milled by FIB along (e) C-C and (f) D-D directions. Figure 7(a) shows the current-voltage (I-V) characteristics of green FC mini-LED I and FC mini-LED II. The inset in Fig. 7(a) is the optical image of green FC mini-LEDs after flip-chip bonding on a proper package. At 10 mA, the forward voltages of green FC mini-LED I and FC mini-LED II are 2.9 V and 3.0 V, respectively. Owing to the high electrical conductivity of Ag/TiW p-type ohmic contact, the forward voltage of green FC mini-LED I is lower than that of green FC mini-LED II. Figure 7(b) shows the light output power-current (L-I) characteristics of green FC mini-LED I and FC mini-LED II. At 10 mA, LOPs of green FC mini-LED I and FC mini-LED II are 4.1 and 4.4 mW, respectively. The green FC mini-LED II exhibits 7.3% improvement over green FC mini-LED I in LOP due to the higher reflectivity of full-angle Ti3O5/SiO2 DBR in green light wavelength region. Fig. 7. (a) I-V characteristics of green FC mini-LED I and FC mini-LED II. The inset shows optical image of green FC mini-LEDs after flip-chip bonding on a proper package. (b) L-I characteristics of green FC mini-LED I and FC mini-LED II. (c) I-V characteristics of blue FC mini-LED I and FC mini-LED II. The inset shows optical image of blue FC mini-LEDs after flip-chip bonding on a proper package. (d) L-I characteristics of blue FC mini-LED I and FC mini-LED II. (e) Far-field radiation pattern of green FC mini-LED I and FC mini-LED II. (f) Far-field radiation pattern of blue FC mini-LED I and FC mini-LED II. Figure 7(c) shows I-V characteristics of blue FC mini-LED I and FC mini-LED II. The inset in Fig. 
7(c) is the optical image of blue FC mini-LEDs after flip-chip bonding on a proper package. At 10 mA, the forward voltages of blue FC mini-LED I and FC mini-LED II are 3.0 V and 3.1 V, respectively. Figure 7(d) shows the L-I characteristics of blue FC mini-LED I and FC mini-LED II. At 10 mA, the LOPs are 9.1 mW for blue FC mini-LED I and 9.8 mW for blue FC mini-LED II. The LOP of blue FC mini-LED II is increased by 7.7% in comparison with that of blue FC mini-LED I. The improved LOP for blue FC mini-LED II is due to the higher reflectivity of full-angle Ti3O5/SiO2 DBR in blue light wavelength region. Figures 7(e) and 7(f) show normalized far-field angular radiation patterns of green and blue FC mini-LEDs, respectively. It could be clearly seen that the light intensity of FC mini-LED II is significantly improved in side direction compared with FC mini-LED I. In addition, the improved light intensity in side direction of FC mini-LED II is beneficial for the mixing light in backlight system of LCDs. Therefore, when the FC mini-LED II is applied to form backlight system, fewer mini-LEDs are required to realize the same luminance uniformity in backlight system, reducing the cost and power consumption of displays. In summary, a novel full-angle Ti3O5/SiO2 DBR was developed to enhance the performances of blue and green FC mini-LEDs. The full-angle DBR consists of different single-DBR stacks optimized for discrete central wavelengths at 689, 647, 645, 631, 619, 606, 585, 543, 502, 497, 464, 437, 433, and 390 nm, which exhibits wider reflectance bandwidth and less angular dependence compared with conventional DBR structures. In addition, ITO combined with full-angle DBR and Ag/TiW serve as p-type ohmic contacts for FC mini-LEDs. The experiment results exhibit that the ITO/DBR shows higher reflectivity in comparison with Ag/TiW in the wavelength range of 420 to 580 nm as the incident angle of light increases from 0° to 60°. As a result, the light output powers (LOPs) of blue and green FC mini-LEDs with ITO/DBR is 9.8 mW and 4.4 mW at 10 mA, which are 7.7% and 7.3% higher than that of blue and green FC mini-LEDs with Ag/TiW, respectively. The full-angle Ti3O5/SiO2 DBR could provide a promising method for the realization of high-efficiency visible FC mini-LEDs. National Natural Science Foundation of China (51675386, 51775387, 52075394); National Youth Talent Support Program. The authors also acknowledge valuable support from the National Youth Talent Support Program. The authors declare no conflicts of interest. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. 1. T. Wu, C. W. Sher, Y. Lin, C. F. Lee, S. Liang, Y. Lu, S. W. H. Chen, W. Guo, H. C. Kuo, and Z. Chen, "Mini-LED and Micro-LED: Promising candidates for the next generation display technology," Appl. Sci. 8(9), 1557 (2018). [CrossRef] 2. E. L. Hsiang, Z. Yang, Q. Yang, Y. F. Lan, and S. T. Wu, "Prospects and challenges of mini-LED, OLED, and micro-LED displays," J. Soc. Inf. Disp. 29(6), 446–465 (2021). [CrossRef] 3. G. Tan, Y. Huang, M. C. Li, S. L. Lee, and S. T. Wu, "High dynamic range liquid crystal displays with a mini-LED backlight," Opt. Express 26(13), 16572–16584 (2018). [CrossRef] 4. E. L. Hsiang, Q. Yang, Z. He, J. Zou, and S. T. Wu, "Halo effect in high-dynamic-range mini-LED backlit LCDs," Opt. Express 28(24), 36822–36837 (2020). [CrossRef] 5. S. Kikuchi, Y. Shibata, T. Ishinabe, and H. 
Fujikake, "Thin mini-LED backlight using reflective mirror dots with high luminance uniformity for mobile LCDs," Opt. Express 29(17), 26724–26735 (2021). [CrossRef] 6. E. Chen, J. Guo, Z. Jiang, Q. Shen, Y. Ye, S. Xu, J. Sun, Q. Yan, and T. Guo, "Edge/direct-lit hybrid mini-LED backlight with U-grooved light guiding plates for local dimming," Opt. Express 29(8), 12179–12194 (2021). [CrossRef] 7. M. Y. Deng, E. L. Hsiang, Q. Yang, C. L. Tsai, B. S. Chen, C. E. Wu, M. H. Lee, S. T. Wu, and C. L. Lin, "Reducing Power Consumption of Active-Matrix Mini-LED Backlit LCDs by Driving Circuit," IEEE Trans. Electron Devices 68(5), 2347–2354 (2021). [CrossRef] 8. B. Tang, J. Miao, Y. Liu, H. Wan, N. Li, S. Zhou, and C. Gui, "Enhanced light extraction of flip-chip mini-LEDs with prism-structured sidewall," Nanomaterials 9(3), 1–8 (2019). [CrossRef] 9. Y. Huang, E. L. Hsiang, M. Y. Deng, and S. T. Wu, "Mini-LED, Micro-LED and OLED displays: present status and future perspectives," Light Sci. Appl. 9(1), 1–16 (2020). [CrossRef] 10. Y. Sun, J. Fan, M. Liu, L. Zhang, B. Jiang, M. Zhang, and X. Zhang, "Highly transparent, ultra-thin flexible, full-color mini-LED display with indium-gallium-zinc oxide thin-film transistor substrate," J. Soc. Inf. Disp. 28(12), 926–935 (2020). [CrossRef] 11. Y. M. Huang, T. Ahmed, A. C. Liu, S. W. H. Chen, K. L. Liang, Y. H. Liou, C. C. Ting, W. H. Kuo, Y. H. Fang, C. C. Lin, and H. C. Kuo, "High-Stability Quantum Dot-Converted 3-in-1 Full-Color Mini-Light-Emitting Diodes Passivated with Low-Temperature Atomic Layer Deposition," IEEE Trans. Electron Devices 68(2), 597–601 (2021). [CrossRef] 12. K. P. Chang, Y. T. Tsai, C. C. Yen, R. H. Horng, and D. S. Wuu, "Structural design and performance improvement of flip-chip AlGaInP mini light-emitting diodes," Semicond. Sci. Technol. 36(9), 095008 (2021). [CrossRef] 13. K. Masaoka and Y. Nishida, "Metric of color-space coverage for wide-gamut displays," Opt. Express 23(6), 7802–7808 (2015). [CrossRef] 14. Y. Huang, G. Tan, F. Gou, M. C. Li, S. L. Lee, and S. T. Wu, "Prospects and challenges of mini-LED and micro-LED displays," J. Soc. Inf. Disp. 27(7), 387–401 (2019). [CrossRef] 15. X. Zhao, B. Tang, L. Gong, J. Bai, J. Ping, and S. Zhou, "Rational construction of staggered InGaN quantum wells for efficient yellow light-emitting diodes," Appl. Phys. Lett. 118(18), 182102 (2021). [CrossRef] 16. W. Guo, N. Chen, H. Lu, C. Su, Y. Lin, G. Chen, Y. Lu, L. L. Zheng, Z. Peng, H. C. Kuo, C. H. Lin, T. Wu, and Z. Chen, "The Impact of Luminous Properties of Red, Green, and Blue Mini-LEDs on the Color Gamut," IEEE Trans. Electron Devices 66(5), 2263–2268 (2019). [CrossRef] 17. B. Lu, Y. Wang, B. R. Hyun, H. C. Kuo, and Z. Liu, "Color Difference and Thermal Stability of Flexible Transparent InGaN/GaN Multiple Quantum Wells Mini-LED Arrays," IEEE Electron Device Lett. 41(7), 1040–1043 (2020). [CrossRef] 18. R. H. Horng, H. Y. Chien, K. Y. Chen, W. Y. Tseng, Y. T. Tsai, and F. G. Tarntair, "Development and Fabrication of AlGaInP-Based Flip-Chip Micro-LEDs," IEEE J. Electron Devices Soc. 6, 475–479 (2018). [CrossRef] 19. Y. C. Lee, H. C. Kuo, C. E. Lee, T. C. Lu, and S. C. Wang, "High-performance (AlxGa1-x)0.5In0.5P-based flip-chip light-emitting diode with a geometric sapphire shaping structure," IEEE Photonics Technol. Lett. 20(23), 1950–1952 (2008). [CrossRef] 20. S. Zhou, X. Liu, H. Yan, Z. Chen, Y. Liu, and S. Liu, "Highly efficient GaN-based high-power flip-chip light-emitting diodes," Opt. Express 27(12), A669–A692 (2019). [CrossRef] 21. L. 
B. Chang, C. C. Shiue, and M. J. Jeng, "High reflective p-GaN/Ni/Ag/Ti/Au Ohmic contacts for flip-chip light-emitting diode (FCLED) applications," Appl. Surf. Sci. 255(12), 6155–6158 (2009). [CrossRef] 22. J. M. Smith, R. Ley, M. S. Wong, Y. H. Baek, J. H. Kang, C. H. Kim, M. J. Gordon, S. Nakamura, J. S. Speck, and S. P. Denbaars, "Comparison of size-dependent characteristics of blue and green InGaN microLEDs down to 1 µm in diameter," Appl. Phys. Lett. 116(7), 071102 (2020). [CrossRef] 23. Z. Zhuang, D. Iida, and K. Ohkawa, "Effects of size on the electrical and optical properties of InGaN-based red light-emitting diodes," Appl. Phys. Lett. 116(17), 173501 (2020). [CrossRef] 24. S. Lu, Y. Zhang, Z.-H. Zhang, B. Zhu, H. Zheng, S. T. Tan, and H. V. Demir, "High-Performance Triangular Miniaturized-LEDs for High Current and Power Density Applications," ACS Photonics 8(8), 2304–2310 (2021). [CrossRef] 25. H. Hu, B. Tang, H. Wan, H. Sun, S. Zhou, J. Dai, C. Chen, S. Liu, and L. J. Guo, "Boosted ultraviolet electroluminescence of InGaN/AlGaN quantum structures grown on high-index contrast patterned sapphire with silica array," Nano Energy 69, 104427 (2020). [CrossRef] 26. S. Zhou, S. Yuan, Y. Liu, L. J. Guo, S. Liu, and H. Ding, "Highly efficient and reliable high power LEDs with patterned sapphire substrate and strip-shaped distributed current blocking layer," Appl. Surf. Sci. 355, 1013–1019 (2015). [CrossRef] 27. J. J. Wierer, D. A. Steigerwald, M. R. Krames, J. J. O'Shea, M. J. Ludowise, G. Christenson, Y. C. Shen, C. Lowery, P. S. Martin, S. Subramanya, W. Götz, N. F. Gardner, R. S. Kern, and S. A. Stockman, "High-power AlGaInN flip-chip light-emitting diodes," Appl. Phys. Lett. 78(22), 3379–3381 (2001). [CrossRef] 28. O. B. Shchekin, J. E. Epler, T. A. Trottier, T. Margalith, D. A. Steigerwald, M. O. Holcomb, P. S. Martin, and M. R. Krames, "High performance thin-film flip-chip InGaN-GaN light-emitting diodes," Appl. Phys. Lett. 89(7), 071109–2007 (2006). [CrossRef] 29. B. P. Yonkee, E. C. Young, S. P. DenBaars, S. Nakamura, and J. S. Speck, "Silver free III-nitride flip chip light-emitting-diode with wall plug efficiency over 70% utilizing a GaN tunnel junction," Appl. Phys. Lett. 109(19), 191104–6 (2016). [CrossRef] 30. K. P. Hsueh, K. C. Chiang, Y. M. Hsin, and C. J. Wang, "Investigation of Cr- and Al-based metals for the reflector and Ohmic contact on n-GaN in GaN flip-chip light-emitting diodes," Appl. Phys. Lett. 89(19), 191122 (2006). [CrossRef] 31. J. Y. Kim, M. K. Kwon, I. K. Park, C. Y. Cho, S. J. Park, D. M. Jeon, J. W. Kim, and Y. C. Kim, "Enhanced light extraction efficiency in flip-chip GaN light-emitting diodes with diffuse Ag reflector on nanotextured indium-tin oxide," Appl. Phys. Lett. 93(2), 021121 (2008). [CrossRef] 32. C. H. Lin, C. F. Lai, T. S. Ko, H. W. Huang, H. C. Kuo, Y. Y. Hung, K. M. Leung, C. C. Yu, R. J. Tsai, C. K. Lee, T. C. Lu, S. C. Wang, and S. Member, "Enhancement of InGaN-GaN Indium-Tin-Oxide Flip-Chip Light-Emitting Diodes With TiO2-SiO2 Multilayer Stack Omnidirectional Reflector," IEEE Photonics Technology Lett. 18(19), 2050–2052 (2006). [CrossRef] 33. S. Zhou, X. Liu, Y. Gao, Y. Liu, M. Liu, Z. Liu, C. Gui, and S. Liu, "Numerical and experimental investigation of GaN-based flip-chip light-emitting diodes with highly reflective Ag/TiW and ITO/DBR Ohmic contacts," Opt. Express 25(22), 26615–26627 (2017). [CrossRef] 34. T. Zhi, T. Tao, B. Liu, Y. Yan, Z. Xie, H. Zhao, and D. 
Chen, "High Performance Wide Angle DBR Design for Optoelectronic Devices," IEEE Photonics J. 13(1), 1–6 (2021). [CrossRef] 35. X. Ding, C. Gui, H. Hu, M. Liu, X. Liu, J. Lv, and S. Zhou, "Reflectance bandwidth and efficiency improvement of light-emitting diodes with double-distributed Bragg reflector," Appl. Opt. 56(15), 4375–4380 (2017). [CrossRef] 36. W. Cai, W. Wang, B. Zhu, X. Gao, G. Zhu, J. Yuan, and Y. Wang, "Suspended light-emitting diode featuring a bottom dielectric distributed Bragg reflector," Superlattices Microstruct. 113, 228–235 (2018). [CrossRef] 37. S. Zhou, H. Xu, M. Liu, X. Liu, J. Zhao, N. Li, and S. Liu, "Effect of dielectric distributed bragg reflector on electrical and optical properties of GaN-based flip-chip light-emitting diodes," Micromachines 9(12), 650 (2018). [CrossRef] 38. S. Zhou, B. Cao, S. Yuan, and S. Liu, "Enhanced luminous efficiency of phosphor-converted LEDs by using back reflector to increase reflectivity for yellow light," Appl. Opt. 53(34), 8104–8110 (2014). [CrossRef] 39. J. Zou, Z. Yang, C. Mao, and S. T. Wu, "Fast-response liquid crystals for 6G optical communications," Crystals 11(7), 797 (2021). [CrossRef] 40. M. A. Kats, R. Blanchard, P. Genevet, and F. Capasso, "Nanometre optical coatings based on strong interference effects in highly absorbing media," Nat. Mater. 12(1), 20–24 (2013). [CrossRef] 41. H. Kim, M. Kaya, and S. Hajimirza, "Broadband solar distributed Bragg reflector design using numerical optimization," Sol. Energy 221, 384–392 (2021). [CrossRef] Article Order T. Wu, C. W. Sher, Y. Lin, C. F. Lee, S. Liang, Y. Lu, S. W. H. Chen, W. Guo, H. C. Kuo, and Z. Chen, "Mini-LED and Micro-LED: Promising candidates for the next generation display technology," Appl. Sci. 8(9), 1557 (2018). [Crossref] E. L. Hsiang, Z. Yang, Q. Yang, Y. F. Lan, and S. T. Wu, "Prospects and challenges of mini-LED, OLED, and micro-LED displays," J. Soc. Inf. Disp. 29(6), 446–465 (2021). G. Tan, Y. Huang, M. C. Li, S. L. Lee, and S. T. Wu, "High dynamic range liquid crystal displays with a mini-LED backlight," Opt. Express 26(13), 16572–16584 (2018). E. L. Hsiang, Q. Yang, Z. He, J. Zou, and S. T. Wu, "Halo effect in high-dynamic-range mini-LED backlit LCDs," Opt. Express 28(24), 36822–36837 (2020). S. Kikuchi, Y. Shibata, T. Ishinabe, and H. Fujikake, "Thin mini-LED backlight using reflective mirror dots with high luminance uniformity for mobile LCDs," Opt. Express 29(17), 26724–26735 (2021). E. Chen, J. Guo, Z. Jiang, Q. Shen, Y. Ye, S. Xu, J. Sun, Q. Yan, and T. Guo, "Edge/direct-lit hybrid mini-LED backlight with U-grooved light guiding plates for local dimming," Opt. Express 29(8), 12179–12194 (2021). M. Y. Deng, E. L. Hsiang, Q. Yang, C. L. Tsai, B. S. Chen, C. E. Wu, M. H. Lee, S. T. Wu, and C. L. Lin, "Reducing Power Consumption of Active-Matrix Mini-LED Backlit LCDs by Driving Circuit," IEEE Trans. Electron Devices 68(5), 2347–2354 (2021). B. Tang, J. Miao, Y. Liu, H. Wan, N. Li, S. Zhou, and C. Gui, "Enhanced light extraction of flip-chip mini-LEDs with prism-structured sidewall," Nanomaterials 9(3), 1–8 (2019). Y. Huang, E. L. Hsiang, M. Y. Deng, and S. T. Wu, "Mini-LED, Micro-LED and OLED displays: present status and future perspectives," Light Sci. Appl. 9(1), 1–16 (2020). Y. Sun, J. Fan, M. Liu, L. Zhang, B. Jiang, M. Zhang, and X. Zhang, "Highly transparent, ultra-thin flexible, full-color mini-LED display with indium-gallium-zinc oxide thin-film transistor substrate," J. Soc. Inf. Disp. 28(12), 926–935 (2020). Y. M. Huang, T. Ahmed, A. C. 
Table 1. The detailed thickness of each dielectric layer and central wavelength of each stack (partial)

Stack No. | Layer thickness (nm) | Central wavelength of designed stack (nm)
1 | 118.0 / 72.5 | 689
8 | 99.3 / 53.4 | 543
10 | 85.0 / 51.1 | 491
Mortality advantage among migrants according to duration of stay in France, 2004–2014 Matthew Wallace ORCID: orcid.org/0000-0002-8318-79521, Myriam Khlat2 & Michel Guillot2,3 The migrant mortality advantage is generally interpreted as reflecting the selection of atypically healthy individuals from the country of origin followed by the wearing off of selection effects over time, a process theorised to be accelerated by progressive and negative acculturation in the host country. However, studies examining how migrant mortality evolves over duration of stay, which could provide insight into these two processes, are relatively scarce. Additionally, they have paid little attention to gender-specific patterns and the confounding effect of age. In this study, we analyze all-cause mortality according to duration of stay among male and female migrants in France, with a particular focus on the role of age in explaining duration of stay effects. We use the Échantillon Démographique Permanent (Permanent Demographic Sample; EDP), France's largest socio-demographic panel and a representative 1% sample of its population. Mortality was followed-up from 2004 to 2014, and parametric survival models were fitted for males and females to study variation in all-cause mortality among migrants over duration of stay. Estimates were adjusted for age, duration of stay, year, education level and marital status. Duration of stay patterns were examined for both open-ended and fixed age groups. We observe a migrant mortality advantage, which is most pronounced among recent arrivals and converges towards the mortality level of natives with duration of stay. We show this pattern to be robust to the confounding effect of age and find the pattern to be consistent among males and females. Our novel findings show an intrinsic pattern of convergence of migrant mortality towards native-born mortality over time spent in France, independent from the ages at which mortality is measured. The consistent pattern in both genders suggests that males and females experience the same processes associated with generating the migrant mortality advantage. These patterns adhere to the selection-acculturation hypothesis and raise serious concerns about the erosion of migrant health capital with increasing exposure to conditions in France. One of the most enduring findings in the social sciences literature is that of the migrant mortality advantage, a phenomenon defined as lower mortality among international migrants relative to native populations in high-income host countries [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. The generally accepted – if rarely empirically observed – explanation for this pattern is the positive selection of atypically healthy and robust individuals from their origin countries [12]. Selection effects are theorized to be at their strongest just after migrants have arrived and wear off with time spent in the host country. This selection process may be accelerated by exposure to adverse social conditions and/or a progressive acculturation to prevailing beliefs, attitudes, and behaviors of the host society, which causes a shift in the disease patterns of migrants towards that of the host population [16]. However, it should be stated that this wearing off of selection effects can occur in the absence of any acculturation processes [17]. 
In lieu of comparable and reliable mortality data between migrants and natives in their origin countries (to directly examine selection), and rich longitudinal information on migrant health behaviors (to investigate acculturation), analyzing all-cause mortality according to duration of stay in the host country can offer insight into the migrant mortality advantage and its main explanations [18]. Studies into exactly how the migrant mortality advantage varies over duration of stay are scarce. In four of the studies we identified, there was no apparent effect of duration of stay on mortality [5, 18,19,20,21]. In the remaining six studies, there was some convergence in mortality towards the levels of natives over duration of stay [11, 14, 16, 22,23,24]. However, across these six studies, there were substantial differences in how quickly migrant mortality converged (and whether mortality converged fully), whether it was converging from an initial point of advantage or disadvantage, and according to the region or country of birth. Additionally, duration of stay was inconsistently defined across the studies, thus drawing comparisons across studies was difficult. Lastly, although several studies have investigated mortality from specific causes of death [25,26,27,28,29,30,31], their focus has been on determining the roles of genetic predisposition and environment in disease etiology [16, 32], rather than the main explanations of an overall mortality advantage. Therefore, our understanding of the patterns behind this low mortality remains unclear. In this study, we use the largest individual-level longitudinal data source in France, which is representative in terms of both population structure and mortality patterns by age and sex to national estimates from the French National Institute of Statistics and Economic Studies (Institut National de la Statistique et des Études Économiques, INSEE). Our overarching goal is to determine whether the migrant mortality advantage is pronounced among migrants who have recently arrived and converges with duration of stay, consistent with the proposed primary explanations of this phenomenon. To frame our research we present two specific research questions: Does the relationship between duration of stay and mortality vary by sex? Only two of the cited studies examined sex differences by duration of stay [11, 23] despite the fact that being male or female is theorized to play some role in whether migrants experience a mortality advantage or not [2, 9, 33]. In a recent review of the French literature, a striking feature was the relative mortality advantage experienced by males as compared with the disadvantage experienced by females [33]. The authors posited that this could relate to a weaker selection effect among female migrants who move for family reunification [2, 9, 33] and are therefore only admitted as "dependent" wives and not as "independent" women who have chosen to migrate [34]. Such an assertion is relevant, but needs to be considered carefully with respect to specific arrival cohorts, gender norms in the origin and host countries and sex-specific integration processes. Explicitly examining sex differentials in the migrant mortality advantage according to duration of stay represents the first key contribution of our study. Is there an intrinsic duration of stay pattern which is independent of age? 
Additionally, in most of the studies that we cited, the analyses adjusted for age and duration of stay but the ages over which mortality was measured were not fixed [11, 14, 19, 21, 23, 24]; it was only examined over wide or open-ended intervals. Only three of the studies fixed age into narrow bands of 15-years or less [16, 18, 22]. It is crucial when investigating the effect of duration to fix age into narrow bands, rather than simply adjusting for it, otherwise it becomes too difficult to disentangle whether the observed patterns are caused by age or duration. The reasons for this center around two demographic expectations. First, the average age of migrants who have lived in the host country for a short time will be lower than that of migrants who have lived in the host country a long time. Second, as age increases mortality levels increase but variability in relative terms around these levels decreases. Consequently, mortality differentials at older ages tend to be smaller than at younger ages. In the absence of this knowledge, if one were to observe a larger mortality advantage in a shorter duration group than in a longer one among adult migrants in a study using wide or open-ended age bands, one might interpret this as a duration effect when it could equally be an age effect [35, 36]. Thus, fixing the ages at which mortality is measured and letting duration of stay vary addresses this problem, allowing us to interpret the patterns as an intrinsic effect of duration of stay. This represents the second main contribution of our study. The French Permanent Demographic Sample (Échantillon Démographique Permanent; EDP) is France's largest socio-demographic panel and a representative 1% sample of the French population. It combines vital event information from official civil registers (births, deaths, and marriages) with census data. Eligibility for the sample is based on date of birth (being born on one of four dates in January, April, July, or October). The EDP is a dynamic sample that is refreshed over time. New people can enter the sample through being born in, or moving to, France (and being born on a sample date) and leave the sample through death or leaving the country. Although the EDP has been active since 1968, we only follow individuals from 2004 as the year of arrival question was only asked consistently at censuses from this point. However, for 1 month (October – the original sampling month of the EDP prior to the expansion of the sample in 2004) we also benefit from direct and indirect retrospective information on year of arrival before 2004 (at least at censuses in which the question was included). Unfortunately, retrospective information was not linked for the other sampling months after the 2004 expansion of the sample. To construct our main variable of interest – duration of stay – we relied upon two questions: year of arrival ("If you are foreign-born, what year did you arrive in France?") and previous place of residence ("Where did you live on date x?"). The rate of non-response in these two questions was substantial (20%) and selective. Fitting a logistic regression model with non-response as our explanatory variable and after adjustment for age, sex and education, we found that those who did not respond were more likely to die (OR: 1.30; 95% CIs: 1.20–1.41).
Consequently, we decided to limit our sample to those born in 1 month – October – due to availability of prior censuses and civil register information. For these migrants we could rely on retrospective data from three of five exhaustive censuses on year of arrival (1968, 1975, and 1999), past place of residence, and census presence to eliminate this undesirable level of non-response. This also provided the opportunity to validate information provided by migrants in the year of arrival question, as we could corroborate the date provided for those with long durations with these additional indicators of arrival. In short, we observed a reassuring level of consistency between dates. Given the EDP's sampling method, the omission of the other sampling months should not have a substantive impact on results (except for the loss of some statistical power in our regression models). Duration of stay was generated by subtracting year of arrival from year of first enumeration at a census point for each person. We gave natives an arbitrary value "88,888". We categorized the variable into bands 0–5, 5–10, 10–15, 15–20, 20–30, 30–40, 40–50, 50–60 and 60 years and over. We estimated adult mortality for ages 20+ by sex, fitting continuous-time survival models in which $u_i(t)$ denotes the hazard (or 'force') of mortality for individual i at age t and $u_0(t)$ denotes the baseline hazard (risk of mortality by age, which follows a Gompertz distribution, i.e. an exponential increase in mortality with age). $x_{ij}(t)$ represents our vector of explanatory covariates. $$ {u}_i(t)={u}_0(t)\times \mathit{\exp}\left\{\sum \limits_j{\beta}_j{x}_{ij}(t)\right\} $$ In the baseline model, the vector of covariates was age (the baseline hazard), duration of stay, and year of onset of risk. The latter was a categorical variable from 2004 to 2013. In the final model, the vector of covariates was expanded to include education level and marital status. Education level at censuses was categorized according to the International Standard Classification of Education (ISCED) and coded into categories: less than primary, primary, secondary (secondary 1st and 2nd cycle) and tertiary (post-secondary to pre-university and above). Marital status was taken as provided from the census: single, married, divorced and widowed. Figure 1 presents the study design. Our study period began in 2004 (the onset of the first rolling census) and ended in October 2014, the latest point for which we had death data and the final collection year of the second rolling census. Individuals were considered to be "at risk" from the year they were enumerated at a rolling census point for the first time between 2004 and 2013, as long as they were aged 20+. We followed 10 annual entry cohorts (the first one in 2004 [entry cohort 2004; Fig. 1] and the final one in 2013 [entry cohort 2013; Fig. 1]). We followed these cohorts for 5-years [cohorts 2004–9], or up until the end of the study period [cohorts 2010–13], whichever came first. We intentionally restricted the length of follow-up in earlier cohorts to limit the impact of 'censoring bias' (underestimation of migrant mortality due to an inability to remove from the risk set any migrants who have left the host country [5, 10, 37, 38]). Limiting follow-up ensured that any bias introduced through an inability to censor leavers was minimized.
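To make the model above concrete, the following is a minimal, self-contained sketch (not the authors' Stata code) of how a Gompertz proportional-hazards likelihood with delayed entry (age at onset of risk) and right censoring can be written down and maximized. The parameter values, the single binary covariate, and the simulated data are purely hypothetical; the optimizer choice is likewise an arbitrary illustration.

```python
# Minimal sketch of a Gompertz proportional-hazards model with delayed entry
# (age at onset of risk) and right censoring, fitted by maximum likelihood.
# Hypothetical data and parameter values; not the authors' Stata code.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, entry_age, exit_age, died, X):
    """params = [log_a, b, beta_1, ..., beta_p] for hazard a*exp(b*t)*exp(X @ beta)."""
    log_a, b, beta = params[0], params[1], np.asarray(params[2:])
    lin = X @ beta                                   # log relative hazard per person
    log_h = log_a + b * exit_age + lin               # log hazard at exit age
    # Cumulative hazard accrued between entry and exit ages (Gompertz baseline)
    H = (np.exp(log_a) / b) * (np.exp(b * exit_age) - np.exp(b * entry_age)) * np.exp(lin)
    return -(np.sum(died * log_h) - np.sum(H))

# --- hypothetical example: one binary covariate (e.g. migrant vs. native) ---
rng = np.random.default_rng(1)
n = 20_000
entry = rng.uniform(20, 80, n)                       # age at first enumeration
X = rng.integers(0, 2, (n, 1)).astype(float)
a0, b0, beta0 = 1e-4, 0.09, -0.5                     # "true" values for the simulation
u = rng.uniform(size=n)
# Invert the conditional survival function to draw death ages given survival to entry
t_death = entry + np.log(1 - b0 * np.log(u) / (a0 * np.exp(b0 * entry + X[:, 0] * beta0))) / b0
exit_age = np.minimum(t_death, entry + 5.0)          # at most five years of follow-up
died = (t_death <= entry + 5.0).astype(float)

res = minimize(neg_log_likelihood, x0=np.array([-9.0, 0.1, 0.0]),
               args=(entry, exit_age, died, X), method="Nelder-Mead")
log_a_hat, b_hat, beta_hat = res.x
print(f"estimated hazard ratio for the covariate: {np.exp(beta_hat):.2f}")  # near exp(-0.5)
```

In the paper itself the corresponding models are estimated in Stata 15.1, with duration-of-stay bands, year of onset of risk, education level and marital status entering as the covariates described above.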
We note that we experimented with several different follow-up lengths, but this did not have a substantive impact upon our main findings; 5-years represents a compromise between sufficient power and minimizing censoring bias. Individuals in each entry cohort were followed until their death or until the end of the study.

Fig. 1. Study design: onset of risk and follow-up periods of annual entry cohorts

Our analytical strategy entails first examining the relationship between duration of stay and mortality over an open-ended age interval (20+) as is common in the literature. This analysis serves to show the difficulties we encounter in trying to isolate the role of duration of stay in migrant mortality patterns when age is not fixed. Next, we fix age into narrow 10-year bands (60–70 and 70–80) but continue to allow duration of stay to vary to see if we can identify an intrinsic duration effect (research Q2). The analyses are conducted separately for males and females (research Q1) in Stata 15.1. We compare all-cause mortality of 29,118 foreign-born males (1,188 deaths) with 175,842 native males (10,368 deaths) and 30,959 foreign-born females (1,063 deaths) with 193,288 native females (10,395 deaths). Additional file 1: Tables S5–S7 also include additional descriptive information on the composition of each duration group according to characteristics: region of birth, education level, age, year of arrival, and age at arrival. These descriptives will help us to interpret any patterns we might observe. Age at arrival is particularly important, as the mortality of migrants is the summation of both their duration of stay and age at arrival. Given age is such a strong determinant of mortality, this age/duration of stay/age at arrival issue constitutes an extension of the non-identifiability problem of age/period/cohort models in the study of time trends [25]. Age at arrival is associated with variation in selection. Case in point, a migrant arriving aged 10 with his or her parents may not be subjected to the same level of selection as a migrant arriving aged 25 who has chosen to move to the host country for work. A recent study documented excess mortality among child migrants (aged less than 20-years) in the U.S., France and the U.K. The authors argued that these migrants will play a large role in observed relationships between duration of stay and mortality and that for them, a lack of positive selection may play a more crucial role than duration of stay effects [39]. Similarly, a study in Sweden combined information on age at arrival and duration of stay to demonstrate an excess mortality among migrants arriving before age 18 and only a moderate duration effect in specific migrant groups [21]. We thus pay close attention to age at arrival when interpreting findings. Figure 2 presents hazard ratios (HRs) for all-cause mortality by duration of stay among adult migrants (aged 20+) relative to natives born in France. The hazard ratios are displayed from the final model additionally adjusting for educational level and marital status (regression tables for the baseline and final model are both available in the Additional file 1: Tables S1 and S2). For each duration category, we include boxplots detailing the age composition of migrants to highlight the close relationship between the two variables. For both sexes, we observe a convergence with duration, though the pattern is more firmly established among males.
HRs are pronounced among migrants who have arrived in the past 5 years (males HR = 0.39; 95% CIs 0.25–0.63; females HR = 0.49; 95% CIs 0.27–0.89) and move closer to 1 with duration of stay. For males, mortality has converged by 60 years + (HR = 0.95; 95% CIs 0.85–1.06), but among females a migrant mortality advantage persists (HR = 0.83; 95% CIs 0.75–0.91). Our post-estimation tests show that mortality in the shortest duration of stay band is statistically significantly different from the longest duration of stay band for males (HR = 2.39; p < 0.05) and for females (HR = 1.68; p < 0.05).

Fig. 2 Hazard ratios (log) for all-cause mortality among male and female migrants over duration of stay, combined with box plots of age for each duration category, ages 20+. Survival analyses adjusted for age, baseline year of entry, education level, and marital status. See Tables S1 and S2 of Additional file 1 for regression tables

Taken at face value, the pattern is consistent with what would be expected in the presence of selection effects and progressive acculturation with increasing duration of stay. This is further corroborated by the additional compositional information in Additional file 1: Tables S5 (males) and S6 (females). We find that in the duration categories where the advantage is most pronounced (< 15 years), migrants are more highly educated (this is, of course, adjusted for in the models) and almost half arrive from Sub-Saharan Africa and Other Europe (migrant streams known to be highly skilled and educated) [40]. These migrants also arrive at prime working ages (as is evidenced by information on the age at arrival), with few arriving younger than age 18 (the age at which it is no longer possible for non-EU migrant children to arrive on family reunification visas). Moreover, the older duration categories are typically associated with migration from Algeria (a colony until 1962), Morocco and Tunisia (which were protectorates until 1955) and Southern Europe (older migration streams in France that were known to be less skilled and lower educated) [40]. However, Additional file 1: Tables S5 and S6 provide us with two important pieces of information that lead us to question this interpretation. First, average age at arrival decreases with duration of stay. This indicates that many migrants in longer duration bands arrived in France as children. Second, average age increases with duration of stay (we also refer readers to the box plots in Fig. 2). It therefore becomes impossible to determine whether the observed patterns are generated by duration of stay (a wearing off of selection effects over time, accelerated by acculturation), age at arrival (weaker selection in longer durations as more migrants arrived as children), or age (a reduction in the variability of mortality levels in longer duration bands as migrants are older). Therefore, to assess the role of duration of stay in the migrant mortality advantage, we extended the analysis by fitting the same models but fixing age at two age groups at baseline: 60–70 and 70–80 years. This has three consequences. First, age no longer increases over the duration of stay categories. To elaborate, duration of stay can continue to vary from 0 to 70 [in the first age band] and 0 to 80 [in the second age band], but the age of migrants can only vary within a very narrow interval (from 60 to 70 [in the first age band] and 70 to 80 [in the second age band]). Second, by fixing age, we create a near perfect correlation between age at arrival and duration of stay.
Consequently, if we assume a constant selection by age at arrival past age 18 (i.e. the age at which arriving on a family reunification visa as a child is no longer possible), then we propose that the effects we observe can be considered as isolated duration of stay effects. Of course, the two longest duration of stay bands are composed of migrants who arrived exclusively as children, so the same is not true for migrants in these two categories. Third, because variability in mortality is lower in the two age groups, should we observe a more pronounced mortality advantage among those with shorter durations than those with longer ones, then this will provide solid evidence of a duration effect. Figure 3 presents HRs for the two age groups (baseline and final models are in the Additional file 1: Tables S3 and S4). For both sexes, we continue to observe convergence in HRs with increasing duration of stay, even after fixing age. The continued presence of convergence at these older ages is promising and in both age groups is well established among males (but less so among females). For example, track the HR of males aged 60–70 who have arrived in the past 10 years (HR = 0.39; 95% CIs 0.17–0.87) to that of males who arrived 60 years ago (HR = 0.96; 95% CIs 0.68–1.36). Similarly, track the HR of males aged 70–80 who have arrived in the past 10 years (HR = 0.54; 95% CIs 0.26–1.13) to that of males who arrived 70 years ago (HR = 0.91; 95% CIs 0.74–1.09). For females, the greater fluctuation we observe (in fact in both Fig. 1 and Fig. 2) is likely explained by a lower number of death events, particularly at shorter durations (note the wider CIs relative to males and the number of deaths in Tables S5 to S7 in the Additional file 1). Nonetheless, for both sexes, the very low HRs for the most recent arrivals (and therefore the size of mortality differences between migrants and natives) remain quite striking, especially considering age is fixed in categories where variability in mortality tends to be lower. Post-estimation tests show that in model a (ages 60–70) mortality in the shortest duration band is statistically significantly different from the longest duration band for males (HR = 2.48; p < 0.05) and marginally significant for females (HR = 3.71; p < 0.10). In model b (ages 70–80), the test is not significant for males (HR = 1.69; p > 0.10) but marginally significant for females (HR = 3.33; p < 0.10).

Fig. 3 Hazard ratios (log) for all-cause mortality among male and female migrants over duration of stay, fixed at age bands 60–70 (top) and 70–80 (bottom). Survival analyses adjusted for age, baseline year of entry, education level, and marital status. See Tables S3 and S4 of Additional file 1 for regression tables

The additional compositional characteristics in Additional file 1: Table S7 (we do not show age, age at arrival or year of arrival given that age is fixed) show that migrants in the most recent duration categories originate largely from countries in the European Union (particularly Great Britain, Germany, Holland, and Belgium). Most notably, over 60% of the duration of stay category 0–10 is composed of migrants arriving from Great Britain. Additionally, the individuals in the two most recent duration of stay categories are remarkably highly qualified relative to natives and to individuals in the other duration of stay categories (again, education level is adjusted for in the models). We elaborate upon these interesting findings in the discussion.
One thing that we must bear in mind for the two longest duration categories is the effect of age at arrival and its association with migration selection effects (or lack thereof among migrants arriving as children, who almost exclusively comprise these groups). In short, are the converged values we observe a consequence of the long time migrants have spent living in France, or because they arrived as children and were not positively selected in the first place? This remains an open question for future investigation. To the best of our knowledge, for the first time in France we have observed a migrant mortality advantage that was strongest among the most recent arrivals and then converged towards the mortality of the native population with duration of stay. Importantly, we showed this pattern to be robust to the confounding effect of age, and to some extent age at arrival, and found it to be consistent among male and female migrants. Such patterns adhere to the theorized narrative of erosion over time of the health advantage generated by initial selection, which does not provide lifelong protection. The patterns also accord with the idea of exposure to adverse conditions and a gradual and negative acculturation to the prevailing health behaviors of the host society, although we cannot favor either process with the data we have available. Our findings complement those from a French longitudinal study examining changes in diet and physical activity among Tunisian migrants [41] and the wider international literature examining acculturation in migrant health behaviors [42,43,44,45,46,47,48]. Contrary to our expectations, the pattern of advantage followed by convergence was present among both male and female migrants, and there was no difference between the two. This challenges the perception in the literature that the migrant mortality advantage is gendered. In high-income countries, female migrants can be admitted as dependent wives or as independent women integrated into the workforce [34]. This dependent/independent balance in all likelihood varies depending upon the composition of the migrant population. Notably, we should consider origin country (especially its level of gender equality and partnership norms) and year of arrival (given improving gender equality over time in both the origin and host countries, leading to improved access to education and greater diversity in labor market opportunities). Additional file 1: Tables S5 and S6 provide the composition of migrant females and males by country of birth, sex, year of arrival and age at arrival. Women arriving in the past 10 years are highly qualified (over 50% have tertiary level education) and their education level distributions across duration bands are comparable with males (albeit with somewhat higher levels having primary or less in the longer duration bands). Women arriving recently do so from countries associated with skilled labor or education migration to France (Other Europe [largely from the EU], Algeria, and Sub-Saharan Africa). Furthermore, the initial mortality advantage observed among women was quite similar to that experienced by males and, unlike males, did not fully converge with rising duration of stay. This should be considered all the more striking in light of the double discrimination faced by female migrants in many host countries (as both women and ethnic minorities) [34], which presents an additional factor in the negative assimilation process which is not experienced by males.
It is fascinating that we continue to observe marked mortality advantages among migrants arriving between ages 60 and 80, especially given that mortality differences between natives and migrants should be smaller at these ages as the variability in mortality is lower. Additional file 1: Table S7 showed that migrants aged 60 to 80 who had arrived recently to live in France were highly educated and originated largely from other European nations, particularly Great Britain. With this in mind, we offer two tentative explanations for this old age migrant mortality advantage. For migrants arriving at pre-retirement age (retirement age in France depends upon date of birth: some can retire from 60; everyone must retire by 70 [49]) most will still be moving for work and will continue to be subjected to the selection processes associated with the healthy migrant and healthy worker effects. France's reunification policy relates only to spouses and children; not the elderly parents of migrants [50]. For migrants arriving post-retirement, our explanation relates to international retirement migration (IRM) [51]. France is a popular host country for IRM, especially for British citizens [who comprise a staggering 60% of the duration category 0–10 for males and females]. IRM is socially selective, most of those who move countries to retire are either 'early retired' or 'active young' elderly with unique levels of wealth and income (and so presumably good health) [51]. The mortality risk of international retirement migrants is not likely to be reflective of individuals of the same age they move to live amongst. The main strength of this study lies in the use of a large and representative longitudinal data source to investigate detailed variation in migrant mortality over duration of stay, with a particular emphasis on the roles of age and sex. The main limitation of the study is that we were unable to make full use of the entire EDP data. Consequently, we were unable to investigate specific variation by the country of birth or education level of migrants, which could have provided fresh insight into the two primary explanations of the migrant mortality advantage. Nonetheless, the main contribution of this study has been to isolate an intrinsic duration of stay effect in patterns of migrant mortality. Our findings should promote renewed interest in the experiences of migrants after they arrive in the host country, as it is vital to determine whether the convergence we have observed is an unavoidable one (selection effects wearing off) or accelerated by the lifelong hardship to which migrants can be exposed in the host country. In the latter case, targeted health policies would be needed to preserve the initial substantial health capital of migrants and prevent (or slow) this erosion. Achieving this goal would help to maximize the potential social, cultural, and economic contributions of migrants and support their healthy ageing. Future studies could investigate how the effect of duration varies over country of birth and other socio-demographic characteristics and give more salience to gender dynamics and equality in countries of origin. Additionally, examining how causes of death vary over duration of stay in relation to all-cause mortality would provide crucial insight into acculturation and the migrant mortality advantage. 
CIs: Confidence intervals; EDP: Échantillon Démographique Permanent; HRs: Hazard ratios; IRM: International Retirement Migration; ISCED: International Standard Classification of Education; OR: Odds ratios; UK: United Kingdom; US: United States

Marmot MG, Adelstein AM, Bulusu L. Lessons from the study of immigrant mortality. Lancet. 1984;323(8392):1455–7. Khlat M, Courbage Y. Mortality and Causes of Death of Moroccans in France, 1979–91. Popul Engl Ed. 1996;8:59–94. Razum O, Zeeb H, Akgun HS, Yilmaz S. Low overall mortality of Turkish residents in Germany persists and extends into a second generation: merely a healthy migrant effect? Trop Med Int Health. 1998;3(4):297–303. Abraido-Lanza AF, Dohrenwend BP, Ng-Mak DS, Turner JB. The Latino mortality paradox: a test of the "Salmon Bias" and healthy migrant hypotheses. Am J Public Health. 1999;89(10):1543–8. Anson J. The migrant mortality advantage: a 70 month follow-up of the Brussels population. Eur J Popul. 2004;20:191–8. Deboosere P, Gadeyne S. Adult migrant mortality advantage in Belgium: evidence using census and register data. Popul Engl Ed. 2005;60(5/6):655–98. Palloni A, Arias E. Paradox lost: explaining the Hispanic adult mortality advantage. Demography. 2004;41(3):385–415. Turra CM, Elo IT. The impact of Salmon Bias on the Hispanic mortality advantage: new evidence from social security data. Popul Res Policy Rev. 2008;27(5):515–30. Boulogne R, Jougla E, Breem Y, Kunst AE, Rey G. Mortality differences between the foreign-born and locally-born population in France (2004–2007). Soc Sci Med. 2012;74(8):1213–23. Wallace M, Kulu H. Low immigrant mortality in England and Wales: a data artefact? Soc Sci Med. 2014;120:100–9. Vandenheede H, Willaert D, De Grande H, Simoens S, Vanroelen C. Mortality in adult immigrants in the 2000s in Belgium: a test of the 'healthy-migrant' and the 'migration-as-rapid-health-transition' hypotheses. Trop Med Int Health. 2015;20(12):1832–45. Guillot M, Khlat M, Elo IT, Solignac M, Wallace M. Age variation in the migrant mortality advantage. PLoS One. 2018;13(6):e0199669. Uitenbroek DG. Mortality trends among migrant groups living in Amsterdam. BMC Public Health. 2015;15:1187. Hajat A, Blakely T, Dayal S, Jatrana S. Do New Zealand's immigrants have a mortality advantage? Evidence from the New Zealand census-mortality study. Ethnicity & Health. 2010;15(5):531–47. Vang ZM, Sigouin J, Flenon A, Gagnon A. Are immigrants healthier than native-born Canadians? A systematic review of the healthy immigrant effect in Canada. Ethnicity & Health. 2017;22(3):209–41. Harding S. Mortality of migrants from the Indian subcontinent to England and Wales: effect of duration of residence. Epidemiology. 2003;14(3):287–92. Palloni A, Ewbank DC. Selection Processes in the Study of Racial and Ethnic Differentials in Adult Health and Mortality. In: Anderson NB, Bulatao RA, Cohen B, editors. Critical Perspectives on Racial and Ethnic Differences in Health in Late Life. Washington DC: National Academies Press; 2004. p. 171–225. Bos V, Kunst AE, Garssen J, Mackenbach JP. Duration of residence was not consistently related to immigrant mortality. J Clin Epidemiol. 2007;60(6):585–92. Lehti V, Gissler M, Markkula N, Suvisaari J. Mortality and causes of death among the migrant population of Finland in 2011–13. Eur J Pub Health. 2016;27(1):117–23. Ronellenfitsch U, Kyobutungi C, Becher H, Razum O. All-cause and cardiovascular mortality among ethnic German immigrants from the former Soviet Union: a cohort study. BMC Public Health. 2006;6:16.
Juárez SP, Drefahl S, Dunlavya A, Rostilaa M. All-cause mortality, age at arrival, and duration of residence among adult migrants in Sweden: A population-based longitudinal study. Soc Sci Med (Pop Health). 2018;6:16–25. Harding S. Mortality of migrants from the Caribbean to England and Wales: effect of duration of residence. Int J Epidemiol. 2004;33(2):382–6. Hammar N, Kaprio J, Hagström U, Alfredsson L, Koskenvuo M, Hammar T. Migration and mortality: a 20 year follow up of Finnish twin pairs with migrant co-twins in Sweden. J Epidemiol Community Health. 2002;56(5):362–6. Syse A, Strand BH, Naess O, Steingrímsdóttir OA, Kumar BN. Differences in all-cause mortality: a comparison between immigrants and the host population in Norway 1990–2012. Demogr Res. 2016;34(22):615–56. Khlat M, Vail A, Parkin M, Green A. Mortality from melanoma in migrants to Australia: variation by age at arrival and duration of stay. Am J Epidemiol. 1992;135(10):1103–13. Stirbu I, Kunst AE, Vlems FA, Visser O, Bos V, Deville W, Nijhuis HG, Coebergh JW. Cancer mortality rates among first and second generation migrants in the Netherlands: convergence toward the rates of the native Dutch population. Int J Cancer. 2006;119(11):2665–72. Gray L, Harding S, Reid A. Evidence of divergence with duration of residence in circulatory disease mortality in migrants to Australia. Eur J Pub Health. 2007;17(6):550–4. McCredie M, Williams S, Coates M. Cancer mortality in east and southeast Asian migrants to New South Wales, Australia, 1975–1995. Br J Cancer. 1999;79:127761282. McMichael AJ, McCall MG, Hartshore JM, Woodings TL. Patterns of gastro-intestinal cancer in european migrants to Australia: the role of dietary change. Int J Cancer. 1980;25(4):431–7. McMichael AJ, Giles GG. Cancer in migrants to Australia: extending the descriptive epidemiological data. Cancer Res. 1988;48(3):751–6. McCredie M, Williams S, Coates M. Cancer mortality in migrants from the British Isles and continental Europe to New South Wales, Australia, 1975–1995. Int J Cancer. 1999;83(2):179–85. Parkin DM, Khlat M. Studies of cancer in migrants: rationale and methodology. Eur J Cancer. 1996;32A(5):761–71. Khlat M, Guillot M. Health and mortality patterns among migrants in France. In: Trovato F, editor. Migration, Health and Survival. United Kingdom: Edward Elgar Publishing; 2017. p. 193–213. Llacer A, Zunzunegui MV, del Amo J, Mazarrasa L, Bolumar F. The contribution of a gender perspective to the understanding of migrants' health. J Epidemiol Community Health. 2007;61(Suppl 2):4–10. Houweling TAJ, Kunst AE, Huisman M, Mackenbach JP. Using relative and absolute measures or monitoring health inequalities: experiences from cross-national analyses on maternal and child health. Int J Equity Health. 2007;6(15):1–9. Eikemo TA, Skalicka V, Avendano M. Variations in health inequalities: are they a mathematical artefact? Int J Equity Health. 2009;8(32):1–5. Kibele E, Scholz R, Shkolnikov VM. Low migrant mortality in Germany for men aged 65 and older: fact or artifact? Eur J Epidemiol. 2008;23(6):389–93. Scott AP, Timaeus IM. Mortality differentials 1991-2005 by self-reported ethnicity: findings from the ONS longitudinal study. J Epidemiol Community Health. 2013;67(9):743–50. Guillot M, Khlat M, Elo I, Solignac M, Wallace M. Understanding age variations in the migrant mortality advantage: an international comparative perspective. PLoS One (in press. 2018. Ichou M, Goujon A. 
DIPAS: Immigrants' educational attainment: a mixed picture, but often higher than the average in their country of origin. Pop Soc. 2017;541:1–4. Mejean C, Traissac P, Eymard-Duvernay S, El Ati J, Delpeuch F, Maire B. Influence of socio-economic and lifestyle factors on overweight and nutrition-related diseases among Tunisian migrants versus non-migrant Tunisians and French. BMC Public Health. 2007;7:265. Reiss K, Lehnhardt J, Razum O. Factors associated with smoking in immigrants from non-western to western countries - what role does acculturation play? A systematic review. Tob Induc Dis. 2015;13(1):11. Smith NR, Kelly YJ, Nazroo JY. The effects of acculturation on obesity rates in ethnic minorities in England: evidence from the health survey for England. Eur J Pub Health. 2012;22(4):508–13. Lara M, Gamboa C, Kahramanian MI, Morales LS, Bautista DE. Acculturation and Latino health in the United States: a review of the literature and its sociopolitical context. Annu Rev Public Health. 2005;26:367–97. Abraido-Lanza AF, Chao MT, Florez KR. Do healthy behaviors decline with greater acculturation? Implications for the Latino mortality paradox. Soc Sci Med. 2005;61(6):1243–55. Riosmena F, Wong R, Palloni A. Migration selection, protection, and acculturation in health: a binational perspective on older adults. Demography. 2013;50(3):1039–64. Riosmena F, Everett BG, Rogers RG, Dennis JA. Negative acculturation and nothing more? Cumulative disadvantage and mortality during the immigrant adaptation process among Latinos in the United States. Int Migr Rev. 2015;49(2):443–78. Kuerban A. Healthy migrant effect on smoking behavior among Asian immigrants in the United States. J Immigr Minor Health. 2016;18(1):94–101. Retraite dans le privé : âge légal de départ à la retraite [https://www.service-public.fr/particuliers/vosdroits/F14043]. Accessed 2 Apr 2018. Regroupement familial [https://www.service-public.fr/particuliers/vosdroits/F11166]. Accessed 2 Apr 2018. King R, Warnes AM, Williams AM. International retirement migration in Europe. International journal of population geography : IJPG. 1998;4(2):91–111. research reported in this manuscript was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) of the National Institutes of Health (NIH) under award number R01HD079475. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The data that support the findings of this study are available from INSEE (Institut national de la statistique et des études économiques; National Institute of Statistics and Economic Studies) through CASD (Centre d'accès Sécurisé aux Données; the Secure Data Access Centre) but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available to the readers of this article. Demography Unit, Department of Sociology, Stockholm University, Stockholm, Sweden Matthew Wallace Institut national d'études démographiques, French National Demographic Institute, 133 Boulevard Davout, 75020, Paris, France Myriam Khlat & Michel Guillot Population Studies Center, University of Pennsylvania, 242 McNeil Building, Philadelphia, PA19104, USA Michel Guillot Myriam Khlat Michel Guillot (MG), Myriam Khlat (MK) and Matthew Wallace (MW) conceived the study. MW cleaned and set up the data. MW, MG, and MK analyzed the data. MW prepared the table and Fig. MW drafted the paper. 
All authors were involved with data interpretation, critical revisions of the paper, and approved the final version. MW acts as the guarantor. Correspondence to Matthew Wallace. Additional file1: Table S1. Hazard ratios (log) for all-cause mortality among male migrants over duration of stay, ages 20+, baseline and final models. (1) baseline year of entry adjusted for but not shown (2) significant levels at ** p < 0.01, * p < 0.05, and + p < 0.10. Table S2. Hazard ratios (log) for all-cause mortality among female migrants over duration of stay, ages 20+, baseline and final models. (1) baseline year of entry adjusted for but not shown (2) significant levels at ** p < 0.01, * p < 0.05, and + p < 0.10. Table S3. Hazard ratios (log) for all-cause mortality among male and female migrants over duration of stay, fixed at age bands 60–70, baseline and final models. (1) baseline year of entry adjusted for but not shown (2) significant levels at ** p < 0.01, * p < 0.05, and + p < 0.10. Table S4. Hazard ratios (log) for all-cause mortality among male and female migrants over duration of stay, fixed at age bands 70–80, baseline and final models. (1) baseline year of entry adjusted for but not shown (2) significant levels at ** p < 0.01, * p < 0.05, and + p < 0.10. Table S5. Compositional characteristics (education, country of origin, year and age of arrival, and age) of male migrants over duration of stay, ages 20+. Table S6. Compositional characteristics (education, country of origin, year and age of arrival, and age) of female migrants over duration of stay, ages 20+. Table S7. Compositional characteristics (education and country of origin only) of male and female migrants over duration of stay, ages 60–80. (XLSX 43 kb) Wallace, M., Khlat, M. & Guillot, M. Mortality advantage among migrants according to duration of stay in France, 2004–2014. BMC Public Health 19, 327 (2019). https://doi.org/10.1186/s12889-019-6652-1 Migrant mortality advantage Healthy migrant effect All-cause mortality Duration of stay Length of residence
Journal of Shipping and Trade
Evolving structure of the maritime trade network: evidence from the Lloyd's Shipping Index (1890–2000)
Zuzanna Kosowska-Stamirowska ORCID: orcid.org/0000-0001-6917-15941, César Ducruet1 & Nishant Rai2
Journal of Shipping and Trade volume 1, Article number: 10 (2016)
Over 90 % of world trade volumes are carried by sea nowadays. This figure shows the massive importance of the maritime trade routes for the world economy. However, the evolution of their structure over time is a blind spot in the modern literature. In this paper we characterise and study topological changes of the maritime trade network and how they translate into navigability properties of this network. In order to do so we use tools from Graph Theory and Computer Science to describe the maritime trade network at different points in time between 1890 and 2000, based on data on daily movements of ships. We also propose two new measures of network navigability based on a random walk procedure: random walk discovery and escape difficulty. By studying the maritime network evolution we find that it optimizes over time, increasing its navigability while doubling the number of active ports. Our findings suggest that, unlike other real world evolving networks studied in the literature to date, the maritime network does not densify over time and its effective diameter remains constant. The last decade has witnessed a surge in maritime flow visualization and maritime network analysis, especially at the global scale. This stands in sharp contrast with the very few works on such themes produced along the previous century. In the 1940s already, world maps showed the precise geographic distribution of British vessels (Siegfried 1940) and of US maritime trade (Ullman 1949). But it is only in the late 1960s that geographers, claiming the need to include maritime linkages in the analysis of ports, port systems, and port hinterlands (Rimmer 2012), pioneered the application of Graph Theory to maritime transport (Robinson 1968), but on a more local scale. Graph Theory, which had been so popular for the analysis of other transport systems (e.g. road, rail, river, air, and telecommunications), lost ground in the discipline. Maritime transport remained the focus of broad cartographies of volume and distribution of main routes until the late 1990s, when other geographers proposed to measure the topological structure of the global container shipping network (Joly 1999) and to analyze the global strategies of ocean carriers such as Maersk and CMA-CGM (Frémont 2015). The explosion of computer capacity, the revival of the "science of networks", and the growing availability of maritime traffic data soon gave birth to numerous analyses of global maritime flows, which greatly varied in objectives and outcomes. Physicists for instance found it rather natural to investigate the topological properties of the global maritime network, responsible for no less than 90 % of world trade volumes, but focusing primarily on container shipping (Deng et al. (2009); Doshi et al. (2012); Hu and Zhu (2009)). They stressed its belonging to the classes of scale-free and small-world networks using standard measures from the then buoyant research field of complex networks. Other contributions of the kind consisted in comparing the networks of different fleet types (Kaluza et al.
2010) to better understand marine bioinvasions, analyzing the inter-similarity of the container shipping and airline networks (Parshani et al. (2010); Woolley-Meza et al. (2012)), and constructing global port-to-port matrices to estimate the impact of various scenarios on flow distribution (Tavasszy et al. (2011); Wang et al. (2012)). Geographers also contributed to this dynamic by mapping the nodal regions and centrality of pivotal hub ports for container shipping (Ducruet and Notteboom (2012); Gonzalez-Laxe et al. (2012); Wang and Wang (2011)), general cargo (Pais Montes et al. 2012), and in the multiplex graph (Ducruet 2013). Li et al. (2015) as well as Xu et al. (2015) departed from the classical view of port nodes to analyze the evolution of a global container shipping network made of large regions. Most of the other contributions to the field consisted in analyses of local and regional maritime networks using similar methodologies (see Tovar et al. (2015) and Ducruet (2015) for a synthesis). This rapid review of the field raises several questions that this research would like to tackle. First, most of the aforementioned contributions focused on container shipping, known to be the most valuable and modern segment of maritime transport, having gone through rapid growth and transformation of its network configurations since its emergence in the mid-1950s and especially with the advent of mega-ships since the 2000s. If we exclude fully-fledged density maps done in recent years, but only to address other issues without any reference to networks, such as environmental impacts (see for instance Halpern et al. (2008)), we find that other fleet types received much less attention from a network perspective, so that the global maritime network as a whole remains poorly studied. Second, and related to the first, the focus on container shipping motivated scholars to be the most up-to-date and therefore to analyze current topologies, namely the shape of the network from the late 1990s onward. The extent to which recent and current topologies differ from earlier ones thus cannot be discussed or demonstrated. This lacuna is surprising, given the efforts put into understanding, for instance, the impact of the container revolution on world trade between 1962 and 1990 (Bernhofen et al. 2016) and the numerous works on the impact of technological change on the port and shipping industry (see Guerrero and Rodrigue (2013); Kuby and Reid (1992); Mayer (1973)). Perhaps the high costs related to data acquisition and encoding motivated scholars to offer a recent, static view of the network. In this paper, a new and dynamic analysis is proposed to fill in such gaps. The main question to be tackled in this paper is whether the global maritime network has topologically changed over time and how these changes translated into its navigability properties. Based on a largely untapped historical database on worldwide merchant vessel movements, we compare current and past states of network configuration, assuming that successive and major technological (and wider economic) changes affected the dimension and architecture of the macro-system. This research contributes as well to the wider research field on spatial and complex networks where dynamical analyses remain rather rare, given the scarcity of accurate time series data, thus resulting in a dominance of simulation experiments over empirical analyses (see Barthelemy (2011); Boccaletti et al. (2006)).
This is especially true for studies of real-world network growth and densification since, to the best of the authors' knowledge, all the real-world networks studied so far (Leskovec et al. 2007; Strano 2012) exhibit super-linear growth of the number of edges with respect to the number of nodes, which contradicts the widespread Preferential Attachment model proposed by Barabási and Albert (1999). Moreover, this paper enters the field of navigability studies of transportation networks (De Domenico et al. 2014; Gulyás 2015) by looking at the network's efficiency from the point of view of ease of navigation. Our methodology consists of analyses of an unweighted network in which the ports stand for nodes and the passages of ships stand for edges. We then take snapshots of the network almost every 5 years between 1890 and 2000. By applying tools from Graph Theory and navigability algorithms, we find that the maritime network has doubled its size in terms of the number of active ports over the studied period, but the rate of growth of the number of edges and the declining clustering coefficient indicate that the maritime network does not necessarily become denser with time, contrary to the findings of Leskovec et al. (2007) for other real-world networks. Our findings indicate that we might observe a process of network optimization which is due to processes specific to the maritime industry as well as to economic and technological development. The random walk measures which we construct and apply in this paper show that navigation in the maritime network becomes easier with time; that is, the network's structure starts to privilege more efficient movements, and with time it becomes easier to reach a given port starting from any other port of the network. Surprisingly, we find that the observed processes begin before the widespread adoption of containerization. The remainder of the paper is organized as follows. The second section presents the elaboration of the historical database after its extraction from archival documents and the network analytical tools to be applied to the resulting graph to best unravel dynamics of change. Main results are offered in the third section, ranging from the most common methods of complex network analysis to more advanced ones in relation to the evolving navigability of the network. The last section discusses the results and concludes about their usefulness to further understand the specificity of current port and maritime transport challenges.

Elaboration of a global historical database using the Lloyd's Shipping Index

Maritime flows of merchant vessels among ports of the world have been recorded by the maritime insurance company Lloyd's List since the late sixteenth century, focusing primarily on the British fleet but, since 1890, on any other. An in-depth review of all the research works having used such a unique source of information concluded that it still remains unknown to most shipping specialists (including historians, geographers, and economists). Only a dozen references to Lloyd's List could be identified in the entire academic literature to date, mostly to retrieve the port calls of a given ship for genealogical purposes, to identify the location of shipwrecks for underwater archaeology, to count the vessel calls at a given port, and to measure the time gap between call date and publication date to analyze the evolution of telecommunications (Ducruet et al. 2015).
Given their main focus on container shipping, most studies of maritime networks rather use carrier schedule data provided by Containerisation International, Barry Rogliano Salles (Alphaliner database), or company websites, while others compile information on the real-time positioning of ships, such as the Automated Identification System (AIS). Data from Lloyd's List appear to be the world's only possible source to map and analyze global maritime flows back in time, i.e. prior to containerization. Since its origin, the Lloyd's Shipping Index reports on a daily or weekly basis the latest movement of each vessel between two or more ports, including dates of departure and arrival, tonnage capacity, operating company, flag, date of build, and additional comments in the case of damage, loss, or war event. The somewhat difficult readability of the older publications and the limitations of existing Optical Character Recognition (OCR) software forced us to concentrate our efforts on the extraction of vessel calls by port and inter-port link. The choice was made to extract one entire publication every five years or so between 1890 and 2000, a couple of years before the paper version ceased to exist. From 2009 onwards, such data is only available in an expensive digital format. We believe that this period is a relevant time frame to cover the most important transitions from sail to steam, combustion, containerization, and mega-carriers, with a good balance between the periods pre- and post-containerization. Nowadays, the Lloyd's company insures about 80 % of the world fleet and is therefore historically a leader on the market with monopoly power, centralizing most of the information on maritime transport flows. The stability of the document structure and contents, notwithstanding a huge growth in the number of movements, makes the 5-year snapshots comparable over time. But given the fact that this publication was daily or weekly, extracting only one item in the entire year inevitably created a potential bias in the representativeness of the data sample, difficult to estimate in comparison to the yearly figure. One solution has been to target the same period for every item, namely around April, to strengthen the robustness of our database to seasonal effects. However, the fact that we have a sample of data only from one month in a specific season (spring) can potentially bias the results, as traffic can exhibit different patterns along the year, for example when goods need to be delivered before Christmas. Moreover, the global historical database did not come out ready. Immense efforts were put into data verification and cleaning: 10,253 place names were checked with scrutiny, taking into account regular changes in port names (e.g. Port Swettenham in Malaysia becoming Klang or Port Klang) and excluding passage points such as straits and channels in order to keep only commercial ports in the database. Some tests of the accuracy of the Lloyd's data were conducted in Ducruet et al. (2015), where, for example, the authors confirmed that over the entire period 1890–2000 the correlation with Chinese port tonnage was over 0.8, showing that, regardless of the data extraction technique and the partial time coverage of the database, the extracted data are sufficiently representative of maritime network flows. The resulting flow matrix or network is an undirected graph encompassing 22 different years of observation and constructed to allow the application of various tools originating from Graph Theory and Complexity Science.
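As a rough illustration of this construction, the sketch below assembles one unweighted, undirected snapshot graph from a table of inter-port ship movements. The column names (port_from, port_to, year) and the use of the networkx library are our own assumptions for illustration, not a description of the authors' actual data pipeline.

```python
import networkx as nx
import pandas as pd

def build_snapshot(movements: pd.DataFrame) -> nx.Graph:
    """Build an undirected, unweighted port graph G = (N, E) for one snapshot.

    An edge is added between two ports if at least one ship movement between
    them is observed in the extracted publication, regardless of how many
    vessels used that link.
    """
    G = nx.Graph()
    for port_from, port_to in movements[["port_from", "port_to"]].itertuples(index=False):
        if port_from != port_to:          # ignore consecutive calls at the same port
            G.add_edge(port_from, port_to)
    return G

# One graph per extracted year, e.g.:
# snapshots = {year: build_snapshot(df) for year, df in movements.groupby("year")}
```

One such graph would be built for each of the 22 extracted years, and all subsequent measures are then computed snapshot by snapshot.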
Graph Theory provides us with a powerful toolkit for the modelling and treatment of data which exhibit pairwise relations, such as a transportation network. In the case of the data derived from the Lloyd's Shipping Index, we treat each port as a node of the network, and the edges (the connections between the ports) are added if we observe at least one movement of a ship from port A to B in a given time period. We obtain in this way an undirected and unweighted graph G=(N,E), with the set of nodes (N) and edges (E). In this work we do not take into account the intensity of movements among the ports; that is, each edge in the network has a weight of 1 and contributes equally to the scores that we obtain while performing calculations on the network. This is certainly a simplification, which overlooks the intensity of flows by looking only at the existing connections which we can observe in the maritime network. As a result, busy links, such as Singapore–Shanghai, contribute to the network in the same way as links on which we observed only one movement during the studied period. This simplification is not harmful if one wants to study just the topological properties of the network, as is the case in the present paper, and not necessarily the intensity of flows or congestion effects. For studies of flows per se, weighting by tonnage or number of calls seems to be a necessity. As previously discussed, we only have a portion of data every 5 years, covering only a part of the yearly movements of the world fleet. However, they do provide a reasonable overview of the most important connections and ports, and they do keep track of the network's evolution over time.

Topological measures used to analyze the network's evolution

In the first part of our analysis we use classical network measures in order to describe topological properties of the maritime trade network derived from the Lloyd's Shipping Index publications. The different network measures, which will be further discussed, allow us to draw conclusions about the structure of connections between the ports of the world maritime network and about its evolution over time. In this work our goal is to investigate the structure of the maritime trade network and to see how it relates to its efficiency. Understanding the underlying structural properties of the network is the first step towards future research and modelling of the maritime network evolution, which in turn can be useful for simulations of its future developments. Thanks to the availability of data from different moments in time, we can construct snapshots of the maritime trade network every 5 years, and therefore follow some global measures to see what evolutionary processes can be observed in this network. For a comprehensive overview of different network measures consult Newman (2010). The first and most classic network measure, which is largely used to characterize both node centrality and global network evolution, is the node degree. This measure corresponds to the number of neighbors each individual node has in the network and can be explained intuitively as the number of unique "trading partners" of a given port in a given time period. In this paper we will focus on the average degree, as it is a measure of the density of the network and of the proportional increase of the number of sea connections (edges) with the number of ports (nodes). The second and the third network measures which we use are the average shortest path length and the effective diameter.
The first can be defined as the average of all topological distances between all pairs of nodes present in G along the shortest paths, while the diameter is the longest shortest path of the network. Formally, the average shortest path can be expressed as

$$ S_{G} = \frac{1}{n(n-1)} \sum_{i \neq j} d(n_{i}, n_{j}) $$

where d(n_i, n_j) is the topological distance between a pair of nodes, i.e. the number of "hops" between two nodes of the network. Both the average shortest path length and the diameter rely on the topological distance between pairs of ports, which can be a proxy for the speed of delivery of goods that are being shipped around the network. The shorter the average shortest path is, the faster (at least in a topological sense) the goods can arrive at their final destination. Following the steps of Leskovec et al. (2007), we compute the effective diameter, taking into account the 0.9 quantile (90th percentile) of distances in the network in order to avoid noise which often appears in the measurement of the diameter, and to ensure comparability between our studies and those of the evolution of other real world networks. In order to calculate the effective diameter it was necessary to compute the shortest paths between all pairs of nodes and to plot a cumulative distribution function of the distances. We then took the 0.9 quantile to define the effective diameter. Both measures are calculated at the global level. The last global classic measure borrowed directly from Graph Theory is the clustering coefficient, which tells us how dense the network is and captures the probability with which the neighbors of n_i are also connected to one another. The clustering coefficient can be defined as the ratio of the number of edges present in the node's direct neighborhood over the number of all potential edges in this neighborhood. Formally,

$$ C_{i} = \frac{2e_{i}}{k_{i}(k_{i} - 1)} $$

where k_i stands for the number of nodes in the neighborhood and e_i stands for the number of edges present in the neighborhood of n_i. Just as discussed above, the numerator stands for the number of edges present in the neighborhood (multiplied by two, because this is an undirected graph), and the denominator captures the number of all possible edges which could exist in this neighborhood. We use the average clustering coefficient in order to get a global measure which enables us to describe the network as a whole in one number per time period. This measure, combined with others, enables us to draw conclusions about the density of the network and its organization. It allows us to see if the network tends to evolve towards a more hub-and-spoke model, where shipping companies rely on transhipment rather than on direct links between all ports. Turning towards local measures, we calculate the closeness centrality, which captures the number of hops from a node to any other node in the network. In other words, it measures the topological distance from n_i to all other nodes in the network along the shortest paths. The higher the closeness centrality, the easier it is to get to other nodes of the network, as closeness centrality is the inverse of the sum of topological distances. Formally,

$$ C_{c}(n_{i}) = \frac{1}{\sum_{j} d(n_{i}, n_{j})} $$

where d stands for topological distance. Closeness centrality is really important from the point of view of individual ports, as it provides information about the time of delivery of goods from the port of interest to any other port in the network.
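As a side illustration before turning to the remaining centrality measures, the global measures introduced above (average degree, average shortest path length, effective diameter and average clustering coefficient) could be computed for a single snapshot along the following lines. This is a sketch using networkx rather than the authors' own code, and restricting the path-based measures to the largest connected component is our assumption, since the text does not state how disconnected ports were handled.

```python
import numpy as np
import networkx as nx

def global_measures(G, percentile=90):
    """Average degree, average shortest path, effective diameter and
    average clustering for one snapshot of the port network."""
    # Distances are only defined within a connected component; we restrict
    # the path-based measures to the largest one (an assumption on our part).
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    dists = [d for _, targets in nx.all_pairs_shortest_path_length(giant)
             for _, d in targets.items() if d > 0]
    return {
        "avg_degree": 2 * G.number_of_edges() / G.number_of_nodes(),
        "avg_shortest_path": float(np.mean(dists)),
        "effective_diameter": float(np.percentile(dists, percentile)),
        "avg_clustering": nx.average_clustering(G),
    }
```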
The more central the port is, the shorter the route taken by good shipped from this port will be, which can be a proxy for costs. However, closeness does not take into account the overall efficiency of the network, as it favors direct links rather than transshipment. Another well-known centrality measure which we use is the betweenness centrality, which tells us if the node lays on a crossroads of many routes in the network, therefore occupying a priviledged position of a so-called "middle man", or a hub, where the goods are transshiped. Formally, the betweenness centrality corresponds to the proportion of the shortest paths passing through n i to the number of all the shortest paths in the network between all pairs of nodes. Formally, $$C_{b}=\sum_{s,t \in N}\frac{\sigma_{st}(n_{i})}{\sigma_{st}} $$ where σ st (n i ) is the number of shortest paths passing through the node and σ st is the number of shortest paths between all pairs of nodes. Random walk measures - locally computable centrality metrics In the second part of our analyses we run algorithms on the network to measure its navigability. In a way, we leave the concept of a static network in order to analyse the potential flows on the underlying structure, metaphorically treating the network as a sort of preexisting infrastructure, like rail or pipelines. Navigability is a crucial concept in any transportation network (De Domenico et al. (2014), Gulyás (2015)). Intuitively it captures the ease with which one can travel from any point A to B in a network, which is of huge importance when the delivery times and efficient route planning are of essence, as it is in the maritime network. The algorithmic measures, which we propose in this paper, tell us how easy it is to move around the network, and also which nodes are the privileged ones, meaning that if we start a walk at that node, we will be able to visit many other nodes. Our aim is to see how the navigability of the maritime network changed over time together with the underlying topological structure. We would like to know if globalization, new technologies (like containerization) have pushed the network towards an optimal, more navigable organization. According to de Domenico et al. (2014) random walks are a good proxy to determine networks navigability, as they capture the dynamic functionality of the network. However, the classical random walk measures which attract substantial attention in the literatue (Lovasz 1996) are usually global and require time consuming computations, which depend on the size of the network. The best known examples of such measures are the cover time, which gives us the number of steps necessary to visit all the nodes in the network, mixing time, which gives us the number of steps that the random walker needs to perform to get lost in the graph, or the hitting time, which tells us how many steps will be needed before the random walker reaches the node of interest. Short random walks present an interesting, yet not that popular alternative to these global measures, especially provided that they can give us interesting information about the neighbourhood structure of the network and are computable locally, which reduces time complexity of such operations. In the present paper we propose a random walk discovery measure, which tells us how many unique nodes can be visited, if a walker (say, a ship) moving randomly around the network, starts at node n i and performs 100 steps. 
The question we ask here is: how many nodes are discovered in T steps of a random walk? By knowing the properties of short random walks on graphs, we know that some specific topological structures enable better scores than others. In a good case, when the graph looks locally like a tree with nodes of degree equal to at least 3, the number of nodes discovered will be close to T. However, in a bad case, which is a line, the number of nodes discovered will be roughly equal to \(\sqrt {T}\). The theoretical lower bound is \(\sqrt [3]{T}\) nodes discovered in T steps in any network (Barnes and Feige 1993). In practice, this means that a higher degree of the node or of nodes in its neighborhood leads to better discovery rates. Secondly, in clustered networks, better discovery rate is observed when communities are strongly interconnected. The more nodes are visited by the random walker, the better the position of the node in the network from the point of view navigability. We iterate the random walk discovery algorithm 100 times for each node of the network and take the average as final score in order to avoid statistical biases. The choice of a proper leght of the random walk is important, however, in order to be able to study the local neighborhood of nodes it is necessary and enough to make sure that the chosen number of steps is not too large and that it is smaller than the mixing time in the graph in question, because once the limit of the mixing time is exceeded, our measure will be insensitive to the starting node. The mixing time in for example expander graphs is of the order log(N), where N is the total number of nodes in the graph. The algorithm written in pseudocode can be found in the Appendix. The second algorithmic measure which we use is the escape difficulty. In this algorithm we ask the walker to start her random walk at node n i and we count how many steps she had to make in order to be at least 4 hops away from n i . More formally speaking, we want to see how many steps need to be performed by the mobile agent moving randomly around the network to escape the 3-neighborhood of n i . The considered measure corresponds precisely to the hitting time of the node outside of the 3-neighborhood of the starting node in the considered graph. It may be represented as the hitting time from the staring node n i to the special node v in the graph obtained from our network by merging all nodes outside of the 3-neighborhood of n i into a single node. The hitting time is a basic and well studied random walk parameter on graphs (Lovasz 1996). We remark that a symmetrized version of the hitting time between a pair of nodes, known as the commute time, describes the electrical resistance between this pair of nodes (Tetali 1991). Thus our measure reflects the electrical flow between the node n i and the outside of its 3-neighborhood. Electrical flows are in turn related to the maximal flow problem (Christiano et al. (2011)), under an appropriare weighting of links. Same as in the case of the random walk discovery, we iterate the algorithm 100 times and then take the average in order to obtain the final score. The escape difficulty measure will provide us with information about the direct neighborhood of the node. If the score is high we can conclude that it is plausible that the node lies in a small, highly connected cluster with few links to the outside world. 
If the score is low, it means that it is easy for the mobile agent to escape the 3-neighorhood and that node must lie on bridge between clusters. The algorithm written in pseudocode can be found in the Appendix. By analyzing the results obtained for the database covering a portion of movements of ships during the period 1890–2000 we can say that the maritime network has changed its structure over time by quite a bit. The size of the maritime network First of all, the network has substantially increased its size between 1890 and 2000. Note that we take into account only the active ports, that is the ones which either received or sent at least one ship in a given time period. In Fig. 1 we have reported the fluctuations in the number of active ports over the years. We observe a growth of the network which almost doubles its number of nodes between 1890 and 2000. We also see that this growth is rather steady over time. In the year 1890 we have 1011 ports that were active during the period of data collection, while in 2000 this number goes up to 1944 active ports. This constant increase in the number of active ports can be explained with progressive globalization and strengthen international trade, as well as the development of global supply chains. Number of active ports (1890–2000) In terms of the number of existing connections, we see that their number grows as well, and that the increase of the number of edges is not much faster than the growth of the number of nodes (Fig. 2). This indicates that while the network grows, it evolves towards its specific structure. Number of active sea connections (1890–2000) There exist models, especially the well-known Preferential Attachment, which predict that the increase in the number of edges should be linear in the number of nodes, which means that the average degree should not change over time. By observing the average degree computed for each time period in the studied maritime network, we find it to be equal to 11.6 in 1890 and then to grow slowly with some fluctuations until 1930 when we observe a drop, and then in 1951 it goes up again to reach its peak in 1990 (16.9), only to drop dramatically in the 2000 to a value similar to that from 1890. In the year 2000 the average degree is equal to 11.9, only by 0.3 higher than in 1890, with twice as many active ports (Fig. 3). Provided that the scores for average degree are the same at the beginning and the end of the studied period, we cannot exclude the possibility that the network evolved accordingly to some version of the Preferential Attachment model, which could potentially take into account the question of geographical distance between ports. Average node degree (1890–2000) Evolving topology The considerations of the average network degree from the previous subsection are especially interesting when compared to the existing literature on network densification. Leskovec at al. (2007) study the evolution of a number of real world networks, such as the scientific paper citation network, network of actors, email network etc. and find that the number of edges always grows super-linearly with respect to the number of nodes. This effect of growth of the average degree in real world networks is puzzling, because it contradicts Preferential Attachment model (Barabási and Albert 1999) which predicts a perfectly linear increase of the number of edges with the number of nodes. One example of a real world network which is close to the linearity of growth is the road network in the Milan region (Strano et al. 
2012). The network which they study exhibits a rather constant average degree over time, with only a very slight increase of the order of 0.2, this however can be due to the nature of the road network, which is planar, which means that it can be drawn on a sphere in such a way that no links will overlap or cross. This property leads to important consequences for the network structure, because the maximal degree of nodes in constrained by space. In the case of road network nodes are defined as road junctions, therefore it is hard to expect many nodes of degree more than 10 and large variations in the average degree over time. The maritime network, like the road network, is a spatial and transportation network, but with one major difference — it is not planar. Therefore, the maritime network does not suffer from the "natural" limitation of the maximal number of neighbors, as each port can develop as many connections with as many ports as it wishes to. In the case of the maritime network we seem to observe an unusual effect where first the number of edges grows super-linearly, but at the end of the sample goes back to its initial level. Another major difference between the maritime network and the findings of Leskovec et al. (2007), is that the maritime network exhibits a constant effective diameter equal to 4 over the entire period, so we do not observe the phenomenon of shrinking diameter which Leskovec et al. (2007) find for all the networks studied by them. It seems that the maritime trade network is a network of unique evolutionary properties, which have not been yet observed in other real world networks, that, surprisingly, shared many common evolutionary traits. These findings place the maritime network at a hot spot for studies of the evolution of the real-world networks in complexity science, because it creates a need for better understanding of its evolution and a need for a potentially completely new model of network growth. In order to deepen our understanding of the evolution and densification of the maritime network we have calculated the average clustering coefficient for each time period (the results are presented in Fig. 4). We find that the average clustering coefficient decreases steadily over the period between 1940 and 2000, which indicates a change in the network structure. Perhaps we cannot go as far as to say that we observe network sparsification, but, especially by looking at the clustering coefficient, we can say that we observe a reorganization of the network, and that it becomes less clustered with time, which supports the hypothesis that the network develops into a more hub-and-spoke structure. We also find that the clustering starts to fall in 1940, that is long before the widespread of containerization, which would be the usual suspect for the cause of network optimization, understood as a tradeoff between network's navigability and maintenance cost. This network optimization process can be noticed also in the behavior of the average shortest path length. The results of the average shortest path are reported in Fig. 5, where we can see that the average topological distance between each pair of nodes is small, around 3 for the entire period under study. It increases over the time period, but only very slightly, passing from 2.88 to 3.23, even though the size of network increases tremendously. Average clustering of the maritime network (1890–2000) Average shortest path lenght (1890–2000) Network centralization Let us turn towards the measures on the local level. 
First we look at the degree distribution in each time period, then we compute the gini coefficient in order to see the level of inequalities in our network. We have calculated the gini coefficient for all the nodes in the network and the top 100 nodes in order to check for potential hierarchical structure. The results are reported in Fig. 6. We find that the inequalities in degree are much smaller among the top 100 nodes than for the entire network, as it oscillates around 0.3 for the top 100 and around 0.7 for the whole network. A similar pattern can be found by looking at the gini coefficient of betweenness, where we find that the distribution for the entire network is really unequal, whereas the scores for top 100 nodes are much more equal (Fig. 7). These findings indicate that the network has some well-connected nodes which span the network and which are rather equal among each other, while we observe significant inequalities in the network as a whole, indicating that apart from the top 100 nodes, there must be numerous not so well-connected ports which create links to those of higher degree. These findings would go in favor of the hypothesis that the network develops towards a more hub and spoke structure, favouring efficient transshipment, which is already indicated by the results of the average clustering coefficient. Gini coefficient of degree for Top 100 ports and for the entire network (1890–2000) Gini coefficient of the betweenness centrality scores for Top 100 ports and for the entire network (1890–2000) We have applied the same methodology to analyze the individual scores of closeness centrality and we report the results in Fig. 8. We find that the distribution of closeness scores is very equal, as it is close to 0. We still find that the closeness distribution for the top 100 ports is almost perfectly equal, while the inequalities in the entire network are relatively larger, but these differences are really small. Gini coefficient of closeness centrality scores for Top 100 ports and for the entire network (1890–2000) Network's navigability In order to measure the navigability of the maritime network we have created the random walk discovery measure, which tells us how many unique nodes are visited by a walker on the network in a random walk starting from a node n i . Each time the walker performed 100 steps and the procedure has been repeated for each node 100 times. We constructed the individual scores by taking the average of all iterations. We then took the average of all individual scores and reported them in Fig. 9. We find that the average number of unique nodes visited in a single walk increases over time, indicating an increase in the network's navigability. In general we observe a clear upward trend in the average random walk discovery starting from 1900, which becomes even more visible starting from 1946 - long before containerization has even begun. However, we do observe some drops in 1920, 1946, and, most surprisingly in the year 2000, when the drop is the largest in the whole period under study. The last drop is especially puzzling, because the beginning of the 21st century is known to be the period of increased globalization and increased optimization performed by maritime operators. 
It is possible that the effect which we observe in the year 2000 is due to the limited amount of data which has been used for this study, or is linked to the fact that since the year 2000 we started to observe an important trend in ship upscaling, which has led to exclusion of smaller ports that were unable to handle such large vessels. If the studies of the fuller dataset confirm that the average random walk discovery started to deteriorate in the 21st century, we would have a really interesting phenomenon to explain. Average random walk discovery (1890–2000) Another measure of navigability which we propose in this paper is the escape difficulty, where we check how many steps need to be performed by a random walker to leave the 3-neighborhood of n i . In theory, it is most difficult for the walker to leave the 3-neighborhood if n i is located in a small clique or dense subnetwork with few links to the rest of the network. Such network structure would correspond to a very regionalized world, where ports tend to develop connections mostly with their neighbors. If the score of the escape difficulty is low, we can suspect that the node lies in a sparse and highly connected neighborhood (formally, in a part of the graph with good expansion), such as a tree, or a forest, rather than a collection of weakly connected clusters. This would also be the case for a network with hub-and-spoke structure. Indeed, this is what we find by launching the escape difficulty procedure for each node of the maritime network and by taking the average of all the scores (just as in the case of random walk discovery, we iterate the algorithm 100 times for each node). The results are reported in Fig. 10, where we observe a downward-sloping trend in the average escape difficulty, whose values become smaller with time, passing from 8.37 to 5.93 between 1890 and 2000. However, it goes through rather large variations, especially the peak between 1925 and 1930, when the value of the average escape difficulty was over 11, so we cannot claim that the trend is very clear. Average number of steps needed to escape the 3-neigborhood (1890–2000) In the present paper we have constructed a network of maritime connections thanks to the data extracted from the Lloyd's Shipping Index, a database containing information on daily movements of ships of almost the entire world fleet. Our data cover the years between 1890 until 2000, where we have information about the movements of ships during at least 2 weeks in regular intervals of 5 years. This data enabled us to construct snapshots of the network of sea connections at different moments in time and to follow its evolution. Most of the existing studies of real world networks focus on the static networks due to scarcity of quality data. One of the few examples of such studies is the work by Leskovec et al. (2007) who find that real world networks tend to follow two laws, that is densification and shrinking diameters. Our work proposes a dynamic view on a real world and truly global network. In particular, we find that the maritime network doesn't necessarily densify with time and that its effective diameter remains constant over the period of a century, even though during this period the size (number of nodes) of the network doubles. In the case of the maritime network we seem to observe a strange phenomenon of network optimization, which begins long before the widespread of containerization, and exhibits itself in the decreasing clustering coefficient and increasing navigability. 
The maritime network tends to be also quite unequal, having the top ports creating a sort of a "rich club", which again, together with global network measures, suggests that the network structure tends to evolve towards a hub-and-spoke structure. Moreover, we construct two new algorithmic measures of network's navigability which are based on the random walk procedure. The random walk discovery measures the ease of exploration of a network in a given number of steps, while the escape difficulty tells us how hard is it to leave a 3-neighborhood of a given node, and therefore provides us with valuable insights about the global network structure. Similar studies need to be conducted on a fuller data sample in order to confirm the observed trends and check for possible seasonal effects. It would be certainly interesting to study the peaks and the falls of the network measures which seem to align with some well-known events from the world history, such as the Great Depression and the 2nd World War. At this stage of research we are unable to isolate the effects of precise events on the network structure in such a way that we could establish a causal relationship. Such studies would require data of much finner density than just 5 years and potentially external control variables to isolate the precise effects. All of which we leave for future research. Barabási, AL, Albert R (1999) Emergence of scaling in random networks. Science 286(5439): 509–512. Barnes, G, Feige U (1993) Short random walks on graphs In: Proceedings of the Twenty-fifth Annual ACM Symposium on Theory of Computing. STOC '93, 728–737.. ACM, New York, NY, USA, doi:10.1145/167088.167275. http://doi.acm.org/10.1145/167088.167275. Chapter Google Scholar Barthelemy, M (2011) Spatial networks. Physics Reports 499(1–3): 1–101. Bernhofen, DM, El-Sahli Z, Kneller R (2016) Estimating the effects of the container revolution on world trade. J Int Econ 98: 36–50. Boccaletti, S, Latora V, Moreno Y, Chavez M, Hwang DU (2006) Complex networks: Structure and dynamics. Phys Rep 424(4–5): 175–308. Christiano, P, Kelner JA, Madry A, Spielman DA, Teng S (2011) Electrical flows, laplacian systems, and faster approximation of maximum flow in undirected graphs. In: Fortnow L Vadhan S. P (eds)Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011, San Jose, CA, USA, 6–8 June 2011, 273–282.. ACM, doi:10.1145/1993636.1993674. http://doi.acm.org/10.1145/1993636.1993674. De Domenico, M, Solé-Ribalta A, Gómez S, Arenas A (2014) Navigability of interconnected networks under random failures. Phys Sci Appl Phys Sci 111(23): 8351–8356. Deng, WB, Long G, Wei L, Xu C (2009) Worldwide marine transportation network: efficiency and container throughput. Chin Phys Lett 26(11): 118901. Doshi, D, Malhotra B, Bressan S, Lam JSL (2012) Mining maritime schedules for analyzing global shipping networks. Bus Intell Data Min 7(3): 186–202. Ducruet, C (2013) Network diversity and maritime flows. J Transp Geograph 30: 77–88. Ducruet, C (2015) Maritime flows and networks in a multidisciplinary perspective. In: Ducruet C (ed)Maritime Networks. Spatial Structures and Time Dynamics, 3–26.. Routledge Studies in Transport Analysis, Abingdon, UK, Ducruet, C, Haule S, Ait-Mohand K, Marnot B, Kosowska-Stamirowska Z, Didier L, Coche MA (2015) Maritime shifts in the world economy: Evidence from the Lloyd's List corpus, eighteenth to twenty-first centuries. In: Ducruet C (ed)Maritime Networks. Spatial Structures and Time Dynamics, 134–160.. 
Routledge Studies in Transport Analysis, Abingdon, UK, Ducruet, C, Notteboom TE (2012) The worldwide maritime network of container shipping: Spatial structure and regional dynamics. Global Netw 12(3): 395–423. Frémont, A (2015) A geo-history of maritime networks since 1945: The case of the Compagnie Générale Transatlantique's transformation into CMA-CGM. In: Ducruet C (ed)Maritime Networks. Spatial Structures and Time Dynamics, 37–49.. Routledge Studies in Transport Analysis, Abingdon, UK, Gonzalez-Laxe, F, Freire-Seoane MJ, Pais Montes C (2012) Maritime degree, centrality and vulnerability: Port hierarchies and emerging areas in containerized transport (2008–2010). J Transp Geograph 24: 33–44. Guerrero, D, Rodrigue JP (2013) The waves of containerization: Shifts in global maritime transportation. J Transp Geograph 34: 151–164. Gulyás, A, Bíró J. J, Kőr'́osi A, Rıetvári G, Krioukov D (2015) Navigable networks as Nash equilibria of navigation games. Nat Commun 6(7651). doi:10.1038/ncomms8651. Halpern, BS, Walbridge S, Selkoe KA, Kappel CV, Micheli F, D'Agrosa C, Bruno JF, Casey KS, Ebert C, Fox HE, Fujita R, Heinemann D, Lenihan HS, Madin EMP, Perry MT, Selig ER, Spalding M, Steneck R, Watson R (2008) A global map of human impact on marine ecosystems. Science 319(5865): 948–952. Hu, Y, Zhu D (2009) Empirical analysis of the worldwide maritime transportation network. Physica A 388(10): 2061–2071. Joly, O (1999) La structuration des réseaux de circulation maritime. PhD thesis, Le Havre University, CIRTAI. Kaluza, P, Koelzsch A, Gastner MT, Blasius B (2010) The complex network of global cargo ship movements. J R Soc Interface 7: 1093–1103. Kuby, MJ, Reid N (1992) Technological change and the concentration of the U.S, general cargo port system: 1970–88. Econ Geograph 68(3): 272–289. Leskovec, J, Kleinberg J, Faloutsos C (2007) Graph evolution: Densification and shrinking diameters. ACM Transp Knowl Discov Data 1(2): 1. Li, Z, Xu M, Shi Y (2015) Centrality in global shipping network basing on worldwide shipping areas. Geojournal 80(1): 47–60. Lovasz, L (1996) Random walks on graphs: A survey. Combinatoris, Paul Erdos is Eighty 2: 353–398. Marnot, B (2005) Interconnexion et reclassements: l'insertion des ports français dans la chaîne multimodale au XIXème siècle. Flux 59(1): 10–21. Mayer, HM (1973) Geographical aspects of technological change in maritime transportation. Econ Geograph 49: 145–155. Newman, MEJ (2010) Chapters 7 and 8 In: Networks: an Introduction.. Oxford University Press, New York, USA. Pais Montes, C, Freire Seoane MJ, Gonzalez-Laxe F (2012) General cargo and containership emergent routes: A complex networks description. Transp Policy 24: 126–140. Parshani, R, Rozenblat C, Ietri D, Ducruet C, Havlin S (2010) Inter-similarity between coupled networks. Europhys Lett 91: 68002. Rimmer, PJ (2012) The changing status of New Zealand seaports, 1853–1960. Ann Assoc Am Geograph 57(1): 88–100. Robinson, R (1968) Spatial structuring of port-linked flows: The port of Vancouver, Canada, 1965. PhD thesis, Vancouver: University of British Columbia, Geography Department. Siegfried, A (1940) Suez, Panama et les routes maritimes mondiales, Paris: Armand Colin. Strano, E, Nicosia V, Latora V, Porta S, Barthélemy M (2012) Elementary processes governing the evolution of road networks. Nat Sci Rep 2(296). doi:10.1038/srep00296. Tavasszy, L, Minderhoud M, Perrin JF, Notteboom TE (2011) A strategic network choice model for global container flows: Specification, estimation and application. 
J Transp Geograph 19(6): 1163–1172. Tetali, P (1991) Random walks and the effective resistance of networks. J Theor Probab 4(1): 101–109. doi:10.1007/BF01046996. Tovar, B, Hernandez R, Rodriguez-Deniz H (2015) Container port competitiveness and connectivity: The Canary Islands main ports case. Transp Policy 38: 40–51. Ullman, EL (1949) Mapping the world's ocean trade: a research proposal. Prof Geograph 1(2): 19–22. Wang, C, Wang J (2011) Spatial pattern of the global shipping network and its hub-and-spoke system. Res Trans Econ 32(1): 54–63. Wang, J, Pulat PS, Shen G (2012) Data mining for the development of a global port-to-port freight movement database. Int J Shipping Transport Logistics 4(2): 137–156. Woolley-Meza, O, Thiemann C, Grady D, Lee JJ, Seebens H, Blasius B, Brockmann D (2012) Complexity in human transportation networks: A comparative analysis of worldwide air transportation and global cargo-ship movements. Eur Phys J B 84: 589–600. Xu, M, Li Z, Shi Y, Zhang X, Jiang S (2015) Evolution of regional inequality in the global shipping network. J Transp Geograph 44: 1–12. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2014-2020)/ERC Grant Agreement n. 31384. A part of this work was done by Nishant Rai during his internship at the INRIA Gang Project - we would like to thank Laurent Viennot and Adrian Kosowski for their supervision of Nishant Rai's work. ZKS: Study conception and design. CD: Acquisition of data. ZKS and NR: Analysis and interpretation of data. ZKS and CD: Drafting of manuscript. ZKS and NR: Critical revision. All authors read and approved the final manuscript. Centre National de la Recherche Scientifique (CNRS), UMR 8504 Géographie-cités, Université Paris 1 Panthéon-Sorbonne, 13 rue du FourParis, F-75006, France Zuzanna Kosowska-Stamirowska & César Ducruet IIT Kanpur, G102 Kanpur Uttar Pradesh, Kanpur, India Nishant Rai Zuzanna Kosowska-Stamirowska César Ducruet Correspondence to Zuzanna Kosowska-Stamirowska. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Kosowska-Stamirowska, Z., Ducruet, C. & Rai, N. Evolving structure of the maritime trade network: evidence from the Lloyd's Shipping Index (1890–2000). J. shipp. trd. 1, 10 (2016). https://doi.org/10.1186/s41072-016-0013-3 Maritime trade Network evolution Network navigability Connecting the World through Global Shipping Networks Follow SpringerOpen SpringerOpen Twitter page SpringerOpen Facebook page
CommonCrawl
Success in books: a big data approach to bestsellers Burcu Yucesoy1, Xindi Wang1, Junming Huang1,2 & Albert-László Barabási ORCID: orcid.org/0000-0002-4028-35221,3,4,5 Reading remains the preferred leisure activity for most individuals, continuing to offer a unique path to knowledge and learning. As such, books remain an important cultural product, consumed widely. Yet, while over 3 million books are published each year, very few are read widely and less than 500 make it to the New York Times bestseller lists. And once there, only a handful of authors can command the lists for more than a few weeks. Here we bring a big data approach to book success by investigating the properties and sales trajectories of bestsellers. We find that there are seasonal patterns to book sales with more books being sold during holidays, and even among bestsellers, fiction books sell more copies than nonfiction books. General fiction and biographies make the list more often than any other genre books, and the higher a book's initial place in the rankings, the longer the book stays on the list as well. Looking at patterns characterizing authors, we find that fiction writers are more productive than nonfiction writers, commonly achieving bestseller status with multiple books. Additionally, there is no gender disparity among bestselling fiction authors but nonfiction, most bestsellers are written by male authors. Finally we find that there is a universal pattern to book sales. Using this universality we introduce a statistical model to explain the time evolution of sales. This model not only reproduces the entire sales trajectory of a book but also predicts the total number of copies it will sell in its lifetime, based on its early sales numbers. The analysis of the bestseller characteristics and the discovery of the universal nature of sales patterns with its driving forces are crucial for our understanding of the book industry, and more generally, of how we as a society interact with cultural products. Books remain an important part of our lives, reading being the favorite leisure activity for many individuals. Indeed, the average American reads 12 to 13 books per year, and how people select the reading has been of much interest for researchers for decades. Consequently, book publishing is a huge industry in the U.S., with a revenue that is projected to reach nearly 44 billion U.S. dollars in 2020. In 2015, about 2.7 billion books were sold, a number that has remained fairly consistent in the last few years [1]. Of the over 3 million books in print in the U.S. every year, more than a hundred thousand are new titles. Yet, only a tiny fraction attract considerable readership. For example, less than 500 books make it to the New York Times bestseller lists and only a handful of authors stay on the list for ten or more weeks. These near impossible odds reflect the challenges of capturing an audience in today's highly competitive world. Consequently, the success drivers of books remains of interest to many researchers [2]. Some of these drivers were explored in the literature [3, 4]: they are book critics [5], the author's and fans' circle of friends [6, 7], celebrities [8], online reviews [9, 10] and word of mouth [11]. The writing style of the author [12, 13], the amount of publicity [14], the timing of the book release [15], award-winning [16] or already bestseller [17] status of the book, the genre of the book and even the gender of the author [18, 19] are among the factors considered in past research. 
Yet, which books become successful and how they reach this status remains a mystery. Aiming to address this mystery, here we explore the sales patterns of bestsellers and the authors who write them. We used a big data approach to understand the type of books that made it to the New York Times bestseller list as hardcovers, and quantified the sales patterns necessary to reach the lists and the characteristics of the bestselling authors. Additionally, we explore the weekly sale numbers [11, 20], allowing us to uncover the dynamic patterns of how a book becomes successful. This allows us to propose a statistical model that captures how collective interest in a book peaks and drops over time. The model accurately predicts the total number of copies an edition will sell, using the early sales numbers after the first release. The New York Times Bestseller List (NYTBL) is the most influential and prominent lists of best-selling books in the United States. Published since 1931, the list is digitally available since 2008 and consists of several sub-lists focusing on specific editions (hardcover, trade and mass-market paperback, e-book) and topics (fiction, nonfiction, children's and graphic books are the main weekly categories). For each book the list offers some basic identifying information like the ISBN number, title, author, publisher, amazon.com link. The NYTBL ranks books by the number of individual copies sold that week, using sales numbers reported by an undisclosed list of retailers across the United States, statistically weighted to represent all outlets nationwide [21]. The rankings are calculated each week, hence a book that continues to sell well stays on the same bestseller list for multiple weeks. In this study we consider all books featured on the New York Times hardcover fiction and hardcover nonfiction bestseller lists between August 6th, 2008 and March 10th, 2016 (410 weeks), altogether 2468 unique fiction and 2025 nonfiction titles. To capture the time resolved sales patterns of books, we use NPD BookScan (formerly Nielsen BookScan), the largest sales data provider for the book publishing industry. Their database contains information about all print books that are being sold in the United States since 2004. For every book, this information includes the ISBN number, author name, title, category, BISAC number, publisher, price, total weekly sales across US and weekly sales in different geographical districts. Books come in a variety of formats and editions, from the more expensive hardcovers to the cheaper trade and mass market paperbacks, digital e-books and audiobooks. Despite the recent rise of digital books, printed books remain the preferred format for 65% of book readers in the U.S. Indeed, of the 2.7 billion books sold in 2015, 1.7 billion were printed books (577 million hardcover, 1.18 billion trade or mass market paperback). As new titles are usually released in hardcover first, here we focus on the sales patterns of books that made it to the New York Times bestseller list as hardcovers in fiction and nonfiction categories, each encompassing a variety of genres. Not all of these genres are equally popular among the readers, some being more present on the bestseller list than others. Additionally, the dynamics of being featured on the list is different for each book: Some enter with low ranks and drop immediately off the list while others reach high ranks and stay on the list for a long time. 
In this section we focus on these dynamic patterns, along with the corresponding sale numbers and their seasonal patterns and we briefly discuss when and how different editions of the same title are released. Genre fiction and memoirs dominate the bestseller list According to a 2015 survey [1], mystery, thriller and crime are the preferred book genres in the U.S., nearly half of Americans reading in these genres. About 33% of the surveyed readers chose history as their favorite genre, while 31% preferring biographies and memoirs. To check if these preferences are reflected in the New York Times bestseller list, in Fig. 1(A) and (B) we break down all bestsellers by genre. We find that within fiction, most bestsellers fall into the 'general' fiction category (also known as 'mainstream' fiction), with 800 books making it the most popular category. This category mainly contains 'literary' fiction, i.e. fictional works focused more on themes and characters than on plot. These are the books frequently discussed by literary critics, featured in prominent venues and taught in schools, factors contributing to their popularity. In contrast 'genre' fiction, i.e. plot driven fiction like mystery or romance, shown separately in Fig. 1(A), is rarely considered by literary critics and is often reviewed only in venues catering to niche audiences. Yet, we find that the total number of bestsellers in these 'genre' categories collectively (1668) is more than twice the number of bestsellers in general fiction (800). Especially Suspense/Thrillers and Mystery/Detective categories resonate well with readers, in line with the survey findings [1]. Recent research and media discussions [18, 22, 23] noted that the popularity of genre fiction has been increasing over the years, thanks to the equal opportunities provided by online venues and rating systems, and the stagnant popularity of the traditional literary venues that remain focused on general fiction. We indeed observe a slight increase in the percentage of genre fiction among the bestsellers during the past decade (Fig. 1(C)). Genre popularities. The breakdown of bestsellers into genre categories of (A) fiction and (B) nonfiction. Most bestsellers fall into Suspense and Mystery genres combined in fiction or Biography/Memoir and History genres in nonfiction. (C) The percentage of genre fiction (all fiction that can be classified as a specific genre, shown in dark green) and general fiction (all else, shown in light green) on the bestseller list over the years. The popularity of genre fiction has been increasing steadily since 2011 Among nonfiction books, almost half of the 2025 bestsellers are from the Biography/Memoir category, consisting of books written by or about famous individuals, from politicians to artists or business personalities. Their dominance on the nonfiction market demonstrates a continuous interest in the life stories of well known individuals. The next most popular genre in nonfiction is history, in line with the survey results. The 'General Nonfiction' category encompasses multiple genres such as 'Psychology', 'Nature' or 'Philosophy' and 'Reference' mainly collects books in 'Science' or 'Technology'. For example, Malcolm Gladwell's bestselling books are categorized as General Nonfiction, while popular science books including Stephen Hawking's The Grand Design are classified in Reference. In short, US readers prefer genre fiction over general fiction, making Thrillers and Mystery the most represented genres in the NYTBL over time. 
Biographies and memoirs are the most preferred genre within nonfiction, making up half of the nonfiction bestsellers. Bestseller status rarely lasts In marketing, a book is labeled a New York Times bestseller if it appears on the NYTBL for at least one week. Yet, there are major differences between bestsellers: some pop up on the list for a singe week while others retain their bestselling status for months and even years. To illustrate this, we measured the length of stay on the list for all New York Times bestsellers (Figs. 2(A) and (B)). We find that 25% of books appear only once on the list, while a few do spend an exceptional amount of time there. Longevity of bestsellers. Distribution of (A) fiction and (B) nonfiction bestsellers based on the number of weeks they stayed on the list. The number of weeks is shown in logarithmic scale to account for both the large number of short stays and the few exceptionally longer presence. We marked on the first columns the corresponding linear scale numbers for clarity. The best rank a book achieved on the New York Times (C) fiction and (D) nonfiction bestseller list vs. the number of weeks it stayed on the list. The size of the dots indicates the number of books with the same attributes. Overall, the better a book's best ranking, the higher is its probability of staying longer in the list For fiction, the number of books listed for only one week is high (26% of all books), indicating that the list changes considerably from week to week. In fact, only 10 of the 2468 fiction bestsellers stayed on the list for more than a year. The longest presence during our observation period is The Help, the 2009 book by Kathryn Stockett, which has been featured on the bestseller list for 131 subsequent weeks. Its continuous presence was helped by a movie adaptation nominated for the Academy Award in 2011. Highly anticipated books in ongoing popular series tend to stay longer in the list, like the fifth book A Dance With Dragons in George R.R. Martin's A Song of Ice and Fire series with 114 consecutive weeks on the list, and the third book The Girl Who Kicked the Hornet's Nest in the Millennium series by Swedish writer Stieg Larsson with 92 weeks on the list. Finally, literary awards can also sustain bestseller status: All the Lights We Cannot See by Anthony Doerr, having won both the 2015 Pulitzer Prize for Fiction and the 2015 Andrew Carnegie Medal for Excellence in Fiction, was on the list for 99 weeks at the time of data collection. In comparison, the nonfiction bestseller list shows slightly less variation from week to week, indicating that it is more common for nonfiction books to sustain their bestseller status. This is why we have fewer nonfiction books in our dataset than fiction books (2025 nonfiction bestsellers compared to 2468 in fiction). In the nonfiction category, 24% of books stayed only for one week on the list and 18 books lasted for more than a year. The most remarkable was Unbroken: A World War II Story of Survival, Resilience, and Redemption by Laura Hillenbrand which remained on the list for a record 203 weeks. Other examples of long-lasting success are Outliers by Malcolm Gladwell (125 weeks) and Killing Lincoln by Bill O'Reilly (96 weeks). In popular science category, The Grand Design by Stephen Hawking and Leonard Mlodinow stayed longest (23 weeks) on the NYTBL. In general, the better a book's best ranking, the longer it stays on the NYTBL (Figs. 2(C) and (D)). 
The majority (86% for fiction and 89% for nonfiction) of the books that stayed on the list for a single week have reached the best ranking of 15 while the majority of the books (93% for fiction and 88% for nonfiction) that stayed for at least 10 weeks were ranked among the top ten at least once. Still there are a few exceptions, books that were ranked low on the list yet remained there for a long time. The most significant is A Dog's Purpose by Tom Doherty (now a motion picture as well) which remained on the list for 19 weeks even though its best position was number 20. The Paris Wife by Paula McLain or Born To Run by Christopher McDougall are also outliers that stayed much longer on the list than what would be expected based on their best ranking. Finally, the majority of the books that reached the top spot on the NYTBL stayed on the list for at least 10 weeks (51% for fiction and 80% for nonfiction). In summary, most books stay on the NYTBL for only a week, and books lasting more than a year are extremely rare. That said, books reaching better ranks on the list stay on for longer periods compared to books ranked lower, many of the top ten books staying for several months at least. Not all bestsellers sell The number of copies a hardcover sells in its first year is an important measure of its commercial success. As after one year a cheaper paperback edition of the same title is likely to be released, the hardcover will no longer be the only print option. Therefore, in this section we focus on the first year sales of bestsellers, allowing us to explore their variability and the factors that determine their popularity. The one year sales distribution of all bestsellers indicates that the majority of bestsellers sell between 10,000 and 100,000 copies in their first year (Figs. 3(A) and (B)). Comparison of the NYTBL statistics and first year sales. The distribution of the number of copies sold in a year for all hardcover bestsellers in (A) fiction and (B) nonfiction. The relation between one year sales and the best rank of the book in NYTBL (C) for fiction and (D) for nonfiction. The relation between one year sales and the length of stay on NYTBL (E) for fiction and (F) for nonfiction. Most hardcover bestsellers sell between 10,000 and 100,000 copies in their first year for both fiction and nonfiction, and the higher they get in the list and the longer they stay on, the more copies they sell in a year In fiction, The Lost Symbol by Dan Brown takes the lead with a record-breaking 3 million copies sold in a year, followed by the highly anticipated The Girl Who Kicked the Hornet's Nest and Go Set a Watchmen, selling over 1.6 Million copies each. These two books were anticipated for different reasons, the former being the third book in an ongoing successful series and the latter being the long-awaited second book of Harper Lee, published 55 years after her classic To Kill A Mockingbird (1960). There are also several books that even though made it to the NYTBL with high first week sales, could not sustain those numbers over the course of a year, such as The Famous And The Dead, being the conclusion to T. Jefferson Parker's Charlie Hood series (6 books) and Hush Now, Don't You Cry, the 11th novel in the Molly Murphy Mysteries by Rhys Bowen. In nonfiction, the autobiography of former president George W. Bush, Decision Points sold the most copies in a single year, followed by the biography Steve Jobs by journalist Walter Isaacson, the basis for the 2015 movie of the same title. 
Yet, in the other extreme The Slippery Year: A Meditation on Happily Ever After, a memoir by Melanie Gideon sold less than 5000 copies in its first year. Occasionally nonfiction authors explore their themes throughout several books, resembling serialized novels of fiction, also resulting in high sales. A good example is Killing Patton: The Strange Death of World War II's Most Audacious General, by Bill O'Reilly and Martin Dugard about the final year of World War II and the death of General George Patton, which had sustained sales following other highly successful books with the same scheme by the same authors, Killing Kennedy, Killing Lincoln and Killing Jesus. To understand the dynamics of sustained sales, we looked into the relationship between the number of copies sold within a year and best ranking the book achieved on the list (Figs. 3(C) and (D)) and the length of stay of a book on the list (Figs. 3(E) and (F)). Obviously, the more copies a book sells in a single week, the better is its ranking in the bestseller list. For most books we also observe a direct correlation between the best ranking and the number of copies sold within a year. The most remarkable are The Lost Symbol and Decision Points, books ranked number one on the NYTBL in their category (fiction and nonfiction respectively) selling more than a million copies in their first year after publication. Yet we do note some outliers, selling significantly more (A Dog's Purpose in fiction and Duck Commanders and Grain Brain in nonfiction) or significantly less (Blackberry Pie Murder and Hush Now, Don't You Cry in fiction and News For All The People in nonfiction) copies in a year than would be expected from their best positions in the NYTBL. In case of the Duck Commanders, a behind-the-scenes account of the family featured in the reality TV show Duck Dynasty, the sustained sales were supported by the TV show's continuous popularity, prompting several books from different members of the same family with various degrees of success. Grain Brain by neurologist David Perlmutter is also an interesting case showcasing the seasonality of the bestseller lists. The book first came out in September 2013 and hit the NYTBL soon after, reaching its highest sales in December when book sales are typically the highest (see the following section). Hence, despite the impressive sales numbers, in those weeks it did not qualify for better rankings in the NYTBL, even dropping entirely from the list. Not surprisingly, we also observe a direct correlation between the length of stay on the bestseller list and the number of copies sold in a year (Figs. 3(E) and (F)). The only two exceptional cases are in the nonfiction category. As already mentioned, Grain Brain had sustained high sales but a short stay on the NYTBL due to the seasonality of the list. In case of Sarah Palin's Going Rogue, the book sold a record-breaking 500,000 copies in a single week, but the sale numbers dropped steadily afterwards. Yet the sales accumulated in the 16 weeks the book was on the list were sufficiently high to make the book an outlier among bestsellers. In summary, the number of copies a NYTBL book sells in its first year spans over two orders of magnitude. Overall, the best rank a book achieves on the list is a good predictor of its yearly sales: the better the rank, the higher the total sales. The length of stay of a book on the list is another good indicator of the number of copies sold, as longer stays mean larger number of copies sold every week. 
We read more fiction than nonfiction and buy books during holidays Since 2008, books on the hardcover NYTBL have sold anywhere between a thousand to a million copies in a single week. We show the distribution of the weekly sales that have gotten these books to the list in Fig. 4(A) and (B). Of course a book ranked first must sell far more copies than a book ranked 20 or 35. Accordingly, the high end outliers in Fig. 4(A) and (B) were all ranked number one for several weeks. The low outliers are included on the list either by mistake, or BookScan has a different record of their sales from New York Times. They are all books with much higher sales on other weeks, but on those particular weeks, their sale numbers recorded by BookScan were much lower than what is typically needed to hit the NYTBL. Aside from the extremes and differences between ranks, there may be several causes for the general high variability, as we discuss next. Making the NYTBL. The distribution of the number of copies sold in a week needed to be featured in NYTBL for (A) fiction and (B) nonfiction. (C) The number of copies bestselling books have sold in a single week in mid August (week number 33) that got them to the list over different years. (D) Explanation of the box plot technique used in (C). The median number of copies the bestselling books sold to hit the NYTBL during different weeks throughout a year, for (E) fiction and (F) nonfiction. In general, books need to sell between 1000 and 100,000 copies in a single week to hit the NYTBL, a range that has been fairly stable over the years. Fiction book sales are higher in summer and all sale numbers are significantly elevated from December to early January. Fiction sells more than nonfiction throughout the year but the gap is smallest in early January First, we looked into the sales needed to make it to the bestseller list since 2008. In Fig. 4(C) we consider week 33 (August), showing the number of copies each book on the NYTBL sold that week each year. We see that fiction books sell more copies than nonfiction books, in other words, fewer copies are needed to qualify a book for the nonfiction list than the fiction one. Also the stability of the year-by-year sales pattern is remarkable: today a book needs to sell between a 1000 to 10,000 copies to make it to the bestseller list, a range that stayed roughly the same during the past eight years. Next, we looked into the seasonal fluctuations in the sales patterns during a year. To explore how these fluctuations affect the bestseller list, we measured the median number of sales that got the books on the list at different times of the year (Figs. 4(E) and (F)). The dots correspond to the median sales of all books on the NYTBL at any given week each year, the line indicating the median over all years. We focus on the median instead of the average given the high variability amplified by record-breaking sales highlighted in Figs. 4(A) and (B). Overall, we find that median sales mostly fluctuate between 4000–8000 in fiction and 2000–6000 in nonfiction. Yet, there is a significant increase in sales late-December during holiday shopping, a pattern persisting into early January, likely due to delays in sales reporting. In early January, the lowest median sales over the years is close to 15,000 copies a week, a number higher than the highest median sales of any other time of the year except late December. 
For fiction, a similar but less pronounced peak is observed during the summer months with median sales surpassing 10,000, likely due to book purchases in preparation for the summer vacation. In nonfiction, there is no such summer peak. During these periods of elevated sales a book needs to sell more copies to make it to the New York Times bestseller list than during other months. We also note that in general, fiction books sell more copies than nonfiction, a gap which is largest during summer and decreases considerably during the holiday season, where the sales of both fiction and nonfiction are significantly elevated. In summary, we find that books featured in the NYTBL over the years hit the list by selling anywhere between thousands and tens of thousands copies, a range that has been stable since 2008. Seasonal fluctuations within a year matter much more, books needing higher sales during the holidays to stand out, even though more books are purchased in that period. Additionally, a book on the fiction list needs to sell more copies on average compared to the nonfiction bestseller list, due to the fact that on average, fiction sales are higher than nonfiction sales. It is sufficient for an author to have written a single book that appeared on the NYTBL for a single week to be labeled a 'bestselling author', a label that sticks for life. Yet, not all bestselling authors are alike. There are those with a single high selling book in their career, like Kathryn Stockett (The Help), and there are authors with over fifty books with varying sale numbers under their belt, like James Patterson or Stephen King. Additionally, some authors build their readership over time, achieving bestseller status with their later work while others enter the NYTBL with their first book. The success of a book is deeply linked to the previous success and the name recognition of its author, prompting us to explore in this section the dynamics of success for authors, quantifying the differences between authors in terms of productivity, repeat success and gender among authors within different bestseller categories. Fiction authors have more repeat success than nonfiction authors To understand the patterns of productivity among bestselling authors, we collected all unique titles published by them in hardcover since 2008, regardless of whether they made the bestseller list or not. After eliminating new editions of older titles, we ended up with 5396 books (2468 of them bestsellers) by 854 authors with bestsellers in fiction and 3968 books (2025 of them bestsellers) by 1670 authors with bestsellers in nonfiction categories. These numbers already indicate that fiction authors are more prolific than nonfiction ones, with half the number of fiction authors having written 1.5 times more books than nonfiction authors since 2008. As indicated by Figs. 5(A) and (B), only 14% of the fiction authors have written only one book since 2008. The vast majority of them have at least two books, but having close to 10 books is also common. Some authors are significantly more productive than others, with James Patterson being an outlier: he published 94 hardcovers since 2008, often with coauthors and in a variety of genres such as mystery, suspense, romance and even nonfiction. In Fig. 5(C) we take a closer look at his productivity and sales patterns, denoting each hardcover with a line located at its publishing date, its height indicating the number of copies sold in its first year. 
His most successful book was I, Alex Cross, the 16th novel in his Alex Cross series, published in 2009, which stayed on the NYTBL for more than 20 weeks. With stars indicating bestseller status, we see that more than half (51) of his books were bestsellers. Fiction authors who also write graphic novels, like Neil Gaiman (48 books) and Warren Ellis (42), are also productive due to the usually high volume of publications in the graphic novel category. Other exceptionally prolific authors are mystery, thriller and fantasy author Ted Dekker (42), romance author Danielle Steel (37) and thriller author Clive Cussler (34).

Productivity and repeat success. The number of unique hardcovers published by (A) fiction and (B) nonfiction authors since 2008, including their bestsellers. Books published by (C) James Patterson and (D) Timothy J. Keller, the line height indicating one-year sales and stars marking bestseller status. The number of (E) fiction and (F) nonfiction bestsellers per author. While most authors in both fiction and nonfiction have only one bestseller, repeat success is more common for fiction authors than for nonfiction ones. Additionally, having one or two hardcovers published since 2008 is the norm for most authors, yet bestselling fiction authors publish significantly more compared to nonfiction authors

In nonfiction, high productivity is rare (Fig. 5(B)), with nearly half (43%) of the authors having published only one hardcover since 2008. The most prolific author in nonfiction is pastor and theologian Timothy Keller, having written 21 books on spiritual topics. In Fig. 5(D) we show his career since 2008. He had 4 bestsellers, starting with the 2008 book The Reason For God, which also had his most successful first-year sales. Even though several more of his books sold quite well over the course of a year, their individual weekly sales were not sufficiently high for them to make the NYTBL in any particular week. Economist Joseph E. Stiglitz is the next most productive nonfiction author with 18 books, including several textbooks. The editors of Life Magazine have curated 18 books since 2008, primarily focusing on events and people of public interest, like the sinking of the Titanic or the life of Barack Obama. Five of them became bestsellers in the nonfiction category. Next, we look at repeat success by only considering bestsellers (Figs. 5(E) and (F)). In fiction, we find that the 2468 hardcovers on the NYTBL are written by only 854 authors, indicating that the list is dominated by a small number of authors with multiple bestsellers. We have already seen that James Patterson takes the lead in both repeat success and productivity. Similarly, Clive Cussler with 31 bestsellers and Danielle Steel with 25 bestsellers are also especially successful, in addition to being rather prolific. In general, bestselling authors with multiple books will often have multiple bestsellers. This is partly because fiction authors commonly write novels in serialized form, and once a series builds up an audience, the subsequent books in the series also receive substantial attention. On the nonfiction NYTBL, repeated authorship is less common: the 2025 nonfiction books are written by 1670 authors, indicating fewer recurrent authors. The distribution in Fig. 5(F) indicates that an overwhelming majority (85%) of the bestselling nonfiction authors have only one bestseller since 2008.
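The repeat-success tallies behind Figs. 5(E) and (F) reduce to a simple count of bestsellers per author. A minimal sketch of that count is shown below; the DataFrame, its column names and the toy rows are illustrative assumptions, not the paper's actual data pipeline.

```python
# Minimal sketch: bestsellers per author, assuming a hypothetical DataFrame
# with one row per bestselling hardcover. Column names and rows are illustrative.
import pandas as pd

bestsellers = pd.DataFrame({
    "author": ["A. Writer", "A. Writer", "B. Novelist", "C. Memoirist"],
    "list":   ["fiction",   "fiction",   "fiction",     "nonfiction"],
})

# Number of bestsellers per author on each list (the quantity in Figs. 5(E)-(F)).
per_author = (bestsellers
              .groupby(["list", "author"])
              .size()
              .rename("n_bestsellers"))

# Share of authors with exactly one bestseller, per list.
one_hit_share = per_author.groupby(level="list").apply(lambda s: (s == 1).mean())

print(per_author)
print(one_hit_share)
```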
Interestingly, in nonfiction, the top 3 authors with the most bestsellers, Bill O'Reilly, Dick Morris and Glenn Beck, are all political commentators, writing mostly about current public affairs and government issues in the U.S. Yet, as we mentioned earlier, Bill O'Reilly has also written several bestselling books with historical themes, speculating about the deaths of prominent historic figures, with co-author Martin Dugard. In summary, fiction authors are likely to write multiple books in quick succession and often in serialized format, and they often have multiple bestsellers. In nonfiction, however, the norm is one bestseller per author, which is typically the only hardcover they have published since 2008. This is partly due to the fact that most nonfiction bestsellers are memoirs, books written by or about famous individuals, without repeat authorship. Yet higher productivity and repeat success do happen in nonfiction as well, albeit less frequently.

Many bestsellers are debut books but late success is also possible

To understand the patterns of writing a bestselling book, we need to explore the careers of individual authors. In this section we ask if it is more common for authors to reach bestseller status with their first book, or if bestselling status is something built up through the increasing popularity of multiple books. To address this question, we focused on bestselling authors who started publishing in 2008 or later, a cohort of 145 authors in fiction and 591 authors in nonfiction. In fiction, the author with the most hardcovers is Taylor Anderson, with 10 books in The Destroyermen series (Fig. 6(A)). The first three books in the series sold relatively poorly, yet we observe increasing sales with each new book. It was the 4th book that reached a mass audience, doubling the weekly sales at its release, yet still not enough to land on the bestseller list. The fifth book of the series finally became a bestseller, if only for a single week. Yet, the added visibility propelled the 6th and 7th books in the series to the NYTBL. His following 3 books, although selling strongly, did not make the list. Another fiction author achieving success with later books is the author of the young-adult steampunk Finishing School series, Gail Carriger (Fig. 6(C)). She had already had moderate success in paperback form with her adult-oriented Parasol Protectorate series, yet none of her four hardcover-format Finishing School novels (shown in blue, green, red and orange) made the NYTBL. Her 2015 hardcover Prudence (purple) was the book that finally landed her on the list.

When do authors succeed? (A) The sales history of The Destroyermen series author Taylor Anderson, showing the weekly hardcover sales for each book. Curves marked with a star on top indicate bestsellers. Same for authors (B) Nick Vujicic, (C) Gail Carriger and (D) John Gerzema. (E) Heatmap showing the number of hardcovers a fiction author has published in their career vs the order i of the one that became their first bestseller, (F) same for nonfiction authors. Only authors who started their career in 2008 or later are considered, and the boxes the authors shown in (A)–(D) belong to are marked accordingly.
We see that while most bestselling authors first get on the NYTBL with their first book, later success is not uncommon

In nonfiction, Christian evangelist and motivational speaker Nick Vujicic, born with tetra-amelia syndrome (a rare disorder characterized by the absence of arms and legs), is one of the more productive authors who started their writing career in 2010 (Fig. 6(B)). His first book, Life Without Limits, was an international success, translated into more than 20 languages. Yet, it did not make the NYTBL. His second book, Unstoppable, got there two years later, helped by the buzz created by his first book and possibly by his motivational speaking engagements. He went on to write three more books, none of them matching the success of his first two. Another nonfiction author, columnist and businessman John Gerzema, who writes about the impact of leadership ethics, had his success grow steadily with each subsequent book, finally getting the third one, The Athena Doctrine, onto the NYTBL (Fig. 6(D)). To understand the typical order of a bestseller within an author's career, we constructed heatmaps showing the distribution of the place (index) of an author's first bestseller among their published hardcovers with respect to how many hardcovers they published in total (Figs. 6(E) and (F)). We see that many of the bestselling authors who debuted in or after 2008 (corresponding to the sum of the first row in Figs. 6(E) and (F), 120 for fiction and 531 for nonfiction) first got on the NYTBL with their first book. This is partly due to the fact that debut novels are over-represented in this selection, since we did not consider authors with a previous publishing history even if their first bestseller was published after 2008. Consequently, we observe that many of the considered bestselling authors have had only one book so far, overwhelmingly so for nonfiction authors (65%). Yet, late success like Taylor Anderson's is not unheard of. Many 2-book authors got onto the NYTBL with their second book (8 in fiction and 44 in nonfiction), and even later success is achieved by several (12 in fiction and 5 in nonfiction). Additionally, there are 2 fiction authors like Gail Carriger with 5 books whose 4th book was their first bestseller, and 3 nonfiction authors like John Gerzema who built up to bestselling status with the first two of their three books. In general, we find that most bestselling authors who started their careers in or after 2008 were successful with their first book, yet getting onto the NYTBL with a second or later book is possible as well.

Previous success is a good predictor of the next book's success

As we established earlier, repeat authorship is common on the bestseller list, particularly in fiction. We have also seen that some authors start with a large readership while others build their readership gradually, and finally some lose the public's interest after a few successful books. To quantify how the sales of a previous hardcover affect the sales of the subsequent book, in Figs. 7(A) and (B) we show the one-year sales of hardcover number i vs. the one-year sales of the previous hardcover, \({i-1}\). Since we are showing only unique hardcovers from bestselling authors after 2008, it is not surprising that most books are concentrated in the top right of the plot, indicating high sales, a pattern present in both fiction and nonfiction. The fact that most books fall very close to the 45 degree line indicates that books that sell well are likely to be followed by books with comparably strong sales.
Consequently, we find that bestselling authors tend to sustain their success. Yet, even bestselling authors can publish books that sell poorly, as indicated by the outliers marked in Figs. 7(A) and (B), which show no clear relation between a book's and its predecessor's sales.

Previous sales matter. The relation between the sales of the previous hardcover \({i-1}\) and the sales of the next one, i, by the same author, for (A) fiction and (B) nonfiction. Bestselling authors tend to sustain their success with new books, as most dots lie close to the 45 degree line. The outlier books, discussed in the main text, are written by (1) Stieg Larsson, (2) Annie Barrows, (3) Melissa de la Cruz and (4) John Jackson Miller for (A) and (1) Kristin Chenoweth, (2) Kay Robertson, (3) Eric Blehm and (4) Lady Colin Campbell for (B). They are either books written in a different genre from the previous bestselling books of the author, or reissues of old books with a different title

In general, the outliers are books in genres other than the one the author had their bestseller in. Remarkable examples are The Expo Files, the only nonfiction book by the Millennium Trilogy author Stieg Larsson (marked 1 in Fig. 7(A)), children's books by The Truth According to Us author Annie Barrows (2) or by John Jackson Miller (4), author of several Star Wars novels. We see the same trends in Fig. 7(B) for nonfiction, with a song book by Kristin Chenoweth (1), whose memoir was a bestseller, children's books by Kay Robertson (2) of Duck Dynasty fame or by Eric Blehm (3), who had multiple nonfiction bestsellers about life in the military, or an autobiography from Lady Colin Campbell (4), who normally writes about the British Royal Family. Finally, reissues of old books with new titles do not sell well regardless of an otherwise successful author, as shown by the Melissa de la Cruz book (3 in Fig. 7(A)) Popularity Takeover, originally published as Lip Gloss Jungle 6 years prior. In short, bestselling authors are likely to sustain their success with subsequent books unless they choose to write books in significantly different genres, losing their reader base.

Female authors dominate romance while men dominate nonfiction

Books are cultural products and, as such, their success is highly interconnected with the cultural makeup of our society. Since gender and gender role perceptions are an essential part of this makeup, we next explore whether gender plays a role in book success. We start by dividing the books into four categories: books written by male authors (2182 books), by female authors (1518), books with multiple authors (multi, 720) and finally books for which we could not identify the author's gender (unknown, 75). Multiple authorship means either that several authors collaborated on a book (most books by celebrities fall into that category, being co-written with professional writers), or that the book is a translation into English, in which case the translator is often listed as a co-author. In rare cases a book written under a pseudonym may have two names listed once the pseudonym is unveiled, like The Cuckoo's Calling written by J.K. Rowling under the pen name Robert Galbraith. To obtain gender information from author names, we used a database of first names separating names into groups of 'male', 'female' and 'neutral', and verified a selection using author information available on Wikipedia and GoodReads.
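The gender-tagging step can be sketched as a simple first-name lookup; the tiny dictionary below is a hypothetical stand-in for the name database used in the paper, and ambiguous or unknown names would still require the kind of manual verification described above.

```python
# Minimal sketch of the author gender tagging, assuming a hypothetical
# first-name lookup table; the real analysis used a name database plus
# manual checks against Wikipedia and GoodReads.
FIRST_NAME_GENDER = {      # illustrative entries only
    "danielle": "female",
    "james": "male",
    "robin": "neutral",    # 'neutral' names need manual verification
}

def classify_authorship(authors):
    """Return one of 'male', 'female', 'multi', 'unknown' for a book."""
    if len(authors) > 1:                       # co-written or translated titles
        return "multi"
    first_name = authors[0].split()[0].lower()
    gender = FIRST_NAME_GENDER.get(first_name, "unknown")
    return gender if gender in ("male", "female") else "unknown"

print(classify_authorship(["Danielle Steel"]))                    # female
print(classify_authorship(["James Patterson", "Maxine Paetro"]))  # multi
```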
In fiction, we find that bestselling books are mostly written by a single author, and the numbers of bestsellers written by female authors and male authors are indistinguishable (Fig. 8(A)). In nonfiction, however, the bestseller list is dominated by male authors (Fig. 8(B)). As we do not know the ratio of female to male authors publishing nonfiction books that do not make the NYTBL, we cannot tell whether readers prefer nonfiction written by men, or simply more nonfiction is written by men. If we compare the one-year sales of the bestsellers written by the different groups, we do not see significant gender differences, as indicated by the median values shown as stars in Figs. 8(C) and (D). In other words, in both fiction and nonfiction, the sales patterns of female and male authors are largely indistinguishable.

Bestsellers and gender. (A) The breakdown of the New York Times hardcover bestseller list by the author's gender. Fiction bestsellers are equally likely to be written by female or male authors, while most nonfiction bestsellers are written by male authors. (B) One-year sales of the fiction bestsellers written by different gender groups, (C) the same for nonfiction. A breakdown of the bestselling book categories by gender, fiction (D) and nonfiction (E). Median yearly sales for books written by different groups are indistinguishable. In fiction, the romance genre is dominated by female authors, while male authors are more prominent in suspense/thrillers, and in nonfiction the Biography/Memoir genre has the most female presence

To observe whether authors of different genders prefer different genres, we broke down the bestselling book categories by gender (Fig. 8(D)). We find that female authors are better represented in Literary (General) Fiction than men and dominate in Romance. In contrast, Thrillers, Science Fiction and Action/Adventure are dominated by male authors. Both groups are represented equally in Mystery novels. In nonfiction, all categories are dominated by men (Fig. 8(E)), but for Memoirs, the gender difference is smaller. In summary, we find that in fiction, equal numbers of bestsellers are written by female and male authors, and bestsellers with multiple authors are rare. In contrast, male authors dominate the nonfiction NYTBL, and the numbers of female- and multi-authored bestsellers are similar to each other.

The dynamics of book sales

In this section we model the temporal changes in book sales, allowing us to capture and predict the observed sales patterns.

Bestsellers reach their sales peak in less than ten weeks

In Sect. 3 we argued that the first-year sales are the most important for a hardcover. Indeed, for the 2035 fiction bestsellers for which we have at least two years of sales data, we find that 96% of the sales took place in the first year. Similarly, 94% of the sales of the 1699 nonfiction bestsellers also happen in the first year. To systematically explore the dynamics of the sales patterns, we start by showing the weekly sales of all bestselling fiction (Fig. 9(A)) and nonfiction (Fig. 9(B)) books. The thick line corresponds to the median sales values. The peak sales values vary significantly from book to book, some books selling over 100,000 copies at their peak while others reach only a few hundred. We therefore use a logarithmic scale to display all sales curves. These plots already indicate that for both fiction and nonfiction, peak sales occur within the first ten weeks after a book's release.

Weekly sales.
The weekly sales numbers of all bestselling (A) fiction and (B) nonfiction books, thick lines showing the median sales values for each week. The shaded area starts after the first year; only 4% of fiction and 6% of nonfiction sales happen there. The number of books reaching their peak sales at a given week after publication, (C) for fiction and (D) for nonfiction. While the sales numbers vary significantly from book to book and from week to week, bestseller sales in both fiction and nonfiction increase sharply at first, reach their peak in the first ten weeks and drop slowly afterwards. The sales curves of selected outliers reaching their peak sales later than usual are shown in (E) for fiction and (F) for nonfiction

We find that almost all books, regardless of category, peak in the first 15 weeks after publication (Figs. 9(C) and (D)). Furthermore, most fiction books have their peaks strictly in the first 2–6 weeks; in contrast, for nonfiction, even though peaks at weeks 2–5 are common, the peak can happen any time during the first 15 weeks. For example, both The Lost Symbol by Dan Brown and Go Set A Watchman by Harper Lee peaked in their 3rd week, and so did Sarah Palin's Going Rogue. George W. Bush's Decision Points had its peak sales even earlier, in the second week after publication. Still, there are some outliers that peak much later, towards the end of their first, or well into their second year. These exceptionally late peaks are typically triggered by exogenous events such as winning awards, being adapted for a movie or, in rare cases, having a prominent public figure's endorsement. We showcase several such examples in Figs. 9(E) and (F). All the Light We Cannot See, shown in purple in Fig. 9(E), is a novel written by Anthony Doerr, published on May 6, 2014. The novel had an initial peak and subsequent decline; later that year it was shortlisted for the National Book Award, and its sales numbers tripled the week after it lost the award to Redeployment by Phil Klay. The novel later won both the 2015 Pulitzer Prize for Fiction and the 2015 Andrew Carnegie Medal for Excellence in Fiction, causing further peaks in its sales numbers, but the most drastic effect was seen at the end of 2015, when people overwhelmingly chose this multiple-award-winning book during their holiday shopping. Another example of awards causing late peaks is the nonfiction book The Immortal Life Of Henrietta Lacks (red in Fig. 9(F)) by Rebecca Skloot, which won both the American Association for the Advancement of Science's Young Adult Science Book award and the Wellcome Trust Book Prize. The Help (red in Fig. 9(E)) by Kathryn Stockett, on the other hand, was a 'sleeper hit' which gradually increased in sales until a movie adaptation was announced. The announcement, coinciding with the holiday shopping season, propelled the book's sales to more than 60,000 a week. Another peak in sales happened when the first pictures of the movie's cast appeared, and the following holiday shopping season was also beneficial for the book. Finally, Humans of New York author Brandon Stanton (purple in Fig. 9(F)) and his well-known Facebook page of the same title as the book were featured on CNN shortly after the book's publication, causing a second peak in sales. But the book's biggest success came when Stanton interviewed the then U.S. President Barack Obama in the Oval Office in January of 2015.
These exogenous events aside, the data indicate that the first few weeks of a book are crucial: this is when the book captures the interest of its readership. This is also the time when publishers invest in a book's advertising, and the most likely period for a book to be featured at the front of bookstores and considered for reviews in various media. As such, a book's sales tend to be highest in that period.

Sales follow a universal pattern

As can be seen in Figs. 9(A) and (B), most books follow a similar sales pattern: the sales increase very fast, reach their peak in the first ten weeks and drop dramatically afterwards. This similarity suggests the existence of a universal sales pattern, i.e. the possibility that the properties of all sales curves are the same, independent of the details and degree of complexity of each individual book's sales narrative. This hypothesis allows us to develop a simple yet general model, helping us identify the mechanisms that drive the sales of books. In general, three fundamental mechanisms contribute to the observed sales patterns: (i) Each book carries a different value for its audience, stemming from the author's name recognition, the writing style, the marketing efforts by the publisher and even the quality of the book cover. Some books are anticipated and well-liked, resulting in high sales; some will be unexpected, lacking familiar elements and hard to get into, resulting in lower sales. To account for these inherent differences, we define a parameter called the book's fitness, \(\eta_{i}\), that captures the book's ability to respond to the taste of a wide readership. (ii) A book that sells well will attract even more sales, an effect called preferential attachment [24, 25]. Preferential attachment in this context is likely rooted in collective effects, like recommendations from friends, critics, celebrities, online reviews and bookstores that display a sought-after book in visible spots. Mathematically it implies that the likelihood of purchasing a book depends on its up-to-date sales, \(S_{i}^{t}\). (iii) Finally, even the best books lose their novelty and fade from the public eye some time after their publication. Barring exogenous events, once the book has reached its target audience, fewer and fewer individuals are interested in purchasing it. To model this gradual loss of interest, we add an aging term, using a form adopted from the decay of citations in research papers [26] $$ A_{i}(t) = \frac{1}{\sqrt{2\pi }\sigma_{i} t}\exp \biggl[- \frac{(\ln t - \mu_{i})^{2}}{2\sigma_{i}^{2}}\biggr], $$ where \(\mu_{i}\) is the book's immediacy, determined by the time at which sales reach their peak, and \(\sigma_{i}\) is the decay rate capturing the book's longevity. In the case of books, a lognormal aging term (1) is motivated by the fact that the time of purchase t can be approximated as a multiplicative process, resulting from independent random factors contributing to a reader's decision to buy a book. Such random multiplicative processes are known to lead to a lognormal distribution [27–31]. Combining these mechanisms, we can write the probability \(\Pi_{i}(t)\) of a book i being purchased at a time t after publication as [26] $$ \Pi_{i}(t) \sim \eta_{i} S_{i}^{t} A_{i}(t), $$ which depends on (i) the book's fitness \(\eta_{i}\), (ii) the total number of sales until t, \(S_{i}^{t}\) (preferential attachment), and (iii) the aging factor (1).
Combining (i)–(iii), we find that the total sales of book i at time t after publication follow (a detailed derivation is given in Section S2.2 of the supplementary materials of [26]) $$ S_{i}^{t} = m\bigl[e^{\lambda_{i} \Phi (\frac{\ln t - \mu_{i}}{\sigma_{i}})} - 1\bigr], $$ where $$ \Phi (x) = (2\pi )^{-1/2} \int_{-\infty }^{x} e^{-y^{2}/2}\,dy $$ is the cumulative normal distribution, related to the error function as \(\Phi (x)=1/2 \operatorname {erfc}(-x/ \sqrt{2})\), where erfc is the complementary error function given by \(1-\operatorname {erf}(x)\), and \(\lambda_{i}\) is the relative fitness, proportional to \(\eta_{i}\). To demonstrate how the model (2)–(4) can reflect actual sales, in Fig. 10(B) we show the sales pattern of The Appeal by John Grisham, which sold over a quarter of a million copies in a single week after publication. We obtained the parameters \(\lambda = 10.37\), \(\mu = 2.03\) and \(\sigma = 1.12\) by fitting Eq. (3) to the book's cumulative sales, the fit being shown in Fig. 10(C), closely trailing the real sales pattern (\(R^{2}=0.99\)). In fact, the model (3) can handily explain a wide range of sales patterns by varying only the three parameters μ, σ and λ (Fig. 10(E)).

Modeling sales patterns. (A) Weekly sales of 200 randomly selected bestsellers, with colors indicating best rank on the NYTBL. (B) The weekly sales numbers of the bestselling hardcover The Appeal by John Grisham as reported by NPD BookScan. (C) The fit to the cumulative sales of (B), empty circles showing the measured sales curve and the green line the model with parameters \(\lambda = 10.37\), \(\mu = 2.03\) and \(\sigma = 1.12\). The fit accurately reflects the data, as evidenced by the high \(R^{2}=0.99\) value. The only exception is the very first weeks, where the data are often unreliable due to discrepancies between the reported publication date and the book's physical availability in bookstores. (D) Rescaled sales curve for The Appeal. (E) How weekly sales curves change for different λ, σ and μ values. By tuning these, the many different types of sales histories shown in (A) can be accounted for. Rescaled sales curves of all hardcover bestsellers, (F) fiction and (G) nonfiction. Most book sales curves closely follow the single formula given in Eq. (5), indicating that the model captures the patterns correctly

A key prediction of model (3) is that, when expressed in rescaled variables, all sales curves should follow the same universal curve. These rescaled variables are \(\tilde{t} \equiv (\ln t - \mu_{i})/\sigma_{i}\) and \(\tilde{S} \equiv \ln (1+S_{i}^{t}/m)/\lambda_{i}\), and by substituting them into (3) we obtain $$ \tilde{S} = \Phi (\tilde{t}). $$ As an example, we show the rescaled curve for The Appeal in Fig. 10(D). The rescaled time \({\tilde{t}=1}\) roughly corresponds to the time of the peak sales, and for this book there were almost no sales before that point. If the model fits the sales pattern for all books, we expect the rescaled curves derived for all books to collapse onto a single curve. We therefore measured the \(\mu_{i}\), \(\sigma_{i}\) and \(\lambda_{i}\) values for all books in the New York Times bestseller data using \(m = 30\) and least-squares fitting on the available sales range for each book. We then rescaled the sales curve of each book accordingly, the rescaled sales curves being shown in Fig. 10(F) for fiction and (G) for nonfiction.
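As a concrete illustration of this fitting and rescaling procedure, the sketch below fits Eq. (3) to a made-up weekly sales series and then applies the rescaled variables; the sales numbers and the starting guesses are hypothetical, while the paper's actual fits were run on the BookScan data for each title.

```python
# Minimal sketch: fit Eq. (3) to a (hypothetical) cumulative sales curve and
# rescale it; m = 30 follows the value used for all books in the paper.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

M = 30

def cumulative_sales(t, lam, mu, sigma):
    """Eq. (3): cumulative sales after t weeks, given (lambda, mu, sigma)."""
    return M * (np.exp(lam * norm.cdf((np.log(t) - mu) / sigma)) - 1.0)

# Illustrative weekly sales for one book (weeks 1, 2, ...), not real data.
weekly = np.array([5000, 60000, 250000, 120000, 60000,
                   30000, 15000, 8000, 4000, 2000], dtype=float)
t = np.arange(1, len(weekly) + 1)
cum = np.cumsum(weekly)

(lam, mu, sigma), _ = curve_fit(cumulative_sales, t, cum, p0=(10.0, 2.0, 1.0))

# Rescaled variables; Eq. (5) says every book should collapse onto Phi(t_tilde).
t_tilde = (np.log(t) - mu) / sigma
S_tilde = np.log(1.0 + cum / M) / lam

print(f"lambda={lam:.2f}, mu={mu:.2f}, sigma={sigma:.2f}")
print("max deviation from Phi:", np.max(np.abs(S_tilde - norm.cdf(t_tilde))))
```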
The fact that all curves collapse onto a single one indicates that the model correctly captures the sales pattern of most books. One limitation of the proposed model is that it cannot account for exogenous events like awards, movie adaptations or mentions by prominent venues or celebrities. These events may land an otherwise unnoticed book on the New York Times bestseller list years after its original publication, as we have seen in Figs. 9(C)–(F). Yet, these are exceedingly rare cases, and most bestsellers follow a more typical sales pattern, one that is well accounted for by our model. Taken together, we find that by using the fundamental mechanisms of fitness, preferential attachment and aging, we can explain and accurately model the sales curves of all bestsellers, regardless of genre. We can do this by relying on our observation that all books follow a well-defined, regular path in selling copies, including the timing of the peak sales, exceptions being rare.

Predicting future sales

In the previous section we have seen that only three parameters are needed to describe the sales history of any bestseller: the fitness λ, the immediacy μ and the decay rate σ. In Figs. 11(A)–(C) we show the probability distributions of each parameter for all bestsellers. In (A) we see that the fitness distribution \(P(\lambda )\) is very similar for fiction and nonfiction bestsellers. This is to be expected, since these are all bestselling books and therefore all have high fitness. Yet, the variation of relative fitness is slightly higher for fiction than for nonfiction, indicating a broader range. The observation that fiction bestsellers show more variability than nonfiction bestsellers is consistent with earlier findings about the one-year (Fig. 3) and weekly (Fig. 4) sales. Additionally, the λ distribution peaks at a slightly higher value for fiction than for nonfiction, indicating a higher relative fitness on average. This is because fiction books sell more copies than nonfiction books on average, as discussed in Sect. 3.

Predicting sales. Distribution of the parameters (A) fitness λ, (B) immediacy μ and (C) decay rate σ for all bestsellers, with fiction shown in green and nonfiction in blue. The dashed lines indicate the λ, μ and σ values of The Appeal, indicating a high fitness resulting in higher total sales, an average immediacy indicating a typical peak sales time, and a slightly lower than typical decay rate, meaning a relatively slow drop in sales after the peak. Predicted vs measured total sales calculated from Eq. (6), using the first (D) 25 and (E) 50 weeks to obtain λ. The colors represent the peak sales number for each book. The first 6 months offer an accurate picture of the full total sales, and the prediction accuracy increases as longer times are used to calculate the fit. Additionally, the total sales of books with higher peak sales are predicted better than those of books with lower peaks

The relative fitness can single-handedly predict how many copies a book will sell during its lifetime. Taking \(t \rightarrow \infty \) in Eq. (3), we obtain [26] $$ S_{i}^{\infty } = m\bigl(e^{\lambda_{i}} - 1 \bigr), $$ predicting that the total number of sales of a book in its lifetime depends only on a single parameter, the relative fitness λ.
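A quick numerical illustration of Eq. (6): with m = 30 (as in the fits above) and the λ reported for The Appeal, the predicted lifetime total follows directly; the resulting number is the model's prediction under those parameters, not a reported sales figure.

```python
# Eq. (6): predicted lifetime sales from the relative fitness lambda (m = 30).
import numpy as np

def lifetime_sales(lam, m=30):
    return m * (np.exp(lam) - 1.0)

# Using the lambda fitted for The Appeal (lambda = 10.37) as an example.
print(f"{lifetime_sales(10.37):,.0f} copies")  # roughly 9.6e5
```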
Consequently, if the model captures the data well, we expect a good match between the predicted and measured total sales, even when using data from a time period shorter than the book's lifetime to obtain the fitness parameter, allowing us to predict the total sales using (6). Results for different choices of the time period used to calculate λ are shown in Fig. 11(D) for the first 25 weeks and (E) for the first 50 weeks after book release. We find that a fit derived from the first 25 weeks results in quite accurate predictions for the total sales of most books, indicating that our model can accurately predict, only months after publication, how many copies a book will sell during its lifetime. As the number of weeks used for the fit increases, so does the accuracy of the prediction. Additionally, the total sales of books with higher sales peaks are predicted more accurately, as indicated by the relative closeness of the red and orange dots to the 45 degree line, as opposed to the green and blue dots, which are generally more spread out. Figure 11(B) shows the probability distribution for the immediacy parameter \(P(\mu )\), indicating that both fiction and nonfiction books have similar immediacy distributions, i.e. all bestsellers reach their sales peak at similar times. This result is consistent with Fig. 9, including the observation that \(P(\mu )\) peaks at a slightly higher value for nonfiction than fiction, pointing to later peak sales times for nonfiction compared to fiction bestsellers. Finally, the probability distribution for the decay rate values \(P(\sigma )\) is shown in Fig. 11(C). The distributions for both fiction and nonfiction are quite narrow, following each other closely except at the very top, indicating very similar longevity and decay rates for all bestsellers. Yet, the distribution is slightly broader for fiction than for nonfiction, indicating that on average, the longevity and continued success of fiction books vary more than those of nonfiction books, even among bestsellers. The dashed lines on all three distributions show where the parameter values for The Appeal fall, indicating an extremely high fitness pointing to very high sales, a typical immediacy pointing to average peak sales timing, and a lower than average decay rate pointing to a relatively slow drop in sales after the peak. We find, overall, that the model (3) correctly describes the sales pattern of a book, accurately predicting the total sales once the book has been out for some time. However, we have seen in Fig. 9 that for the majority of bestsellers, most sales happen during the first few months. Consequently, a prediction of the future sales that becomes available only many months after the publication date is of limited value for inventory management.

The goal of this paper is to bring a big data perspective to the factors that influence book sales. For this, we developed a systematic, data-driven approach to investigate the sales patterns of the works, and their creators, that made it onto the New York Times bestseller list. We find that bestsellers have a higher chance of coming from the general fiction and memoir categories, and that, regardless of the sub-genre, nonfiction books sell fewer copies than fiction books. In both categories, any book making it to the top of the bestseller list will sustain its sales longer than books that barely make the list, indicating that the higher the initial success, the longer it will persist.
There have been no significant changes in the number of copies a book needs to sell in order to achieve bestseller status over the years since 2008, approximately the same number of hardcovers being sold today as in past years. This is a remarkable finding, showing that the increasing availability of books in digital format has had no detectable influence on hardcover sales. Yet seasonal fluctuations within a year are important, influencing the relative success of a book compared to the rest of the market. Even though the holidays are times when substantially more books are sold, it is harder for any book to stand out because of these elevated sales. From the author's perspective, we found fiction writers to be more prolific than nonfiction authors and to achieve more repeat success. Such repeat success is helped by the serialized nature of many fiction bestsellers: when readers enjoy a series, subsequent books have a higher chance of success. Interestingly, nonfiction authors writing in a serialized fashion, focusing on a theme, enjoy similar repeat success. As readers prefer the familiar over the unknown, having some sense of what to expect drives more people towards a book or a series. This insight is consistent with the observation that people enjoy reading about celebrities, historic figures and events with which they already have some degree of familiarity. While gender disparity is prevalent in both academia and business, it is largely absent in fiction: female and male authors are equally represented on the fiction bestseller list. In contrast, in nonfiction most bestsellers are written by male authors, showing that female authors either avoid nonfiction or are less successful when they do write it. Yet, the breakdown of genres by gender, and the finding that more romance is written by women and more thrillers are written by men, shows that stereotypical gender roles may be found in the world of authors as well. Investigating the weekly sales numbers of bestsellers helped us identify a universal sales pattern: sales increase very fast after a book's release, reach their peak in the first ten weeks and drop dramatically afterwards. Using this universality, we propose a statistical model that correctly describes the sales patterns of all bestsellers and, a few months after a book's release, accurately predicts the total number of copies an edition will sell during its lifetime, which could be of value for inventory management and assessing long-term impact. We find it particularly interesting that a model originally proposed to describe citation patterns [26] offers an accurate description of book sales as well, albeit at different time scales. This suggests that the fundamental processes driving the attention economy of the two phenomena, book selection and citations, are the same. The discovery of the universal nature of sales patterns and of its driving mechanisms is important for our understanding of the industry and of how individuals buy books. In fact, the model (3) provides us with an excellent tool to reconstruct the sales timeline of any book from beginning to end, once the parameters λ, μ and σ are known. Combining that with our insights about the characteristics of the bestsellers and their sales numbers, the ground is set for the development of tools to predict the parameters of the model before the book is published.
Such a model could accurately foresee the entire sales curve of a given book months before that book is on the shelves, unlocking the full predictive potential of data for the book industry. We expect our findings on bestsellers to offer a starting point and inspiration for investigating the success of books and authors further, considering and comparing a variety of books, including those that did not sell well, ultimately helping us understand what it takes to be successful in an industry that is not only large and extremely competitive, but also affects us both as individuals and collectively as a society, by shaping our culture.

References

1. U.S. book industry/market—statistics & facts. Statista. https://www.statista.com/topics/1177/book-market/. Accessed 2015-09-29
2. Schmidt-Stölting C, Blömeke E, Clement M (2011) Success drivers of fiction books: an empirical analysis of hardcover and paperback editions in Germany. J Media Econ 24(1):24–47
3. Leemans H, Stokmans M (1992) A descriptive model of the decision making process of buyers of books. J Cult Econ 16(2):25–50
4. D'Astous A, Colbert F, Mbarek I (2006) Factors influencing readers' interest in new book releases: an experimental study. Poetics 34(2):134–147
5. Clement M, Proppe D, Rott A (2007) Do critics make bestsellers? Opinion leaders and the success of books. J Media Econ 20(2):77–105
6. Keuschnigg M (2015) Product success in cultural markets: the mediating role of familiarity, peers, and experts. Poetics 51:17–36
7. Nakamura L (2013) "Words with friends": socially networked reading on Goodreads. Publ Mod Lang Assoc Am 128(1):238–243
8. Carmi E, Oestreicher-Singer G, Sundararajan A (2012) Is Oprah contagious? Identifying demand spillovers in online networks. NET Institute Working Paper 10-18
9. Chevalier JA, Mayzlin D (2006) The effect of word of mouth on sales: online book reviews. J Mark Res 43(3):345–354
10. Tsur O, Rappoport A (2009) RevRank: a fully unsupervised algorithm for selecting the most helpful book reviews. In: ICWSM
11. Beck J (2007) The sales effect of word of mouth: a model for creative goods and estimates for novels. J Cult Econ 31(1):5–23
12. Ashok VG, Feng S, Choi Y (2013) Success with style: using writing style to predict the success of novels. Poetry 580(9):70
13. Johnson MW (2014) Bestsellers beyond bestsellers: the success of a good story. Online J Commun Media Technol 4(4):1
14. Sorensen AT, Rasmussen SJ (2004) Is any publicity good publicity? A note on the impact of book reviews. NBER Working Paper, Stanford University
15. Clerides SK (2002) Book value: intertemporal pricing and quality discrimination in the US market for books. Int J Ind Organ 20(10):1385–1408
16. Kovács B, Sharkey AJ (2014) The paradox of publicity: how awards can negatively affect the evaluation of quality. Adm Sci Q 59(1):1–33
17. Sorensen AT (2007) Bestseller lists and product variety. J Ind Econ 55(4):715–738
18. Verboord M (2011) Cultural products go online: comparing the Internet and print media on distributions of gender, genre and commercial success. Communications 36(4):441–462
19. Verboord M (2012) Female bestsellers: a cross-national study of gender inequality and the popular–highbrow culture divide in fiction book production, 1960–2009. Eur J Commun 27(4):395–409
20. Deschâtres F, Sornette D (2005) Dynamics of book sales: endogenous versus exogenous shocks in complex networks. Phys Rev E 72(1):16112
21. About the best sellers. New York Times. http://www.nytimes.com/books/best-sellers/methodology/. Accessed 2014-09-29
22. Krystal A (2012) Easy writers: guilty pleasures without guilt. The New Yorker
23. Grossman L (2012) Literary revolution in the supermarket aisle: genre fiction is disruptive technology. Time
24. Barabási A-L, Albert R (1999) Emergence of scaling in random networks. Science 286(5439):509–512
25. Caldarelli G (2007) Scale-free networks: complex webs in nature and technology. Oxford University Press, London
26. Wang D, Song C, Barabási A-L (2013) Quantifying long-term scientific impact. Science 342(6154):127–132
27. Boag JW (1949) Maximum likelihood estimates of the proportion of patients cured by cancer therapy. J R Stat Soc, Ser B, Methodol 11(1):15–53
28. Sartwell PE et al. (1950) The distribution of incubation periods of infectious disease. Am J Hyg 51:310–318
29. Preston FW (1981) Pseudo-lognormal distributions. Ecology 62(2):355–364
30. Williams CB (1940) A note on the statistical analysis of sentence-length as a criterion of literary style. Biometrika 31(3/4):356–361
31. Herdan G (1958) The relation between the dictionary distribution and the occurrence distribution of word length and its importance for the study of quantitative linguistics. Biometrika 45(1–2):222–228

We wish to thank Douglas Abrams and Akos Erdös for giving us an insider's view of the publishing world and helping us understand it better, Kim Albrecht and Alice Grishchenko for helpful visualizations, and Peter Rupert and other colleagues at the CCNR, especially those in the success group, for valuable discussions and comments. A companion website with interactive visualisations can be found at http://bestsellers.barabasilab.com. This research was supported by the Air Force Office of Scientific Research (AFOSR) under agreement FA9550-15-1-0077, the John Templeton Foundation under agreement 51977 and DARPA under agreement N66001-16-1-4067.

Center for Complex Network Research and Department of Physics, Northeastern University, Boston, USA: Burcu Yucesoy, Xindi Wang, Junming Huang & Albert-László Barabási
CompleX Lab, Web Sciences Center, University of Electronic Science and Technology of China, Chengdu, China: Junming Huang
Center for Cancer Systems Biology, Dana Farber Cancer Institute, Boston, USA: Albert-László Barabási
Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
Center for Network Science, Central European University, Budapest, Hungary: Burcu Yucesoy, Xindi Wang

All authors designed the research. BY, XW and JH obtained, prepared and cleaned the data. BY and XW analyzed the data and prepared the figures. BY and A-LB prepared the manuscript. All authors read and approved the final manuscript. Correspondence to Albert-László Barabási.

Yucesoy, B., Wang, X., Huang, J. et al. Success in books: a big data approach to bestsellers. EPJ Data Sci. 7, 7 (2018). https://doi.org/10.1140/epjds/s13688-018-0135-y
Transient flowing-fluid temperature modeling in reservoirs with large drawdowns

Original Paper - Production Engineering
N. Chevarunotai, A. R. Hasan, C. S. Kabir & R. Islam
Journal of Petroleum Exploration and Production Technology, volume 8, pages 799–811 (2018)

Modern downhole temperature measurements indicate that bottomhole fluid temperature can be significantly higher or lower than the original reservoir temperature, especially in reservoirs where high-pressure drawdown is expected during production. This recent finding contradicts the isothermal assumption originally made for routine calculations. In a high-pressure-drawdown environment, the Joule–Thomson (J–T) phenomenon plays an important role in fluid temperature alteration in the reservoir. This paper presents a robust analytical model to estimate the flowing-fluid temperature distribution in a reservoir that accounts for the J–T heating or cooling effect. All significant heat transfer mechanisms for fluid flow in the reservoir, including heat transfer due to convection, the J–T phenomenon, and heat transfer from overburden and under-burden formations, are incorporated in this study. The proposed model is successfully validated against the results of a rigorous numerical model that intrinsically honored field data.

Most reservoir engineering calculations presuppose that the fluid entering the wellbore has the same temperature as the reservoir, regardless of pressure drop and elapsed time. While the assumption of constant fluid temperature may be true for high-permeability systems, reservoirs undergoing production from low-permeability systems at significant drawdowns may not conform to this simplified assumption. This reality has prompted several studies to probe the radial distribution of the fluid temperature in time. Early attempts to establish fluid temperature were mainly for heavy-oil reservoir management in thermal recovery operations. One of the earliest models for estimating temperature distribution during steam injection was presented by Lauwerier (1955). Subsequently, several models were presented by Spillette (1965) and Satman et al. (1979) using different approaches. More recently, Tan et al. (2012) compared some of these solutions and offered a solution of their own. All of these models treated heat conduction and convection as the main heat transfer mechanisms in the reservoir; however, fluid temperature change due to the Joule–Thomson (J–T) effect remained unaccounted for, given the low intrinsic flow rates in heavy-oil reservoirs. Interest in the J–T heating or cooling effect originated from the interpretation of temperature logs. Steffensen and Smith (1973) proposed an analytical solution for estimating the fluid's static and flowing temperature at bottomhole during steady-state flow by incorporating the J–T effect. They pointed out that the main heat transfer mechanisms of fluids in the reservoir during production and injection are heat convection and J–T heating (or cooling); temperature change due to radial conduction is normally negligible. They also proposed that heat transfer between the reservoir and the overburden and under-burden formations during steady-state flow is negligible; therefore, the "heat transfer to overburden" term was not included in their study. Kabir et al.
(2012), among others, showed how independent estimation of individual layer contributions may be made from temperature profiles in both gas and oil wells, with the J–T effect playing a major role. More recently, Onur and Palabiyik (2015) offered an analytical solution for single-phase water flowing in a geothermal reservoir that accounted for the effect of skin. Their approach was to study the use of temperature data for estimating reservoir parameters by history matching. Onur and Cinar (2016) presented an analytical solution accounting for the J–T effect, but not for heat exchange with the overburden and under-burden formations. High-pressure drawdown is normally required to produce commercially from challenging reservoirs, such as those in deep, low-permeability, and overpressured systems. As a result, the impact of the J–T effect on flowing-fluid temperature is more prominent in some of the deepwater reservoirs in the Gulf of Mexico. In some cases, the J–T effect may raise the fluid temperature 20–30 °F above the fluid temperature at the initial reservoir condition. Yoshioka et al. (2005, 2006) introduced a coupled reservoir-and-wellbore analytical temperature model for horizontal-well production in a single-phase reservoir, assuming steady-state conditions. An extended version of Yoshioka et al.'s work was presented by Dawkrajai et al. (2006). They developed a finite-difference solution for the coupled reservoir/wellbore system to estimate the fluid temperature distribution in the reservoir for two-phase production in horizontal wells. Their numerical solution removes the steady-state assumption and allows variation of reservoir and fluid properties in space and time. Duru and Horne (2010) developed a semianalytical solution for the same problem, taking into account J–T heating or cooling, as well as heat conduction and convection. They applied the operator-splitting and time-stepping (OSATS) semianalytical technique, splitting the reservoir energy-balance equation into two parts: convective transport and diffusion. They solved the convective transport part analytically and used that solution in treating the diffusion part. The diffusion part of the energy-balance equation was solved semianalytically; that is, the result from the first timestep is the initial condition for the next timestep, and so on. They also coupled the reservoir temperature model with Izgec et al.'s (2007) analytical wellbore temperature model for fluid temperature analysis in the entire production system. App (2009, 2010) developed a nonisothermal reservoir simulator for single-phase oil flow by coupling mass and energy-balance equations with Darcy's law. He included all possible heat transfer mechanisms in the reservoir as part of the comprehensive energy-balance equation. While earlier work by other authors in this area generally assumes no heat transfer from a reservoir to its surroundings (an adiabatic process), App's work incorporates potential heat transfer from the reservoir to overburden and under-burden formations. His model shows that heat loss to overburden strata is significant and becomes crucial when the fluid has heated up significantly later in the production period. He also discussed the potential change in well productivity due to J–T heating (or cooling) in high-pressure, high-drawdown reservoirs, because fluid viscosity depends on temperature. Ramazanov et al. (2013) proposed a similar numerical model that included convection, radial heat conduction, and the J–T effect.
Their numerical model validated their earlier (2007) study and pointed out that the impact of radial conduction on the fluid temperature distribution in a reservoir is minimal when the production rate remains constant. Recently, App and Yoshioka (2013) offered an analytical solution for steady-state fluid temperature change as a function of producing rate, reservoir permeability, and drawdown, among other variables. They used the Peclet number (Pe = ur/α) to combine production rate and formation thermal conductivity and to emphasize the effect of Pe on fluid temperature change. They also pointed out that the effect of permeability is included through Pe, as it incorporates fluid velocity. Their study clearly shows that at high reservoir thermal conductivity, when Pe < 1, the fluid temperature change is strongly influenced by Pe due to rapid conduction of heat through the formation. However, for Pe > 3, the influence of Pe on steady-state fluid temperature is negligible. The study also shows that fluid temperature changes minimally at very low Pe (< 0.1), which appears quite reasonable. This paper presents an analytical transient-temperature model for estimating the flowing-fluid temperature distribution in a single-phase oil reservoir with constant-rate production. The paper also presents an application of this approach to single-phase gas reservoirs. The J–T effect is included as one of the main energy transformation mechanisms of fluid flow in the reservoir. Additionally, heat transfer from the reservoir to overburden and under-burden formations is incorporated in this model formulation, following App's approach. The model is validated with results from the rigorous numerical model developed earlier by App (2010), based on one set of actual field data. Model validation shows that the estimated temperature values compare favorably with those obtained from the numerical simulator.

Reservoir system

The reservoir system considered in this study is a 1D radial reservoir in which fluid flow occurs only in the radial direction. The only flowing fluid in the reservoir is oil, and there is no free gas in the system. Connate water remains immobile. Figure 1 shows a schematic of this simplified wellbore and reservoir configuration. We note that fluid flow in the idealized circular reservoir occurs in the "negative" r-direction.

Schematic of the wellbore/reservoir system configuration

Comprehensive energy-balance equation

The principle underlying the estimation of the fluid temperature distribution in the reservoir is conservation of energy in the system, which comprises the reservoir fluid and rock. Conservation of mass for the reservoir fluids is also incorporated to achieve a comprehensive energy-balance equation for the system. We also assume that the reservoir is perfectly horizontal; thus, the gravitational effect (change in fluid potential energy) is negligible. The general form of the thermal energy balance, in terms of the equation of change for internal energy (Bird et al.
2006), can be written as:

$$\frac{\partial }{\partial t}\bigl(\rho \hat{U}\bigr) = - \bigl( \nabla \cdot \rho \hat{U}\vec{u} \bigr) - \bigl( \nabla \cdot \vec{q} \bigr) - p\bigl( \nabla \cdot \vec{u} \bigr) - \bigl( \boldsymbol{\tau} : \nabla \vec{u} \bigr) + \dot{Q}$$

where \(\hat{U}\) is the fluid internal energy, ρ is the fluid and/or rock density, and \(\vec{u}\) is the local fluid velocity. The ∇· terms generally represent the net input rate of energy per unit volume of the system. The first term on the left side of Eq. (1) represents the total rate of internal energy increase in the system. The first and second terms on the right side are the net input rates of internal energy to the system caused by convective transport and heat conduction, respectively. The third term represents the net reversible rate of internal energy increase due to fluid compression (pressure difference), while the fourth term is the net irreversible rate of internal energy increase caused by fluid viscous dissipation. The fourth term is also referred to as the "J–T" term in this study. In addition to heat conduction, convection, and the J–T phenomenon caused by fluid flow in the reservoir, energy transfer from the surroundings (overburden and under-burden formations) to the system (reservoir fluids and formation) is considered in this study. Therefore, a term representing the net energy transfer rate between the system and surroundings, \(\dot{Q}\), is added to the energy-balance equation as the last term in Eq. (1). Using the principles of rock and fluid enthalpy, Fourier's law of conduction, Newton's law of cooling, conservation of mass, and Darcy's law, Eq. (1) is rearranged and rewritten as

$$\begin{aligned} & \left[ {\varnothing s_{\text{o}} \rho_{\text{o}} c_{\text{po}} + \varnothing s_{\text{w}} \rho_{\text{w}} c_{\text{pw}} + \left( {1 - \varnothing } \right)\rho_{\text{f}} c_{\text{pf}} } \right]\frac{\partial T}{\partial t}+ \rho_{\text{o}} u_{\text{r}} c_{\text{po}} \frac{\partial T}{\partial r} \\ & \quad+ \rho_{\text{o}} u_{\text{r}} \sigma_{\text{o}} \frac{\partial p}{\partial r} + \left[ {\varnothing s_{\text{o}} \rho_{\text{o}} \sigma_{\text{o}} + \varnothing s_{\text{w}} \rho_{\text{w}} \sigma_{\text{w}} - 1} \right]\frac{\partial p}{\partial t} \\ & \quad = \frac{1}{r}\frac{\partial }{\partial r}\left[ {\lambda r\frac{\partial T}{\partial r}} \right] + \dot{Q} \\ \end{aligned}$$

Equation (2) is considered the comprehensive energy-balance equation for the system of interest. The first term on the left side of Eq. (2) contains the heat capacities of oil, water, and rock, which collectively represent the energy change due to the temperature transient. Similarly, the second term represents convective heat transport. The third term is the energy change due to the J–T effect, and the fourth term represents the energy change due to the pressure transient in the reservoir. This pressure-transient term is neglected in deriving the analytical solution presented below.
The first term on right side reflects change in energy arising from radial heat conduction, and the last term represents rate of heat transfer across system boundary, meaning to overburden and under-burden formations. Details of the comprehensive energy-balance equation are given in "Appendix A." We rearranged the comprehensive energy-balance equation of the system by applying all the assumptions described in "Appendix A." The energy-balance equation can be reduced to a first-order, partial-differential equation (PDE): $$\frac{\partial T}{\partial t} - \frac{B}{Ar}\frac{\partial T}{\partial r} - \frac{C}{{Ar^{2} }} = - \frac{D}{A}T + \frac{E}{A}$$ The method of characteristics was used to solve the PDE to arrive at a final form of the proposed analytical solution given below. "Appendix B" presents the details of this derivation. $$T\left( {r,t} \right) = T_{\text{i}} + \frac{C}{2B}{\text{e}}^{{\frac{{H\left( {Ar^{2} + 2Bt} \right)}}{2B}}} {\text{Ei}}\left[ { - \frac{{H\left( {Ar^{2} + 2Bt} \right)}}{2B}} \right] - \frac{C}{2B}{\text{e}}^{{\frac{{HAr^{2} }}{2B}}} {\text{Ei}}\left[ { - \frac{{HAr^{2} }}{2B}} \right]$$ $$A = \left[ {\varnothing s_{\text{o}} \rho_{\text{o}} c_{\text{po}} + \varnothing s_{\text{w}} \rho_{\text{w}} c_{\text{pw}} + \left( {1 - \varnothing } \right)\rho_{\text{f}} c_{\text{pf}} } \right]\left( {\frac{2\pi h}{q}} \right)$$ $$B = \rho_{\text{o}} c_{\text{po}}$$ $$C = \frac{{q\rho_{\text{o}} \sigma_{\text{o}} \mu }}{2\pi hk}$$ $$D = \frac{{4h_{\text{c}} \pi }}{q}$$ $$E = \frac{{4h_{c} \pi }}{q}T_{\text{i}}$$ $$H = \frac{D}{A}$$ Equation (4) can be used directly with any software that requires fluid bottomhole temperature as an input, such as in well pressure/temperature traverse calculations and production logging. When we solve Eq. (3) by assuming fluid property variation to be negligible, greater accuracy in estimated sandface temperature can be achieved when radial segments in the reservoir facilitate variation of fluid properties from one node to the next. For temperature computation with Eq. (4), we allowed such variation of fluid properties with pressure and temperature using 100 radial segments with logarithmic spacing in the reservoir. Calculation initiates with known pressure, temperature, and fluid property values at the reservoir boundary. Analytical expressions then facilitate estimation of pressure and temperature at the next node. During these computations, property values are retained from the prior node. New pressure and temperature then allow computation of new property values. This procedure repeats itself until the wellbore is reached. Because pressure, temperature, and viscosity change much faster as the wellbore is approached, logarithmic grid spacing—with a shorter spatial step at wellbore's proximity—works very well. We have investigated the effect of a number of spatial nodes on computational accuracy and found 100 nodes to be quite satisfactory. Model validation App's (2010) simulated results, which were anchored in field data, formed the cornerstone for model validation. This approach also implicitly verifies the results of the proposed analytical model with those of rigorous numerical solutions offered by App. "Appendix C" presents the reservoir and heat transfer parameters used in these calculations. In all cases, solid lines represent the reservoir temperature profiles estimated with the proposed analytical model. In all subsequent discussions, reference to estimates using the analytical model implies the use of Eq. 
(4), with the parameters estimated with Eq. (5) through Eq. (10), thereby allowing all fluid properties to vary with pressure and temperature. In this example, the flowing-fluid temperature distribution in the reservoir is calculated for five different constant production rates: 970, 2050, 3270, 4650, and 6200 STB/D. Figure 2 presents a comparison of solution generated with Eq. (4) with those of App. The solid lines represent the solutions of the analytical model, and the dashed lines do the same for those of App. Each profile represents distribution of the flowing-fluid temperature in the reservoir at different production rates, after 50 days of continuous production at a constant rate. One observes that the temperatures estimated with the proposed model are very close to App's rigorous, numerical solutions. Temperature estimations of analytical model at different rates compare favorably with those of App's (2010) numerical model for 50 days of production Figure 3 compares the analytical solutions for the sandface oil temperature (solid lines) with those of App (dashed lines). This figure reveals the evolution of flowing-fluid temperature at the sandface with time for the same constant rate production scenarios as before. This figure instills confidence in that the simplified analytical model yields very reasonable estimations of the bottomhole temperature for different production rate scenarios. Results of the analytical model compare well with App's numerical model (2010) for the sandface oil temperature Figure 3 shows that for any constant production rate, the oil temperature rises rapidly with time and then flattens out. Indeed, for high production rates, oil temperature actually begins a slow decline with time after attaining the maximum value. Figure 4 presents this phenomenon in a different way. Each profile of different color represents the flowing-fluid temperature distribution in the reservoir at a particular producing time for a production rate of 6200 STB/D. The solid lines represent our analytical solution, and the dashed lines are from App's study. Again, we observe that results from the analytical model are very close to the results obtained from App's numerical simulations. Differences in temperature profiles between the analytical model and App's simulator are expected because of the significantly more assumptions made to arrive at the analytical solution. Analytical model agrees well with App's (2010) numerical model results at different times for the 6200 STB/D producing rate Examination of Fig. 4 offers several insights. We observe that oil temperature in most of the formation remains unaffected; temperature increase is only noticeable up to about 100 ft from the wellbore for long producing times. The explanation of this phenomenon is simple; most of the pressure drop—the cause for temperature rise—occurs near the wellbore, especially for shorter production periods. In addition, because heat is generated continuously due to J–T effect, longer production leads to greater temperature rise. However, the rate of temperature increase slows down with time, and finally the reversal occurs. Therefore, fluid temperature rise for 400 days (black lines) of production is less than that for 100 days (blue lines). This reversal is captured in both App's numerical and our analytical solutions. Let us discuss the two reasons related to reduction and ultimate reversal in temperature rise with time. 
The primary reason is that our model accounts for fluid heat loss to the overburden and under-burden formations. This heat loss increases with increased fluid temperature. Figure 5 shows the estimated fluid temperature using the rigorous model (solid lines) compared to that estimated assuming no heat loss (everything else remaining the same) to the formation (dashed lines). Note that the maximum temperature after 400 days of flow period is about 10 °F higher when heat loss to the formation is not accounted for compared to when it is. Further analyses of ignoring fluid heat loss to the formation are discussed later. Comparison of analytical models without heat transfer and with heat transfer and viscosity variation The other reason for reduced temperature rise with producing time is that oil viscosity depends on temperature and pressure. We have used viscosity data from laboratory measurements for this particular reservoir fluid as presented by App (2010) and is reproduced in Fig. 8 in "Appendix C." Viscosity, in turn, influences flowing pressure gradient, and consequently, the reservoir pressure. However, fluid temperature depends on heat generation due to fluid expansion, which depends on the pressure gradient, dp/dr. Figure 4 shows this complex interdependency of fluid viscosity, pressure, and temperature in temperature trends. With the increase in production time, increasing fluid temperature causes lowering of oil viscosity. For a constant flow rate, lower viscosity causes lower pressure drawdown, resulting in higher reservoir pressure than if viscosity had remained constant. Higher pressure, however, triggers increase in oil viscosity, which contributes to increased pressure gradient near the wellbore. This increased pressure gradient in both the analytical and numerical models causes the fluid temperature (black solid and dashed lines in Fig. 4) to be lower than that after 100 days of production (blue lines) after 400 days of production. To investigate the effect of viscosity variation with temperature, we regenerated the solutions with constant fluid viscosity by keeping all other input parameters the same as that for the rigorous model. Figure 6 presents those results with solid lines representing rigorous solution (with property variation), while the dashed lines are for constant viscosity condition. Comparison of results of the analytical models with and without viscosity variation Lower temperatures in Fig. 6 for lower producing rates represent those with constant viscosity. However, for higher producing rates, the constant viscosity model generally estimates higher temperatures than does the rigorous (variable viscosity) model. Even for higher producing rates though, the trend reverses with producing times. Again, the interdependence of pressure, temperature, and viscosity precipitate these complex temperature profiles. Given the significant discrepancy in temperature profiles exhibited in Fig. 6, we use the temperature-dependent viscosity as default. In modeling temperature distribution in the reservoir, many investigators have neglected heat transfer to/from overburden and under-burden formations \(\dot{Q}\). This assumption simplifies the modeling approach and results in a much simpler expression for the fluid temperature as a function of radial distance and time; "Appendix B" presents this development. In addition, as Fig. 3 shows, neglecting formation heat loss does not appear to cause significant errors at low drawdowns. 
However, for higher flow rates and later times, the estimation error can be quite large; Chevarunotai (2014) presents further discussion on this topic. Model's application in gas reservoirs In arriving at the analytical solution for the fluid temperature for fluid flow from the reservoir bulk toward the wellbore, we assumed the fluid to be only slightly compressible, thereby allowing the PDE [Eq. (3)] to be linear. For gas, this assumption of small compressibility is generally untrue. However, by dividing the reservoir into many radial segments with logarithmic spacing, thereby keeping the small spatial steps, the gas compressibility may be still kept small so that Eq. (4) may be applied to a gas reservoir as well. Using this approach, we estimated the sandface gas temperature as a function of producing time and that is shown in Fig. 7. The input values, provided in "Appendix E," were taken from App's (2009) study. Good agreement of our estimates with those of App's numerical solution is evident in Fig. 7. Bottomhole temperature with time for the low-pressure gas The analytical reservoir temperature solution derived for the simplified reservoir system was validated with App's numerical model. Although we made several assumptions in our study to simplify the problem to 1D radial reservoir system, the flowing-fluid temperatures estimated by the proposed analytical solutions are very comparable to those calculated by App's rigorous numerical solution, as shown in all case studies. The simple form of the analytical solutions allows anyone to adopt and apply it to the problems of interest. The proposed solution can also be applied to more complex problems, such as in reservoirs of other geometry by considering the Dietz (1965) shape factor. The impact of Joule–Thomson phenomenon is actually a function of pressure drop across the reservoir, fluid flow rate, and the J–T throttling coefficient. In general, the J–T coefficient is positive in low-pressure gas reservoir and is negative in high-pressure gas reservoirs and oil reservoirs of any pressure range. As a result, J–T heating becomes the norm in low-to-high-pressure oil and high-pressure (> 7000 psi) gas reservoirs, whereas J–T cooling occurs in low-pressure (< 5000 psi) gas reservoirs. The approach to estimate the flowing-fluid temperature presented in this paper can be used as a basis and adapted for reservoir temperature estimation in gas reservoirs. Based on model results, we surmise that fluid temperature in the near-wellbore region can be significantly different from the original reservoir temperature. An accurate estimation of the reservoir fluid temperature from the analytical formulations can yield a better estimation of well productivity, which is useful in production optimization and well development planning. A reasonable estimation of the bottomhole flowing-fluid temperature is also advantageous in well design from the standpoints of equipment selection and management of annular-pressure buildup or APB (Oudeman and Kerem 2006; Hasan et al. 2010). When J–T heating is pronounced, excessive heating of annular fluid may result, thereby triggering APB. The proposed analytical solution also improves wellbore fluid and heat flow modeling because of more realistic temperature evaluation at sandface. The bottomhole flowing-fluid temperature derived from the analytical model can be coupled with wellbore heat transfer model to allow prediction of flowing-fluid temperature along the wellbore. 
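Because the solution is in closed form, such coupling is straightforward to script. The following minimal sketch (Python with SciPy; all parameter values are illustrative rather than the field data of App (2010)) evaluates Eq. (4) on a logarithmically spaced radial grid like the one used above; assembling A, B, C, and H from Eqs. (5)–(10) and updating fluid properties node by node is omitted for brevity:

```python
# Minimal sketch of evaluating the closed-form solution, Eq. (4), with SciPy's
# exponential integral Ei.  All numerical values below are illustrative only;
# the lumped parameters must be assembled from Eqs. (5)-(10) in consistent units.
import numpy as np
from scipy.special import expi  # Ei(x)

def temperature_eq4(r, t, Ti, A, B, C, H):
    """T(r, t) = Ti + (C/2B) [e^x Ei(-x) - e^y Ei(-y)],
    with x = H(A r^2 + 2 B t)/(2B) and y = H A r^2/(2B)."""
    x = H * (A * r**2 + 2.0 * B * t) / (2.0 * B)
    y = H * A * r**2 / (2.0 * B)
    # For very large arguments, e^x Ei(-x) -> -1/x; a production implementation
    # should switch to that asymptotic form to avoid overflow in np.exp.
    return Ti + C / (2.0 * B) * (np.exp(x) * expi(-x) - np.exp(y) * expi(-y))

# 100 logarithmically spaced radial nodes, finer near the wellbore (radii in ft, illustrative)
r_nodes = np.logspace(np.log10(0.35), np.log10(2000.0), 100)

# Illustrative parameter values (not field data); t = 50 days expressed in hours
T_profile = temperature_eq4(r_nodes, t=50.0 * 24.0, Ti=250.0,
                            A=5.0e3, B=30.0, C=50.0, H=1.0e-9)
```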
Accurate flowing-fluid temperature profile along the wellbore is also desirable for well design and production optimization, as well as for pressure transient analysis (Onur and Cinar 2016). An accurate estimation of the reservoir fluid temperature from the analytical formulations can yield a better estimation of well productivity index, which is useful in production optimization and well development planning. To arrive at the analytical solution, we omitted radial heat conduction, meaning the influence of Peclet number on fluid temperature change has been ignored. In "Appendix C," we show that for the reservoir and fluid properties used in this study, Peclet numbers for all flow rates are higher than 5.6. As App and Yoshioka (2013) showed, the influence of Peclet number on fluid temperature is negligible when it exceeds 3. They also noted that because formation permeability affects oil production rate, it can influence oil temperature increase due to expansion. Earlier, Shook (2001) also reached a similar conclusion in that thermal conduction can be neglected in nonfractured geothermal systems because P e > 1 is satisfied. For the range of cases that we investigated, P e is greater than 5.6, and the effect of permeability, as well as thermal conductivity, is negligible within engineering accuracy. We note that any significant change in the wellbore flowing-fluid temperature is likely to occur in overpressure reservoirs for sustaining high-flow rates, and, in turn, high P e . This reality increases the likelihood of application of the proposed analytical model. These model results show that fluid temperature in the near-wellbore region can be significantly different from the original reservoir temperature during production. A reasonable estimation of the bottomhole flowing-fluid temperature assists in well design from the standpoints of equipment selection and management of annular-pressure buildup or APB. This paper presents an analytical model for the flowing-fluid temperature estimation in a single-phase oil reservoir. Concepts of energy balance and conservation of mass were applied to arrive at an analytical formulation to evaluate fluid temperature in a reservoir producing at a constant rate. Fluid temperature change due to the Joule–Thomson effect, as well as energy exchange between a reservoir and its surroundings (overburden and under-burden formations), was incorporated in this study. Results from a rigorous numerical model validated the simplified analytical model within engineering accuracy. Therefore, we reached the following conclusions: The proposed analytical model provides comparable reservoir temperature estimation to the rigorous numerical simulator developed by App (2010), which is anchored in actual field data. Calculations of Peclet numbers suggested that ignoring conductive heat transport appears reasonable for field production rates of interest within the scope of this investigation. Generally speaking, an analytical model is relatively simpler and allows the calculations to be performed in a spreadsheet. The advantage of this analytical model over other analytical solutions for reservoir temperature estimation is that heat transfer from/to overburden and under-burden formations \(\dot{Q}\) is included. While the derivation of the analytical solution neglects property variation, the use of the solution allows for property changes with pressure and temperature. 
We have shown that \(\dot{Q}\) is crucial in the estimation of flowing-fluid temperature in a reservoir, especially at long producing times when the reservoir fluid is heated significantly, and the reservoir fluid temperature is very different from that in its surroundings. Accounting for viscosity variation with temperature and pressure enhances the accuracy of temperature estimation. The analytical model can be further extended to gas reservoirs by accounting for changing properties, such as density, viscosity, and the J–T coefficient with pressure and temperature. Flow area, ft2, L2 B o : Oil formation volume factor, bbl/STB c p : System specific heat capacity, Btu/lbm °F, L2/t2T c pf : Formation specific heat capacity, Btu/lbm °F, L2/t2T c po : Oil specific heat capacity, Btu/lbm °F, L2/t2T c pw : Water specific heat capacity, Btu/lbm °F, L2/t2T C J : Joule–Thomson coefficient, °F/psi, TLt2/m Formation thickness, ft, L h c : Heat transfer coefficient, Btu/hr ft2 °F, m/t3/T \(\hat{H}\) : Enthalpy, lbm ft2/hr2, mL2/t2 k : Reservoir permeability, md, L2 p : Pressure, psi, m/Lt2 p b : Bubble point pressure, psi, m/Lt2 p e : Pressure at reservoir external boundary, psi, m/Lt2 p i : Initial reservoir pressure, psi, m/Lt2 p wf : Flowing-fluid pressure at well bottom, psi, m/Lt2 Peclet number (= ur/α), dimensionless Volumetric flow rate, ft3/hr, L3/t \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {q}\) : Conductive heat transport, Btu/hr ft2, m/Lt3 \(\dot{Q}\) : Net heat transfer rate between the system and surroundings, Btu/hr ft2, m/Lt3 Radius, ft, L r e : External reservoir radius, ft, L r w : Wellbore radius, ft, L S o : Oil saturation S w : Water saturation t : Time, hr, t Fluid temperature, °F, T T e : Fluid temperature at reservoir external boundary, °F, T T i : Initial reservoir temperature, °F, T T s : Temperature of overburden and under-burden formations, °F, T T wf : Flowing-fluid temperature at well bottom, °F, T \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {u}\) : Superficial velocity, ft/hr, L/t u r : Fluid local velocity in radial direction, ft/hr, L/t \(\hat{U}\) : Fluid internal energy, lbm ft2/hr2, mL2/t2 \(\hat{V}\) : Specific volume, ft3/lbm, L3/m λ : Reservoir thermal conductivity, Btu/hr ft °F, TLt2/m α : Thermal diffusivity (= λ/ρc p ), ft2/hr, L2/t μ : Fluid viscosity, cp, m/Lt ρ : Density, lbm/ft3, m/L3 ρ o : Oil density, lbm/ft3, m/L3 ρ w : Water density, lbm/ft3, m/L3 ρ f : Formation density, lbm/ft3, m/L3 σ o : Joule–Thomson throttling coefficient of oil, Btu/lbm psi, L3/m σ w : Joule–Thomson throttling coefficient of water, Btu/lbm psi, L3/m \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {\tau }\) : Stress, lbf/ft2, m/Lt2 ϕ : Al-Hadhrami AK, Elliott L, Ingham DB (2003) A new model for viscous dissipation in porous media across a range of permeability values. Transp Porous Media 53(1):117–122. doi:10.1023/A:1023557332542 App JF (2009) Field cases: nonisothermal behavior due to Joule–Thomson and transient fluid expansion/compression effects. In: Paper SPE-124338-MS presented at the 2009 SPE annual technical conference and exhibition, New Orleans, Louisiana, 4–7 Oct 2009. SPE-124338-MS. doi:10.2118/124338-MS App JF (2010) Nonisothermal and productivity behavior of high-pressure reservoirs. SPE J 15(1):50–63. doi:10.2118/114705-PA App JF, Yoshioka K (2013) Impact of reservoir permeability on flowing sandface temperatures: dimensionless analysis. SPE J 18(4):685–694. 
doi:10.2118/146951-PA Bird RB, Stewart WE, Lightfoot EN (2006) Transport phenomena, 2nd edn. Wiley, New York, p 336. ISBN 978-0470115398 Chevarunotai N (2014) Analytical model for flowing-fluid temperature distribution in single-phase oil reservoir accounting for Joule–Thomson effect. MS Thesis, Petroleum Engineering Department, Texas A&M University, College Station, Texas, USA Dawkrajai P, Lake LW, Yoshioka K et al (2006) Detection of water or gas entries in horizontal wells from temperature profiles. In: Paper SPE 100050 presented at the 2006 SPE/DOE Symposium on Improved oil Recovery, Tulsa, Oklahoma, 22–26 April 2006. doi:10.2118/100050-MS Dietz DN (1965) Determination of average reservoir pressure from build-up surveys. J Pet Technol 17(8):955–959. doi:10.2118/1156-PA Dranchuk PM, Purvis RA, Robinson DB (1973) Computer calculation of natural gas compressibility factors using the Standing and Katz correlation. In: Annual technical meeting, Petroleum Society of Canada, May 8–12, Edmonton, Canada. doi:10.2118/73-112 Duru OO, Horne RN (2010) Modeling reservoir temperature transients and reservoir-parameter estimation constrained to the model. SPE Res Eval Eng 13(6):873–883. doi:10.2118/115791-PA Hasan R, Izgec B, Kabir CS (2010) Sustaining production by managing annular-pressure buildup. SPE Prod Oper 25(2):195–203. doi:10.2118/120778-PA Izgec B, Kabir CS, Zhu D et al (2007) Transient fluid and heat flow modeling in coupled wellbore/reservoir systems. SPE Res Eval Eng 10(3):294–301. SPE-102070-PA. doi:10.2118/102070-PA Kabir CS, Izgec B, Hasan AR et al (2012) Computing flow profiles and total flow rate with temperature surveys in gas wells. J Nat Gas Sci Eng 4:1–7. doi:10.1016/j.jngse.2011.10.004 Lauwerier HA (1955) The transport of heat in an oil layer caused by injection of hot fluid. Appl Sci Res 5(2–3):145–150 Lee AL, Gonzalez MH, Eakin BM (1966) The viscosity of natural gases. Trans AIME 237:997–1000 Onur M, Cinar M (2016) Temperature transient analysis of slightly compressible, single-phase reservoirs. In: Paper SPE-180074-MS presented at the 78th EAGE conference and exhibition, Vienna, Austria, 30 May–2 June 2016. doi:10.2118/180074-MS Onur M, Palabiyik Y (2015) Nonlinear parameter estimation based on history matching of temperature measurements for single-phase liquid–water geothermal reservoirs. In: Proceedings, world geothermal congress, Melbourne, Australia, 19–25 April 2015. https://pangea.stanford.edu/ERE/db/WGC/papers/WGC/2015/22009.pdf Oudeman P, Kerem M (2006) Transient behavior of annular pressure build-up in HP/HT wells. SPE Drill Complet 21(4):234–241. doi:10.2118/88735-PA Ramazanov AS, Nagimov VM, Akhmetov RK (2013) Analytical model of temperature prediction for a given production history. Oil Gas Bus J (1):537–546. http://ogbus.ru/eng/authors/Ramazanov/Ramazanov_4e.pdf Satman A, Brigham WE, Zolotukhin AB (1979) A new approach for predicting the thermal behavior in porous media during fluid injection. Geotherm Resour Counc Trans 3:621–624 Shook GM (2001) Predicting thermal breakthrough in heterogeneous media from tracer tests. Geothermics 30(6):573–589. doi:10.1016/S0375-6505(01)00015-3 Spillette AG (1965) Heat transfer during hot fluid injection into an oil reservoir. J Can Pet Technol 4(4):213–218. doi:10.2118/65-04-06 Steffensen RJ, Smith RC (1973) The importance of Joule–Thomson heating (or cooling) in temperature log interpretation. In: Paper SPE-4636-MS presented at the SPE-AIME 48th annual fall meeting, Las Vegas, Nevada, 1–3 Oct 1973. 
doi:10.2118/4636-MS Tan H, Cheng X, Guo H (2012) Closed solutions for transient heat transport in geological media: new development, comparisons, and validations. Transp Porous Media 93(3):737–752. doi:10.1007/s11242-012-9980-5 Yoshioka K, Zhu D, Hill AD et al (2005) A comprehensive model of temperature behavior in a horizontal well. In: Paper SPE-95656-MS presented at the 2005 SPE annual technical conference and exhibition, Dallas, Texas, 9–12 Oct 2005. doi:10.2118/95656-MS Yoshioka K, Zhu D, Hill AD et al (2006) Detection of water or gas entries in horizontal wells from temperature profiles. In: Paper SPE-100209-MS presented at the SPE Europec/EAGE annual conference and exhibition, Vienna, Austria, 12–15 June 2006. doi:10.2118/100209-MS Chevron Thailand Exploration and Production Ltd., Bangkok, Thailand N. Chevarunotai Department of Petroleum Engineering, Texas A&M University, College Station, TX, 77843, USA A. R. Hasan & R. Islam Department of Petroleum Engineering, University of Houston, 5000 Gulf Freeway, Houston, TX, USA C. S. Kabir A. R. Hasan R. Islam Correspondence to C. S. Kabir. Appendix A: Comprehensive energy-balance equation of the system The energy conservation is the underlying principle for the estimation of fluid temperature distribution in the reservoir, involving rock and fluid. Conservation of mass for reservoir fluids allows achieving a comprehensive energy-balance equation of the system. The general form of thermal energy balance, in terms of the equation of change for internal energy, was presented in Eq. (1). This equation can also be written in terms of enthalpy, temperature, and pressure. In other words, Eq. (1) can be rewritten in the following form: $$\frac{\partial }{\partial t}\rho H - \frac{\partial p}{\partial t} = - \left( {\nabla \cdot \rho H\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {u} } \right) + \nabla \cdot p\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {u} - \left( {\nabla \cdot \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {q} } \right) - \left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {\tau } :\nabla \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {u} } \right) - p\left( {\nabla \cdot \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {u} } \right) + \dot{Q}$$ For the 1D radial system, the double-dot product can be represented by Newton's law of viscosity, \(- \left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {\tau } :\nabla \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {u} } \right) = 2\mu \left( {\frac{{\partial u_{r} }}{\partial r}} \right)^{2}\), and heat conduction can be represented by \(- \nabla \cdot \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}} {q} = \frac{1}{r}\frac{\partial }{\partial r}\left[ {\lambda r\frac{\partial T}{\partial r}} \right];\) therefore, Eq. (11) can be rewritten as $$\frac{\partial }{\partial t}\rho H + \frac{1}{r}\frac{\partial }{\partial r}\left[ {r\rho Hu_{\text{r}} } \right] = \frac{1}{r}\frac{\partial }{\partial r}\left[ {\lambda r\frac{\partial T}{\partial r}} \right] + 2\mu \left( {\frac{{\partial u_{r} }}{\partial r}} \right)^{2} + \frac{Dp}{Dt} + \dot{Q}$$ Al-Hadhrami et al. (2003) examined the viscous dissipation term in the energy-balance equation in Cartesian coordinates. 
Following their work, we approximated the viscous term for the 1D radial flow in porous media with \(- u_{\text{r}} \frac{\partial p}{\partial r}\). Equation (12) then becomes $$\frac{\partial }{\partial t}\rho H + \frac{1}{r}\frac{\partial }{\partial r}\left[ {r\rho \hat{H}u_{\text{r}} } \right] = \frac{1}{r}\frac{\partial }{\partial r}\left[ {\lambda r\frac{\partial T}{\partial r}} \right] - u_{\text{r}} \frac{\partial p}{\partial r} + \frac{Dp}{Dt} + \dot{Q}$$ Applying mass conservation in 1D radial system, \(\frac{\partial p}{\partial t} + \frac{1}{r}\frac{\partial }{\partial r}(r\rho u_{\text{r}} ) = 0\), Eq. (13) can be rewritten as $$\rho \frac{\partial H}{\partial t} = \frac{1}{r}\frac{\partial }{\partial r}\left[ {\lambda r\frac{\partial T}{\partial r}} \right] + \frac{\partial p}{\partial t} + \dot{Q}.$$ Enthalpy of the reservoir rock depends only on its temperature, which is given by dH f = c pfdT. However, the enthalpy of a fluid depends both on its temperature and pressure; that is, $${\text{d}}H = \left( {\frac{\partial H}{\partial T}} \right)_{p} {\text{d}}T + \mathop {\left( {\frac{\partial H}{\partial p}} \right)}\nolimits_{T} {\text{d}}p = \mathop c\nolimits_{p} {\text{d}}T - C_{J} c_{p} {\text{d}}p$$ In Eq. (15), c p is the oil specific heat and C J is the Joule–Thomson coefficient. Therefore, reservoir oil enthalpy is expressed by Eq. (16): $${\text{d}}H_{o} = c_{\text{po}} {\text{d}}T + \sigma_{\text{o}} {\text{d}}p,$$ where σ o represents the product, − C J c p . A similar expression for the connate water can be written. We write enthalpy in terms of pressure and temperature for each reservoir component, that is, oil, connate water, and formation rock, and combine all parameters into Eq. (14) to obtain $$\begin{aligned} & \left[ {\varnothing s_{\text{o}} \rho_{\text{o}} c_{\text{po}} + \varnothing s_{\text{w}} \rho_{\text{w}} c_{\text{pw}} + \left( {1 - \varnothing } \right)\rho_{\text{f}} c_{\text{pf}} } \right]\frac{\partial T}{\partial t} + \rho_{\text{o}} u_{\text{r}} c_{\text{po}} \frac{\partial T}{\partial r} + \rho_{\text{o}} u_{\text{r}} \sigma_{\text{o}} \frac{\partial p}{\partial r}\\ & \quad + \left[ {\varnothing s_{\text{o}} \rho_{\text{o}} \sigma_{\text{o}} + \varnothing s_{\text{w}} \rho_{\text{w}} \sigma_{\text{w}} - 1} \right]\frac{\partial p}{\partial t} = \frac{1}{r}\frac{\partial }{\partial r}\left[ {\lambda r\frac{\partial T}{\partial r}} \right] + \dot{Q} \\ \end{aligned}$$ Equation (17) or Eq. (2) in the text is fundamentally the same as the thermal energy-balance equation presented by App (2009, 2010). This equation is also the basis for our analytical model to evaluate the flowing-fluid temperature distribution in the reservoir. Let us list the underlying assumptions of this model. This study presupposes that the reservoir is homogeneous, and the rock and fluid properties are time invariant. Other general assumptions include the following: The only flowing fluid in the reservoir is oil. The reservoir is producing at a constant rate. The original temperature of overburden and under-burden formations is the same as the reservoir temperature at initial conditions. The elevation differences from reservoir depth are negligible. Overburden and under-burden formations are infinite sources/sinks. Overburden and under-burden formations remain at their original temperatures even after heat transfer to/from the reservoir occurs. Radial heat conduction is negligible during constant rate production. 
The pressure transient term, ∂p/∂t, is assumed to be negligible. Therefore, for a given flow rate, pressure varies in the radial direction, but not with time. Fluid temperature and pressure remain constant at the reservoir boundary. Porosity and permeability remains unchanged. Variation in fluid properties of density and viscosity is negligible. The fluid's local velocity (superficial velocity) can be estimated from Darcy's equation: $$q = - \frac{kA}{\mu }\frac{\partial p}{\partial r} = - \frac{2\pi rhk}{\mu }\frac{\partial p}{\partial r}$$ $$u_{\text{r}} = \frac{q}{A} = \frac{q}{2\pi rh} = - \frac{k}{\mu }\frac{\partial p}{\partial r}$$ Note that these assumptions are necessary to obtain a useful analytical solution to the flow problem at hand; the model validation section explores the effects of some simplifying assumptions on solution quality. Appendix B: Analytical solution for reservoir flowing-fluid temperature estimation A comprehensive energy-balance equation for the system with a consideration of heat transfer between the system and surroundings is given by Eq. (18). Based on our general assumptions, radial heat conduction during constant rate production is negligible. Additionally, ∂p/∂t term is assumed to be minimal and can be omitted. Therefore, Eq. (17) becomes $$\left[ {\varnothing s_{\text{o}} \rho_{\text{o}} c_{\text{po}} + \varnothing s_{\text{w}} \rho_{\text{w}} c_{\text{pw}} + \left( {1 - \varnothing } \right)\rho_{\text{f}} c_{\text{pf}} } \right]\frac{\partial T}{\partial t} + \rho_{\text{o}} u_{\text{r}} c_{\text{po}} \frac{\partial T}{\partial r} + \rho_{\text{o}} u_{\text{r}} \sigma_{\text{o}} \frac{\partial p}{\partial r} = \dot{Q}$$ We rewrite velocity ur in terms of flow rate as q/(2πrh) and rewrite the ∂p/∂r in terms of flow rate as -(μq)/(2πrhk). Also, we replace q with –q because flow occurs in the negative r-direction during production. Therefore, Eq. (20) becomes $$\left[ {\varnothing s_{\text{o}} \rho_{\text{o}} c_{\text{po}} + \varnothing s_{\text{w}} \rho_{\text{w}} c_{\text{pw}} + \left( {1 - \varnothing } \right)\rho_{\text{f}} c_{\text{pf}} } \right]\frac{\partial T}{\partial t} - \frac{{q\rho_{\text{o}} c_{\text{po}} }}{2\pi rh} \frac{\partial T}{\partial r} - \frac{{\mu q^{2} \rho_{\text{o}} \sigma_{\text{o}} }}{{\left( {2\pi rh} \right)^{2} k}} = \dot{Q}$$ The net input rate of energy between reservoir and under- and overburden formations \(\dot{Q}\) is related to the formation undisturbed temperature in a complex manner. We approximate this term by following App's (2010) approach using Newton's law of cooling, giving \(\dot{Q} = - 2h_{\text{c}} \left[ {T - T_{s} } \right] /h\), where h c is the heat transfer coefficient. The complete energy-balance equation of the system then becomes $$\left[ {\varnothing s_{\text{o}} \rho_{\text{o}} c_{\text{po}} + \varnothing s_{\text{w}} \rho_{\text{w}} c_{\text{pw}} + \left( {1 - \varnothing } \right)\rho_{\text{f}} c_{\text{pf}} } \right]\frac{\partial T}{\partial t} - \frac{{q\rho_{\text{o}} c_{\text{po}} }}{2\pi rh} \frac{\partial T}{\partial r} - \frac{{\mu q^{2} \rho_{\text{o}} \sigma_{\text{o}} }}{{\left( {2\pi rh} \right)^{2} k}} = - \frac{{2h_{\text{c}} \left[ {T - T_{s} } \right]}}{h}$$ If we use lumped parameters A, B, and C, Eq. 
(22) can be simplified to the following expression: $$Ar^{2} \frac{\partial T}{\partial t} - Br \frac{\partial T}{\partial r} - C = - \frac{{4\pi r^{2} h_{\text{c}} }}{q} T + \frac{{4\pi r^{2} h_{\text{c}} }}{q}T_{s}$$ Equation (23) is a first-order, partial-differential equation in which fluid temperature T is a function of radial distance r from the wellbore into the reservoir and producing time t. Initially, the fluid temperature in the reservoir is constant at \(T_{\text{i}}\). We apply the method of characteristics with the initial condition \(T\left( {r,t = 0} \right) = T_{\text{i}}\) to arrive at the following solution to Eq. (23): $$T\left( {r,t} \right) = T_{\text{i}} + \frac{C}{2B}{\text{e}}^{{\frac{{H\left( {Ar^{2} + 2Bt} \right)}}{2B}}} {\text{Ei}}\left[ { - \frac{{H\left( {Ar^{2} + 2Bt} \right)}}{2B}} \right] - \frac{C}{2B}{\text{e}}^{{\frac{{HAr^{2} }}{2B}}} {\text{Ei}}\left[ { - \frac{{HAr^{2} }}{2B}} \right] ,$$ $$E = \frac{{4h_{\text{c}} \pi }}{q}T_{\text{i}}$$ Note that parameters A through H are constant for a particular reservoir and are reported in the main text as Eqs. (5)–(10); Chevarunotai (2014) presents further details of this derivation. Appendix C: Input data for model validation We used a real field example, reported previously in App's (2010) study. App had generated his solutions with a newly developed numerical model. Table 1 shows the static (rock) properties of the reservoir, which are considered the base case in this study. We assumed that the reservoir is homogeneous and that the model parameters remain constant throughout the production period. Table 2 presents the reservoir fluid properties. Oil is the flowing phase; the formation water is considered immobile, and no free gas is produced in the reservoir, given the low saturation pressure. Table 1 Reservoir static parameters (after App 2010) Table 2 Reservoir fluid parameters (after App 2010) Another critical fluid property in flowing-fluid temperature and well productivity calculations is oil viscosity. Data from laboratory measurements for this particular reservoir fluid were also presented in App's paper. Figure 8 presents the oil viscosity as a function of pressure and temperature, which is used for model validation. Oil viscosity as a function of pressure and temperature (after App 2010) An important assumption in our model is that conductive heat flow for our problem is negligible. App and Yoshioka (2013) have shown that the effect of formation thermal conductivity can be represented by the Peclet number. Relating fluid velocity u to production rate q, App and Yoshioka expressed \(P_{e}\) as follows: $$P_{e} = \frac{ur}{\alpha } = \frac{{q\rho_{\text{o}} c_{\text{po}} }}{2\pi h\lambda }$$ Their work showed that when \(P_{e}\) exceeds 3, the effect of \(P_{e}\), and therefore of formation thermal conductivity λ, on fluid temperature change due to expansion is negligible. The calculations of \(P_{e}\) for our lowest production rate of 970 STB/D are $$q = 970\frac{\text{STB}}{\text{D}}\left( {\frac{{5.615\frac{{{\text{ft}}^{3} }}{\text{STB}}}}{{24\frac{\text{hr}}{\text{D}}}}} \right) = 227\,{\text{ft}}^{3} / {\text{hr}}$$ $$P_{e} = \frac{{q\rho c_{p} }}{2\pi h\lambda } = \frac{{227\frac{{{\text{ft}}^{3} }}{\text{hr}} \times 51.19\frac{\text{lbm}}{{{\text{ft}}^{3} }} \times 0.53\frac{\text{Btu}}{{{\text{lbm}}\,^\circ {\text{F}}}}}}{{2\pi \times 100\,{\text{ft}} \times 1.73\frac{\text{Btu}}{{{\text{hr}}\,^\circ {\text{F}}\,{\text{ft}}}}}} = 5.6$$ Similarly, for our highest production rate of 6200 STB/D, \(P_{e} = 36.3\). 
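The unit conversion and Peclet-number arithmetic above are easily scripted; a brief sketch (Python, using only the property values quoted in this appendix) reproduces the quoted values within rounding:

```python
import math

# Values quoted in Appendix C (field units): oil density, oil specific heat,
# net pay thickness, and formation thermal conductivity.
RHO_O = 51.19   # lbm/ft^3
CP_O = 0.53     # Btu/(lbm*degF)
H_PAY = 100.0   # ft
LAMBDA = 1.73   # Btu/(hr*ft*degF)

def peclet(rate_stb_per_day: float) -> float:
    """Pe = q*rho*cp / (2*pi*h*lambda), with q converted from STB/D to ft^3/hr."""
    q_ft3_per_hr = rate_stb_per_day * 5.615 / 24.0
    return q_ft3_per_hr * RHO_O * CP_O / (2.0 * math.pi * H_PAY * LAMBDA)

for rate in (970, 2050, 3270, 4650, 6200):
    print(f"{rate:5d} STB/D -> Pe = {peclet(rate):.1f}")
# The lowest rate gives Pe of about 5.7 and the highest about 36, consistent with
# the values of 5.6 and 36.3 quoted above (small differences reflect rounding),
# so conductive heat transport can safely be neglected for these cases.
```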
Therefore, for the cases considered in this study, omitting the thermal conductivity of the formation does not introduce any significant error. We note that for most deepwater assets, economic production rates are expected to be high enough to result in correspondingly high Peclet numbers. As a consequence, the underlying assumptions made while deriving the analytical formulation of this coupled fluid and heat flow problem appear reasonable. Appendix D: Gas properties To make our model suitable for gas temperature calculation, we allowed several properties to vary. The models or correlations used for these computations are presented below. Gas viscosity Gas viscosity correlations have been presented by a number of authors. For our calculations, we used the one by Lee et al. (1966): $$\mu_{g} = A\left( {10^{ - 4} } \right)\exp (B\rho_{g}^{C} )$$ $$A = \frac{{\left( {9.379 + 0.01607\,M_{\text{a}} } \right)T^{1.5} }}{{209.2 + 19.26\,M_{\text{a}} + T}}$$ $$B = 3.448 + \frac{986.4}{T} + 0.01009\,M_{\text{a}}$$ $$C = 2.447 - 0.2224B$$ Gas density The gas density is calculated from the real-gas law, as follows: $$pV = ZnRT = \frac{ZWRT}{{M_{\text{a}} }}$$ $$\rho = \frac{{pM_{\text{a}} }}{ZRT} = \frac{{0.00149406\,pM_{\text{a}} }}{ZT}$$ where ρ is gas density in gm/cc, \(M_{\text{a}}\) is apparent molecular weight, T is the temperature in °R, Z is the gas law deviation factor, and p is the pressure in psia. Joule–Thomson coefficient An expression for calculating the Joule–Thomson coefficient of real gases has been developed by Hasan et al. (2010): $$c_{p} C_{\text{J}} = - V + T\left( {\frac{\partial V}{\partial T}} \right)_{p} = \left( {\frac{VT}{Z}} \right)\left( {\frac{\partial Z}{\partial T}} \right)_{p}$$ This expression requires accurate estimates of Z and \(\left( {\frac{\partial Z}{\partial T}} \right)_{p}\). We use the following expression for these by Dranchuk et al. (1973): $$Z = \frac{0.27\,p_{r} }{T_{r} \,\rho }$$ Here, ρ is a polynomial function of the reduced pressure \(p_{r} = p/p_{c}\) and reduced temperature \(T_{r} = T/T_{c}\), which is given by $$f\left( \rho \right) = a\rho^{6} + b\rho^{3} + c\rho^{2} + d\rho + e\rho^{3} \left( {1 + f\rho^{2} } \right)e^{{ - f\rho^{2} }} - g$$ The constants are as follows: a = 0.06423, b = 0.5353\(T_{r}\) − 0.6123, c = 0.3151\(T_{r}\) − 1.0467 − 0.5783/\(T_{r}^{2}\), d = \(T_{r}\), e = 0.6816/\(T_{r}^{2}\), f = 0.6845, and g = 0.27\(p_{r}\). The Newton–Raphson approach is used to solve for Z and \(\left( {\frac{\partial Z}{\partial T}} \right)_{p}\). Appendix E: Properties for low-pressure gas DST The following properties are taken from App (2009) for the temperature analysis of a low-pressure gas production test (Tables 3, 4). Table 3 Reservoir and fluid properties (App 2009) Table 4 Component thermal and physical properties Chevarunotai, N., Hasan, A.R., Kabir, C.S. et al. Transient flowing-fluid temperature modeling in reservoirs with large drawdowns. J Petrol Explor Prod Technol 8, 799–811 (2018). https://doi.org/10.1007/s13202-017-0397-0 Keywords: Transient heat transport in porous media; Joule–Thomson effect; Heat transport to under- and overburden formations; Validation of analytical solutions with numerical results
A neural network-based method for polypharmacy side effects prediction Raziyeh Masumshah1, Rosa Aghdam2 & Changiz Eslahchi1,2 BMC Bioinformatics volume 22, Article number: 385 (2021) Cite this article Polypharmacy is a type of treatment that involves the concurrent use of multiple medications. Drugs may interact when they are used simultaneously. So, understanding and mitigating polypharmacy side effects are critical for patient safety and health. Since the known polypharmacy side effects are rare and they are not detected in clinical trials, computational methods are developed to model polypharmacy side effects. We propose a neural network-based method for polypharmacy side effects prediction (NNPS) by using novel feature vectors based on mono side effects, and drug–protein interaction information. The proposed method is fast and efficient which allows the investigation of large numbers of polypharmacy side effects. Our novelty is defining new feature vectors for drugs and combining them with a neural network architecture to apply for the context of polypharmacy side effects prediction. We compare NNPS on a benchmark dataset to predict 964 polypharmacy side effects against 5 well-established methods and show that NNPS achieves better results than the results of all 5 methods in terms of accuracy, complexity, and running time speed. NNPS outperforms about 9.2% in Area Under the Receiver-Operating Characteristic, 12.8% in Area Under the Precision–Recall Curve, 8.6% in F-score, 10.3% in Accuracy, and 18.7% in Matthews Correlation Coefficient with 5-fold cross-validation against the best algorithm among other well-established methods (Decagon method). Also, the running time of the Decagon method which is 15 days for one fold of cross-validation is reduced to 8 h by the NNPS method. The performance of NNPS is benchmarked against 5 well-known methods, Decagon, Concatenated drug features, Deep Walk, DEDICOM, and RESCAL, for 964 polypharmacy side effects. We adopt the 5-fold cross-validation for 50 iterations and use the average of the results to assess the performance of the NNPS method. The evaluation of the NNPS against five well-known methods, in terms of accuracy, complexity, and running time speed shows the performance of the presented method for an essential and challenging problem in pharmacology. Datasets and code for NNPS algorithm are freely accessible at https://github.com/raziyehmasumshah/NNPS. Drug combination, commonly referred to as polypharmacy, has become a common practice in modern medicine especially in elderly and patients with complex diseases [1,2,3,4,5,6,7,8,9]. While this strategy may treat the diseases more effectively, drug-drug interactions (DDIs) can occur unexpectedly [5, 6, 10,11,12,13,14,15,16,17,18]. DDI is a change in the pharmacologic effect of one drug when used with another drug. DDIs are the most common reason for patients to go to emergency units [4, 6, 12, 19,20,21,22] and can associate with Adverse Drug Reactions (ADRs) (i.e. side effects) including death, and it is a critical problem for public health [6, 10, 23,24,25,26,27]. Shtar et al. demonstrated that between 3 and 5% of all hospital medication injuries were dedicated to DDI [19]. Although some side effects can be discovered in experiments and clinical trials, they are usually costly and consuming time [10]. Most of the known polypharmacy side effects are rare and they are usually not observed in small clinical trials. So, it is difficult to identify these side effects manually [16]. 
Therefore, developing computational methods for predicting DDIs is desirable. Methods for the DDI prediction problem are divided into two categories. The first category only determines the presence or absence of interactions and does not detect the type of side effect. These methods collect the interactions via experiments and clinical studies, medical records, and also through network modeling based on DDI similarities, side-effect similarities, and structure similarities [11, 28,29,30,31,32,33,34,35,36,37,38,39,40,41]. On the other hand, the goal of the second category is to determine the type of side effects between drugs [16, 42,43,44,45]. The methods in the second category play their role in reducing the impact of polypharmacy side effects. In the following, some studies that address this issue are described. Nickel et al. proposed the relational learning approach named RESCAL, which was based on a tensor factorization method [42]. DEDICOM was introduced by Papalexakis et al. and, similar to RESCAL, was based on tensor decomposition [43]. The Deep Walk method was based on a neural embedding approach which used a logistic regression classifier [44, 45]. The concatenated drug features method used a gradient boosting trees classifier to predict side effects [16]. Zitnik et al. designed a multi-relational method called Decagon, which was based on a tensor factorization decoder [16]. In this study, we develop a neural network-based method for polypharmacy side effects prediction (NNPS). NNPS utilizes a neural network model with novel features and achieves better results than 5 well-known methods in terms of accuracy, complexity, and running time. In the next section, we describe the required datasets and the details of the NNPS algorithm. In the results section, the results of the NNPS model are compared with those of the Decagon, Concatenated drug features, Deep Walk, DEDICOM, and RESCAL methods. The conclusion and some possible further works are presented in the Discussion section. In this section, the mono side effects, the drug–protein interactions (DPIs), and the DDIs information are presented in detail. In the following, we describe the databases; a summary of these databases is given in Table 1. Table 1 Databases details Drug–drug interactions and mono side effects information Multi-drug treatment is common practice [1,2,3], and the modification of one drug's effect by another drug, which constitutes a DDI, can produce adverse side effects; therefore, knowledge of the side effects of DDIs is a key issue in drug development and disease treatment. The DDI side effects (polypharmacy side effects) are collected from the TWOSIDES database [46]. TWOSIDES provides a reliable and comprehensive database for DDIs and has 1317 side effects on 645 drugs across 63,473 drug pairs. TWOSIDES is extracted from the Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS). Like the previous study on the polypharmacy side effects prediction task [16], we consider 964 polypharmacy side effects which occur in at least 500 DDIs. The side effects of individual drugs (mono side effects) are obtained from the Side Effect Resource (SIDER) and OFFSIDES databases [46, 47]. The information in the SIDER database is extracted from drug labels and contains 1556 drugs and 5868 side effects compiled from public documents. The information in the OFFSIDES database is observed during clinical trials and contains 1332 drugs and 10,097 off-label side effects. 
Like TWOSIDES, OFFSIDES was generated from FAERS, which collects reports from doctors, patients, and drug companies. Finally, by taking the union of the SIDER and OFFSIDES databases and eliminating synonymous side effects, 10,184 mono side effects are obtained for the 645 drugs in the TWOSIDES database. Drug–protein interactions DPIs are obtained from the Search Tool for Interactions of Chemicals (STITCH) database, which provides relationships between drugs and target proteins [48,49,50,51]. Using the STITCH database, we obtain interactions between 8934 proteins and the 645 drugs in the TWOSIDES database. The number of interactions between these proteins and drugs is 18,690. Feature vectors For each side effect, two types of feature matrices are considered: a mono side effects matrix with dimension \(645 \times 10{,}184\) and a DPIs matrix with dimension \(645 \times 8934\). Due to the large length of these feature vectors and their sparsity, feature extraction methods are an effective way to reduce the size of the features without losing important information. So, Principal Component Analysis (PCA) is applied to the mono side effects and DPIs matrices. The minimum number of principal components is chosen such that 95% of the variance in each matrix is retained. The two reduced feature matrices are denoted by \(F_{1}\) with dimension \(645 \times 503\) and \(F_{2}\) with dimension \(645 \times 22\), respectively. Then, by concatenating \(F_{1}\) (blue) and \(F_{2}\) (green), the drug feature matrix with dimension \(645 \times 525\) is obtained (Fig. 1a). The rows of the resulting drug feature matrix correspond to the drug IDs, while the columns hold the feature information. For a given drug pair \((d_{i}, d_{j})\), the i-th and j-th rows of the drug feature matrix are summed to represent the drug-pair feature vector, which is fed to the neural network (Fig. 1b); a short code sketch of this construction is given below. For the i-th side effect, the NNPS architecture is used. a Concatenation of the PCA representation of mono side effects \((F_{1})\) (blue) and the PCA representation of drug–protein interactions \((F_{2})\) (green). b Sum of the i-th and j-th rows in the drug features matrix for each \(d_{i}\) and \(d_{j}\) drug pair. c A three-layer neural network that computes the probability \(p_{i}\) and classifies the i-th side effect based on the threshold \(\theta_{i}\) Training the neural network model The drug pairs associated with each type of side effect are split into training, validation, and test sets, and 5-fold cross-validation is considered. We use 80 percent of the drug pairs for the training set, 10 percent for the validation set, and 10 percent for the test set. The following search space is considered to achieve the best neural network architecture based on the training datasets:
The number of hidden layers: \(\lbrace 1,2,3,4,5 \rbrace\)
The number of neurons in hidden layers: \(\lbrace 25,50,100,200,300 \rbrace\)
Activation functions: \(\lbrace\)Rectified Linear Unit (ReLU), hyperbolic tangent (tanh), and sigmoid\(\rbrace\)
The dropout rate: \(\lbrace 0.1,0.3,0.5 \rbrace\)
The learning rate: \(\lbrace 0.01,0.001 \rbrace\)
The momentum: \(\lbrace 0.7,0.9 \rbrace\)
We trained several networks with two, three, four, and five hidden layers and varying numbers of neurons (300, 200, 100, 50, and 25). We have included the best results for each trained network in Table 2. 
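As referenced above, the feature-construction step can be illustrated with a minimal sketch (scikit-learn and NumPy; the array names are hypothetical, and only the dimensions stated above come from the text):

```python
# Minimal sketch of the NNPS feature construction described above.
# mono_se: 645 x 10,184 binary matrix of mono side effects
# dpi:     645 x 8,934 binary matrix of drug-protein interactions
# Both arrays are assumed to be loaded beforehand; names are illustrative.
import numpy as np
from sklearn.decomposition import PCA

def build_drug_features(mono_se: np.ndarray, dpi: np.ndarray) -> np.ndarray:
    """Reduce each matrix so that 95% of its variance is retained, then concatenate."""
    f1 = PCA(n_components=0.95).fit_transform(mono_se)  # ~645 x 503 in the paper
    f2 = PCA(n_components=0.95).fit_transform(dpi)      # ~645 x 22 in the paper
    return np.hstack([f1, f2])                          # 645 x 525 drug feature matrix

def pair_feature(drug_features: np.ndarray, i: int, j: int) -> np.ndarray:
    """Represent the pair (d_i, d_j) by the element-wise sum of the two drug rows."""
    return drug_features[i] + drug_features[j]
```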
As shown in Table 2, training a network with three hidden layers improves the results without significantly increasing the training time compared to training a network with two hidden layers. The results improve slightly for networks with four or five hidden layers, but the computational time increases significantly. We chose a network with three hidden layers of 300, 200, and 100 neurons, respectively, because the other structures incurred a significant increase in computational cost with little benefit in model performance. We obtained good results in terms of both Area Under the Receiver-Operating Characteristic (AUROC) and Area Under the Precision–Recall Curve (AUPRC) for this network, with a computational time of 8 h and 40 min. Table 2 Results of different neural network architectures The architecture of the neural network The neural network is a feedforward network with fully connected layers consisting of an input layer, three hidden layers, and an output layer (Fig. 1c). The number of input-layer neurons is equal to the size of the feature vector (525). The output layer has one neuron that outputs a probability value. For the i-th side effect, we assign class 0 (absence of an interaction) or 1 (presence of an interaction) to the output by using a threshold \(\theta _{i}\) in the range (0, 1). If the probability value is greater than \(\theta _{i}\), the method suggests that the i-th side effect is present for the selected pair of drugs; otherwise, this side effect is not present for the considered pair of drugs. For weight initialization, the Glorot normal initializer, also called the Xavier normal initializer, is applied [52]. After investigating the candidate activation functions, we utilize the ReLU activation function between the layers of the neural model and a sigmoid activation function for the output layer (Fig. 1c). The optimization of the model parameters is done by using the binary cross-entropy loss function and Stochastic Gradient Descent (SGD) [53]. In addition, we trained models with different parameter settings (see Additional file 1: Table S1). We calculated the loss value of each model for each epoch and averaged it (MLoss) over all 964 side effects. Figure 2 shows the results of this investigation. In this work, MLoss is obtained by the following formula: $$\begin{aligned} MLoss_{i} =\frac{\Sigma _{j=1}^{964}Loss_{side~effect_{j}}}{964} ,\quad for\;epoch\; i=1,\ldots ,50 \end{aligned}$$ Figure 3 plots AUROC against the loss value to guide the choice of epoch for the best-performing model (NNPS). To do so, we calculated and averaged the AUROC (MAUROC) and MLoss of the best-performing model for each epoch over all 964 side effects and plotted them, where MAUROC is obtained by the following formula: $$\begin{aligned} MAUROC_{i} =\frac{\Sigma _{j=1}^{964}AUROC_{side~effect_{j}}}{964} ,\quad for\;epoch\; i=1,\ldots ,50 \end{aligned}$$ As shown in this figure, the selected structure works well across all 964 polypharmacy side effects. As a result, we chose epoch 50, based on Figs. 2 and 3, for the best-performing model of our neural network. Loss curves of models based on different parameters for 50 epochs over all 964 polypharmacy side effects MAUROC and MLoss of NNPS model for 50 epochs over all 964 polypharmacy side effects Training hyperparameters 
According to Fig. 2, the hyperparameters of the best model, which we name NNPS, are tuned with 5-fold cross-validation: 50 epochs and a batch size of 1024 are used, with a dropout rate of 0.1 to prevent over-fitting, and a learning rate of 0.01 and a momentum of 0.9 are chosen by trial and error. Because the presence or absence of a polypharmacy side effect is determined by a threshold, a ROC curve for each side effect is plotted, and the threshold \(\theta _{i}\) with the highest F-score value is chosen. The hyperparameter values, the standard deviation, and the average thresholds for the NNPS method are shown in Table 3. Table 3 The selected hyperparameter values Assessment and comparison In this section, the performance of NNPS is benchmarked against 5 well-known methods, Decagon, Concatenated drug features, Deep Walk, DEDICOM, and RESCAL, for 964 polypharmacy side effects. We adopt 5-fold cross-validation for 50 iterations and use the average of the results to assess the performance of the NNPS method. The average AUROC and AUPRC values of all methods for the 964 polypharmacy side effects are presented in Table 4. Because only the source code and implementation of Decagon are available, we execute 5-fold cross-validation for 50 iterations for the Decagon method and observe that the obtained results are very similar to the results reported for the Decagon method in [16]. In Table 4, we report the average of the obtained results for the Decagon method and, using Table 2 in [16], the reported performances of the other methods for which source code is not available. According to Table 4, NNPS achieves improvements of 9.2% and 12.8% in AUROC and AUPRC, respectively, against Decagon, which is the best algorithm among the other well-known methods. To compare the results more precisely, we compare NNPS to Decagon in more detail and with additional criteria. Figure 4 illustrates the boxplots of the AUROC and AUPRC criteria for the 964 polypharmacy side effects produced by the NNPS and Decagon methods, respectively. As shown in Fig. 4, the medians of the AUROC and AUPRC criteria for NNPS are much higher than those for the Decagon method, and the ranges of variation of the AUROC and AUPRC criteria for the NNPS method are smaller than those for the Decagon method, which is evidence of the good performance of NNPS. Boxplot of area under the receiver-operating characteristic (AUROC) and area under the precision–recall curve (AUPRC) values of all 964 side effects for NNPS and Decagon methods Table 4 The average of Area under ROC curve (AUROC), area under precision–recall curve (AUPRC) for 964 polypharmacy side effects prediction For further evaluation, the thresholds that produced the best results for each polypharmacy side effect based on F-score values are detected for the NNPS and Decagon methods, and the results of NNPS and Decagon based on F-score, Accuracy (ACC), and Matthews Correlation Coefficient (MCC) are compared. Table 5 reports the True Positive (TP), False Positive (FP), True Negative (TN), False Negative (FN), Precision, Recall, F-score, ACC, and MCC of these two methods for all 964 side effects. According to Table 5, NNPS outperforms Decagon by about 8.6%, 10.3%, and 18.1% based on the F-score, ACC, and MCC criteria, respectively. 
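For concreteness, the selected configuration and the training settings of Table 3 can be sketched in a Keras-style implementation. This is a minimal illustration rather than the authors' released code; in particular, the placement of dropout after each hidden layer is an assumption:

```python
# Minimal sketch of the NNPS network for one side effect, assuming TensorFlow/Keras.
# Settings reflect those stated in the text: three hidden layers of 300/200/100
# ReLU units, a sigmoid output, Glorot normal initialization, dropout 0.1,
# SGD with lr = 0.01 and momentum = 0.9, binary cross-entropy, batch size 1024, 50 epochs.
import tensorflow as tf
from tensorflow.keras import layers, initializers, optimizers

def build_nnps_model(input_dim: int = 525) -> tf.keras.Model:
    init = initializers.GlorotNormal()
    model = tf.keras.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(300, activation="relu", kernel_initializer=init),
        layers.Dropout(0.1),
        layers.Dense(200, activation="relu", kernel_initializer=init),
        layers.Dropout(0.1),
        layers.Dense(100, activation="relu", kernel_initializer=init),
        layers.Dropout(0.1),
        layers.Dense(1, activation="sigmoid", kernel_initializer=init),
    ])
    model.compile(
        optimizer=optimizers.SGD(learning_rate=0.01, momentum=0.9),
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(curve="ROC"), tf.keras.metrics.AUC(curve="PR")],
    )
    return model

# Typical use for one side effect (X_* are summed drug-pair feature vectors):
# model = build_nnps_model()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, batch_size=1024)
```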
Table 5 The average of the best results of the NNPS and Decagon methods for 964 side effects
Fig. 5 Receiver-Operating Characteristic (ROC) curve (part a) and loss curve (part b) of the Schizoaffective disorder polypharmacy side effect for 50 epochs
Evaluation of feature selection, aggregation, and train/test set sizes
In this part, to show the significance of the PCA algorithm for dimension reduction, we compare the results of NNPS when the low variance filter and autoencoder techniques are used as two alternative feature selection methods. We use these two techniques to reduce the mono side effect and drug–protein interaction matrices to 503 and 22 features, respectively. Table 6 presents the results of NNPS with these dimension reduction techniques and shows that the performance of the NNPS method is higher when the PCA technique is used. We also consider two operators (summation and concatenation) for aggregating the feature vectors of two drugs into one feature vector representing the drug–drug pair in the neural network architecture. As shown in Table 7, the summation operator achieves better results than concatenating the feature vectors of the two drugs as input to the neural network. We further train the NNPS method with two different sizes of the training, validation, and test sets, and present the results in Table 8. This table shows that the performance of the NNPS method decreases only very slightly when the size of the training set is reduced, which is further evidence of the advantage of the method. Finally, we compared the performance of our method to four well-known machine learning algorithms using AUROC and AUPRC. The average results of these methods for all 964 polypharmacy side effects are shown in Table 9; according to these values, NNPS has the best performance among all methods.
Table 6 The results of three dimension reduction techniques for 964 side effects
Table 7 Results of two feature aggregation operators for 964 side effects
Table 8 The results of the NNPS method with different sizes of the Training set (Tr set) and Validation and Test sets (VT sets)
Table 9 Results of different machine learning methods
Time complexity
Among the previous methods, only the source code and implementation of Decagon are available, so we can only compare the running time of NNPS to that of the Decagon method. The training time of NNPS is about 8 h (Linux (Ubuntu 16.04), 15 CPUs, Intel Xeon(R) 2.00 GHz) on the DPI and DDI datasets, and it is therefore noticeably faster than Decagon, which requires 15 days for 5-fold cross-validation on a single GTX1080Ti graphics card. This reduced training time, which stems from the simplicity and efficiency of the model, is one of the main advantages of NNPS, and the method can further be generalized to other purposes and datasets as well.
Due to the enormous number of drug combinations, screening all possible pairs for polypharmacy side effects is unfeasible in terms of cost and time. On the other hand, understanding the side effects of DDIs is an essential step in drug development and drug co-administration. Therefore, several computational methods have been developed for predicting polypharmacy side effects. The most recent approach to this task, the Decagon method, predicts polypharmacy side effects with performances of up to 0.874 and 0.825 in terms of AUROC and AUPRC, respectively. In this study, we consider a neural network architecture with novel feature vectors.
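To make the feature pipeline compared above concrete, here is a minimal sketch of PCA-based reduction followed by summation aggregation, the combination that performed best. scikit-learn is assumed for illustration, and the split of the 525 input dimensions into 503 mono-side-effect components and 22 drug–protein components is inferred from the feature sizes quoted above rather than stated as such by the paper.

# Sketch of the drug feature construction and pair aggregation (assumptions noted above).
import numpy as np
from sklearn.decomposition import PCA

def drug_feature_matrix(mono_side_effects, drug_protein, n_mono=503, n_dpi=22):
    # mono_side_effects: (n_drugs, n_mono_side_effects) binary matrix
    # drug_protein:      (n_drugs, n_proteins) binary matrix
    # Both matrices must have at least n_mono / n_dpi rows and columns for PCA to apply.
    mono_reduced = PCA(n_components=n_mono).fit_transform(mono_side_effects)
    dpi_reduced = PCA(n_components=n_dpi).fit_transform(drug_protein)
    return np.hstack([mono_reduced, dpi_reduced])   # one 525-dimensional vector per drug

def pair_feature(features, i, j):
    # Summation aggregation: the pair (i, j) is the sum of its two drug vectors,
    # which also makes the representation symmetric in the order of the drugs.
    return features[i] + features[j]

One natural reading of why summation beats concatenation here is exactly this symmetry: the pair (i, j) and the pair (j, i) receive the same input, whereas concatenation would have to learn that invariance from data.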
In the NNPS method, each drug is represented by a feature vector based on its mono side effects and drug–protein interactions, and, to decrease the complexity of the method, PCA is used for dimension reduction of the feature vectors. For a given drug pair, the corresponding drug feature vectors are summed, and the result is used to train the neural network for predicting polypharmacy side effects. The superior performance of NNPS has two main reasons: the first is the novel feature vectors obtained by the dimension reduction technique, and the second is the choice of a simple neural network architecture. As shown in Additional file 1 and Table 10, NNPS achieves excellent accuracy on the polypharmacy side effect prediction task. We have provided the 10 best and worst performing polypharmacy side effects, based on AUROC and AUPRC, for both the NNPS and Decagon methods; the results can be found in Additional file 1: Tables S2–S7. These tables show that the performance of the NNPS method is better than the performance of the Decagon method. Figure 5 part (a) shows the ROC curve for the Schizoaffective disorder side effect (one of the best performances of NNPS), and part (b) of Fig. 5 illustrates the loss curve of the model over the epochs. Similarly, parts (a) and (b) of Fig. 6 show the ROC and loss curves of NNPS for the Icterus side effect, one of the worst performances of NNPS. As these figures show, NNPS works well for each side effect individually, and the loss values at epoch 50 are acceptable. Among the side effects with the best performance in NNPS, five important side effects that can lead to death or serious complications are selected [54,55,56,57,58]. The performance of the NNPS and Decagon methods on these dangerous side effects, together with the supporting literature evidence, is collected in Table 10. According to Table 10, NNPS reaches AUROC values of 1.0 on the dangerous polypharmacy side effects, whereas the Decagon values lie between 0.791 and 0.936; on AUPRC, NNPS again reaches values of 1.0, while the Decagon performances are between 0.789 and 0.911. These findings show that, for dangerous side effects, the performance of NNPS is higher than that of Decagon, and that NNPS is an effective approach for predicting polypharmacy side effects, especially for detecting dangerous ones.
Fig. 6 Receiver-Operating Characteristic (ROC) curve (part a) and loss curve (part b) of the Icterus polypharmacy side effect for 50 epochs
Table 10 Results for dangerous side effects in NNPS and Decagon on AUROC and AUPRC
In summary, the evaluation of NNPS against five well-known methods in terms of accuracy, complexity, and running time demonstrates the performance of the presented method on an essential and challenging problem in pharmacology. As future work, we suggest adding protein–protein interaction information to the model, as it plays a crucial role in many biological functions and may lead to more accurate results. Another avenue for research is to apply the proposed method to other datasets and to compare the resulting findings on the association of diseases and polypharmacy side effects with the current work. Datasets and code for the NNPS algorithm are freely accessible at https://github.com/raziyehmasumshah/NNPS.
Masnoon N, Shakib S, Kalisch-Ellett L, Caughey GE. What is polypharmacy? A systematic review of definitions. BMC Geriatr. 2017;17(1):1–10. https://doi.org/10.1186/s12877-017-0621-2.
World Health Organization. Medication safety in polypharmacy. Med Without Harm. 2019;1(1):1–63. Wilson M, Mcintosh J, Codina C, Flemming G, Geitona M, Gillespie U, Harrison C, Illario M, Kinnear M, Fernandez-llimos F, Kempen T, Menditto E, Michael N, Scullin C, Wiese B. Alpana Mair Plus the SIMPATHY consortium. Robert Gordon University Aberdeen (2017) Avery T, Barber N, Ghaleb B, Franklin BD, Armstrong S, Crowe S, Dhillon S, Freyer A, Howard R, Pezzolesi C, Serumaga B, Swanwick G, Olanrenwaju T. Investigating the prevalence and causes of prescribing errors in general practice?: The PRACtICe Study (PRevalence And Causes of prescrIbing errors in general practiCe) A report for the GMC. General Med Counc. 2012;1(May):1–187. Rodrigues MCS, De Oliveira C. Interações medicamentosas e reações adversas a medicamentos em polifarmácia em idosos: Uma revisão integrativa. Rev Lat Am Enfermagem. 2016;24:1–17. https://doi.org/10.1590/1518-8345.1316.2800. Shah BM, Hajjar ER. Polypharmacy, adverse drug reactions, and geriatric syndromes. Clin Geriatr Med. 2012;28(2):173–86. https://doi.org/10.1016/j.cger.2012.01.002. Oluwaseun E. William\_P. Acad Div Child Health. 2015;101(4):1–13. Verrotti A, Tambucci R, Di Francesco L, Pavone P, Iapadre G, Altobelli E, Matricardi S, Farello G, Belcastro V. The role of polytherapy in the management of epilepsy: suggestions for rational antiepileptic drug selection. Expert Rev Neurother. 2020;20(2):167–73. https://doi.org/10.1080/14737175.2020.1707668. Hosseini L, Hajibabaee F, Navab E. Reviewing polypharmacy in elderly. Syst Rev Med Sci. 2020;1(1):17–24. Chen C-m, Kuo L-n, Cheng K-j, Shen W-c, Bai K-j, Wang C-c, Chiang Y-c, Chen H-y. The effect of medication therapy management service combined with a national PharmaCloud system for polypharmacy patients. Comput Methods Programs Biomed. 2016;134(1):109–11. Zhang P, Wang F, Hu J, Sorrentino R. Label propagation prediction of drug–drug interactions based on clinical side effects. Sci Rep. 2015;5(1):1–10. https://doi.org/10.1038/srep12339. Valenza PL, McGinley TC, Feldman J, Patel P, Cornejo K, Liang N, Anmolsingh R, McNaughton N. Dangers of polypharmacy. Vignettes Patient Saf. 2017;1(1):47–69. https://doi.org/10.5772/intechopen.69169. Stephen LJ, Brodie MJ. Antiepileptic drug monotherapy versus polytherapy: pursuing seizure freedom and tolerability in adults. Curr Opin Neurol. 2012;25(2):164–72. https://doi.org/10.1097/WCO.0b013e328350ba68. Andrew T, Milinis K, Baker G, Wieshmann U. Self reported adverse effects of mono and polytherapy for epilepsy. Seizure. 2012;21(8):610–3. https://doi.org/10.1016/j.seizure.2012.06.013. Aggarwal A, Mehta S, Gupta D, Sheikh S, Pallagatti S, Singh R, Singla I. Clinical & immunological erythematosus patients characteristics in systemic lupus Maryam. J Dent Educ. 2012;76(11):1532–9. https://doi.org/10.4103/ijmr.IJMR. Zitnik M, Agrawal M, Leskovec J. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics. 2018;34(13):457–66. https://doi.org/10.1093/bioinformatics/bty294. St. Louis E. Truly, "rational" polytherapy: maximizing efficacy and minimizing drug interactions, drug load, and adverse effects. Curr Neuropharmacolo. 2009;7(2):96–105. https://doi.org/10.2174/157015909788848929. Holmes LB, Mittendorf R, Shen A, Smith CR, Hernandez-Diaz S. Fetal effects of anticonvulsant polytherapies: different risks from different drug combinations. Arch Neurol. 2011;68(10):1273–9. https://doi.org/10.1001/archneurol.2011.133. Shtar G, Rokach L, Shapira B. 
Detecting drug–drug interactions using artificial neural networks and classic graph similarity measures. PLoS ONE 14(8), 1–25 (2019). https://doi.org/10.1371/journal.pone.0219796. arXiv:1903.04571 Mekonnen AB, Alhawassi TM, McLachlan AJ, Brien JE. Adverse drug events and medication errors in African hospitals: a systematic review. Drugs Real World Outcomes. 2018;5(1):1–24. https://doi.org/10.1007/s40801-017-0125-6. Alsulami Z, Conroy S, Choonara I. Medication errors in the Middle East countries: a systematic review of the literature. Eur J Clin Pharmacol. 2013;69(4):995–1008. https://doi.org/10.1007/s00228-012-1435-y. Sears K, Scobie A, Mackinnon NJ. Patient-related risk factors for self-reported medication errors in hospital and community settings in 8 countries. Can Pharm J. 2012;145(2):88–93. https://doi.org/10.3821/145.2.cpj88. Lin X, Quan Z, Wang Z-J, Ma T, Zeng X. KGNN: knowledge graph neural network for drug-drug interaction prediction. IJCAI. 2020. https://doi.org/10.24963/ijcai.2020/380. Davies EA, O'Mahony MS. Adverse drug reactions in special populations—the elderly. Br J Clin Pharmacol. 2015;80(4):796–807. https://doi.org/10.1111/bcp.12596. Molokhia M, Majeed A. Current and future perspectives on the management of polypharmacy. BMC Fam Pract. 2017;18(1):1–9. https://doi.org/10.1186/s12875-017-0642-0. Hubbard RE, O'Mahony MS, Woodhouse KW. Medication prescribing in frail older people. Eur J Clin Pharmacol. 2013;69(3):319–26. https://doi.org/10.1007/s00228-012-1387-2. Liu R, AbdulHameed MDM, Kumar K, Yu X, Wallqvist A, Reifman J. Data-driven prediction of adverse drug reactions induced by drug–drug interactions. BMC Pharmacol Toxicol. 2017;18(1):1–18. https://doi.org/10.1186/s40360-017-0153-6. Zhang W, Chen Y, Liu F, Luo F, Tian G, Li X. Predicting potential drug–drug interactions by integrating chemical, biological, phenotypic and network data. BMC Bioinform. 2017;18(1):1–12. https://doi.org/10.1186/s12859-016-1415-9. Lewis R, Guha R, Korcsmaros T, Bender A. Synergy maps: exploring compound combinations using network-based visualization. J Cheminform. 2015;7(1):1–11. https://doi.org/10.1186/s13321-015-0090-6. Percha B, Garten Y, Altman RB. Discovery and explanation of drug–drug interactions via text mining. Pac Symp Biocomput. 2012;1:410–21. Vilar S, Friedman C, Hripcsak G. Detection of drug–drug interactions through data mining studies using clinical sources, scientific literature and social media. Brief Bioinform. 2018;19(5):863–77. https://doi.org/10.1093/bib/bbx010. Chen D, Zhang H, Lu P, Liu X, Cao H. Synergy evaluation by a pathway–pathway interaction network: a new way to predict drug combination. Mol BioSyst. 2016;12(2):614–23. https://doi.org/10.1039/c5mb00599j. Huang L, Li F, Sheng J, Xia X, Ma J, Zhan M, Wong STC. DrugComboRanker: drug combination discovery based on target network analysis. Bioinformatics. 2014;30(12):228–36. https://doi.org/10.1093/bioinformatics/btu278. Sun Y, Sheng Z, Ma C, Tang K, Zhu R, Wu Z, Shen R, Feng J, Wu D, Huang D, Huang D, Fei J, Liu Q, Cao Z. Combining genomic and network characteristics for extended capability in predicting synergistic drugs for cancer. Nat Commun. 2015;6:1–10. https://doi.org/10.1038/ncomms9481. Takeda T, Hao M, Cheng T, Bryant SH, Wang Y. Predicting drug–drug interactions through drug structural similarities and interaction networks incorporating pharmacokinetics and pharmacodynamics knowledge. J Cheminform. 2017;9(1):1–9. https://doi.org/10.1186/s13321-017-0200-8. Gottlieb A, Stein GY, Oron Y, Ruppin E, Sharan R. 
INDI: a computational framework for inferring drug interactions and their associated recommendations. Mol Syst Biol. 2012;8(592):1–12. https://doi.org/10.1038/msb.2012.26. Li X, Xu Y, Cui H, Huang T, Wang D, Lian B, Li W, Qin G, Chen L, Xie LCO. Artif Intell Med. 2017;17(83):35–43. Li J, Zheng S, Chen B, Butte AJ, Swamidass SJ, Lu Z. A survey of current trends in computational drug repositioning. Brief Bioinform. 2016;17(1):2–12. https://doi.org/10.1093/bib/bbv020. Zitnik M, Zupan B. Data fusion by matrix factorization. IEEE Trans Pattern Anal Mach Intell. 2015;37(1):41–53. https://doi.org/10.1109/TPAMI.2014.2343973. arXiv:1307.0803. Ferdousi R, Safdari R, Omidi Y. Computational prediction of drug–drug interactions based on drugs functional similarities. J Biomed Inform. 2017;70:54–64. https://doi.org/10.1016/j.jbi.2017.04.021. Vilar S, Harpaz R, Uriarte E, Santana L, Rabadan R, Friedman C. Drug–drug interaction through molecular structure similarity analysis. J Am Med Inform Assoc. 2012;19(6):1066–74. https://doi.org/10.1136/amiajnl-2012-000935. Nickel M, Tresp V, Kriegel HP. A three-way model for collective learning on multi-relational data. In: Proceedings of the 28th international conference on machine learning, ICML 2011, vol. 1, p. 809–16 (2011) Papalexakis EE, Faloutsos C, Sidiropoulos ND. Tensors for data mining and data fusion: Models, applications, and scalable algorithms. ACM Trans Intell Syst Technol. 2016;8(2):1–44. https://doi.org/10.1145/2915921. Perozzi B, Al-Rfou R, Skiena S. DeepWalk: Online learning of social representations. In: Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining, vol. 1, no. 1, p. 701–10, 2014. https://doi.org/10.1145/2623330.2623732. arXiv:1403.6652. Zong N, Kim H, Ngo V, Harismendy O. Deep mining heterogeneous networks of biomedical linked data to predict novel drug-target associations. Bioinformatics. 2017;33(15):2337–44. https://doi.org/10.1093/bioinformatics/btx160. Martinez CJ, Torrie JH, Allen ON. Correlation analysis of criteria of symbiotic nitrogen. Fixation by soybeans (Glycine max Merr.). Zentralblatt fur Bakteriologie, Parasitenkunde, Infektionskrankheiten und Hygiene. Zweite naturwissenschaftliche Abt.: Allgemeine, landwirtschaftliche und technische Mikrobiologie 124(3), 212–6 (1970). https://doi.org/10.1126/scitranslmed.3003377.Data-Driven Kuhn M, Letunic I, Jensen LJ, Bork P. The SIDER database of drugs and side effects. Nucleic Acids Res. 2016;44(D1):1075–9. https://doi.org/10.1093/nar/gkv1075. Menche J, Sharma A, Kitsak M, Ghiassian SD, Vidal M, Loscalzo J, Barabási AL. Uncovering disease–disease relationships through the incomplete interactome. Science. 2015;347(6224):841. https://doi.org/10.1126/science.1257601. Chatr-Aryamontri A, Breitkreutz BJ, Oughtred R, Boucher L, Heinicke S, Chen D, Stark C, Breitkreutz A, Kolas N, O'Donnell L, Reguly T, Nixon J, Ramage L, Winter A, Sellam A, Chang C, Hirschman J, Theesfeld C, Rust J, Livstone MS, Dolinski K, Tyers M. The BioGRID interaction database: 2015 update. Nucleic Acids Res. 2015;43(D1):470–8. https://doi.org/10.1093/nar/gku1204. Szklarczyk D, Morris JH, Cook H, Kuhn M, Wyder S, Simonovic M, Santos A, Doncheva NT, Roth A, Bork P, Jensen LJ, Von Mering C. The STRING database in 2017: quality-controlled protein–protein association networks, made broadly accessible. Nucleic Acids Res. 2017;45(D1):362–8. https://doi.org/10.1093/nar/gkw937. 
Rolland T, Taşan M, Charloteaux B, Pevzner SJ, Zhong Q, Sahni N, Yi S, Lemmens I, Fontanillo C, Mosca R, Kamburov A, Ghiassian SD, Yang X, Ghamsari L, Balcha D, Begg BE, Braun P, Brehme M, Broly MP, Carvunis AR, Convery-Zupan D, Corominas R, Coulombe-Huntington J, Dann E, Dreze M, Dricot A, Fan C, Franzosa E, Gebreab F, Gutierrez BJ, Hardy MF, Jin M, Kang S, Kiros R, Lin GN, Luck K, Macwilliams A, Menche J, Murray RR, Palagi A, Poulin MM, Rambout X, Rasla J, Reichert P, Romero V, Ruyssinck E, Sahalie JM, Scholz A, Shah AA, Sharma A, Shen Y, Spirohn K, Tam S, Tejeda AO, Trigg SA, Twizere JC, Vega K, Walsh J, Cusick ME, Xia Y, Barabási AL, Iakoucheva LM, Aloy P, De Las Rivas J, Tavernier J, Calderwood MA, Hill DE, Hao T, Roth FP, Vidal M. A proteome-scale map of the human interactome network. Cell. 2014;159(5):1212–26. https://doi.org/10.1016/j.cell.2014.10.050. Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. J Mach Learn Res. 2010;9(1):249–56. Bottou L. Stochastic gradient descent tricks. Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics) 7700 LECTURE NO(1), 421–436 (2012). https://doi.org/10.1007/978-3-642-35289-8-25 Serban B, Panti Z, Nica M, Pleniceanu M, Popa M, Ene R, Cîrstoiu C. Statistically based survival rate estimation in patients with soft tissue tumors. Rom J Orthop Surg Traumatol. 2019;1(2):84–9. https://doi.org/10.2478/rojost-2018-0085. Arbyn M, Weiderpass E, Bruni L, de Sanjosé S, Saraiya M, Ferlay J, Bray F. Estimates of incidence and mortality of cervical cancer in 2018: a worldwide analysis. Lancet Glob Health. 2020;8(2):191–203. https://doi.org/10.1016/S2214-109X(19)30482-6. Januszewicz A, Guzik T, Prejbisz A, Mikołajczyk T, Osmenda, G, Januszewicz W. 158\_Prejbisz\_ONLINE. PALSKIE 126Janusze(1), 86–93 (2016) Atci IB, Yilmaz H, Yaman M, Baran O, Türk O, Solmaz B, Kocaman Ü, Ozdemir NG, Demirel N, Kocak A. Incidence, hospital costs and in-hospital mortality rates of surgically treated patients with traumatic cranial epidural hematoma. Rom Neurosurg. 2018;32(1):110–5. https://doi.org/10.2478/romneu-2018-0013. Evans EC, Matteson KA, Orejuela FJ, Alperin M, Balk EM, El-Nashar S, Gleason JL, Grimes C, Jeppson P, Mathews C, Wheeler TL, Murphy M. Salpingo-oophorectomy at the time of benign hysterectomy: a systematic review. Obstet Gynecol. 2016;128(3):476–85. https://doi.org/10.1097/AOG.0000000000001592. Changiz Eslahchi and others would like to thank the School of Biological Sciences, Institute for Research in Fundamental Sciences (IPM) and Computing Center of IPM in performing a parallel computing is gratefully acknowledged. No funding to declare. Department of Computer and Data Sciences, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran Raziyeh Masumshah & Changiz Eslahchi School of Biological Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran Rosa Aghdam & Changiz Eslahchi Raziyeh Masumshah Rosa Aghdam Changiz Eslahchi CHE and RA developed the methods. RM performed the computational and statistical analysis. CHE, RM, and RA design the paper and RM wrote the paper. CHE and RA contributed to writing and editing the manuscript. All authors read and approved the final manuscript. Correspondence to Rosa Aghdam or Changiz Eslahchi. 
Additional file 1: Different hyperparameter values for the 964 side effects of each model, and the results of the 10 best and worst performing polypharmacy side effects in NNPS and Decagon on AUROC and AUPRC. Bold numbers show the best performance for each criterion. Masumshah, R., Aghdam, R. & Eslahchi, C. A neural network-based method for polypharmacy side effects prediction. BMC Bioinformatics 22, 385 (2021). https://doi.org/10.1186/s12859-021-04298-y Received: 30 March 2021. Keywords: Polypharmacy side effects prediction; Drug–drug interactions
On higher-dimensional Fibonacci numbers, Chebyshev polynomials and sequences of vector convergents
Coffey, Mark W., Hindmarsh, James, Lettington, Matthew C. and Pryce, John D. 2017. On higher-dimensional Fibonacci numbers, Chebyshev polynomials and sequences of vector convergents. Journal de Théorie des Nombres de Bordeaux 29 (2), pp. 369-423. 10.5802/jtnb.985 Official URL: http://dx.doi.org/10.5802/jtnb.985
We study higher-dimensional interlacing Fibonacci sequences, generated via both Chebyshev type functions and $m$-dimensional recurrence relations. For each integer $m$, there exist both rational and integer versions of these sequences, where the underlying prime congruence structures of the rational sequence denominators enable the integer sequence to be recovered. From either the rational or the integer sequences we construct sequences of vectors in $\mathbb{Q}^m$, which converge to irrational algebraic points in $\mathbb{R}^m$. The rational sequence terms can be expressed as simple recurrences, trigonometric sums, binomial polynomials, sums of squares, and as sums over ratios of powers of the signed diagonals of the regular unit $n$-gon. These sequences also exhibit a "rainbow type" quality, and correspond to the Fleck numbers at negative indices, leading to some combinatorial identities involving binomial coefficients. It is shown that the families of orthogonal generating polynomials defining the recurrence relations employed are divisible by the minimal polynomials of certain algebraic numbers, and the three-term recurrences and differential equations for these polynomials are derived. Further results relating to the Christoffel-Darboux formula, Rodrigues' formula and raising and lowering operators are also discussed. Moreover, it is shown that the Mellin transforms of these polynomials satisfy a functional equation of the form $p_n(s)=\pm p_n(1-s)$, and have zeros only on the critical line $\Re (s)=1/2$.
Institut de Mathématiques de Bordeaux http://jtnb.cedram.org/?lang=en https://arxiv.org/abs/1502.03085
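As background for the Chebyshev-type generation mentioned in the abstract (this is standard one-dimensional material, not taken from the paper itself), the classical Fibonacci numbers already satisfy F_{n+1} = (-i)^n U_n(i/2), where U_n is the Chebyshev polynomial of the second kind; the short check below verifies this numerically for small n.

# Numerical check of the classical identity F_{n+1} = (-i)^n * U_n(i/2) (background only).
def chebyshev_U(n, x):
    u_prev, u = 1, 2 * x                     # U_0 = 1, U_1 = 2x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2 * x * u - u_prev    # U_{k+1} = 2x*U_k - U_{k-1}
    return u

fib = [0, 1]
for _ in range(15):
    fib.append(fib[-1] + fib[-2])

for n in range(12):
    val = ((-1j) ** n) * chebyshev_U(n, 0.5j)
    assert abs(val - fib[n + 1]) < 1e-9
print("F_{n+1} = (-i)^n U_n(i/2) verified for n = 0..11")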
Shortcut Tricks To Solve Alligation
Published on: April 25, 2017
Shortcut Rules to Solve Problems on ALLIGATION Effective for IBPS PO - SBI PO Exam
Here we will start a series of Quantitative Aptitude shortcut tricks for your upcoming SBI - IBPS - SSC and other government competitive exams. We will try to cover all topics of the Quantitative Aptitude section from which questions are generally asked.
Note: The page may take some time to load the quantitative formulas. If you face any problem, just comment below the post.
Trick - 1
'L' litres are drawn from a cask full of water and it is filled with milk. After n operations, if the quantity of water now left in the cask is to that of milk in it as a:b, then the capacity of the cask is given by $\left[ \frac{L}{1-{{\left( \frac{a}{a+b} \right)}^{{}^{1}/{}_{n}}}} \right]litres$
Trick - 2
If a person buys n kg of an item at the rate of Rs. P per kg and sells m kg at a profit of x%, then the rate per kg at which he should sell the remaining quantity to get a profit of y% on the total deal is given by Rs. $P\left[ 1+\frac{ny-mx}{(n-m)100} \right]$
Trick - 3
A man mixes ${{M}_{1}}$ litres of milk at Rs.x per litre with ${{M}_{2}}$ litres at Rs. y per litre. The amount of water that should be added to make the average value of the mixture Rs. z per litre is given by $\left[ \frac{{{M}_{1}}(x-z)+{{M}_{2}}(y-z)}{z} \right]litres$
Trick - 4
L litres of a mixture contain two liquids A and B in the ratio a:b. The amount of liquid B that should be added to get a new mixture containing liquids A and B in the ratio x:y is given by $\left[ \left( \frac{\frac{y}{x}}{1+\frac{b}{a}} \right)-\left( \frac{1}{1+\frac{a}{b}} \right) \right]L$ litres.
Trick - 5
If M kg of a mixture, of which $\frac{a}{b}$ is A and the rest is B, is mixed with N kg of another mixture, of which $\frac{x}{y}$ is A and the rest is B, then the ratio of A to B in the resulting mixture is given by $\left[ \frac{M\frac{a}{b}+N\frac{x}{y}}{M\left( 1-\frac{a}{b} \right)+N\left( 1-\frac{x}{y} \right)} \right]$
Trick - 6
There are 'N' students in a class. Rs. X are distributed among them so that each boy gets Rs.x and each girl gets Rs.y. Then the ratio of boys to girls is given by $\left[ \frac{X-Ny}{Nx-X} \right]$ and the number of boys and the number of girls are $\left( \frac{X-Ny}{x-y} \right)$ and $\left( \frac{Nx-X}{x-y} \right)$ respectively.
Questions for Practice
Q1. Nine litres are drawn from a cask full of water and it is filled with milk. Nine litres of mixture are drawn and the cask is again filled with milk. The quantity of water now left in the cask is to that of milk in it as 16:9. How much does the cask hold?
Q2. Jayshree purchased 150 kg of wheat at the rate of Rs. 7 per kg. She sold 50 kg at a profit of 10%. At what rate per kg should she sell the remaining to get a profit of 20% on the total deal?
Q3. A man mixes 5 kilolitres of milk at Rs.600 per kilolitre with 6 kilolitres at Rs. 540 per kilolitre. How much water should be added to make the average value of the mixture Rs. 480 per kilolitre?
Q4. 729 litres of a mixture contains milk and water in the ratio 7:2. How much water is added to get a new mixture containing milk and water in the ratio 7:3?
Q5. If 2 kg of metal, of which $\frac{1}{3}$ is zinc and the rest is copper, be mixed with 3 kg of metal, of which $\frac{1}{4}$ is zinc and the rest is copper, what is the ratio of zinc to copper in the mixture?
Q6. There are 65 students in a class. 39 rupees are distributed among them so that each boy gets 80 P and each girl gets 30 P. Find the number of boys and girls in that class.
Answer 1. 45 litres
Answer 2. Rs. 8.75 per kg
Answer 3. 2 kilolitres
Answer 4. 81 litres
Answer 5. 17:43
Answer 6. 39 boys and 26 girls
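The first two tricks can be sanity-checked against the first two practice questions; the short script below (added purely as a verification aid, not part of the original post) reproduces Answers 1 and 2 from the formulas as stated.

# Check Trick 1 on Q1: L = 9 litres drawn, n = 2 operations, water : milk = 16 : 9.
L, n, a, b = 9, 2, 16, 9
capacity = L / (1 - (a / (a + b)) ** (1 / n))
print(capacity)          # 45.0 litres, matching Answer 1

# Check Trick 2 on Q2: 150 kg bought at Rs. 7/kg, 50 kg sold at 10% profit,
# target 20% profit on the whole deal.
P, n_kg, m, x, y = 7, 150, 50, 10, 20
rate = P * (1 + (n_kg * y - m * x) / ((n_kg - m) * 100))
print(rate)              # 8.75 rupees per kg, matching Answer 2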
Index theory The area of mathematics whose main object of study is the index of operators (cf. also Index of an operator; Index formulas). The main question in index theory is to provide index formulas for classes of Fredholm operators (cf. also Fredholm operator), but this is not the only interesting question. First of all, to be able to provide index formulas, one has to specify what meaning of "index" is agreed upon, then one has to specify to what classes of operators these formulas will apply, and, finally, one has to explain how to use these formulas in applications. A consequence of this is that index theory also studies various generalizations of the concept of Fredholm index, including $K$-theoretical and cyclic homology indices, for example. Moreover, the study of the analytic properties necessary for the index to be defined are an important part of index theory. Here one includes the study of conditions for being Fredholm or non-Fredholm for classes of operators that nevertheless have finite-dimensional kernels. Soon after (1970s), other invariants of elliptic operators have been defined that are similar in nature to the analytic index. The study of these related invariants is also commonly considered to be part of index theory. The most prominent of these new, related invariants are the Ray–Singer analytic torsion and the eta-invariant. Fixed-point formulas are also usually considered part of index theory, see [a9]. Finally, one of the most important goals of index theory is to study applications of the index theorems to geometry, physics, group representations, analysis, and other fields. There is a very long and fast growing list of papers dealing with these applications. Index theory has become a subject on its own only after M.F. Atiyah and I. Singer published their index theorems in the sequence of papers [a4], [a6], [a7] (cf. also Index formulas). These theorems had become possible only due to progress in the related fields of $K$-theory [a10], [a5] and pseudo-differential operators (cf. also Pseudo-differential operator) [a35], [a37], [a46]. Important particular cases of the Atiyah–Singer index theorems were known before. Among them, Hirzebruch's signature theorem (cf. also Signature) occupies a special place (see [a33], especially for topics such as multiplicative genera and the Langlands formula for the dimension of spaces of automorphic forms). Hirzebruch's theorem was generalized by A. Grothendieck (see [a22]), who introduced many of the ideas that proved to be fundamental for the proof of the index theorems. All these theorems turned out to be consequences of the Atiyah–Singer index theorems (see also Index formulas for some index formulas that preceded the Atiyah–Singer index formula). 1 Atiyah–Singer index formulas. 1.1 A single elliptic operator acting between sections of vector bundles. 1.2 Equivariant index theorem. 1.3 Families of elliptic operators. 2 $K$-theory in index theory. 3 Applications of index theorems. 4 Other invariants. 5 Generalized index theorems. Atiyah–Singer index formulas. A common characteristic of the first three main index formulas of Atiyah–Singer and Atiyah–Segal is that they depend only on the principal symbol of the operator whose index they compute. (For a differential operator, the principal symbol is given by the terms involving only the highest-order differentials and is independent of the choice of a coordinate system; cf. also Principal part of a differential operator; Symbol of an operator.) 
The main theorems mentioned above are: the index theorem for a single elliptic operator $P$ acting between sections of vector bundles on a smooth, compact manifold $M$ (Atiyah–Singer, [a4]); the equivariant index theorem for a single elliptic operator equivariant with respect to a compact group $G$ (Atiyah–Segal, [a5]); and the index theorem for families $( P _ { b } ) _ { b \in B }$ of elliptic operators acting on the fibres of a fibre bundle $Y \rightarrow B$ (Atiyah–Singer, [a7]). These results are briefly reviewed below. A single elliptic operator acting between sections of vector bundles. If $P$ is an elliptic differential, or, more generally, an elliptic pseudo-differential operator acting between sections of two smooth vector bundles (cf. also Elliptic operator), then $P$ defines a continuous operator between suitable Sobolev spaces with closed range and finite-dimensional kernel and cokernel, that is, a Fredholm operator. The first of the index theorems gives an explicit formula for the Fredholm, or analytic, index $\operatorname{ind} ( P )$ of $P$: \begin{equation*} \operatorname{ind} ( P ) : = \operatorname { dim } ( \operatorname{ker} ( P ) ) - \operatorname { dim } ( \operatorname { coker } ( P ) ). \end{equation*} Denote by $\mathcal{T} ( M )$ the Todd class of the complexification of the tangent bundle $T M$ of $M$. If $P$ is an elliptic operator as above, its principal symbol $a = \sigma ( P )$ defines a $K$-theory class $[ a ]$ with compact supports on $T ^ { * } M$ whose Chern character, denoted $\operatorname{Ch} ( [ a ] )$, is in the even cohomology of $T ^ { * } M$ with compact supports. The Atiyah–Singer index formula of [a6] then states that \begin{equation*} \operatorname{ind} ( P ) = ( - 1 ) ^ { n } \operatorname{Ch} ( [ a ] ) {\cal T} ( M ) [ T ^ { * } M ], \end{equation*} $n$ being the dimension of the manifold $M$ and $[ T ^ { * } M ]$ being the fundamental class of $T ^ { * } M$. (The factor $( - 1 ) ^ { n }$ reflects the choice of the orientation of $T ^ { * } M$ in the original articles. Other choices for this orientation will lead to different signs.) In other words, the index is obtained by evaluating the compactly supported cohomology class $\operatorname{Ch} ( [ a ] ) \mathcal{T} ( M )$ on the fundamental class of $T ^ { * } M$. Equivariant index theorem. The second of the index formulas refines the index when the operator $P$ above is invariant with respect to a compact Lie group, see [a5], [a6]. Recall that the representation ring of a compact group $G$ is defined as the ring of formal linear combinations with integer coefficients of equivalence classes of irreducible representations of $G$ (cf. also Irreducible representation). For operators $P$ equivariant with respect to a compact group $G$, the kernel and cokernel are representations of $G$, so their difference can now be regarded as an element of $R ( G )$, called the equivariant index of $P$. The Atiyah–Singer index formula in [a6] gives the value $\text{ind}_{ g } ( P )$ of the (character of the) index of $P$ at $g \in G$ in terms of invariants of $M ^ { g }$, the set of fixed points of $g$ in $M$. Denote by $a | _ { T ^{*} M ^{ g }}$ the restriction of $a$ to the cotangent bundle of $M ^ { g }$ and by $\mathcal{T} ( M ^ { g } )$ the Todd class of the complexification of the cotangent bundle of $M ^ { g }$. 
In addition to these ingredients, which are similar to the ingredients appearing in the formula for $\operatorname{ind} ( P )$ above, the formula for $\text{ind}_{ g } ( P )$ involves also a Lefschetz-type contribution, denoted below by $L ( N , g )$, obtained from the action of $g$ on the normal bundle to the set $M ^ { g }$: \begin{equation*} \operatorname { ind } _ { g } ( P ) = ( - 1 ) ^ { n } \operatorname { Ch } ( [ a | _ { T ^ { * } M ^ { g } } ] ) \mathcal{T} ( M ^ { g } ) L ( N , g ) [ T ^ { * } M ^ { g } ]. \end{equation*} Families of elliptic operators. For families of elliptic operators acting on the fibres of a fibre bundle $\pi : Y \rightarrow B$ (cf. also Fibration), a first problem is to make sense of the index. The solution proposed by Atiyah and Singer in [a7] is to define the index as an element of a $K$-theory group, namely $K ^ { 0 } ( B )$ in this case (cf. also $K$-theory). This fortunate choice has opened the way for many other developments in index theory. Actually, in the two index theorems mentioned above, the index can also be interpreted using a $K$-theory group, the $K$-theory of the algebra $\mathbf{C}$ of complex numbers in the first index theorem and the $K$-theory group of $C ^ { * } ( G )$, the norm closure of the convolution algebra of $G$, in the equivariant index theorem. For the Chern character of the family index of a family of elliptic operators $( P _ { b } )$ as above, there is a formula similar to the formula for the index of a single elliptic operator. The principal symbols $a _ { b } = \sigma ( P _ { b } )$ of the operators $P _ { b }$ define, in this case, a class $[ a ]$ in the $K$-theory with compact supports of $T _ { \text{vert} } ^ { * } Y : = T ^ { * } Y / \pi ^ { * } ( T ^ { * } B )$, the vertical cotangent bundle to the fibres of $\pi : Y \rightarrow B$, as in the case of a single elliptic operator. Denote by $\mathcal{T} ( M | B )$ the Todd class of the complexification of $T _ { \text { vert } } ^ { * } Y$ and by $\pi_{ *} : H _ { c } ^ { * } ( T _ { \text { vert } } ^ { * } Y ) \rightarrow H ^ { * - 2 n} ( B )$ the morphism induced by integration along the fibres, with $n$ being the common dimension of the fibres of $\pi$. Then \begin{equation*} \operatorname{Ch} ( \operatorname{ ind } ( P ) ) = ( - 1 ) ^ { n } \pi_{ *} ( \operatorname { ind } ( [ a ] ) {\cal T} ( M | B ) ). \end{equation*} This completes the discussion of these three main theorems of Atiyah and Singer. $K$-theory in index theory. The role of $K$-theory in the proof and applications of the index theorems can hardly be overstated and certainly does not stop at providing an interpretation of the index as an element of a $K$-theory group. A far-reaching consequence of the use of $K$-theory, which depends on Bott periodicity (or more precisely, the Thom isomorphism, cf. also Bott periodicity theorem), is that all elliptic operators can be connected, by a homotopy of Fredholm operators, to certain operators of a very particular kind, the so-called generalized Dirac operators (see below). It is thus sufficient to prove the index theorems for generalized Dirac operators. Due to their differential-geometric properties, it is possible to give more concrete proofs of the Atiyah–Singer index theorem for generalized Dirac operators, using heat kernels, for example (cf. also Heat content asymptotics). The generalized Dirac operator with coefficients in the spin bundle is called simply the Dirac operator (sometimes called the Atiyah–Singer operator). 
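For orientation, it may help to recall (as standard background, not part of the article's own text, and with ad hoc notation $D_{\mathrm{sign}}$, $D_{\mathrm{spin}}$ for the signature and spin Dirac operators) what the index theorem gives for the classical operators of Dirac type: \begin{equation*} \operatorname{ind} ( d + d ^ { * } ) = \chi ( M ) , \quad \operatorname{ind} ( D _ { \mathrm{sign} } ) = \operatorname{sign} ( M ) , \quad \operatorname{ind} ( \bar { \partial } + \bar { \partial } ^ { * } ) = \chi ( M , \mathcal{O} _ { M } ) , \quad \operatorname{ind} ( D _ { \mathrm{spin} } ) = \hat { A } ( M ) [ M ] , \end{equation*} that is, the Gauss–Bonnet–Chern theorem for the de Rham operator graded by the parity of forms, the Hirzebruch signature theorem for the signature operator, the Riemann–Roch theorem for the Dolbeault operator on a compact complex manifold, and the $\hat{A}$-genus for the spin Dirac operator (in the appropriate dimensions).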
See below for more about generalized Dirac operators. Applications of index theorems. After the publication of the first papers by Atiyah and Singer, index theory has evolved into essentially three directions: a direction which consists of applications and new proofs of the index theorems (especially "local" proofs using heat kernels); a direction which studies invariants other than the index; and a direction which aims at more general index theorems. There is a very large number of applications of index theorems to topology and other areas of mathematics. A few examples follow. In [a12], Atiyah and W. Schmid used Atiyah's $L^{2}$-index theorem for coverings [a11] to construct discrete series representations. In [a34], N.J. Hitchin used the families index theorem to prove that there exist metrics whose associated Dirac operators have non-trivial kernels (in suitable dimensions). An index theorem for foliations that is close in spirit to Atiyah's $L^{2}$-index theorem was obtained by A. Connes [a27]. The index of Dirac (or Atiyah–Singer) operators was used to formulate and then prove the Gromov–Lawson conjecture [a32], which states that a compact, spin, simply connected manifold of dimension $\geq 5$ admits a metric of positive scalar curvature if and only if the index of the spin Dirac operator (in an appropriate $K$-theory group) is zero. This conjecture was proved by S. Stolz, [a48]. Dirac operators have been used to give a concrete construction of $K$-homology [a16]. Some of the applications of the index theorems require new proofs of these theorems, usually relying on the "heat-kernel method" . The main idea of this method is as follows. H. McKean and Singer [a39] stated the problem of investigating the behaviour, as $t \rightarrow 0$, $t > 0$, of the (super-trace of the) heat kernel. More precisely, let \begin{equation*} k _ { t } ( x , y ) = \operatorname { str } ( e ^ { - t D ^ { 2 } } ) = \operatorname { tr } ( e ^ { - t D _ { + } ^ { * } D _ { + } } ) - \operatorname { tr } ( e ^ { - t D _ { + } D _ { + } ^ { * } } ) \end{equation*} be the well-known term appearing in the McKean–Singer index formula, where $D = D _ { + } + D _ { + } ^ { * }$ is a self-adjoint geometric operator (cf. also Self-adjoint operator) with $D _ { + }$ mapping the subspace of even sections to the subspace of odd sections. They considered the case of the de Rham operator $D _ { + } + D _ { + } ^ { * }$, where $D _ { + }$ is then the de Rham differential (cf. also de Rham cohomology). It was known that the integral over the whole manifold of $k _ { t } ( x , x )$ gives the analytic index of $D _ { + }$, and they expressed the hope that $k _ { t } ( x , x )$ will have a definite limit as $t \rightarrow 0$. This was proved for various particular cases by V. Patodi in [a54] and then by P. Gilkey [a29], [a30] using invariant theory (see [a31] for an exposition of this method). This method was finally refined in [a1] to give a clear and elegant proof of the local index theorem for all Dirac operators. Inspired by a talk of Atiyah, J.-M. Bismut investigated connections between probability theory and index theory. He was able to use the stochastic calculus (cf. also Malliavin calculus) to give a new proof of the local index theorem [a17]. His methods then generalized to give proofs of the local index theorem for families of Dirac operators [a18] using Quillen's theory of super-connections [a43], and of the Atiyah–Bott fixed-point formulas [a19]. 
An application of his results is the determination of the Quillen metric on the determinant bundle [a21]. The local index theorems have many connections to physics, where Dirac operators play a prominent role. Actually, several physicists have come up with arguments for a proof of the local index theorem based on supersymmetry and functional integration, see [a8] and [a53], for example. Building on these arguments, E. Getzler has obtained a short and elegant proof of the local index theorem [a14], [a31], which also uses supersymmetry. Moreover, ideas inspired from physics have lead E. Witten to conjecture that certain twisted Dirac operators on $S ^ { 1 }$-manifolds have an index that is a trivial representation of $S ^ { 1 }$, see [a52]. This was proved by C.H. Taubes [a49] (see also [a23] and [a51]). For the Dirac operator, this had been proved before by Atiyah and F. Hirzebruch [a2]. Other invariants. Heat-kernel methods have proved very useful in dealing with non-compact and singular spaces. A common feature of these spaces is that the index formulas for the natural operators on them depend on more than just the principal symbol, which leads to the appearance of non-local invariants in these index formulas. In general, there exists no good understanding, at this time (2000), of what these non-local invariants are, except in particular cases. The most prominent of these particular cases is the Atiyah–Patodi–Singer index theorem for manifolds with boundary. Other results in these directions were obtained in [a25], [a40], [a41], [a47]. In all these cases, eta-invariants of certain boundary operators must be included in the formula for the index. Moreover, one has to either work on complete manifolds or to include boundary conditions to make the given problems Fredholm. The Atiyah–Patodi–Singer index theorem [a3], e.g., requires such boundary conditions; see below. Let $M$ be a compact manifold with boundary $\partial M$ and metric $g$ which is a product metric in a suitable cylindrical neighbourhood of $\partial M$. Fix a Clifford module $W$ on $M$ (cf. also Clifford algebra) and an admissible connection $\nabla$. Denote by $D : = \sum c ( e _ { i } ) \nabla _ { e_i }$ the generalized Dirac operator on $W$, where $c : T ^ { * } M \cong T M \rightarrow \operatorname { End } ( W )$ is the Clifford multiplication and $e _ { i }$ is a local orthonormal basis (cf. also Orthogonal basis). Also, let $D _ { 0 }$ be the corresponding generalized Dirac operator on $\partial M$, which is (essentially) self-adjoint because $\partial M$ is compact without boundary. Then the eigenvalues of $D _ { 0 }$ will form a discrete subset of the real numbers; denote by $P _ { + }$ the spectral projection corresponding to the eigenvalues of $D _ { 0 }$ that are $\geq 0$. Decompose $D = D _ { + } + D _ { + } ^ { * }$ using the natural ${\bf Z} / 2 {\bf Z}$-grading on $W$. The operator $D _ { + }$, the chiral Dirac operator, acts from sections of $W _ { + }$ to sections of $W_-$, and has an infinite-dimensional kernel. Because of that, Atiyah, Patodi and Singer have introduced a non-local boundary condition of the form $P _ { + } f = 0$, for $f$ a smooth section of $W _ { + }$ over $\partial M$, which is a compact perturbation of the Calderón projection boundary condition. The effect of this boundary condition is that the restriction of $D$ to the subspace of sections satisfying this boundary condition is Fredholm. 
Assume that $M$ is $\operatorname {spin}^ { c }$ with spinor bundle $S$, such that $W = S \otimes E$, and let $h$ denote the dimension of the kernel of $D _ { 0 }$. The index of the resulting operator $D _ { + }$ with the above boundary conditions is then \begin{equation*} \operatorname{ind}_{\alpha} ( D _ { + } ) = \int _ { M } \hat { A } ( M ) \operatorname{Ch} ( E ) - \frac { \eta ( D _ { 0 } ) + h } { 2 }. \end{equation*} This formula was generalized by Bismut and J. Cheeger in [a20] to families of manifolds with boundary, the result being expressed using the "eta form" $\hat{\eta}$. More precisely, using the notation above, they proved an index formula for such families in which the eta form $\hat{\eta}$ of the family of boundary operators takes the place of the eta-invariant, provided that all Dirac operators associated to the boundaries of the fibres are invertible. Presently (2000), cyclic homology (cf. also Cyclic cohomology) is probably the only general tool to deal with index problems in which the index belongs to an abstract, possibly unknown, $K$-theory group, or to deal with index theorems involving non-local invariants. See [a26], [a36], [a38], or [a50] for the basic results on cyclic homology. The relation between the $K$-theory of the algebra $A$ and the cyclic homology of $A$ is via Chern characters $\operatorname{Ch} : K _ { 0 } ( A ) \rightarrow \operatorname{HC} _ { 2 n } ( A )$, $n \geq 0$, and is due to Connes and M. Karoubi. Generalized index theorems. In [a24], Connes and H. Moscovici generalized Atiyah's $L^{2}$-index theorem, which allowed them to obtain a proof of the Novikov conjecture (cf. also $C ^ { * }$-algebra) for certain classes of groups. The index theorem, also called the higher index theorem for coverings, is as follows. Let $\tilde { M } \rightarrow M$ be a covering of a compact manifold $M$ with group of deck transformations $\Gamma$ (cf. also Monodromy transformation). If $D$ is an elliptic differential operator on $M$ invariant with respect to $\Gamma$ (such as the signature operator), then it has an index $\operatorname{ind} ( D ) \in K _ { 0 } ( C _ { r } ^ { * } ( \Gamma ) )$, the $K _ { 0 }$-group of the closure of the group algebra of $\Gamma$ acting on $l ^ { 2 } ( \Gamma )$. This index was defined by A.T. Fomenko and A. Mishchenko in [a28]. It can be refined to an index in $K_0({\cal R}\otimes {\bf C}[\Gamma])$, where $\mathcal{R}$ is the algebra of infinite matrices with complex entries and with rapid decrease. Using cyclic cohomology and the Chern character in cyclic homology, every cohomology class $\phi \in H ^ { * } ( \Gamma ) = H ^ { * } ( B \Gamma )$ gives rise to a morphism $\phi _ { * } : K _ { 0 } ( {\cal R} \otimes {\bf C} [ \Gamma ] ) \rightarrow \mathbf { C }$, and the problem is to determine $\phi_{*} ( \text { ind } ( D ) )$. If $\phi = 1 \in H ^ { 0 } ( \Gamma )$, then this number is exactly the von Neumann index appearing in Atiyah's $L^{2}$-index formula. Let $f : M \rightarrow B \Gamma$ be the mapping that classifies the covering $\tilde { M } \rightarrow M$ and let $\mathcal{T} ( M )$ be, as before, the Todd class of the complexification of the tangent bundle $T M$ of $M$. If $D$ is an elliptic invariant differential operator, its principal symbol $a = \sigma ( D )$ defines a $K$-theory class $[ a ]$ with compact supports on $T ^ { * } M$, whose Chern character $\operatorname{Ch} ( [ a ] )$ is in the even cohomology of $T ^ { * } M$ with compact supports, as in the case of the Atiyah–Singer index theorem for a single elliptic operator.
Suppose $\phi \in H ^ { 2 m } ( \Gamma )$; then the Connes–Moscovici higher index theorem for coverings [a24] states that \begin{equation*} \phi _ { * } ( \text { ind } ( D ) ) = ( - 1 ) ^ { n } \left( 2 \pi i ) ^ { - m } ( \operatorname {Ch} ( [ a ] ) \mathcal{T} ( M ) f ^ { * } \phi \right) [ T ^ { * } M ]. \end{equation*} The Chern character in cyclic cohomology turns out to be a natural mapping, and this can be interpreted as a general index theorem in cyclic cohomology [a42]. It is hoped that this general index theorem will help explain the ubiquity of the Todd class in index theorems. For more information on index theory, see, e.g., [a14], [a15], [a45]. To get a balanced point of view, see also [a13] for an account of the original approach to the Atiyah–Singer index theorems, which also gives all the necessary background a student needs. [a1] M. Atiyah, R. Bott, V. Patodi, "On the heat equation and the index theorem" Invent. Math. , 19 (1973) pp. 279–330 (Erata ibid. 28 (1975), 277-280) MR0650828 Zbl 0364.58016 Zbl 0257.58008 [a2] M. Atiyah, F. Hirzebruch, "Spin manifolds and group actions" , Essays in Topology and Related subjects , Springer (1994) pp. 18–28 MR0278334 Zbl 0193.52401 [a3] M. Atiyah, V. Patodi, I. Singer, "Spectral asymmetry and Riemannian geometry I" Math. Proc. Cambridge Philos. Soc. , 77 (1975) pp. 43–69 MR0397797 MR0397798 MR0397799 Zbl 0297.58008 [a4] M. Atiyah, I. Singer, "The index of elliptic operators I" Ann. of Math. , 87 (1968) pp. 484–530 MR0236950 MR0232402 Zbl 0164.24001 [a5] M. Atiyah, G. Segal, "The index of elliptic operators II" Ann. of Math. , 87 (1968) pp. 531–545 MR0236953 MR0236951 Zbl 0164.24201 [a6] M. Atiyah, I. Singer, "The index of elliptic operators III" Ann. of Math. , 93 (1968) pp. 546–604 MR0236952 Zbl 0164.24301 [a7] M. Atiyah, I. Singer, "The index of elliptic operators IV" Ann. of Math. , 93 (1971) pp. 119–138 MR0279833 Zbl 0212.28603 [a8] L. Alvarez-Gaumé, "Supersymmetry and the Atiyah–Singer index theorem" Comm. Math. Phys. , 90 (1983) pp. 161–173 Zbl 0528.58034 [a9] M. Atiyah, R. Bott, "A Lefschetz fixed-point formula for elliptic complexes II: Applications." Ann. of Math. , 88 (1968) pp. 451–491 MR0232406 Zbl 0167.21703 [a10] M. Atiyah, "$K$-theory" , Benjamin (1967) MR0224084 MR0224083 Zbl 0159.53401 Zbl 0159.53302 [a11] M. Atiyah, "Elliptic operators, discrete subgroups, and von Neumann algebras" Astérisque , 32/33 (1969) pp. 43–72 [a12] M. Atiyah, W. Schmid, "A geometric construction of the discrete series" Invent. Math. , 42 (1977) pp. 1–62 MR0463358 Zbl 0373.22001 [a13] B. Booss–Bavnbek, D. Bleecker, "Topology and analysis. The Atiyah–Singer index formula and gauge-theoretic physics" , Universitext , Springer (1985) MR0771117 Zbl 0551.58031 [a14] N. Berline, E. Getzler, M. Vèrgne, "Heat kernels and Dirac operator" , Grundl. Math. Wissenschaft. , 298 , Springer (1996) MR2273508 MR1215720 [a15] B. Booss–Bavnbek, K. Wojciechowski, "Elliptic boundary problems for Dirac operators" , Math. Th. Appl. , Birkhäuser (1993) MR1233386 Zbl 0797.58004 [a16] P. Baum, R. Douglas, "Index theory, bordism, and $K$-homology" , Operator Algebras and $K$-Theory (San Francisco, Calif., 1981) , Contemp. Math. , 10 , Amer. Math. Soc. (1982) pp. 1–31 MR0658506 [a17] J.-M. Bismut, "The Atiyah–Singer theorems: a probabilistic approach" J. Funct. Anal. , 57 (1984) pp. 56–99 MR0756173 MR0744920 Zbl 0556.58027 Zbl 0538.58033 [a18] J.-M. Bismut, "The index theorem for families of Dirac operators: two heat equation proofs" Invent. Math. , 83 (1986) pp. 
91–151 Zbl 0592.58047 [a19] J.-M. Bismut, "The Atiyah–Singer theorems: a probabilistic approach. II. The Lefschetz fixed point formulas" J. Funct. Anal. , 57 : 3 (1984) pp. 329–348 MR0744920 MR0756173 Zbl 0556.58027 [a20] J.-M. Bismut, J. Cheeger, "$ \eta $-invariants and their adiabatic limits" J. Amer. Math. Soc. , 2 (1989) pp. 33–70 MR0966608 Zbl 0671.58037 [a21] J.-M. Bismut, D. Freed, "The analysis of elliptic families: Metrics and connections on determinant bundles" Comm. Math. Phys. , 106 (1986) pp. 103–163 MR853982 [a22] A. Borel, J.-P. Serre, "Le téorème de Riemann–Roch (d'apreès Grothendieck)" Bull. Soc. Math. France , 86 (1958) pp. 97–136 [a23] R. Bott, C. Taubes, "On the rigidity theorems of Witten" J. Amer. Math. Soc. , 2 : 1 (1989) pp. 137–186 MR0954493 Zbl 0667.57009 [a24] A. Connes, H. Moscovici, "Cyclic cohomology, the Novikov conjecture and hyperbolic groups" Topology , 29 (1990) pp. 345–388 MR1066176 Zbl 0759.58047 [a25] J. Cheeger, "On the Hodge theory of Riemannian pseudomanifolds" , Geometry of the Laplace operator (Univ. Hawaii, 1979) , Proc. Symp. Pure Math. , XXXVI , Amer. Math. Soc. (1980) pp. 91–146 MR0573430 Zbl 0461.58002 [a26] A. Connes, "Non-commutative differential geometry" Publ. Math. IHES , 62 (1985) pp. 41–144 MR823176 Zbl 0592.46056 Zbl 0564.58002 [a27] A. Connes, "Sur la théorie noncommutative de l'intégration" , Algèbres d'Opérateurs , Lecture Notes in Mathematics , 725 , Springer (1982) pp. 19–143 [a28] A. Miščenko, A. Fomenko, "The index of elliptic operators over $C ^ { * }$-algebras" Izv. Akad. Nauk. SSSR Ser. Mat. , 43 (1979) pp. 831–859 MR548506 [a29] P. Gilkey, "Curvature and the eigenvalues of the Laplacian for elliptic complexes" Adv. Math. , 10 (1973) pp. 344–382 MR0324731 Zbl 0259.58010 [a30] P. Gilkey, "Curvature and the eigenvalues of the Dolbeault complex for Kaehler manifolds" Adv. Math. , 11 (1973) pp. 311–325 MR0334290 Zbl 0285.53044 [a31] P. Gilkey, "Invariance theory, the heat equation, and the Atiyah–Singer index theorem" , CRC (1994) MR1396308 MR0783634 Zbl 0856.58001 Zbl 0565.58035 [a32] M. Gromov, H. Lawson Jr., "The classification of simply connected manifolds of positive scalar curvature" Ann. of Math. , 111 (1980) pp. 423–434 MR0577131 Zbl 0463.53025 [a33] F. Hirzebruch, "Topological methods in algebraic geometry" , Grundl. Math. Wissenschaft. , 131 , Springer (1966) (Edition: Third) MR0202713 Zbl 0138.42001 [a34] N. Hitchin, "Harmonic spinors" Adv. Math. , 14 (1974) pp. 1–55 MR0358873 Zbl 0284.58016 [a35] L. Hörmander, "Pseudo-differential operators" Commun. Pure Appl. Math. , 18 (1965) pp. 501–517 MR0180740 Zbl 0125.33401 [a36] M. Karoubi, "Homology cyclique et K-theorie" Astérisque , 149 (1987) pp. 1–147 Zbl 0601.18007 [a37] J. Kohn, L. Nirenberg, "An algebra of pseudodifferential operators" Commun. Pure Appl. Math. , 18 (1965) pp. 269–305 MR176362 [a38] J.-L. Loday, D. Quillen, "Cyclic homology and the Lie homology of matrices" Comment. Math. Helv. , 59 (1984) pp. 565–591 MR780077 Zbl 0565.17006 [a39] H. McKean Jr., I. Singer, "Curvature and the eigenvalues of the Laplacian" J. Diff. Geom. , 1 (1967) pp. 43–69 [a40] R. Melrose, "The Atiyah–Patodi–Singer index theorem" , Peters (1993) MR1348401 Zbl 0796.58050 [a41] W. Müller, "Manifolds with cusps of rank one, spectral theory and an $L^{2}$-index theorem" , Lecture Notes in Mathematics , 1244 , Springer (1987) [a42] V. Nistor, "Higher index theorems and the boundary map in cyclic cohomology" Documenta Math. (1997) pp. 
263–296 ((electronic)) MR1480038 Zbl 0893.19002 [a43] D. Quillen, "Superconnections and the Chern character" Topology , 24 (1985) pp. 89–95 MR0790678 Zbl 0569.58030 [a44] D. Ray, I. Singer, "$R$-torsion and the laplacian on Riemannian manifolds" Adv. Math. , 7 (1971) pp. 145–210 MR0295381 Zbl 0239.58014 [a45] J. Roe, "Elliptic operators, topology and asymptotic methods" , Pitman Res. Notes in Math. Ser. , 179 , Longman (1988) MR0960889 Zbl 0654.58031 [a46] R.T. Seeley, "Refinement of the functional calculus of Calderòn and Zygmund" Indag. Math. , 27 (1965) pp. 521–531 Nederl. Akad. Wetensch. Proc. Ser. A , 68 (1965) MR0226450 Zbl 0141.13302 [a47] Mark Stern, "$L^{2}$-index theorems on locally symmetric spaces" Invent. Math. , 96 (1989) pp. 231–282 MR0989698 Zbl 0694.58039 [a48] S. Stolz, "Simply connected manifolds of positive scalar curvature" Ann. of Math. , 136 : 2 (1992) pp. 511–540 MR1189863 Zbl 0784.53029 [a49] C. Taubes, "$S ^ { 1 }$ actions and elliptic genera" Comm. Math. Phys. , 122 (1989) pp. 455–526 MR0998662 Zbl 0683.58043 [a50] B.L. Tsygan, "Homology of matrix Lie algebras over rings and Hochschild homology" Uspekhi Mat. Nauk. , 38 (1983) pp. 217–218 MR0695483 Zbl 0526.17006 [a51] E. Witten, "Supersymmetry and Morse theory" J. Diff. Geom. , 17 (1982) pp. 661–692 MR0683171 Zbl 0499.53056 [a52] E. Witten, "Elliptic genera and quantum field theory" Comm. Math. Phys. , 109 (1987) pp. 525–536 MR0885560 Zbl 0625.57008 [a53] E. Witten, "Constraints on supersymmetry breaking" Nucl. Phys. B , 202 (1982) pp. 253–316 MR0668987 [a54] V. Patodi, "Curvature and the eigenforms of the Laplace operator" J. Diff. Geom. , 5 (1971) pp. 233–249 MR0292114 Zbl 0211.53901 Index theory. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Index_theory&oldid=50758 This article was adapted from an original article by Victor Nistor (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article Retrieved from "https://encyclopediaofmath.org/index.php?title=Index_theory&oldid=50758" TeX semi-auto TeX partially done
A linear algebra-free proof of the Matrix-Tree Theorem
Posted on December 21, 2019 by Maria Gillespie

As a new assistant professor at Colorado State University, I had the privilege this fall of teaching Math 501, the introductory graduate level course in combinatorics. We encountered many 'mathematical gemstones' in the course, and one of my favorites is the Matrix-Tree theorem, which gives a determinantal formula for the number of spanning trees in a graph. In particular, there is a version for directed graphs that can be stated as follows.

Consider a directed graph $D=(V,E)$, consisting of a finite vertex set $V=\{v_1,\ldots,v_n\}$ and a set of directed edges $E\subseteq V\times V$. An oriented spanning tree of $D$ is a subset $T\subset E$ of the edges, along with a chosen root vertex $v_k$, such that there is a unique path in $T$ from any vertex $v_j\in V$ to the root $v_k$. Such a tree is said to be oriented towards $v_k$, since all the edges are `pointing towards' the root. The term spanning indicates that $T$ is incident to every vertex in $V$. For example, in the digraph $D$ at left below, an oriented spanning tree rooted at $v_9$ is shown using red edges in the graph at right.

Define $\tau(D,v_k)$ to be the number of oriented spanning trees of $D$ rooted at $v_k$. One can check that, in the above graph, we have $\tau(D,v_9)=16$.

Now, let $m_{i,j}$ be the number of directed edges from $v_i$ to $v_j$ in $D$, so that $m_{i,j}$ is equal to $1$ if $(v_i,v_j)$ is an edge and $0$ otherwise. Define the Laplacian of the digraph $D$ to be the matrix $$L(D)=\left(\begin{array}{ccccc} \mathrm{out}(v_1) & -m_{1,2} & -m_{1,3} & \cdots & -m_{1,n} \\ -m_{2,1} & \mathrm{out}(v_2) & -m_{2,3} & \cdots & -m_{2,n} \\ -m_{3,1} & -m_{3,2} & \mathrm{out}(v_3) & \cdots & -m_{3,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -m_{n,1} & -m_{n,2} & -m_{n,3} & \cdots & \mathrm{out}(v_n) \end{array}\right)$$ where $\mathrm{out}(v_i)$ is the outdegree of $v_i$, the number of non-loop edges having starting vertex $v_i$ (that is, the number of edges from $v_i$ to a vertex other than $v_i$). Then the (directed) Matrix-Tree theorem states that $$\tau(D,v_k)=\det(L_0(D,k))$$ where $L_0(D,k)$ is the deleted Laplacian obtained by deleting the $k$th row and column from $L(D)$. For instance, in the above graph, we have $$\det L_0(D,9)=\det \left(\begin{array}{cccccccc} 2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & -1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & 0 & -1 & 0 & 0 \\ 0 & -1 & 0 & 2 & -1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 2 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 2 & 0 & -1 \\ 0 & 0 & 0 & 0 & -1 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right)=16$$

There are several known proofs of the Matrix-Tree theorem. One of the more `standard' proofs is by induction on the number of edges in the digraph, combined with a bit of linear algebra and row reduction. But it got me thinking: Is there a way to prove that the determinant formula holds directly, without relying on induction or linear algebra?

In particular, the determinant of a matrix $A=(a_{ij})$ can be defined explicitly as $$\det(A)=\sum_{\pi\in S_n} \mathrm{sgn}(\pi)\prod_{i} a_{i\pi(i)}$$ where $\pi:\{1,2,\ldots,n\}\to \{1,2,\ldots,n\}$ ranges over all permutations (bijections) in the symmetric group $S_n$.
For instance, \begin{align*} \det\left(\begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array}\right)&=a_{11}a_{22}a_{33}-a_{12}a_{21}a_{33}-a_{11}a_{23}a_{32} \\ &\phantom{=}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}. \end{align*}

It is natural to ask whether applying this formula to the deleted Laplacian gives any combinatorial insight into why the Matrix-Tree theorem should hold. And indeed, there is a direct proof using this combinatorial definition of the determinant!

A combinatorial proof

For simplicity we set $k=n$, so that we are deleting the $n$th row and column to create the deleted Laplacian $\det(L_0(D,n))$. It is sufficient to consider this case since we can always relabel the vertices to have the deleted vertex be the $n$th. We now give a combinatorial interpretation of each of the terms of the determinant $\det(L_0(D,n))$ as a sum over permutations of $\{1,2,\ldots,n-1\}$.

The term corresponding to the identity permutation is the product of the diagonal entries of $L_0(D,n)$, which is $$\prod_{i\neq n} \mathrm{out}(v_i).$$ This counts the number of ways of choosing a non-loop edge starting at each vertex $v_i\neq v_n$; we call such a choice an out-edge subgraph $G$ of $D$. Note that all oriented spanning trees with root $v_n$ are out-edge subgraphs, but in general an out-edge subgraph may have cycles among the vertices other than $v_n$. In fact, it is not hard to see that every out-edge subgraph consists of a number of nontrivial directed cycles among non-$v_n$ vertices, along with a unique directed path from every other vertex into either one of the cycles or into $v_n$. Two examples of out-edge subgraphs which are not trees are shown below.

Now, for a general term corresponding to a permutation $\pi$ of $\{1,2,\ldots,n-1\}$, consider the decomposition of $\pi$ into disjoint cycles. Suppose there are $p$ fixed points and $r$ nontrivial cycles; let $a_1,\ldots,a_p$ be the fixed points of $\pi$ and let $(a_{1}^{(j)}\cdots a_{c_j}^{(j)})$ be the other cycles, of lengths $c_1,\ldots,c_r$. Then the sign of $\pi$ is $$\mathrm{sgn}(\pi)=(-1)^{(c_1-1)+\cdots+(c_r-1)}=(-1)^{(n-1-p)-r}.$$ The entries multiplied together in the term corresponding to $\pi$ are the outdegrees of $v_{a_1},\ldots, v_{a_p}$ along with the values $-m_{a_{t}^{(i)},a_{t+1}^{(i)}}$. Their product is $(-1)^{n-1-p}$ times the number of ways to choose an edge from $v_{a_t^{(i)}}$ to $v_{a_{t+1}^{(i)}}$ for each $i$ and $t$.

Putting this all together, the entire term of the determinant corresponding to $\pi$ is $(-1)^{r}$ times the number of subgraphs formed by choosing a cyclic path on the vertices corresponding to each nontrivial cycle in $\pi$, as well as an out-edge for each fixed point. Such a choice is an out-edge subgraph that is compatible with $\pi$ in the sense that any cycle of $\pi$ corresponds to a cycle on the subgraph. For some examples of compatibility, the permutations $(123)$, $(123)(57)$, $(57)$, and the identity are compatible with the out-edge subgraph drawn above at left. The permutations $(365)$ and the identity are compatible with the subgraph above at right.

It follows that we can rewrite the determinant as: $$\det L_0(D,n)=\sum_{(G,\pi)} (-1)^{r(\pi)}$$ where $r(\pi)$ is the number of nontrivial cycles of $\pi$, and where the sum ranges over all pairs $(G,\pi)$ where $G$ is an out-edge subgraph and $\pi$ is a permutation compatible with $G$.
(Note that the same out-edge subgraph $G$ may occur several times, paired with different permutations $\pi$.)

We finally construct a sign-reversing involution on the compatible pairs $(G,\pi)$ that cancels all the negative terms in the sum above. In particular, if $G$ has no cycles then send $(G,\pi)$ to itself, and otherwise consider the cycle $C$ in $G$ containing the vertex with the smallest label among all cycles in $G$. Define $\pi'$ by removing $C$ from $\pi$ if $\pi$ contains the cycle $C$, and otherwise adding $C$ to $\pi$ (in other words, toggle whether the elements of $C$ form a cycle or are all fixed points in the permutation). Then $\pi'$ is still compatible with $G$, so we can map $(G,\pi)$ to $(G,\pi')$ in this case.

This forms a sign-reversing involution in which the only non-canceling terms come from the pairs $$(T,\mathrm{id})$$ where $T$ is an out-edge subgraph with no cycles and $\mathrm{id}$ is the identity permutation. Since a non-cyclic out-edge subgraph on $v_1,\ldots,v_{n-1}$ must be rooted at $v_n$ (for otherwise it would have a cycle), we can conclude that $\det L_0(D,n)$ is the number of oriented spanning trees of $D$ rooted at $v_n$.
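To see the determinant formula in action numerically, here is a minimal Python sketch (not part of the original post; the small digraph, the vertex labels, and the helper names are made up for illustration). It builds the deleted Laplacian of a toy digraph, computes its determinant by cofactor expansion, and compares the result with a brute-force count of oriented spanning trees rooted at the chosen vertex.

# Check tau(D, root) = det(L_0(D, root)) on a small example digraph.
from itertools import product

n = 4
root = 3
edges = [(0, 1), (1, 0), (1, 2), (2, 3), (0, 3), (2, 0)]  # directed edges (i -> j)

# Deleted Laplacian: diagonal = non-loop outdegree, off-diagonal = -(number of edges i -> j),
# with the root's row and column removed.
verts = [v for v in range(n) if v != root]
L0 = [[0] * len(verts) for _ in verts]
for a, i in enumerate(verts):
    for b, j in enumerate(verts):
        if i == j:
            L0[a][b] = sum(1 for (u, w) in edges if u == i and w != i)
        else:
            L0[a][b] = -sum(1 for (u, w) in edges if u == i and w == j)

def det(M):
    # cofactor expansion along the first row; fine for tiny matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

# Brute force: each non-root vertex picks one outgoing edge; the choice is an oriented
# spanning tree exactly when following the chosen edges from every vertex reaches the root.
def reaches_root(choice):
    for v in verts:
        seen, cur = set(), v
        while cur != root:
            if cur in seen:
                return False
            seen.add(cur)
            cur = choice[cur]
    return True

out_options = {v: [w for (u, w) in edges if u == v] for v in verts}
count = sum(reaches_root(dict(zip(verts, pick)))
            for pick in product(*(out_options[v] for v in verts)))

print(det(L0), count)  # both print 5 for this particular digraph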
If every person on Earth aimed a laser pointer at the Moon at the same time, would it change color? —Peter Lipowicz

Not if we use regular laser pointers. The first thing to consider is that not everyone can see the Moon at once. We could gather everyone in one spot, but we learned our lesson about that a few weeks ago. Instead, let's just pick a time when the Moon is visible to as many people as possible. Since about 75% of the world's population lives between 0°E and 120°E, we should try this while the Moon is somewhere over the Arabian Sea.

We can try to illuminate either a new moon or a full moon. The new moon is darker, making it easier to see our lasers. But the new moon is a trickier target, because it's mostly visible during the day—washing out the effect.

Brightness aside, an ideal time would probably be 2:00 PM EST on December 27th, 2012, when a full moon will be high in the sky above Mumbai and Islamabad. At that point, the Moon will be visible to approximately five billion people—most of Asia, Europe, and Africa—about as many as can ever see it at one time. But let's pick a quarter moon instead, so we can see the effect on the dark side. We'll avoid the December 21st quarter moon to avoid encouraging any Mayan nonsense, and pick the one on January 4th, 2013, half an hour after midnight (GMT). It'll be day in East Asia but night in Africa and Europe. Here's our target:

The typical red laser pointer is about 5 milliwatts, and a good one has a tight enough beam to actually hit the Moon—though it'd be spread out over a large fraction of the surface when it got there. The atmosphere would distort the beam a bit, and absorb some of it, but most of the light would make it. Let's assume everyone has steady enough aim to hit the Moon, but no more than that, and the light is spread evenly across the surface. At half an hour after midnight (GMT), everyone aims and presses the button. This is what happens:

Well, that's disappointing. It makes sense, though. Sunlight bathes the Moon in a bit over a kilowatt of energy per square meter. Since the Moon's cross-sectional area is around 10^13 square meters, it's bathed in about 10^16 watts of sunlight—ten petawatts, or two megawatts per person—far outshining their five milliwatt laser pointer. There are varying efficiencies in each part of this system, but none of it changes that basic equation. 5 milliwatts is wimpy. We can do better.

A 1-watt laser is an extremely dangerous thing. It's not just powerful enough to blind you—it's capable of burning skin and setting things on fire. Obviously, they're not legal for consumer purchase in the US. Just kidding! You can pick one up for $300. So suppose we spend the $2 trillion to buy one-watt green lasers for everyone. (Memo to presidential candidates: this policy would win my vote.) In addition to being more powerful, green laser light is nearer to the middle of the visible spectrum, so the eye is more sensitive to it and it seems brighter. Here's the effect:

Dang. The laser pointers we're using put out about 150 lumens of light (more than most flashlights) in a beam 5 arc-minutes wide. This lights up the surface of the Moon with about half a lux of illumination—compared to about 130,000 lux from the sun.
(Even if we aimed them all perfectly, it would only manage half a dozen lux over about 10% of the Moon's face.) By comparison, the full moon lights up the Earth's surface with about one lux of illumination—which means that not only would our lasers be too weak to see from Earth, but if you were standing on the Moon, the laser light on the landscape would be fainter than Moonlight is to us on Earth.

With advances in lithium batteries and LED technology over the last ten years, the high-performance flashlight market has exploded. But it's clear that flashlights aren't gonna cut it. So let's skip past all of that and give everyone a Nightsun. You may not recognize the name, but chances are you've seen one in operation: it's the searchlight mounted on police and Coast Guard helicopters. With an output on the order of 50,000 lumens, it's capable of turning a patch of ground from night to day. The beam is several degrees wide, so we'll want some focusing lenses to get it down to the half-degree needed to hit the Moon.

It's hard to see, but we're making progress! The beam is providing 20 lux of illumination, outshining the ambient light on the night half by a factor of two! However, it's quite hard to see, and it certainly hasn't affected the light half.

Let's swap out each Nightsun for an IMAX projector array—a 30,000-watt pair of water-cooled lamps with a combined output of over a million lumens. Still barely visible.

At the top of the Luxor Hotel in Las Vegas is the most powerful spotlight on Earth. Let's give one of them to everyone. Oh, and let's add a lens array to each so the entire beam is focused on the Moon: Our light is definitely visible, so we've accomplished our goal! Good job, team. … Well.

The Department of Defense has developed megawatt lasers, designed for destroying incoming missiles in mid-flight. The Boeing YAL-1 was a megawatt-class chemical oxygen iodine laser mounted in a 747. It was an infrared laser, so it wasn't directly visible, but we can imagine building a visible-light laser with similar power. Let's give one to everyone.

Finally, we've managed to match the brightness of sunlight! We're also drawing five petawatts of power, which is double the world's average electricity consumption.

Ok, let's mount a megawatt laser on every square meter of the surface of Asia. Powering this array of 50 trillion lasers would use up Earth's oil reserves in approximately two minutes, but for those two minutes, the Moon would look like this:

The Moon shines as brightly as the midmorning sun, and by the end of the two minutes, the lunar regolith is heated to a glow.

Ok, let's step even more firmly outside the realm of plausibility. The most powerful laser on Earth is the confinement beam at the National Ignition Facility, a fusion research laboratory. It's an ultraviolet laser with an output of 500 terawatts. However, it only fires in single pulses lasting a few nanoseconds, so the total energy delivered is about equivalent to a quarter-cup of gasoline. Let's imagine we somehow found a way to power and fire it continuously, gave one to everyone, and pointed them all at the Moon. Unfortunately, the laser energy flow would turn the atmosphere to plasma, instantly igniting the Earth's surface and killing us all. But let's assume that the lasers somehow pass through the atmosphere without interacting. Under those circumstances, it turns out Earth still catches fire. The reflected light from the Moon would be four thousand times brighter than the noonday sun.
Moonlight would become bright enough to boil away Earth's oceans in less than a year. But forget the Earth—what would happen to the Moon?

The laser itself would exert enough radiation pressure to accelerate the Moon at about one ten millionth of a gee. This acceleration wouldn't be noticeable in the short term, but over the years, it adds up to enough to push it free from Earth orbit. … If radiation pressure were the only force involved.

About 20 megajoules of energy is enough to vaporize a kilogram of rock. Assuming Moon rocks have an average density of about 3 kg/liter, the lasers would pump out enough energy to vaporize four meters of lunar bedrock per second: \[\frac{5\text{ billion people}\times 500\frac{\mathrm{terawatts}}{\text{person}}}{\pi\times\text{Moon radius}^2}\times\frac{1\text{ kilogram}}{20\text{ megajoules}}\times \frac{1\text{ liter}}{3\text{ kilograms}}\approx4 \frac{\mathrm{meters}}{\text{second}}\] However, the actual lunar rock won't evaporate that fast—for a reason that turns out to be very important.

When a chunk of rock is vaporized, it doesn't just disappear. The surface layer of the Moon becomes a plasma, but that plasma is still blocking the path of the beam. Our laser keeps pouring more and more energy into the plasma, and the plasma keeps getting hotter and hotter. The particles bounce off each other, slam into the surface of the Moon, and eventually blast away into space at a terrific speed. This flow of material effectively turns the entire surface of the Moon into a rocket engine—and a surprisingly efficient one, too. Using lasers to blast off surface material like this is called laser ablation, and it turns out to be a promising method for spacecraft propulsion.

The Moon is massive, but slowly and surely the rock plasma jet begins to push it away from the Earth. (The jet would also scour clean the face of the Earth and destroy the lasers, but we're pretending for the moment that they're invulnerable.) The plasma also physically tears away the lunar surface, a complicated interaction that's tricky to model. But if we make the wild guess that the particles in the plasma exit at an average speed of 500 kilometers per second, then it will take a few months for the Moon to be pushed out of range of our laser. It will keep most of its mass, but escape Earth's gravity and enter a lopsided orbit around the sun.

Technically, the Moon won't become a new planet, under the IAU definition of a planet. Since its new orbit crosses Earth's, it will be considered a dwarf planet like Pluto. This Earth-crossing orbit will lead to periodic unpredictable orbital perturbation. Eventually it will either be slingshotted into the Sun, ejected toward the outer Solar System, or slammed into one of the planets—quite possibly ours. I think we can all agree that in this case, we'd deserve it.

Scorecard: And that, at last, is enough power.
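For readers who want to check the vaporization-rate arithmetic above, here is a minimal back-of-the-envelope sketch in Python (added for this write-up; the numbers are the rough round figures used in the text, not precise lunar data):

# Rough check of the lunar ablation-rate estimate in the text.
import math

people = 5e9                 # people firing lasers
power_per_person = 500e12    # 500 terawatts each (NIF-style beam, fired continuously)
moon_radius = 1.737e6        # meters
vaporize_energy = 20e6       # ~20 megajoules to vaporize a kilogram of rock (rough figure)
rock_density = 3000          # kg per cubic meter (3 kg per liter)

flux = people * power_per_person / (math.pi * moon_radius**2)   # watts per square meter
rate = flux / (vaporize_energy * rock_density)                   # meters of rock per second

print(f"power flux: {flux:.2e} W/m^2")
print(f"ablation rate: {rate:.1f} m of bedrock per second")      # roughly 4 m/s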
Why there's a Lorentz inner product in the unitary representations of the translation group?

Consider Minkowski spacetime. Its translation group is just the additive group $\mathbb{R}^4$. This is an abelian locally compact group. Next, consider a unitary representation $T : \mathbb{R}^4\to \mathrm{U}(\mathcal{H})$ on the Hilbert space $\mathcal{H}$. It is said that the SNAG theorem implies that $$T(a)=\exp\left[i\eta_{\mu\nu}a^\mu P^\nu\right]$$ where $P^\mu$ are four Hermitian commuting observables and $\eta_{\mu\nu}$ is the Minkowski metric. I want to see how to derive this from the SNAG theorem. The theorem is stated as follows (Barut's group theory book):

SNAG (Stone-Naimark-Ambrose-Godement) Theorem: Let $T$ be a unitary continuous representation of an abelian locally compact group $G$ in a Hilbert space $\mathscr{H}$. Then there exists on the character group $\hat{G}$ a spectral measure $E$ such that $$T(x)=\int_{\hat{G}}\langle \hat{x},x\rangle dE(\hat{x})$$

Now it is possible to show that for the additive group $\mathbb{R}^n$ the SNAG theorem tells us that there are $n$ self-adjoint commuting operators $Y_1,\dots,Y_n$ such that $$T(x)=\exp\left[i\sum_{k=1}^n x^k Y_k\right].$$ These operators $Y_k$ are defined in terms of the spectral measure $E$ given by the SNAG theorem as $$Y_k=\int y_k dE(y).$$

Now, the Minkowski spacetime translation group is exactly $\mathbb{R}^4$, so this theorem should apply. Indeed it is almost it, except that for Minkowski spacetime the operators are $P_0,\dots, P_3$ and $$T(a)=\exp\left[i \eta_{\mu\nu}a^\mu P^\nu\right]$$

I can't get why. How does the Minkowski inner product end up there if the translation group is just $\mathbb{R}^4$, which has nothing to do with the metric structure? This has something to do with the realization of the translation group as a subgroup of the Poincare group, so that if $U(\Lambda,a)$ is a unitary representation of the latter one has $U(1,a)$ a unitary representation of the translations satisfying $$U(\Lambda,b)U(1,a)U(\Lambda,b)^\dagger=U(1,\Lambda a)$$ I think the answer comes from this, but I don't know how to justify it.

quantum-mechanics quantum-field-theory special-relativity mathematical-physics group-theory

My guess is the answer is something like: we can use any product between $a$ and $P$ that we like, since we are just talking about the translation group, which doesn't know anything about the Minkowski product. To make things convenient later we take the convention that positive time translations are mapped to $e^{itP^0}$ while positive space translations are mapped to $e^{-ixP^1}$ etc. Any physical meaning for the Minkowski product there comes later when we restrict our representation to positive energies only. – Luke Pritchett Jan 28 '19 at 2:24

The SNAG theorem is a neat generalization of Stone's theorem and thus one can use the spectral theorem for both operator types (unitary and self-adjoint) to arrive at the result via the "exponential of an unbounded self-adjoint operator", a quite delicate mathematical concept. @Valter Moretti. – DanielC Jan 28 '19 at 17:34

I think I made some progress, but there is one last bit which is exactly what @LukePritchett talks about in his comment. How does the Minkowski inner product end up in the exponent if the translation group "knows nothing about said product"?
I think the answer is in the fact that if $U(a,\Lambda)$ is a unitary representation of the Poincare group, then $$U(b,\Lambda)U(a,1)U(b,\Lambda)^\dagger = U(\Lambda a,1).$$ This implies in particular that $$U(b,\Lambda)P^\mu U(b,\Lambda)^\dagger = \Lambda^\mu_\nu P^\nu,$$ so I think that somehow the answer comes from this. I just don't know how. – user1620696 Jan 28 '19 at 18:05

Answer: I will use the signature $(+,-,-,-)$ for the Minkowski metric $\eta$. If you got so far as showing that $$ \forall\ {\rm continuous\ unitary\ representation}\ T\ {\rm of}\ (\mathbb{R}^4,+), $$ $$ \exists\ {\rm commuting\ self-adjoint\ operators} \ Y_1,\ldots,Y_4, $$ $$ \forall x\in \mathbb{R}^4,\ \ T(x)=\exp[i(x^1Y_1+x^2Y_2+x^3Y_3+x^4Y_4)], $$ then the last step to conclude that $$ \forall\ {\rm continuous\ unitary\ representation}\ T\ {\rm of}\ (\mathbb{R}^4,+), $$ $$ \exists\ {\rm commuting\ self-adjoint\ operators} \ P^0,\ldots,P^3, $$ $$ \forall a=(a^0,\ldots,a^3)\in \mathbb{R}^4,\ \ T(a)=\exp[i(a^0P^0-a^1P^1-a^2P^2-a^3P^3)] $$ is trivial: just define $P^0=Y_1$, $P^1=-Y_2$, $P^2=-Y_3$, $P^3=-Y_4$. BTW, as mentioned in DanielC's comment, appealing to the SNAG Theorem is overkill here. The much more elementary Stone-von Neumann Theorem is enough. – Abdelmalek Abdesselam

Thanks for the answer! I've noticed one could make such a choice, but why is it convenient to do so? I mean, what's the motivation behind it? Does it have to do with the fact that when the full Poincare group is considered we have the relation $$U(b,\Lambda)U(a,1)U(b,\Lambda)^\dagger = U(\Lambda a,1)$$ – user1620696 Feb 6 '19 at 20:55

This has nothing to do with the full Poincare group. Even if all we got is the Lorentz group, it is convenient to write everything in terms of Lorentz invariant objects like $\eta$ so one can easily track how things transform by Lorentz even in the midst of complicated computations (think, e.g., 5 loop Feynman diagram). So we do this because it is convenient and because we can. – Abdelmalek Abdesselam Feb 6 '19 at 21:27
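To make the sign bookkeeping in the answer concrete, here is a small numerical sketch (not part of the original thread) using numpy and scipy, with randomly chosen diagonal matrices as stand-ins for the commuting self-adjoint operators $Y_k$:

# Check that exp[i * sum_k x^k Y_k] equals exp[i * eta_{mu nu} a^mu P^nu]
# once we set P^0 = Y_1 and P^j = -Y_{j+1}, with eta = diag(+1, -1, -1, -1).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Commuting self-adjoint matrices: diagonal real matrices are the simplest choice.
Y = [np.diag(rng.standard_normal(4)) for _ in range(4)]   # Y[0..3] stand for Y_1..Y_4

x = rng.standard_normal(4)                                # translation components a^0..a^3
eta = np.diag([1.0, -1.0, -1.0, -1.0])

P = [Y[0], -Y[1], -Y[2], -Y[3]]                           # the relabeling from the answer

T1 = expm(1j * sum(x[k] * Y[k] for k in range(4)))
T2 = expm(1j * sum(eta[mu, mu] * x[mu] * P[mu] for mu in range(4)))  # eta is diagonal

print(np.allclose(T1, T2))   # True: the two exponents are literally the same operator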
Lecture (xx minutes)

Slides: Probability density and probability

Having established the connection between the wave function and the column vector as representations of the quantum state, for the continuous and discrete observables, respectively, it now remains to make some of the other connections that will be familiar to students who have studied the spin-1/2 system. The complex conjugate of the wave function follows easily: \[\left\langle \psi | x \right\rangle = \psi ^*\left( x \right)\] \[\left\langle \psi \right| \buildrel\textstyle.\over= {\psi ^*}\left( x \right)\]

The notion of a probability density is new. It is a continuous function as opposed to the discrete probability functions the students encountered in Spins. (Relate to other continuous functions like height, for example.) Although the probability density was introduced in an earlier Modern Physics course, it now has much more impact. \[\wp \left( x \right) \equiv \psi ^*\left( x \right)\psi \left( x \right) = {\left| {\psi \left( x \right)} \right|^2}\]

Discuss the dimensions of probability density and introduce the integral between two values of $x$ to calculate probability, which is dimensionless. \[{\wp _{a < x < b}} = \int\limits_a^b {\psi ^*\left( x \right)\psi \left( x \right)dx} \]

Discuss normalized probability and relate to the same concept in the Spins course.

Note from Winter 2012 (Mary Bridget Kustusch): Students were really bothered by the probability of finding a particle somewhere in space being $\langle{\psi}\vert{\psi}\rangle$ and not $\vert\langle{\psi}\vert{\psi}\rangle\vert^2$. It helped to go back to the idea of completeness and projectors to show how they actually were already doing the norm squared when looking at normalization. I think it also illuminated where the integral comes from $$\langle{\psi}\vert{\psi}\rangle=\bra{\psi}\left(\sum_n \ket{x_n}\bra{x_n}\right)\ket{\psi}=\sum_n \langle{\psi}\vert{x_n}\rangle\langle{x_n}\vert{\psi}\rangle= \sum_n \vert\langle{x_n}\vert{\psi}\rangle\vert^2 =\int_{-\infty}^{\infty} dx\,\, \psi^*(x)\psi(x)=\int_{-\infty}^{\infty} \vert\psi(x)\vert^2 dx=1$$
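A concrete numerical illustration of these formulas (this example is not part of the original lecture notes; the Gaussian wave packet is just a convenient stand-in): normalize a wave function on a grid and integrate the probability density over an interval.

# Normalize a sample wave function and compute the probability of finding
# the particle between x = a and x = b.
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4.0)                 # unnormalized Gaussian wave packet

norm = np.sum(np.abs(psi)**2) * dx        # <psi|psi> before normalization
psi = psi / np.sqrt(norm)                 # now the total probability integrates to 1

density = np.abs(psi)**2                  # probability density |psi(x)|^2, units of 1/length

a, b = 0.0, 1.0
mask = (x >= a) & (x <= b)
prob = np.sum(density[mask]) * dx         # dimensionless probability for a < x < b

print(np.sum(density) * dx)               # ~1.0, total probability
print(prob)                               # probability that the particle is found in (a, b)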
general term of expansion

Consider the expansion of the binomial term (a + b) raised to the power n, i.e. the binomial expansion of (a + b)^n: $$\left(a+b\right)^{n} = {}^{n}C_{0}\,a^{n} + {}^{n}C_{1}\,a^{n-1}b + {}^{n}C_{2}\,a^{n-2}b^{2} + \cdots + {}^{n}C_{n}\,b^{n}.$$ The number of terms in $$\left(a+b\right)^{n}$$ or in $$\left(a-b\right)^{n}$$ is always equal to n + 1. From the pattern of the successive terms, we can say that the (r + 1)th term is called the general term of the expansion of (a + b)^n and is denoted by T_{r+1}: $$T_{r+1} = {}^{n}C_{r}\,a^{n-r}b^{r}.$$

Some useful special cases: the general term in (1 + x)^n is ${}^{n}C_{r}\,x^{r}$; in the binomial expansion of (x − a)^n the general term is $$T_{r+1} = (-1)^{r}\,{}^{n}C_{r}\,x^{n-r}a^{r};$$ and in the expansion of (x + y)^n, the rth term from the end is the (n − r + 2)th term from the beginning. The binomial coefficients ${}^{n}C_{r}$ appearing in the expansion can be arranged in an array (Pascal's triangle), and the expansion formula can be derived using mathematical induction (Step 1: prove the formula for n = 1; Step 2: assume the formula is true for n = k and deduce it for n = k + 1).

The multinomial theorem describes how to expand the power of a sum of more than two terms; it is a generalization of the binomial theorem to polynomials with any number of terms. For $$\left(x_1 + x_2 + \cdots + x_k\right)^{n}$$ the general term is $$\frac{n!}{r_1!\,r_2!\cdots r_k!}\,x_1^{r_1}x_2^{r_2}\cdots x_k^{r_k},$$ and the number of terms in the expansion is equal to the number of non-negative integral solutions of r_1 + r_2 + ⋯ + r_k = n.

Example: Find the 5th term of the expansion of (2 + x)^12 in ascending powers of x. The 5th term contains x^4: $$T_{5} = {}^{12}C_{4}\,(2)^{8}x^{4} = 495 \times 256\,x^{4} = 126720\,x^{4}.$$ (A shortcut for the binomial coefficients: if you're doing 5 choose 2, for example, instead of finding 5! by multiplying 5 x 4 x 3 x 2 x 1, just multiply the first 2 factors, 5 x 4, and divide by 2!. If it were 10 choose 3, then only do the first 3, i.e. 10 x 9 x 8, then divide by 3!, which is just 6.)

Example: Write the general term in the expansion of (x² − y)⁶. We know that the general term of the expansion of (a + b)^n is T_{r+1} = {}^{n}C_{r}\,a^{n-r}b^{r}. Putting n = 6, a = x², b = −y, $$T_{r+1} = {}^{6}C_{r}\,(x^{2})^{6-r}(-y)^{r} = (-1)^{r}\,{}^{6}C_{r}\,x^{12-2r}y^{r}.$$

Example: Write the general term in the expansion of (x² − yx)¹², x ≠ 0. Putting n = 12, a = x², b = −yx, $$T_{r+1} = {}^{12}C_{r}\,(x^{2})^{12-r}(-yx)^{r} = (-1)^{r}\,{}^{12}C_{r}\,x^{24-r}y^{r}.$$

Middle term(s): In order to find the middle term of the expansion of (a + x)^n, we have to consider 2 cases. When n is an even number, the number of terms (n + 1) is odd, so there is only one middle term, namely the (n/2 + 1)th term, and in it the exponents of a and x are the same. When n is odd, there are two middle terms, the ((n + 1)/2)th and the ((n + 3)/2)th. For example, if you are expanding (x + y)³, the middle terms are the 2nd and the 3rd terms.

Numerically greatest term: the formula for finding the numerically greatest term in the expansion of (1 + x)^n uses the quantity $$\frac{(n+1)|x|}{1+|x|};$$ the greatest term is T_{m+1}, where m is the integer part of this quantity (if the quantity is itself an integer, two consecutive terms are equal in magnitude).

Term independent of x: to find the term containing a prescribed power of x, write down the general term T_{r+1}, collect all the powers of x, and set the resulting exponent equal to the required power; a term independent of x is just the term in which the power of x is 0. Example: find the term independent of x in the expansion of (a/x − x)^16. Substituting a/x for the first entry, −x for the second, and n = 16 in T_{r+1} = {}^{n}C_{r}\,x^{n-r}a^{r} gives $$T_{r+1} = {}^{16}C_{r}\,(a/x)^{16-r}(-x)^{r} = (-1)^{r}\,{}^{16}C_{r}\,a^{16-r}x^{2r-16},$$ and the power of x vanishes for r = 8, so $$T_{9} = {}^{16}C_{8}\,(a/x)^{8}(-x)^{8} = {}^{16}C_{8}\,a^{8} = 12870\,a^{8}.$$

Related series expansions: a Taylor series is an expansion of some function into an infinite sum of terms, where each term has a larger exponent like x, x², x³, etc.; the Maclaurin series coefficients are always calculated using the formula a_k = f^{(k)}(0)/k!, where f is the given function. For example, $$e^{x} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \frac{x^{4}}{4!} + \frac{x^{5}}{5!} + \cdots$$ A power series expansion of erf x can be obtained simply by expanding the exponential in its integrand and integrating term by term: $$\operatorname{erf} x = \frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^{n}x^{2n+1}}{(2n+1)\,n!}.$$ This series converges for all x, but the convergence becomes extremely slow if x significantly exceeds unity. To sum the first 21 terms of the geometric series f(x) = 1/(1 − x) = Σ_{n≥0} x^n in Mathematica, one writes Sum[x^n, {n, 0, 20}] (remember, since we are starting at n = 0, we are summing over 21 terms culminating with the x^20 term).

Exercises: (i) Find the first four terms in the binomial expansion of (1 − 3x)³. (ii) Find the first four terms, in ascending powers of x, of the binomial expansion of 1/(1 + 2x)² (firstly, write the expression as (1 + 2x)^{-2}), and state the range of validity for your expansion. (iii) Write the number of terms in the expansion of (a − b)^{2n}. (iv) Find the coefficient of x⁶y³ in the expansion of (x + 2y)⁹.
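The computations above are easy to check mechanically. Here is a small Python sketch (an illustration added for this write-up, not part of any of the quoted sources; it assumes sympy is available) that builds the general term T_{r+1} and uses it to find the term independent of x in (a/x − x)^16:

# Verify the general-term computations with sympy.
from sympy import symbols, binomial, expand, simplify

a, x = symbols('a x')
n = 16
expr = (a/x - x)**n

# General term T_{r+1} = C(n, r) * (a/x)^(n-r) * (-x)^r
terms = [binomial(n, r) * (a/x)**(n - r) * (-x)**r for r in range(n + 1)]

# Sanity check: the general terms really do sum to the original expression.
assert expand(sum(terms) - expr) == 0

# Pick out the term whose power of x is zero (the term independent of x).
independent = [t for t in (simplify(term) for term in terms) if not t.has(x)]
print(independent)            # [12870*a**8], i.e. 16C8 * a^8
print(binomial(16, 8))        # 12870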
9 Keynesian and Neoclassical Economics

The Expenditure Multiplier Effect

What you'll learn to do: explain policy implications of Keynesian economics

By now, you know the basics of Keynesian economics and how it is connected to the AD-AS model. In this section, we will see how Keynesian economics plays out as government policies. In this section, you'll learn about the multiplier effect, the GDP gap, and Keynesian recommendations for reducing unemployment and inflation.

Explain the expenditure multiplier effect
Compute the size of the expenditure multiplier

Keynesian economics has another important finding. You've learned that Keynesians believe that the level of economic activity is driven, in the short term, by changes in aggregate expenditure (or aggregate demand). Suppose that the macro equilibrium in an economy occurs at the potential GDP, so the economy is operating at full employment. Keynes pointed out that even though the economy starts at potential GDP, because aggregate demand tends to bounce around, it is unlikely that the economy will stay at potential.

In 2007, U.S. investment expenditure collapsed with the fall of the housing market. As a result, the U.S. economy went into the Great Recession. But how much did GDP fall? Suppose investment fell by $100 billion. You might expect the result would be that GDP would fall by $100 billion too. If so, you would be wrong. It turns out that changes in any category of expenditure (Consumption + Investment + Government Expenditures + Exports − Imports) have a more than proportional impact on GDP. Or to say it differently, the change in GDP is a multiple of (say 3 times) the change in expenditure. This is the idea behind the multiplier. The reason is that a change in aggregate expenditures circles through the economy: households buy from firms, firms pay workers and suppliers, workers and suppliers buy goods from other firms, those firms pay their workers and suppliers, and so on. In this way, the original change in aggregate expenditures is actually spent more than once. This is called the expenditure multiplier effect: an initial increase in spending cycles repeatedly through the economy and has a larger impact than the initial dollar amount spent. Watch this video for a quick overview of the expenditure multiplier.

How Does the Expenditure Multiplier Work?

It's easiest to see how the multiplier works with an increase in expenditure. Suppose the government spontaneously purchases $100 billion worth of goods and services, perhaps because they feel optimistic about the future. The producers of those goods and services see an increase in income by that amount. They use that income to pay their bills, paying wages and salaries to their workers, rent to their landlords, payments for the raw materials they use. Any income left over is profit, which becomes income to their stockholders. Each of these economic agents takes their new income and spends some of it. Those purchases then become new income to the sellers, who then turn around and spend a portion of it. That spending becomes someone else's income. The process continues, though because economic agents spend only part of their income, the numbers get smaller in each round. When the dust settles, the amount of new income generated is multiple times the initial increase in spending—hence the name: the spending multiplier. The table below gives an example of how this could work with an increase in government spending.
Note that the multiplier works the same way in reverse with a decrease in spending.

Table 1. Calculating the Multiplier Effect
Original increase in aggregate expenditure from government spending: 100
Save 10% of income. Spend 90% of income. Second-round increase of… 100 – 10 = 90
$90 of income to people through the economy: Save 10% of income. Spend 90% of income. Third-round increase of… 90 – 9 = 81
$81 of income to people through the economy: Save 10% of income. Spend 90% of income. Fourth-round increase of… 81 – 8.1 = 72.90

Table 1 works through the process of the multiplier. Over the first four rounds of aggregate expenditures, the impact of the original increase in government spending of $100 creates a rise in aggregate expenditures of $100 + $90 + $81 + $72.90 = $343.90, which is larger than the initial increase in spending. And the process isn't finished yet.

CALCULATING THE MULTIPLIER

Fortunately for everyone who is not carrying around a computer with a spreadsheet program to project the impact of an original increase in expenditures over 20, 50, or 100 rounds of spending, there is a formula for calculating the multiplier. The formula varies depending on how complex the version of the income-expenditure model is that you're using. Let's look at the simplest case. The marginal propensity to consume (MPC) is the fraction of any change in income that is consumed and the marginal propensity to save (MPS) is the fraction of any change in income that is saved. We'll assume for simplicity that there are no income taxes, and that imports are a set amount. In this case, the formula is:

[latex]\displaystyle\text{Spending Multiplier}=\frac{1}{(1- \text{MPC} )}[/latex]

Since a consumer's only two options (in this example) are to spend income or to save it, MPC + MPS = 1, so 1 – MPC = MPS. Thus, an equivalent form for the multiplier is:

[latex]\displaystyle\text{Spending Multiplier}=\frac{1}{(\text{MPS} )}[/latex]

Suppose the MPC = 90%; then the MPS = 10%. Therefore, the spending multiplier is:

[latex]\displaystyle\text{Spending Multiplier}=\frac{1}{(1-0.9)}=\frac{1}{0.1}=\frac{1}{\left(\frac{1}{10}\right)}=10[/latex]

In this simple case, a change in spending of $100 multiplied by the spending multiplier of 10 is equal to a change in GDP of $1,000. Watch the selected clip from this video (stopping at 3:14) for more practice in solving for the spending multiplier.
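The round-by-round table and the closed-form formula are easy to reproduce. Here is a minimal Python sketch (an illustration, not part of the original course page) that simulates the spending rounds and compares the running total with 1/MPS:

# Simulate the expenditure multiplier round by round and compare with 1 / MPS.
mpc = 0.9                      # marginal propensity to consume
initial_spending = 100.0       # initial increase in government spending (billions)

total, round_spending = 0.0, initial_spending
for round_number in range(1, 101):          # 100 rounds is plenty for convergence
    total += round_spending
    if round_number <= 4:
        print(f"round {round_number}: spending {round_spending:.2f}, cumulative {total:.2f}")
    round_spending *= mpc                   # each round, only the consumed share is re-spent

multiplier = 1.0 / (1.0 - mpc)              # equals 1 / MPS
print(f"simulated total change in GDP: {total:.1f}")
print(f"formula: {initial_spending:.0f} x {multiplier:.0f} = {initial_spending * multiplier:.0f}")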
The multiplier applies to any type of expenditure (e.g. C + I + G + X − M), and it applies when expenditure decreases as well as when it increases. Say that business confidence declines and investment falls off, or that the economy of a leading trading partner slows down so that export sales decline. These changes will reduce aggregate expenditures, and they will then have an even larger effect on real GDP because of the multiplier effect.

Expenditure (or Spending) Multiplier: the ratio of the change in GDP to the change in aggregate expenditure which caused the change in GDP; the multiplier has a value greater than one.
Marginal Propensity to Consume: the percentage of an increase (or decrease) in income which one spends (or by which one reduces spending); also known as the MPC.
Marginal Propensity to Import: the percentage of an increase (or decrease) in income which one spends (or by which one reduces spending) on imported goods and services; also known as the MPI.
Marginal Propensity to Save: the percentage of an increase (or decrease) in income which one saves (or by which one reduces saving); also known as the MPS.

CC licensed content, Original
Modification, adaptation, and original content. Provided by: Lumen Learning. License: CC BY: Attribution
Introduction to Keynesian Policy Implications. Authored by: Steven Greenlaw and Lumen Learning. License: CC BY: Attribution
Principles of Macroeconomics Appendix B. Authored by: OpenStax College. Located at: http://cnx.org/contents/[email protected]:2/Macroeconomics. License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/contents/[email protected]

All rights reserved content
Macro Minute -- The Multiplier Effect. Provided by: You Will Love Economics. Located at: https://www.youtube.com/watch?v=AawBBHUGwJM. License: Other. License Terms: Standard YouTube License
The Multiplier Effect- Macro 3.9B. Provided by: ACDCLeadership. Located at: https://www.youtube.com/watch?time_continue=49&v=RqWYmQQzXxs. License: Other. License Terms: Standard YouTube License
Colloquia/Spring2020

All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.

Jan 10: Thomas Lam (Michigan), "Positive geometries and string theory amplitudes" (host: Erman)
Jan 21 (Tuesday, 4-5 pm in B139): Peter Cholak (Notre Dame), "What can we compute from solutions to combinatorial problems?" (host: Lempp)
Jan 24: Saulo Orizaga (Duke), "Introduction to phase field models and their efficient numerical implementation"
Jan 27 (Monday, 4-5 pm in 911): Caglar Uyanik (Yale), "Hausdorff dimension and gap distribution in billiards" (host: Ellenberg)
Jan 29 (Wednesday, 4-5 pm): Andy Zucker (Lyon), "Topological dynamics of countable groups and structures" (hosts: Soskova/Lempp)
Jan 31: Lillian Pierce (Duke), "On Bourgain's counterexample for the Schrödinger maximal function" (hosts: Marshall/Seeger)
Feb 7: Joe Kileel (Princeton), "Inverse Problems, Imaging and Tensor Decomposition" (host: Roch)
Feb 10: Cynthia Vinzant (NCSU), "Matroids, log-concavity, and expanders" (hosts: Roch/Erman)
Feb 12 (Wednesday, 4-5 pm in VV 911): Jinzi Mac Huang (UCSD), "Mass transfer through fluid-structure interactions" (host: Spagnolie)
Feb 14: William Chan (University of North Texas), "Definable infinitary combinatorics under determinacy" (hosts: Soskova/Lempp)
Feb 17: Yi Sun (Columbia), "Fluctuations for products of random matrices" (host: Roch)
Feb 19: Zhenfu Wang (University of Pennsylvania), "Quantitative Methods for the Mean Field Limit Problem" (host: Tran)
Feb 21: Shai Evra (IAS), "Golden Gates in PU(n) and the Density Hypothesis" (host: Gurevich)
Feb 28: Brett Wick (Washington University, St. Louis), "The Corona Theorem" (host: Seeger)
March 6 (in 911): Jessica Fintzen (Michigan), "Representations of p-adic groups" (host: Marshall)
March 13 (CANCELLED): Claudia Solis Lemus (UW-Madison, Plant Pathology), "New challenges in phylogenetic inference" (host: Anderson)
March 20: Spring break
March 27 (CANCELLED): Max Lieblich (Univ. of Washington, Seattle) (hosts: Boggess, Sankar)
April 3 (CANCELLED): Caroline Turnage-Butterbaugh (Carleton College) (host: Marshall)
April 17: JM Landsberg (TAMU), TBA (host: Gurevich)
April 23: Martin Hairer (Imperial College London), Wolfgang Wasow Lecture (host: Hao Shen)
April 24: Natasa Sesum (Rutgers University) (host: Angenent)
May 1: Robert Lazarsfeld (Stony Brook), Distinguished lecture (host: Erman)

Thomas Lam (Michigan)
Title: Positive geometries and string theory amplitudes
Abstract: Inspired by developments in quantum field theory, we recently defined the notion of a positive geometry, a class of spaces that includes convex polytopes, positive parts of projective toric varieties, and positive parts of flag varieties. I will discuss some basic features of the theory and an application to genus zero string theory amplitudes. As a special case, we obtain the Euler beta function, familiar to mathematicians, as the "stringy canonical form" of the closed interval. This talk is based on joint work with Arkani-Hamed, Bai, and He.

Peter Cholak (Notre Dame)
Title: What can we compute from solutions to combinatorial problems?
Abstract: This will be an introductory talk to an exciting current research area in mathematical logic. Mostly we are interested in solutions to Ramsey's Theorem.
Ramsey's Theorem says for colorings C of pairs of natural numbers, there is an infinite set H such that all pairs from H have the same constant color. H is called a homogeneous set for C. What can we compute from H? If you are not sure, come to the talk and find out! Saulo Orizaga (Duke) Title: Introduction to phase field models and their efficient numerical implementation Abstract: In this talk we will provide an introduction to phase field models. We will focus in models related to the Cahn-Hilliard (CH) type of partial differential equation (PDE). We will discuss the challenges associated in solving such higher order parabolic problems. We will present several new numerical methods that are fast and efficient for solving CH or CH-extended type of problems. The new methods and their energy-stability properties will be discussed and tested with several computational examples commonly found in material science problems. If time allows, we will talk about more applications in which phase field models are useful and applicable. Caglar Uyanik (Yale) Title: Hausdorff dimension and gap distribution in billiards Abstract: A classical "unfolding" procedure allows one to turn questions about billiard trajectories in a Euclidean polygon into questions about the geodesic flow on a surface equipped with a certain geometric structure. Surprisingly, the flow on the surface is in turn related to the geodesic flow on the classical moduli spaces of Riemann surfaces. Building on recent breakthrough results of Eskin-Mirzakhani-Mohammadi, we prove a large deviations result for Birkhoff averages as well as generalize a classical theorem of Masur on geodesics in the moduli spaces of translation surfaces. Andy Zucker (Lyon) Title: Topological dynamics of countable groups and structures Abstract: We give an introduction to the abstract topological dynamics of topological groups, i.e. the study of the continuous actions of a topological group on a compact space. We are particularly interested in the minimal actions, those for which every orbit is dense. The study of minimal actions is aided by a classical theorem of Ellis, who proved that for any topological group G, there exists a universal minimal flow (UMF), a minimal G-action which factors onto every other minimal G-action. Here, we will focus on two classes of groups: a countable discrete group and the automorphism group of a countable first-order structure. In the case of a countable discrete group, Baire category methods can be used to show that the collection of minimal flows is quite rich and that the UMF is rather complicated. For an automorphism group G of a countable structure, combinatorial methods can be used to show that sometimes, the UMF is trivial, or equivalently that every continuous action of G on a compact space admits a global fixed point. Lillian Pierce (Duke) Title: On Bourgain's counterexample for the Schrödinger maximal function Abstract: In 1980, Carleson asked a question in harmonic analysis: to which Sobolev space $H^s$ must an initial data function belong, for a pointwise a.e. convergence result to hold for the solution to the associated linear Schrödinger equation? Over the next decades, many people developed counterexamples to push the (necessary) range of s up, and positive results to push the (sufficient) range of s down. Now, these ranges are finally meeting: Bourgain's 2016 counterexample showed s < n/(2(n+1)) fails, and Du and Zhang's 2019 paper shows that s>n/(2(n+1)) suffices. 
In this talk, we will give an overview of how to rigorously derive Bourgain's 2016 counterexample, based on simple facts from number theory. We will show how to build Bourgain's counterexample starting from "zero knowledge," and how to gradually optimize the set-up to arrive at the final counterexample. The talk will be broadly accessible, particularly if we live up to the claim of starting from "zero knowledge." Joe Kileel (Princeton) Title: Inverse Problems, Imaging and Tensor Decomposition Abstract: Perspectives from computational algebra and optimization are brought to bear on a scientific application and a data science application. In the first part of the talk, I will discuss cryo-electron microscopy (cryo-EM), an imaging technique to determine the 3-D shape of macromolecules from many noisy 2-D projections, recognized by the 2017 Chemistry Nobel Prize. Mathematically, cryo-EM presents a particularly rich inverse problem, with unknown orientations, extreme noise, big data and conformational heterogeneity. In particular, this motivates a general framework for statistical estimation under compact group actions, connecting information theory and group invariant theory. In the second part of the talk, I will discuss tensor rank decomposition, a higher-order variant of PCA broadly applicable in data science. A fast algorithm is introduced and analyzed, combining ideas of Sylvester and the power method. Cynthia Vinzant (NCSU) Title: Matroids, log-concavity, and expanders Abstract: Matroids are combinatorial objects that model various types of independence. They appear several fields mathematics, including graph theory, combinatorial optimization, and algebraic geometry. In this talk, I will introduce the theory of matroids along with the closely related class of polynomials called strongly log-concave polynomials. Strong log-concavity is a functional property of a real multivariate polynomial that translates to useful conditions on its coefficients. Discrete probability distributions defined by these coefficients inherit several of these nice properties. I will discuss the beautiful real and combinatorial geometry underlying these polynomials and describe applications to random walks on the faces of simplicial complexes. Consequences include proofs of Mason's conjecture that the sequence of numbers of independent sets of a matroid is ultra log-concave and the Mihail-Vazirani conjecture that the basis exchange graph of a matroid has expansion at least one. This is based on joint work with Nima Anari, Kuikui Liu, and Shayan Oveis Gharan. Jinzi Mac Huang (UCSD) Title: Mass transfer through fluid-structure interactions Abstract: The advancement of mathematics is closely associated with new discoveries from physical experiments. On one hand, mathematical tools like numerical simulation can help explain observations from experiments. On the other hand, experimental discoveries of physical phenomena, such as Brownian motion, can inspire the development of new mathematical approaches. In this talk, we focus on the interplay between applied math and experiments involving fluid-structure interactions -- a fascinating topic with both physical relevance and mathematical complexity. One such problem, inspired by geophysical fluid dynamics, is the experimental and numerical study of the dissolution of solid bodies in a fluid flow. 
The results of this study allow us to sketch mathematical answers to some long standing questions like the formation of stone forests in China and Madagascar, and how many licks it takes to get to the center of a Tootsie Pop. We will also talk about experimental math problems at the micro-scale, focusing on the mass transport process of diffusiophoresis, where colloidal particles are advected by a concentration gradient of salt solution. Exploiting this phenomenon, we see that colloids are able to navigate a micro-maze that has a salt concentration gradient across the exit and entry points. We further demonstrate that their ability to solve the maze is closely associated with the properties of a harmonic function – the salt concentration. William Chan (University of North Texas) Title: Definable infinitary combinatorics under determinacy Abstract: The axiom of determinacy, AD, states that in any infinite two player integer game of a certain form, one of the two players must have a winning strategy. It is incompatible with the ZFC set theory axioms with choice; however, it is a succinct extension of ZF which implies many subsets of the real line possess familiar regularity properties and eliminates many pathological sets. For instance, AD implies all sets of reals are Lebesgue measurable and every function from the reals to the reals is continuous on a comeager set. Determinacy also implies that the first uncountable cardinal has the strong partition property which can be used to define the partition measures. This talk will give an overview of the axiom of determinacy and will discuss recent results on the infinitary combinatorics surrounding the first uncountable cardinal and its partition measures. I will discuss the almost everywhere continuity phenomenon for functions outputting countable ordinals and the almost-everywhere uniformization results for closed and unbounded subsets of the first uncountable cardinal. These will be used to describe the rich structure of the cardinals below the powerset of the first and second uncountable cardinals under determinacy assumptions and to investigate the ultrapowers by these partition measures. Yi Sun (Columbia) Title: Fluctuations for products of random matrices Abstract: Products of large random matrices appear in many modern applications such as high dimensional statistics (MANOVA estimators), machine learning (Jacobians of neural networks), and population ecology (transition matrices of dynamical systems). Inspired by these situations, this talk concerns global limits and fluctuations of singular values of products of independent random matrices as both the size N and number M of matrices grow. As N grows, I will show for a variety of ensembles that fluctuations of the Lyapunov exponents converge to explicit Gaussian fields which transition from log-correlated for fixed M to having a white noise component for M growing with N. I will sketch our method, which uses multivariate generalizations of the Laplace transform based on the multivariate Bessel function from representation theory. Zhenfu Wang (University of Pennsylvania) Title: Quantitative Methods for the Mean Field Limit Problem Abstract: We study the mean field limit of large systems of interacting particles. Classical mean field limit results require that the interaction kernels be essentially Lipschitz. To handle more singular interaction kernels is a longstanding and challenging question but which now has some successes. Joint with P.-E. 
Jabin, we use the relative entropy between the joint law of all particles and the tensorized law at the limit to quantify the convergence from the particle systems towards the macroscopic PDEs. This method requires to prove large deviations estimates for non-continuous potentials modified by the limiting law. But it leads to explicit convergence rates for all marginals. This in particular can be applied to the Biot-Savart law for 2D Navier-Stokes. To treat more general and singular kernels, joint with D. Bresch and P.-E. Jabin, we introduce the modulated free energy, combination of the relative entropy that we had previously developed and of the modulated energy introduced by S. Serfaty. This modulated free energy may be understood as introducing appropriate weights in the relative entropy to cancel the most singular terms involving the divergence of the kernels. Our modulated free energy allows to treat gradient flows with singular potentials which combine large smooth part, small attractive singular part and large repulsive singular part. As an example, a full rigorous derivation (with quantitative estimates) of some chemotaxis models, such as the Patlak-Keller-Segel system in the subcritical regimes, is obtained. Shai Evra (IAS) Title: Golden Gates in PU(n) and the Density Hypothesis. Abstract: In their seminal work from the 80's, Lubotzky, Phillips and Sarnak gave explicit constructions of topological generators for PU(2) with optimal covering properties. In this talk I will describe some recent works that extend the construction of LPS to higher rank compact Lie groups. A key ingredient in the work of LPS is the Ramanujan conjecture for U(2), which follows from Deligne's proof of the Ramanujan-Petersson conjecture for GL(2). Unfortunately, the naive generalization of the Ramanujan conjecture is false for higher rank groups. Following a program initiated by Sarnak in the 90's, we prove a density hypothesis and use it as a replacement of the naive Ramanujan conjecture. This talk is based on some joint works with Ori Parzanchevski and Amitay Kamber. Brett Wick (WUSTL) Title: The Corona Theorem Abstract: Carleson's Corona Theorem has served as a major motivation for many results in complex function theory, operator theory and harmonic analysis. In a simple form, the result states that for $N$ bounded analytic functions $f_1,\ldots,f_N$ on the unit disc such that $\inf \left\vert f_1\right\vert+\cdots+\left\vert f_N\right\vert\geq\delta>0$ it is possible to find $N$ other bounded analytic functions $g_1,\ldots,g_N$ such that $f_1g_1+\cdots+f_Ng_N =1$. Moreover, the functions $g_1,\ldots,g_N$ can be chosen with some norm control. In this talk we will discuss some generalizations of this result to certain vector valued functions and connections with geometry and to function spaces on the unit ball in several complex variables. Claudia Solis Lemus Title New challenges in phylogenetic inference Abstract: Phylogenetics studies the evolutionary relationships between different organisms, and its main goal is the inference of the Tree of Life. Usual statistical inference techniques like maximum likelihood and bayesian inference through Markov chain Monte Carlo (MCMC) have been widely used, but their performance deteriorates as the datasets increase in number of genes or number of species. 
I will present different approaches to improve the scalability of phylogenetic inference: from divide-and-conquer methods based on pseudolikelihood, to computation of Frechet means in BHV space, finally concluding with neural network models to approximate posterior distributions in tree space. The proposed methods will allow scientists to include more species into the Tree of Life, and thus complete a broader picture of evolution. Jessica Fintzen (Michigan) Title: Representations of p-adic groups Abstract: The Langlands program is a far-reaching collection of conjectures that relate different areas of mathematics including number theory and representation theory. A fundamental problem on the representation theory side of the Langlands program is the construction of all (irreducible, smooth, complex) representations of certain matrix groups, called p-adic groups. In my talk I will introduce p-adic groups and provide an overview of our understanding of their representations, with an emphasis on recent progress. I will also briefly discuss applications to other areas, e.g. to automorphic forms and the global Langlands program.
Specific cancer stem cell-therapy by albumin nanoparticles functionalized with CD44-mediated targeting

Yuanyuan Li†1, Sanjun Shi†1, Yue Ming1, Linli Wang1, Chenwen Li1, Minghe Luo1, Ziwei Li1, Bin Li1 and Jianhong Chen1 (corresponding author)
†Contributed equally
Journal of Nanobiotechnology 2018 16:99
Received: 21 April 2018

Cancer stem cells (CSCs) are highly proliferative and tumorigenic, which contributes to chemotherapy resistance and tumor occurrence. CSC-specific therapy may achieve excellent therapeutic effects, especially against drug-resistant tumors. In this study, we developed a targeting nanoparticle system based on cationic albumin functionalized with hyaluronic acid (HA) to target CD44-overexpressing CSCs. All-trans-retinoic acid (ATRA) was encapsulated in the nanoparticles with an ultrahigh encapsulation efficiency (EE%) of 93% and a loading content of 8.37%. TEM analysis showed that the nanoparticles were spherical, uniform in size and surrounded by a coating layer consisting of HA. Four weeks of continuous measurements of size, PDI and EE% revealed the high stability of the nanoparticles. Thanks to the HA conjugated on the surface, the resultant nanoparticles (HA-eNPs) demonstrated high affinity and specific binding to CD44-enriched B16F10 cells. In vivo imaging revealed that HA-eNPs accumulate selectively in the tumor-bearing lungs of mice. Cytotoxicity tests illustrated that ATRA-laden HA-eNPs possessed a better killing ability against B16F10 cells than free drug or normal nanoparticles at the same dose, indicating good targeting properties. Moreover, HA-eNPs/ATRA treatment significantly decreased the side population of B16F10 cells in vitro. Finally, tumor growth was significantly inhibited by HA-eNPs/ATRA in lung metastasis tumor-bearing mice. These results demonstrate that HA-functionalized albumin nanoparticles are an efficient system for targeted delivery of antitumor drugs to eliminate CSCs.

Keywords: Cationic albumin; All-trans-retinoic acid

Background

Despite great advances in cancer treatment, many cancer patients are still threatened by cancer drug resistance and recurrence. For instance, lung cancer remains the leading cause of cancer death in men and women [1, 2]. There is mounting evidence that cancer stem cells (CSCs) in the tumor mass play crucial roles in tumor progression, chemo- and radiotherapy resistance and relapse [3–5]. Current therapeutic strategies are capable of reducing tumor bulk, but their lack of specificity against CSCs often results in recurrence and drug resistance [6–8]. Thus, new therapeutic approaches are required to overcome the limitations of conventional treatment. Recently, CSCs have become a focus for targeted therapy [9–11]. Nanoparticle systems have been extensively employed as antitumor drug carriers to enhance anti-tumor capacity while reducing side effects [12–20]. They not only prevent encapsulated drugs from enzymatic degradation, but also increase drug bioavailability and accumulation in selected tissues after systemic administration [21]. Nano-drugs of suitable sizes also tend to accumulate passively in targeted tumor tissues through the enhanced permeation and retention (EPR) effect caused by the abnormally leaky vasculature and dysfunctional lymphatic drainage system in tumor tissues [22, 23]. Therefore, the therapeutic agents are released at the tumor sites. However, EPR effect-mediated passive targeting alone is insufficient to achieve excellent CSC-specific targeted therapy.
Positively charged nanoparticles have increased efficacy of drug delivery, but, they are accompanied with strong immune responses and increased removal from circulation. Recently, Lu et al. obtained cationic serum albumin through linking ethylenediamine with bovine serum albumin (eBSA) [24]. Han et al. reported eBSA-based self-assembled nanoparticles may be a promising delivery system for lung targeted therapy [25]. Nanoparticles based on the cationic serum albumin have the advantages of being nontoxic, non-immunogenic, biocompatible and biodegradable because cationization maintains protein structure and its activity [26, 27]. Additionally, modification on the surface of eBSA-based nanoparticles with tumor-targeting moieties, such as antibodies and ligands, may significantly increase the concentration of therapeutic agent on tumor site based on active targeted delivery mediated by antibody- or ligand-receptor interactions [9, 28]. Hyaluronic acid (HA), a natural biocompatible and biodegradable polysaccharide, has been used as a targeting ligand for cancer treatment due to its specific binding with CD44 receptor overexpressed on the surface of CSCs in many tumors [29]. Conjugating with HA enhances tumor targeting efficiency of normal nanoparticles. In addition, HA forms a hydrophilic layer on the surface of nanoparticles and protects the vector from opsonization, thus, prolonging systemic circulation time of HA-modified nanoparticles [30]. All-trans-retinoic acid (ATRA) is a derivative of retinoic acid, which is a naturally occurring compound known as vitamin A acid [31]. ATRA is a potent differentiating agent and involved in multiple signaling pathways related to stem cell maintenance. The anticancer efficacy of ATRA has been widely investigated in a variety of malignancies and successfully applied in clinical treatment of a stem cell malignancy, acute promyelocytic leukemia [32, 33]. The functions of ATRA rely on activating retinoic acid receptors and retinoid X receptor to regulate the transcription of genes and induce differentiation of stem cells and other biological effects [34]. However, unprotected ATRA suffers from poor solubility, instability, rapid clearance which leading to a quickly dropping in plasma concentration of ATRA and seriously dose-dependent side effects [35]. In addition, the use of ATRA is limited by low cellular uptake efficiency and lacking in tumor tissue selectivity. It is expected that encapsulated ATRA in nanoparticles may protect ATRA from degradation and increase its tumor specificity as compared with administration of free drug. Targeted delivery of ATRA into CSCs may stimulate CSCs to shift to a more differentiated status, resulting in more responsiveness to chemotherapy. We developed new hyaluronic acid-modified nanoparticles based on eBSA. We hypothesized that HA will specifically interact with CD44 receptor resulting in selective endocytosis of the nanoparticles by CD44-enriched CSCs. The new delivery system would significantly decrease the cytotoxicity with improved differentiation initiative ability on tumor cells. The targeting effect and antitumor efficacy of the formulation were evaluated on tumor model based on CD44 enriched B16F10 cells. Synthesis and characterization of eBSA Cationic BSA was obtained by modifying the surface of BSA by linking with ethylenediamine (eBSA) according with previously described method [36]. 
The eBSA used here has an average molecular weight (MW) of 69,614 Da analyzed by MALDI-TOF mass spectrum, indicating that each BSA molecule was on average bond 76 ethylenediamine groups (see detailed in Additional file 1: Fig. S1). Preparation and characterization of the HA grafted ATRA loaded cationic nanoparticles Albumin is an excellent material to prepare nanoparticles for drug delivery, due to its high binding capacity of drugs and good biocompatibility [27]. Nanoparticles based on cationic albumin were also designed to improve the delivery efficient into tumor site [24, 25, 37]. However, naked albumin-based nanoparticles usually are lacking in specific targeting capacity. In this research, HA was grafted onto the surface of cationic BSA-based nanoparticles, to improve the active targeting efficacy of the nanoparticles. The preparation of targeting nanoparticles system based on cationic albumin functionalized with HA (HA-eNPs) includes two steps. First step, ATRA dissolved in ethanol was added to eBSA solution and followed by homogenizing of sonication to form a crude emulsion. During the emulsion, ATRA can be encapsulated in the inside of eBSA molecule and the nanostructure (eNPs/ATRA) formed. After that, the crude emulsion was transferred into a high pressure homogenizer. During the second emulsion, the size of eNPs/ATRA should become smaller and more uniform. Second step, HA was added drop wise into eNPs/ATRA solution with stirring at room temperature. Consequently, HA was then grafted onto the surface of eNPs as a CSCs-targeting ligand in order to improve the targeting delivery efficiency. The physico-chemical properties of HA-eNPs were investigated. Dynamic light scattering (DLS) results showed that ATRA-loaded normal BSA nanoparticles (NPs/ATRA) and HA modified cationic nanoparticles (HA-eNPs/ATRA) have an average size of 179.26 ± 1.98 nm, 180.63 ± 0.38 nm, and a polydispersity index (PDI) of 0.227 ± 0.013, 0.180 ± 0.007, respectively (Fig. 1a). The zeta potentials were − 19.3 ± 0.62 mV and 32.1 ± 0.42 mV, respectively (Fig. 1b). TEM analysis revealed that the nanoparticles are spherical and surrounded by a grey ring, suggesting that HA formed a coating layer on the surface of the eNPs (Fig. 1c, right). The quantity of HA was estimated to be on average 72.9 μg of HA per milligram HA-eNPs/ATRA (Additional file 1: Fig. S2). Further studies revealed that ATRA was well incorporated into nanoparticles with encapsulation efficiencies of 89.8% for NPs/ATRA, 93.0% for HA-eNPs/ATRA (Fig. 1d). So, the presence of HA at the surface of nanoparticles did not diminish the encapsulation of ATRA. The drug loading capacity of HA-eNPs/ATRA was 8.37%, according to the percentage of ATRA in HA-eNPs/ATRA. These results suggest that HA-eNPs/ATRA were successfully prepared. In vitro characteristics of NP formulations. a, b Diameter and zeta potential measurements of NP formulations. c Different nanoparticles visualized by transmission electron microscopy (TEM), scale bar: 50 nm (arrows indicate the coated HA layer). d The encapsulation efficiency and drug loading of NP formulations. Results are shown as the means ± SD (n = 3). e Release profiles of ATRA from NP formulations in PBS with 10% (v/v) ethanol. Results are shown as the means ± SD (n = 3). f Differential scanning calorimetry (DSC) thermograms of ATRA formulations The in vitro release studies of the HA-eNPs were performed using a dialysis technique in PBS at pH 7.4, alongside with free ATRA. 
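The numbers reported above (roughly 76 ethylenediamine groups per BSA, an EE% of 93% and a drug loading of 8.37%) can be sanity-checked with simple arithmetic based on the equations given later in the Methods. The sketch below is illustrative only: the +42 Da mass increment per conjugated ethylenediamine (60 Da ethylenediamine minus the 18 Da of water lost in amide formation), the free-drug reading of 0.32 mg and the 50 mg total nanoparticle mass are our own assumptions, not values taken from the paper. The release study described next then tracks how the encapsulated ATRA leaves the particles over time.

```python
# Illustrative sketch (not from the paper): back-of-the-envelope checks of reported values.
# The +42 Da per ethylenediamine group and the example masses below are assumptions.

def conjugated_groups(mw_modified, mw_native, mass_per_group=42.08):
    """Estimate the number of groups conjugated per protein from the MW shift."""
    return (mw_modified - mw_native) / mass_per_group

def encapsulation_efficiency(fed_drug_mg, free_drug_mg):
    """EE% = (fed drug - free drug) / fed drug * 100, as defined in the Methods."""
    return (fed_drug_mg - free_drug_mg) / fed_drug_mg * 100

def drug_loading(encapsulated_drug_mg, total_np_mg):
    """DL% = encapsulated drug / total nanoparticle mass * 100, as defined in the Methods."""
    return encapsulated_drug_mg / total_np_mg * 100

print(round(conjugated_groups(69614, 66430)))   # ~76 ethylenediamine groups per BSA molecule
# Hypothetical HPLC readout: 4.5 mg ATRA fed, 0.32 mg measured free in the supernatant
ee = encapsulation_efficiency(4.5, 0.32)
print(round(ee, 1))                              # ~92.9%, close to the reported 93%
# DL% for an assumed 50 mg of recovered nanoparticles (hypothetical batch mass)
print(round(drug_loading(4.5 * ee / 100, 50.0), 2))
```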
All samples were analyzed by high performance liquid chromatography (HPLC). The release profiles are shown in Fig. 1e. The profile of free ATRA showed 40% drug-release at the sampling time of 4 h and more than 80% by 12 h. However, it was less than 60% for NPs/ATRA and HA-eNPs/ATRA. NP formulations exhibited sustained ATRA release up to 48 h, reaching 88.1% for NPs and 84.0% for HA-eNPs (Fig. 1e). To investigate the physical state of the drug incorporated in the NP formulations, DSC analysis was performed to determine the thermal properties of the nanoparticles. As shown in Fig. 1f, ATRA exhibited a sharp endothermic peak at 185 °C, which indicates that free ATRA are in the crystal state. After incorporation into NPs, all characteristic peaks of ATRA disappeared. Thus, the majority of ATRA encapsulated in the NPs was in an amorphous or disordered crystalline phase dispersed in BSA matrix [38]. Stability analysis of HA-eNPs Good pharmaceutical preparations should have good stability for a certain period of time, so that they can be easily used in clinic. Therefore, the physical stability of HA-eNPs/ATRA was assessed in a period of 4 weeks storage at 2–8 °C in the refrigerator. As shown in Fig. 2a, b, no significant changes on sizes and zeta potentials were found within 4 weeks. In the end of evaluation, the diameter was 173.80 nm with a PDI of 0.184 and the zeta potential was 31.2 mV. Moreover, the good stability of HA-eNPs was confirmed by the high stable encapsulated efficiency of ATRA (Fig. 2c). In addition, the stability of HA conjugation was also assessed by continuous detecting density of HA on the surface of HA-eNPs. As shown in Fig. 2d, there was no significant change on density of ligand observed, showing 74.5 μg per milligram HA-eNPs/ATRA on day 28. These results demonstrated that the nanoparticles can be preserved for 4 weeks before it is used. Stabilities evaluation of ATRA loaded HA-eNPs. Continually determinations of a the diameter, b the zeta potential, c the encapsulation efficiency, and d HA density of HA-eNPs within 4 weeks, respectively. Results are shown as the means ± SD (n = 3) Cellular uptake of HA-eNPs The cellular uptake and distributions of NP formulations were determined by fluorescence microscopy and flow cytometry. To facilitate the detection, coumarin-6 was incorporated into NPs as a fluorescent signal. B16F10 cells seeded in 6-well culture plate were incubated with NP formulations at different concentrations and times. As shown in Fig. 3a, b, both two NP formulations were uptaken in a dose- and time-dependent manner. In addition, cellular uptake of HA-eNPs was significantly stronger than that of normal NPs. The result suggested that HA grafting onto nanoparticles facilitated cellular uptake of anticancer drug. Fluorescence analysis of uptake of HA-eNPs/ATRA in B16F10 cells. a Fluorescence images of uptake of coumarin-6-labeled NPs at different ATRA concentrations (4 h). b Fluorescence images of uptake of coumarin-6-labeled NPs at different time points (5 μM ATRA). Fluorescence signals were observed by fluorescence microscopy. Scale bar represents 50 μm To evaluate the efficiency of HA/CD44 mediated specific cellular uptake, the CD44-enriched B16F10 cells or CD44 low-expressing MCF-7 cells were co-incubated with NPs or HA-eNPs for 4 h in serum-free medium [39]. Then, the fluorescence signals of NP formulations were examined by confocal microscopy and flow cytometry. As shown in Fig. 
4, the fluorescence intensity of HA-eNPs cell group was significant higher than that of normal NPs cell group in CD44-enriched B16F10 cells (Fig. 4a). Whereas, no significant increased fluorescence signal was observed in HA-eNPs group comparing with normal NPs group in CD44 low-expressing MCF-7 cells (Fig. 4b, Additional file 1: Fig. S4). Then, the cellular uptakes of ATRA in the two tested cancer cells were quantitatively evaluated via flow cytometry. As shown in Fig. 4c, HA-eNPs group showed significant higher cellular uptake percentage (73.3%) than that of NPs group (38.1%, P < 0.01) in B16F10 cells, respectively. However, only weak increase of cellular uptake percentage in HA-eNPs group was observed (23.7%) compared with that of NPs group (36.0%, P > 0.05) in MCF-7 cells. To further investigate the endocytosis mechanism of the nanoparticles, anti-CD44 antibody blocking experiments were performed. Figure 4d, e showed that the fluorescence signals in the HA-eNPs treated cells were significantly reduced after blocking by anti-CD44 antibody (P < 0.01). This phenomenon can attributed to the saturation of the receptor caused by CD44-antibody reaction prevented the binding of HA-coated NPs to its receptor. Those results revealed that the uptake of HA-eNPs by cancer cells is related to their CD44 level of expression. These results confirm the interest of nanoparticle functionalized with HA in the aim of CD44 over expressing cancer therapy. In vitro evaluation of detecting the targeting of HA-eNPs to CD44. a, b Coumarin-6 loaded HA-eNPs were incubated with CD44-enriched B16F10 cells or CD44-low expressing MCF-7 cells, respectively. Fluorescence signals were observed by confocal microscopy. c Flow cytometry analysis showing the cellular uptake percentage of the B16F10 and MCF-7 cells after incubation with free coumarin-6, coumarin-6 loaded NPs, or HA-eNPs. Data are shown as the means ± SD, **P < 0.01, ns not significant (n = 3). d, e B16F10 cells were preincubated with anti-CD44 antibody and followed by incubating with HA-eNPs for 4 h. Fluorescence signals were observed by confocal microscopy and flow cytometry. Data are shown as the means ± SD, **P < 0.01 (n = 3) HA-eNPs exhibited potent cell growth inhibition and apoptosis induction effects on B16F10 cells To investigate the possibility of utilizing the HA-eNPs for drug delivery, we tested the killing ability of the HA-eNPs on cancer cells. The cytotoxicity of HA-eNPs was evaluated using the MTT assay with the CD44-enriched B16F10 cells. The ATRA, NPs/ATRA, and HA-eNPs/ATRA containing equivalent concentrations of ATRA were used. As shown in Fig. 5a, untreated B16F10 cells were bright and crowded with a fusiformis appearance, whereas treated cells showed morphological changes including decreased density and increased floating cells. Interestingly, growth of the cells treated with HA-eNPs/ATRA was great inhibited and the remaining few cells became rounded, irregular or shrunken and detached from each other. Furthermore, cell viability decreased progressively with increasing ATRA concentrations. We noted that HA-eNPs/ATRA exhibited the most potent cell growth inhibitory effect than free ATRA and NPs/ATRA. When ATRA concentration of HA-eNPs increased to 16 μM, there was only 7% cells viable left, while 57%, 46% for free ATRA and NPs/ATRA treated cell, respectively (Fig. 5b). Incorporation in HA-eNPs greatly reduced IC50 of ATRA from 3.60 to 0.49 μM. 
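The IC50 values just quoted (3.60 μM for free ATRA versus 0.49 μM for HA-eNPs/ATRA) come from the MTT viability data, but the paper does not state how the curves were fitted. The sketch below shows one common approach, a log-logistic dose-response fit; the concentrations and viabilities are invented example data, and the two-parameter model is an assumption rather than the authors' procedure.

```python
# Illustrative sketch only: one common way to estimate an IC50 from MTT viability data.
# The model choice and the example data are assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, ic50, hill):
    """Two-parameter log-logistic curve with viability falling from 100% to 0%."""
    return 100.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability (%) measured at a series of ATRA concentrations (uM)
conc = np.array([0.25, 0.5, 1, 2, 4, 8, 16], dtype=float)
viability = np.array([88, 75, 60, 42, 28, 15, 7], dtype=float)

params, _ = curve_fit(dose_response, conc, viability, p0=[1.0, 1.0])
ic50, hill = params
print(f"Estimated IC50 = {ic50:.2f} uM, Hill slope = {hill:.2f}")
```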
However, HA-eNPs did not exhibit much increasing of cytotoxicity on MCF-7 cells comparing with NPs (see Additional file 1: Fig. S4). These results suggest that the enhanced cytotoxicity was likely resulted from improved uptake of ATRA by HA-CD44 interations, with increased intracellular concentration of ATRA. In vitro antitumor effects of NP formulations on the B16F10 cells. a Morphological changes of B16F10 cells after treatment with various ATRA formulations. Scale bar represents 50 μm. b MTT viability assay for B16F10 cells after treatment with various ATRA formulations. Results are shown as the means ± SD (n = 3). c Apoptosis of B16F10 cells induced by various ATRA formulations (5 μM) (arrows indicate cell apoptosis). Scale bar represents 20 μm. d Flow cytometry analysis of B16F10 cell apoptosis induced by ATRA formulations. Results are shown as the means ± SD (n = 3) To further investigate the influence of ATAR formulations on cell apoptosis, cell nuclei DAPI staining and annexin V-FITC and propidium idodide (PI) staining were performed. As shown in Fig. 5c, the nuclei of untreated cells displayed bright blue fluorescence and homogeneous chromatin. However, a number of cells showed extreme chromatin condensation and nuclear fragmentation in HA-eNPs/ATRA treated group. Similarly, flow cytometry showed that all treated cell groups had increased fraction of apoptotic cells (annexin V+/PI+ and annexin V+/PI−) compared with control cell (Fig. 5d). Notably, HA-eNPs/ATRA induced the greatest increase in annexin V+/PI+ cells (22.44%) and annexin V+/PI− cells (20.08%). These observations indicate that HA-eNPs enhanced the apoptosis inducing effect of ATRA on CD44-enriched B16F10 cells. It is probably due to the HA modification, which facilitated the uptake of ATRA by the CD44-enriched B16F10 cells. HA-eNPs altered the distribution of ATRA in mice In vivo fluorescence imaging analysis was performed to investigate the biodistribution and tumor targeting properties of HA-eNPs/ATRA. DiD was incorporated into NPs as the fluorescence signals. DiD signals were monitored after intravenous administration of DiD-labeled NP formulations into the lung metastasis tumor-bearing mice. Figure 6a showed real-time images of NPs in the tumor-bearing mice, in which the whole bodies of live mice were monitored over the course of 24 h. Considerable fluorescence was detected and gradually decreased in the whole body, resulting from circulation of NPs in the bloodstream. As well know, CD44 plays a critical role in regulating CSCs stemness properties and has been identified as a typical CSCs surface marker for enriching or targeting CSCs in various types of cancer [40]. Meanwhile, CD44 is a broadly distributed transmembrane glycoprotein in our body, the smallest standard isoform (CD44s) is found in most cells, although its variants (CD44v1–v10) primarily overexpress on tumor cells [41, 42]. Consequently, parts of HA-eNPs could bind with CD44 expressing on other tissues besides metastatic B16F10 tumor cells. However, DiD signals clearly visible in the whole mouse body except for tumor-bearing lungs (as shown in Fig. 6a), were mainly resulted from DiD flowing in the circulation system. Interestingly, HA-eNPs exhibited the strongest fluorescent signal in the tumor-bearing lung compared with NPs and eNPs (Fig. 6a and Additional file 1: Fig. S5) mouse groups, which indicating that more HA-eNPs accumulated in the tumor-bearing lungs. 
In HA-eNPs group, DiD signal in the lung reached maximum at 4 h post-injection and remained strong until 24 h post-injection. This high tumor targetability of HA-eNPs might be due to a combination of an EPR effect, as well as receptor-mediated endocytosis of HA-eNPs. In order to confirm the lung tumor targeting effect of HA-eNPs, major organs from euthanized mice were imaged under 8 h post-injection (Fig. 6b). As expected, strong DiD signal in tumor-bearing lungs was observed in HA-eNPs mouse group, whereas DiD in NPs groups was very weak. Furthermore, fluorescence microscopy images of lung tissue section from mice 8 h post-injection with DiD-labeled NPs confirmed that HA-eNPs accumulated in the tumor-bearing lung tissue (Fig. 6c). It was validated that the introduction of HA offered the nanoparticles an excellent tumor targeting efficacy, leading to a higher efficient cancer treatment. Fluorescence imaging of ATRA-loaded NP formulations in mice. a Time-dependent intensity images of fluorescence distribution in mice (arrows indicate accumulation of HA-eNPs in tumor-bearing lungs). b Fluorescence images of major organs at 8 h post-injection of NP formulations in mice. c Fluorescence microscopy images of tumor-bearing lung tissue section. Scale bar represents 50 μm HA-eNPs/ATRA suppressed the tumorigenicity of CD44-enriched B16F10 cells Side population cells (SP cells), with some stem cell properties, have been identified in cancers where they show increased capacity of self-renewal and tumorigenicity in vivo [43, 44]. To evaluate the influence of HA-eNPs/ATRA on the tumorigenicity of CD44-enriched B16F10 cells, SP cells analysis and in vivo xenograft experiments were performed. As shown in Fig. 7a, after 48 h treatment, ATRA or its NP formulations reduced the proportion of SP cells compared with untreated B16F10 cells. HA-eNPs/ATRA most potently decreased the proportion of SP cells (from 2.54 to 0.56%), indicating that HA modification facilitated specific delivery of ATRA into CD44-enriched B16F10 cells. HA-eNPs/ATRA decreased the tumorigenicity of CD44-enriched B16F10 cells. a Side population (SP) changes in B16F10 cells after treatment with various ATRA formulations. b A diagram of the tumorigenicity assessment. In the experiment, four flanks of each mouse were injected with the same treated or untreated B16F10 cells. c Representative photo of tumors 24 days after subcutaneous implantations of ATRA formulations treated or untreated B16F10 cells (arrows indicate tumor nodules). d, e Tumor occurrences and weights measured 24 day after implantation. Results are shown as the means ± SD (n = 16) Cancer stem cells are more proliferative and tumorigenic, injection of few CSCs will give rise to tumor development. Therefore, the influence of HA-eNPs on tumorigenicity was assessed by subcutaneously inoculating ATRA formulations treated and untreated B16F10 cells into the flanks of C57BL/6 mice (Fig. 7b). After 24 days, as shown in Fig. 7c–e, the untreated cells gave rise to more tumors than the treated cells, with an average occurrence of 81.25% and tumor weight of 0.54 g. ATRA and NPs/ATRA treated cells generated less tumors, with average occurrence of 62.5% and 37.5%, average tumor weight of 0.35 g and 0.11 g, respectively. However, no tumor formed in all mice injected with HA-eNPs/ATRA treated B16F10 cells. 
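The tumorigenicity readout above uses the tumor-occurrence formula given in the Methods (number of tumors formed divided by number of injection sites, times 100). The minimal sketch below applies that formula; the per-group tumor counts are back-calculated from the reported percentages (81.25%, 62.5%, 37.5% and 0% of 16 injection sites) and are shown only for illustration.

```python
# Illustrative sketch: applying the tumor-occurrence formula from the Methods.
# Counts are back-calculated from the reported percentages (n = 16 sites per group).

def tumor_occurrence(n_tumors, n_injections):
    """Tumor occurrence % = tumors formed / injection sites * 100."""
    return n_tumors / n_injections * 100

groups = {
    "untreated":    (13, 16),  # 81.25%
    "free ATRA":    (10, 16),  # 62.5%
    "NPs/ATRA":     (6, 16),   # 37.5%
    "HA-eNPs/ATRA": (0, 16),   # no tumors formed
}

for name, (tumors, injections) in groups.items():
    print(f"{name}: {tumor_occurrence(tumors, injections):.2f}%")
```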
Similar trends were also observed in another xenograft experiment, in which four flank sites of each mouse were injected with different treated or untreated B16F10 cells, respectively (Additional file 1: Fig. S6). HA-eNPs/ATRA exhibited the best tumorigenicity suppression effect among all of the formulations, probably due to efficient stem-like cell targeting delivery of ATRA resulted from HA/CD44 mediated endocytosis. Inhibition of tumor growth In this study, in situ lung metastasis tumor-bearing mice were employed to assess the inhibitory effect of HA-eNPs/ATRA on tumor growth. Their tumor morphology, weights, and histology were analyzed. As shown in Fig. 8a, tumor nodules covered nearly the entire surface of the excised lungs in the untreated tumor-bearing mice. The tumor nodules in the lungs of ATRA-, NPs/ATRA-, and HA-eNPs/ATRA-treated groups decreased compared with that of PBS-treated group. The average weights of the tumor-bearing lungs were analyzed and are shown in Fig. 8b. As expected, the B16F10-bearing lung weights of ATRA-containing formulations-treated groups significant decreased compared with the PBS groups. More importantly, the HA-eNPs/ATRA exhibited the highest inhibitory effect compared with the NPs/ATRA group (P < 0.05) and the free drug group (P <0.01). Furthermore, the H&E staining analysis revealed that, tissue specimens from the HA-eNPs/ATRA treated mice shown dramatically lower densities of tumor cells than those of other groups (Fig. 8c). Based on these results, we can conclude that the HA-eNPs/ATRA exhibited the best antitumor effect due to the targeting to stem-like cells, which relies on HA/CD44 interactions. HA-eNPs/ATRA enhanced the inhibition of tumor growth in the in situ lung metastasis tumor-bearing mice. a Images of the B16F10-bearing lungs on day 24 after five consecutive i.v. injections of NP formulations (n = 5). b Antitumor effects of various treatments evaluated according to B16F10-bearing lung weight (n = 5). c Histological staining of the B16F10-bearing lungs after various treatments (arrows indicate tumor nodules, n = 5) Preliminary safety evaluation To evaluate in vivo toxicity of NP formulations, hematoxylin and eosin (H&E) staining of organs from treated mice was performed (Fig. 9). Histopathologic examination showed no abnormalities in those organs from treated mice compared with negative control mice, suggesting that NP formulations are biocompatible. These results indicate that encapsulation of ATRA into NPs reduces the toxicity of drugs, and HA-grafted NPs are excellent carriers for therapeutics, especially for targeting CD44-overexpressing cancers. Safety evaluation of ATRA-loaded NP formulations. Microscopic images of hematoxylin and eosin (H&E) staining of organs (liver, spleen, lung, kidney, heart and brain) from mice treated with ATRA-loaded NP formulations. No abnormality was observed. Scale bar represents 50 μm The study herein developed a kind of cationic albumin nanoparticle system functionalized with hyaluronic acid to target the CD44 overexpressed CSCs. The delivery system had high entrapment efficiency as well as good stability. The in vitro drug release profile revealed that the HA-eNPs displayed a sustained and prolonged release. The fluorescence images demonstrated the efficient and specific cellular uptake of HA-eNPs. In vivo biodistribution illustrated that HA-eNPs can selective accumulate in tumor-bearing lung of mouse. 
The MTT and apoptosis experiments indicated that HA-eNPs/ATRA showed a much higher cytotoxicity than the free drug. Finally, tumor growth was significantly inhibited by HA-eNPs/ATRA in lung metastasis tumor mice. This work demonstrate that the HA functionalized cationic albumin nanoparticles is an efficient system for targeted delivery of antitumor drugs to eliminate the CSCs. Bovine serum albumin (BSA MW 66430 Da) was purchased from Sinopharm Chemical Reagent Co., Ltd (Beijing, China). Hyaluronic acid (HA, ~ 10,000 Da polymers) was obtained from Bloomage Freda Biopharm Co., Ltd. (Shandong, China). 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (EDCI·HCl) was purchased from Aladdin Chemistry Co., Ltd (Shanghai, China). Coumarin-6 and DAPI were bought from Sigma-Aldrich (St. Louis, MO). All-trans-retinoic acid (ATRA) was purchased from Saen Chemical Technology Co., Ltd (Shanghai, China). RPMI-1640 cell culture medium was supplied by Gibco (Life Technologies, Switzerland). 3-(4, 5-methylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide (MTT) was obtained from Amresco (Solon, Ohio, USA). Cell lines and animals Murine melanoma cell line B16F10 was cultured in RPMI-1640 medium with 10% fetal bovine serum, 1% penicillin/streptomycin. MCF-7 breast cancer cells were purchased from the American Type Culture Collection (ATCC, Manassas, VA), which were cultured in Dulbecco's Modified Eagle's Medium (DMEM) with 10% fetal bovine serum, 1% penicillin/streptomycin. Female C57BL/6 mice (6–8 weeks) were obtained from the Laboratory Animal Center of Army Military Medical University (Chongqing, China). All animal experiments were approved by the Institutional Animal Care and Ethics Committee of Army Military Medical University. Synthesis and characterization of ethylenediamine conjugated bovine serum albumin (eBSA) First, 500 mg BSA (dissolved in 5.0 mL distilled water), and 150 mg EDCI·HCl were added to 40 mL 1.4 M ethylenediamine solution (pH = 4.75) and stirred at room temperature for 2 h. Then, 400 μL 4 M acetate buffer (pH = 4.75) was added to terminate the reaction. The product was purified by bag filter (molecular cut off: 7 kDa) to remove free ethylenediamine and EDCI. Water was removed by freeze-drying. The product was analyzed by MALDI-TOF–MS (Shimadzu MALDI-7090). Preparation and physiochemical characteristics of the HA grafted ATRA loaded cationic nanoparticles First, ATRA-loaded BSA nanoparticles (NPs/ATRA) were prepared according to description previously [45]. Briefly, 4.5 mg of ATRA was dissolved in 1 mL ethanol. The solution was added to 10 mL of BSA solution (0.5% w/v in distilled water). The mixture was homogenized by sonication for 15 min in order to form a crude emulsion, and then transferred into a high pressure homogenizer (ATS Engineering Inc., USA). The emulsification was performed at 15,000 psi for 20 cycles. The resulting system was transferred into a rotary evaporator to remove ethanol. Secondly, ATRA-loaded eBSA nanoparticles (eNPs/ATRA) were prepared according to the description above. At last, HA was added into the eNPs/ATRA solution with an HA/eBSA ratio of 1:5 (w/w) and stirred for 20 min at room temperature to prepare the targeting delivery system (HA-eNPs/ATRA). The size distribution and zeta potentials were measured by photon correlation spectroscopy (PCS) (Malvern zetasizer Nano ZS90, UK). 
Transmission electron microscope (TEM) was adopted to study the morphologies of the NP formulations which were disposited onto a copper grid and then stained with phosphotungstic acid (1%) for 10 s before observation using TEM instrument (TEM-1400 plus, JEOL). To determine the entrapment efficiency, nanoparticles were separated from the dispersion by centrifugation at 15,000g for 90 min. The supernatant was analyzed for free ATRA by high-performance liquid chromatography (HPLC) (Kromasil ODS-1 C18 column (150 × 4.6 mm, 5 μm); mobile phase: CH3COONH4 (0.1 M): CH3OH = 12:88 (v/v); flow rate: 1 mL/min; wavelength: 348 nm). Encapsulation efficiency (EE %) and drug loading yield (DL) of NPs were calculated by the following equations. $$ {\text{Encapsulation efficiency}}\% = \frac{{{\text{weight of the feeding drug}} - {\text{weight of free drug}}}}{\text{weight of the feeding drug}} \times 100 $$ $$ {\text{Drug loading yield}}\% = \frac{\text{weight of encapsulated drug in NPs}}{\text{total weight of NPs}} \times 1 0 0 $$ ATRA release from NPs was monitored by dialysis using a bag filter (molecular cut off: 7 kDa, Millipore, USA), against 50 mL PBS with 10% (v/v) ethanol (pH 7.4). At designated time intervals, aliquots were removed from the dialysate and replaced by an equal volume of PBS/ethanol buffer. The amount of ATRA in the dialysate was determined by HPLC (Agilent Technologies, Santa Clara, CA). DSC (DSC 200F3, NETZSCH) analysis was carried out for free BSA and ATRA, freeze dried ATRA-loaded NPs and HA-eNPs. All samples were weighted and sealed in an aluminum cell and then heated from 25 °C to 350 °C at a rate of 10 °C/min under a nitrogen atmosphere. Stability of nanoparticles The physical stability of ATRA loaded HA-eNPs were evaluated after storage at 2–8 °C for up to 4 weeks. At each time point, an aliquot of ATRA loaded HA-eNPs dispersion was collected to measure the mean particle size, polydispersity index (PDI), and surface potential by photon correlation spectroscopy (PCS) (Malvern zetasizer Nano ZS90, UK), EE% by HPLC (Agilent 1290, USA), and HA density by UV–vis spectrophotometer (Persee, China). The measurements were performed in triplicate. Cell uptake assay The cellular uptake and distribution of NP formulations were analyzed by fluorescence microscopy and flow cytometry. The CD44-enriched B16F10 cells were seeded in a 6-well culture plate 1 day before treatment with NPs. Coumarin-6 was incorporated into the NPs to obtain fluorescence-labeled formulations. CD44-low expressing MCF-7 cells were employed as the negative control to evaluate the HA mediated cellular uptake. After incubation with the fluorescence-labeled formulations, including NPs/ATRA and HA-eNPs/ATRA without fetal bovine serum at 37 °C, the cells were washed by PBS and analyzed by fluorescent microscopy (EVOS FL, Life Technologies, USA), confocal microscopy (LSM 800, Zeiss, Germany), and a flow cytometer (Novocyte, ACEA, USA). For the blocking experiments, B16F10 cells were first incubated with anti-CD44 antibody (20 μg/mL, ab112178, abcam) for 1 h. After removing the antibody, the cells were incubated with coumarin-6 labeled HA-eNPs for 4 h. Cells were then treated as described above. Assessment of cell growth inhibition and apoptosis induction The proliferation of drug-treated CD44-enriched B16F10 cells was determined by MTT assays. Briefly, the cells were seeded in 96-well culture plates and allowed to attach. 
On the next day, the cells were exposed to ATRA or NP formulations for 48 h at a series of ATRA concentrations and followed by addition of 20 μL MTT (5 mg/mL) to each well. The cells were continually incubated in total darkness for 4 h. Then, the supernatant was replaced by 150 μL DMSO in order to ensure the formazan crystals dissolved. Cell viabilities were determined by measuring the absorbance at 570 nm using a Wellscan MK3 microplate reader (Thermo, USA). Cell apoptosis was assessed by 4′6-diamidino-2-phenylindole (DAPI) staining and Annexin-V/FITC and PI staining methods. Briefly, CD44-enriched B16F10 cells seeded in two 6-well plates were exposed to ATRA or different NP formulations (5 μM of ATRA) for 48 h. After the treatment, one plate cells were washed with PBS, fixed by 4% poly formaldehyde solution and then incubated with DAPI (1 μg/mL) for 10 min to stain the nuclei. The nuclear morphology of cells was analyzed using a Life Technologies fluorescence microscope (Ex 358 nm, Em 461 nm). Cells were judged to be apoptotic based on nuclear morphology changes, including chromatin condensation, fragmentation and apoptotic body formation. The other plate cells were harvested and cell surface of phosphatidylserine in apoptotic cells was quantitatively estimated by using Annexin-V/FITC and PI apoptosis detection kit (KeyGEN BioTECH, China). The analysis of apoptotic cells was performed on a Beckman Coulter Navios flow cytometry. Evaluation of biodistribution property An in situ lung metastasis model was constructed to evaluate the biodistribution property. In brief, mice were injected with 3 × 105 B16F10 cells via the tail vein 3 weeks in advance. 1,1-Dioctadecyl-3,3,3,3-tetramethylindodicarbocyanine (DiD, KeyGEN BioTECH, China) was used to label the NP formulations. The mice were administered with NP formulations at an equivalent of 200 μg DiD/kg body weight for in vivo imaging. Then, the mice were anesthetized by injection with 3% isoflurane in oxygen, and the fluorescence signal of the mice were recorded (Ex = 640 nm; Em = 680 nm) using an IVIS® Spectrum system (Caliper, Hopkington, MA). In addition, one other groups of mice were sacrificed at 8 h, and organs of interest were subjected to similar fluorescence tissue distribution measurements. The removed lungs were frozen in tissue freezing medium (Leica, Germany) and cut using a CM 1950 microtome (Leica, Germany) into sections, followed by 4′6-diamidino-2-phenylindole (DAPI) staining for fluorescence microscopy analysis. Side population analysis The protocol was based on Naito et al. with slight modifications [46]. Briefly, CD44-enriched B16F10 cells seeded in 6-well plates were exposed to ATRA or NP formulations (5 μM of ATRA) for 48 h. The positive control cells were incubated with verapamil (100 μg/mL) for 20 min. After that, the cells were washed with PBS twice and harvested. Then, the cells were dispersed in fresh medium containing 2% FBS and 10 mM HEPES, and followed by incubating with Hoechst 33,342 (5 μg/mL final concentration) for 90 min at 37 °C with intermittent mixing. At the end of the incubation, cells were washed and suspended in cold PBS with 2% FBS and 10 mM HEPES. Five microlitre PI (50 μg/mL final concentration) was added 30 min ahead of flow cytometry analysis (BD biosciences, USA). In vivo tumorigenicity experiments Basic procedures were previously described [47]. Briefly, CD44-enriched B16F10 cells seeded in 6-well plates were exposed to ATRA or different NP formulations (5 μM of ATRA) for 48 h. 
After the treatment, cells were harvested, resuspended in sterile PBS, and 1 × 10³ cells were injected subcutaneously into the flanks of mice. Tumor development was monitored after implantation. When the tumor burden became obvious, the mice were sacrificed and the tumors were harvested for analysis. The B16F10 tumor occurrence was calculated by the following equation.

$$ \text{Tumor occurrence}\ \% = \frac{\text{number of tumors}}{\text{number of injections}} \times 100 $$

In vivo antitumor experiments

A lung metastasis model was utilized for the in vivo antitumor evaluation. Briefly, 5 × 10⁵ B16F10 cells were injected into C57BL/6 mice through the tail vein on day 0. The mice were randomly divided into 4 groups (PBS, ATRA, NPs/ATRA, and HA-eNPs/ATRA). The ATRA-containing formulations were subsequently administered every 2 days at a dose of 5 mg/kg ATRA, starting on day 14 and continuing until day 22. Body weight was monitored. At the desired time, the mice were sacrificed and the tumor-bearing lungs were removed, weighed and fixed in 10% neutral-buffered formalin for H&E staining.

Safety evaluation

Mice were administered ATRA-loaded NP formulations intravenously (5 mg/kg in ATRA) once every 2 days for a total of three doses. On day 7, the mice were sacrificed under chloral hydrate anesthesia. The visceral organs, including heart, liver, spleen, lung, kidney and brain, were removed, fixed in formalin and embedded in paraffin. Serial 4-μm sections were prepared and stained with H&E for microscopic assessment. Unless otherwise noted, all quantitative data are presented as the mean ± SD from at least three independent experiments. Differences between two groups were analyzed by Student's t test. For comparisons among more than two groups, one-way ANOVA followed by post hoc t-tests was performed. Statistical analysis was performed with SPSS/Win 13.0 software (SPSS, Inc., Chicago, Illinois). Statistical significance was set at P < 0.05.

Yuanyuan Li and Sanjun Shi contributed equally to this work.

Abbreviations: CSCs: cancer stem cells; TEM: transmission electron microscopy; ATRA: all-trans retinoic acid; EE: encapsulation efficiency; NPs: nanoparticles; HA-eNPs: nanoparticle system based on cationic albumin functionalized with HA.

YYL and SJS conceived and carried out the experiments, analyzed the data and wrote the paper. SJS and JHC designed the study and supervised the project. MHL, LLW and CWL assisted in the synthesis and characterization of the NPs. BL, YM and ZWL assisted in the in vivo study. All authors read and approved the final manuscript. All animal experiments were approved by the Institutional Animal Care and Ethics Committee of Army Military Medical University. This work was supported by the National Natural Science Foundation of China (81402876); the Key Project of Basic Science and Frontier Technology of Chongqing (CSTC2015jcyjBX0018); the Transformation Project of Scientific and Technological Achievements of Third Military Medical University (2015XZH19); and the Major New Drug Creation Project of China's Ministry of Science and Technology (2018ZX09J18109-05).

Additional file 1 (12951_2018_424_MOESM1_ESM.docx): Fig. S1. MALDI-TOF–MS spectrum of eBSA. Fig. S2. Working curve for quantification of HA content in HA-eNPs using the Alcian blue assay. Fig. S3. Difference in expression of CD44 between B16F10 and MCF-7 cells. Fig. S4. In vitro antitumor effects of NP formulations on MCF-7 cells. Fig. S5. Time-dependent intensity of fluorescence distribution of eNPs in mouse. Fig. S6.
HA-eNPs/ATRA reduced the tumorigenicity of CD44-enriched B16F10 cells.

Department of Pharmacy, Third Affiliated Hospital & Research Institute of Surgery of Army Medical University, 10# Changjiangzhilu, Chongqing, 400042, People's Republic of China

References

1. Lortet-Tieulent J, Soerjomataram I, Ferlay J, Rutherford M, Weiderpass E, Bray F. International trends in lung cancer incidence by histological subtype: adenocarcinoma stabilizing in men but still increasing in women. Lung Cancer. 2014;84:13–22.
2. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2018. CA Cancer J Clin. 2018;68:7–30.
3. Jordan CT, Guzman ML, Noble M. Cancer stem cells. N Engl J Med. 2006;355:1253–61.
4. Nandy SB, Lakshmanaswamy R. Cancer stem cells and metastasis. Prog Mol Biol Transl Sci. 2017;151:137–76.
5. Reya T, Morrison SJ, Clarke MF, Weissman IL. Stem cells, cancer, and cancer stem cells. Nature. 2001;414:105.
6. He YC, Zhou FL, Shen Y, Liao DF, Cao D. Apoptotic death of cancer stem cells for cancer therapy. Int J Mol Sci. 2014;15:8335.
7. Hermann PC, Huber SL, Herrler T, Aicher A, Ellwart JW, Guba M, Bruns CJ, Heeschen C. Distinct populations of cancer stem cells determine tumor growth and metastatic activity in human pancreatic cancer. Cell Stem Cell. 2007;1:313–23.
8. Zhao J. Cancer stem cells and chemoresistance: the smartest survives the raid. Pharmacol Ther. 2016;160:145–58.
9. Shi S, Han L, Gong T, Zhang Z, Sun X. Systemic delivery of microRNA-34a for cancer stem cell therapy. Angew Chem. 2013;52:3901.
10. Liu C, Liu R, Zhang D, Deng Q, Liu B, Chao HP, Rycaj K, Takata Y, Lin K, Lu Y. MicroRNA-141 suppresses prostate cancer stem cells and metastasis by targeting a cohort of pro-metastasis genes. Nat Commun. 2017;8:14270.
11. Zhou P, Bo L, Liu F, Zhang M, Qian W, Liu Y, Yuan Y, Dong L. The epithelial to mesenchymal transition (EMT) and cancer stem cells: implication for treatment resistance in pancreatic cancer. Mol Cancer. 2017;16:52.
12. Gu FX, Karnik R, Wang AZ, Alexis F, Levy-Nissenbaum E, Hong S, Langer RS, Farokhzad OC. Targeted nanoparticles for cancer therapy. Nano Today. 2007;2:14–21.
13. Parveen S, Sahoo SK. Polymeric nanoparticles for cancer therapy. J Drug Target. 2008;16:108–23.
14. Blanco MD, Teijón C, Olmo RM, Teijón JM. Targeted nanoparticles for cancer therapy. In: Sezer AD, editor. Recent advances in novel drug carrier systems. London: IntechOpen; 2012. https://doi.org/10.5772/51382.
15. Yang Q, Qi R, Cai J, Kang X, Sun S, Xiao H, Jing X, Li W, Wang Z. Biodegradable polymer-platinum drug conjugates to overcome platinum drug resistance. RSC Adv. 2015;5:83343–9.
16. Chen JX, Shi Y, Zhang YR, Teng LP, Chen JH. One-pot construction of boronate ester based pH-responsive micelle for combined cancer therapy. Colloids Surf. 2016;143:285–92.
17. Rong D, Xiao H, Guo G, Jiang B, Xing Y, Li W, Yang X, Yu Z, Li Y, Jing X. Nanoparticle delivery of photosensitive Pt(IV) drugs for circumventing cisplatin cellular pathway and on-demand drug release. Colloids Surf B Biointerf. 2014;123:734–41.
18. Chen JX, Yuan J, Wu YL, Wang P, Zhao P, Lv GZ, Chen JH. Fabrication of tough poly(ethylene glycol)/collagen double network hydrogels for tissue engineering. J Biomed Mater Res. 2017;106:192.
19. Zhou M, Li X, Li Y, Yao Q, Ming Y, Li Z, Lu L, Shi S. Ascorbyl palmitate-incorporated paclitaxel-loaded composite nanoparticles for synergistic anti-tumoral therapy. Drug Deliv. 2017;24:1230–42.
20. Cao L, Zeng Q, Xu C, Shi S, Zhang Z, Sun X. Enhanced antitumor response mediated by the codelivery of paclitaxel and adenoviral vector expressing IL-12. Mol Pharm. 2013;10:1804–14.
21. Acharya S, Sahoo SK. PLGA nanoparticles containing various anticancer agents and tumour delivery by EPR effect. Adv Drug Deliv Rev. 2011;63:170–83.
22. Maeda H. The enhanced permeability and retention (EPR) effect in tumor vasculature: the key role of tumor-selective macromolecular drug targeting. Adv Enzyme Regul. 2001;41:189–207.
23. Wang L, Huang J, Chen H, Wu H, Xu Y, Li Y, Yi H, Wang YA, Yang L, Mao H. Exerting enhanced permeability and retention effect driven delivery by ultrafine iron oxide nanoparticles with T1–T2 switchable magnetic resonance imaging contrast. ACS Nano. 2017;11:4582–92.
24. Lu W, Zhang Y, Tan YZ, Hu KL, Jiang XG, Fu SK. Cationic albumin-conjugated pegylated nanoparticles as novel drug carrier for brain delivery. J Control Release. 2005;107:428.
25. Han J, Wang Q, Zhang Z, Gong T, Sun X. Cationic bovine serum albumin based self-assembled nanoparticles as siRNA delivery vector for treating lung metastatic cancer. Small. 2013;10:524.
26. Fischer D, Bieber T, Brüsselbach S, Elsässer H, Kissel T. Cationized human serum albumin as a non-viral vector system for gene delivery? Characterization of complex formation with plasmid DNA and transfection efficiency. Int J Pharm. 2001;225:97–111.
27. Elzoghby AO, Samy WM, Elgindy NA. Albumin-based nanoparticles as potential controlled release drug delivery systems. J Control Release. 2012;157:168–82.
28. Kratz F, Elsadek B. Clinical impact of serum proteins on drug delivery. J Control Release. 2012;161:429–45.
29. Shi S, Zhou M, Li X, Hu M, Li C, Li M, Sheng F, Li Z, Wu G, Luo M. Synergistic active targeting of dually integrin αvβ3/CD44-targeted nanoparticles to B16F10 tumors located at different sites of mouse bodies. J Control Release. 2016;235:1.
30. Qi X, Fan Y, He H, Wu Z. Hyaluronic acid-grafted polyamidoamine dendrimers enable long circulation and active tumor targeting simultaneously. Carbohyd Polym. 2015;126:231–9.
31. Miller WH Jr. The emerging role of retinoids and retinoic acid metabolism blocking agents in the treatment of cancer. Cancer. 1998;83:1471–82.
32. Huang ME, Ye YC, Chen SR, Chai JR, Lu JX, Zhao L, Gu LJ, Wang ZY. Use of all-trans retinoic acid in the treatment of acute promyelocytic leukemia. Berlin: Springer; 1989.
33. Ledda F, Bravo AI, Adris S, Bover L, Mordoh J, Podhajcer OL. The expression of the secreted protein acidic and rich in cysteine (SPARC) is associated with the neoplastic progression of human melanoma. J Investig Dermatol. 1997;108:210–4.
34. Nefedova Y, Fishman M, Sherman S, Wang X, Beg AA, Gabrilovich DI. Mechanism of all-trans retinoic acid effect on tumor-associated myeloid-derived suppressor cells. Cancer Res. 2007;67:11021.
35. Ozpolat B, Mehta K, Lopez-Berestein G. Regulation of a highly specific retinoic acid-4-hydroxylase (CYP26A1) enzyme and all-trans-retinoic acid metabolism in human intestinal, liver, endothelial, and acute promyelocytic leukemia cells. Leuk Lymph. 2005;46:1497.
36. Thöle M, Nobmann S, Huwyler J, Bartmann A, Fricker G. Uptake of cationized albumin coupled liposomes by cultured porcine brain microvessel endothelial cells and intact brain capillaries. J Drug Target. 2002;10:337.
37. Lu W, Sun Q, Wan J, She Z, Jiang XG. Cationic albumin-conjugated pegylated nanoparticles allow gene delivery into brain tumors via intravenous administration. Cancer Res. 2006;66:11878.
38. Raval N, Mistry T, Acharya N, Acharya S. Development of glutathione-conjugated asiatic acid-loaded bovine serum albumin nanoparticles for brain-targeted drug delivery. J Pharm Pharmacol. 2015;67:1503.
39. Yin H, Zhao F, Zhang D, Li J. Hyaluronic acid conjugated β-cyclodextrin-oligoethylenimine star polymer for CD44-targeted gene delivery. Int J Pharm. 2015;483:169–79.
40. Chanmee T, Ontong P, Kimata K, Itano N. Key roles of hyaluronan and its CD44 receptor in the stemness and survival of cancer stem cells. Front Oncol. 2015;5:180.
41. Misra S, Hascall VC, Markwald RR, Ghatak S. Interactions between hyaluronan and its receptors (CD44, RHAMM) regulate the activities of inflammation and cancer. Front Immunol. 2015;6:201.
42. Yan Y, Zuo X, Wei D. Concise review: emerging role of CD44 in cancer stem cells: a promising biomarker and therapeutic target. Stem Cells Transl Med. 2015;4:1033–43.
43. Patrawala L, Calhoun T, Schneider-Broussard R, Zhou J, Claypool K, Tang DG. Side population is enriched in tumorigenic, stem-like cancer cells, whereas ABCG2+ and ABCG2− cancer cells are similarly tumorigenic. Cancer Res. 2005;65:6207.
44. Golebiewska A, Brons NHC, Bjerkvig R, Niclou SP. Critical appraisal of the side population assay in stem cell and cancer stem cell research. Cell Stem Cell. 2011;8:136–47.
45. Desai NP, Tao C, Yang A, Louie L, Yao Z, Soon-Shiong P, Magdassi S. Protein stabilized pharmacologically active agents, methods for the preparation thereof and methods for the use thereof. US; 2004.
46. Naito H, Wakabayashi T, Kidoya H, Muramatsu F, Takara K, Eino D, Yamane K, Iba T, Takakura N. Endothelial side population cells contribute to tumor angiogenesis and antiangiogenic drug resistance. Cancer Res. 2016;76:3200.
47. Patrawala L, Calhoun T, Schneider-Broussard R, Li H, Bhatia B, Tang S, Reilly JG, Chandra D, Zhou J, Claypool K, Coghlan L, Marker PC. Highly purified CD44+ prostate cancer cells from xenograft human tumors are enriched in tumorigenic and metastatic progenitor cells. Urol Oncol Sem Orig Invest. 2006;25:277–8.
Families of Baker domains II

Authors: P. J. Rippon and G. M. Stallard
Journal: Conform. Geom. Dyn. 3 (1999), 67-78
MSC (1991): Primary 30D05; Secondary 58F08
DOI: https://doi.org/10.1090/S1088-4173-99-00045-4
Published electronically: June 14, 1999

Abstract: Let $f$ be a transcendental meromorphic function and $U$ be an invariant Baker domain of $f$. We use estimates for the hyperbolic metric to show that there is a relationship between the size of $U$ and the proximity of $f$ in $U$ to the identity function, and illustrate this by discussing how the dynamics of transcendental entire functions of the following form vary with the parameter $a$:
\begin{equation*} f(z) = az + bz^k e^{-z}(1+o(1)) \; \text{ as } \Re(z) \rightarrow \infty , \end{equation*}
where $k \in \mathbf{N}$, $a \geq 1$ and $b > 0$.

P. J. Rippon, Department of Pure Mathematics, The Open University, Walton Hall, Milton Keynes, MK7 6AA, England. Email: [email protected]
G. M. Stallard. Email: [email protected]
Received by editor(s): January 5, 1999
Received by editor(s) in revised form: April 19, 1999
Article copyright: © Copyright 1999 American Mathematical Society
# Gatsby Interactive Plots

Aug 30, 2019 · 4 min read · JS, Tutorial, Web Dev, Science

The possibilities and authoring experience when creating content for the web have advanced by leaps and bounds in recent years. MDX — the latest in a long line of game-changing innovations — allows writers to sprinkle snippets of JSX directly into their markdown content (hence MDX). Think WordPress shortcodes but fancier. Not only does this open up seemingly endless possibilities in terms of more interactive and engaging content, it also facilitates the clear separation of code and content (i.e. no more need to write text directly into .js files).

There are plenty of excellent guides already on how to use MDX with a few different site generators like Gatsby and Next.js. What I couldn't find was a guide on how to create interactive research-grade 2d and 3d plots displayed alongside regular markdown content. This is my attempt at providing one. Assuming you're starting out from a Gatsby site that's already up and running, what follows are step-by-step instructions on how to combine MDX, Gatsby and Plotly — the latter being the tough guy in this three-way marriage.

## Setting up MDX

The first step is to equip your existing site with MDX-support. Gatsby provides an official and excellent guide on how to do that. The process has gotten much simpler in recent months and by now is pretty much reduced to adding mdx and gatsby-plugin-mdx to your dependencies,

```sh
yarn add gatsby-plugin-mdx @mdx-js/mdx @mdx-js/react
```

followed by including it in your gatsby-config.js

```js
// gatsby-config.js
plugins: [
  // ....
  `gatsby-plugin-mdx`,
],
```

That's the most basic setup. Just like that, you can start adding .mdx files to your src/pages folder and they'll be automatically converted to HTML pages including any React components you import. Once you've started getting the hang of MDX's capabilities you'll probably want to look into how to create MDX pages programmatically in gatsby-node.js. There's an entire section on that in the Gatsby docs.

## Setting up Plotly

Next we'll need to install a plotting library. I went with Plotly because

- it's been around for a while, it's consistently added new features over that period and hence has accumulated extensive functionality in both 2d and 3d data visualization.
- it provides official React support.
- it has multiple language bindings including Python and R.
- I looked at more than a dozen other Javascript plotting libraries and nothing else seemed to provide the functionality needed for scientific plots.
- they've really fleshed out their docs over the last year or two.

Plotly offers loads of customization options, perhaps at the expense of being a little more verbose than other options. If you're more interested in financial data visualization where (I think) people use more standardized plots, you might want to take a look at Highcharts instead which also offers official React support. Recharts also looked quite nice. However, both seem to be 2d only and only the former seems to be actively maintained.

To start using Plotly, we need to install it and its React components along with react-loadable

```sh
yarn add plotly.js react-plotly.js react-loadable
```

The reason we need react-loadable is that react-plotly.js as yet doesn't support server-side rendering (SSR). They rely on several browser APIs such as the global document variable, bounding boxes and WebGL contexts which don't exist in node, making it hard for them to support SSR.
I read elsewhere that working around this issue is as simple as customizing Gatsby's Webpack config to use a null loader for Plotly by adding the following export to gatsby-node.js

```js
// gatsby-node.js
exports.onCreateWebpackConfig = ({ stage, loaders, actions }) => {
  if (stage === `build-html`) {
    actions.setWebpackConfig({
      module: {
        rules: [
          {
            test: /plotly/,
            use: loaders.null(),
          },
        ],
      },
    })
  }
}
```

That did not work for me, though. I was still getting Minified React error #130 during builds. Hence I scrapped the Webpack config and used react-loadable to turn the <Plot /> component (the default export of react-plotly.js) into a lazily-loading version that's only executed on the client where document and all the other browser APIs are available. Here's how LazyPlot is implemented:

```js
// src/components/Plotly.js
import React from 'react'
import Loadable from 'react-loadable'
import { FoldingSpinner } from './Spinners'
import { withTheme } from 'styled-components'

const Plotly = Loadable({
  loader: () => import(`react-plotly.js`),
  loading: ({ timedOut }) =>
    timedOut ? (
      <blockquote>Error: Loading Plotly timed out.</blockquote>
    ) : (
      <FoldingSpinner />
    ),
  timeout: 10000,
})

export const LazyPlot = withTheme(({ theme, ...rest }) => (
  <Plotly
    layout={{
      margin: { t: 0, r: 0, l: 35 },
      paper_bgcolor: `rgba(0, 0, 0, 0)`,
      plot_bgcolor: `rgba(0, 0, 0, 0)`,
      font: {
        color: theme.textColor,
        size: 16,
      },
      // The next 3 directives make the plot responsive.
      autosize: true,
    }}
    style={{ width: `100%` }}
    useResizeHandler
    config={{
      displayModeBar: false,
      showTips: false,
    }}
    {...rest}
  />
))
```

Note how surprisingly simple it was to add dark mode support. All it took was a transparent background and pulling in the theme to make the text color adaptive (color: theme.textColor). I also threw in a cool <FoldingSpinner /> courtesy of Tobias Ahlin refactored with styled-components to support dark mode as well. It is displayed while the plot is still loading to let the user know more content is about to appear in the spinner's place. Here's what it looks like in action. (You can click it to pause the animation if it disturbs your reading.) And here's how it's implemented.

```js
// src/components/Spinners/index.js
import React, { useState } from 'react'
import { FoldingDiv } from './styles'

export const FoldingSpinner = props => {
  const [active, setActive] = useState(true)
  return (
    <FoldingDiv {...props} active={active} onClick={() => setActive(!active)}>
      {Array(4)
        .fill()
        .map((e, i) => (
          <div key={i} />
        ))}
    </FoldingDiv>
  )
}
```

```js
// src/components/Spinners/styles.js
import styled from 'styled-components'

export const FoldingDiv = styled.div`
  margin: 2em auto;
  height: 2em;
  transform: rotateZ(45deg);
  div {
    /* child-cube layout rules assumed; they were lost in the source text */
    float: left;
    width: 50%;
    height: 50%;
    position: relative;
    transform: scale(1.1);
  }
  div:before {
    /* pseudo-element boilerplate assumed; it was lost in the source text */
    content: '';
    position: absolute;
    width: 100%;
    height: 100%;
    background: ${({ color, theme }) => theme[color] || theme.textColor};
    ${props => props.active && `animation: foldCube 2.4s infinite linear both`};
    transform-origin: 100% 100%;
  }
  ${[2, 4, 3]
    .map(
      (el, idx) =>
        `div:nth-child(${el}) {
          transform: scale(1.1) rotateZ(${90 * (idx + 1)}deg);
        }
        div:nth-child(${el}):before {
          animation-delay: ${0.3 * (idx + 1)}s;
        }`
    )
    .join(`\n`)}
  @keyframes foldCube {
    0%,
    10% {
      transform: perspective(140px) rotateX(-180deg);
    }
    25%,
    75% {
      transform: perspective(140px) rotateX(0deg);
    }
    100% {
      transform: perspective(140px) rotateY(180deg);
    }
  }
`
```

Finally, to make the LazyPlot globally available in all your MDX files, wrap your app's root component in MDXProvider and pass it an object of global components.
src/components/App.js import { MDXProvider } from '@mdx-js/react' import { LazyPlot } from './Plotly' const components = { LazyPlot, export default function App(props) { <MDXProvider components={components}> <main {...props} /> </MDXProvider> As mentioned in the MDX docs, it's important not to define this component inline with your JSX. This would make it referentially unstable and cause it to trigger a rerender of your entire page during every render cycle (even if most of the DOM remains unchanged). This results in bad performance and can cause unwanted side effects like breaking in-page browser navigation (e.g. clicking on headings in a table of contents). Avoid this by declaring your mapping as a variable. Note that every component passed to MDXProvider in this way will be directly added to your page bundle which can quickly add up and increase load times. So while it makes your life as an author easier because you don't have to clutter your MDX files with import statements, you don't want to do this with too many components. (If I understand correctly, I believe gatsby-plugin-mdx is actually doing that with all components at the moment, no matter if they're imported in a file or globally provided. But that's expected to be remedied soon.) That's it for the setup. We're now ready to produce some actual plots. Framework Popularity Let's start simple with a 2d plot. Frontend framework popularity over time measured by Google search frequency. Source: Google Trends All the MDX this requires is import fpProps from './frameworkPopularity' ### Frontend Framework Popularity <LazyPlot {...fpProps} /> frameworkPopularity.js const colors = [`red`, `green`, `blue`, `orange`] // prettier-ignore const months = [`2012/01`, `2012/02`, `2012/03`, `2012/04`, `2012/05`, `2012/06`, `2012/07`, `2012/08`, `2012/09`, `2012/10`, `2012/11`, `2012/12`, `2013/01`, `2013/02`, `2013/03`, `2013/04`, `2013/05`, `2013/06`, `2013/07`, `2013/08`, `2013/09`, `2013/10`, `2013/11`, `2013/12`, `2014/01`, `2014/02`, `2014/03`, `2014/04`, `2014/05`, `2014/06`, `2014/07`, `2014/08`, `2014/09`, `2014/10`, `2014/11`, `2014/12`, `2015/01`, `2015/02`, `2015/03`, `2015/04`, `2015/05`, `2015/06`, `2015/07`, `2015/08`, `2015/09`, `2015/10`, `2015/11`, `2015/12`, `2016/01`, `2016/02`, `2016/03`, `2016/04`, `2016/05`, `2016/06`, `2016/07`, `2016/08`, `2016/09`, `2016/10`, `2016/11`, `2016/12`, `2017/01`, `2017/02`, `2017/03`, `2017/04`, `2017/05`, `2017/06`, `2017/07`, `2017/08`, `2017/09`, `2017/10`, `2017/11`, `2017/12`, `2018/01`, `2018/02`, `2018/03`, `2018/04`, `2018/05`, `2018/06`, `2018/07`, `2018/08`] const data = { React: [2, 2, 3, 3, 3, 2, 2, 2, 3, 3, 3, 3, 2, 3, 3, 3, 2, 2, 3, 3, 3, 4, 4, 3, 4, 4, 5, 5, 5, 5, 6, 7, 7, 10, 10, 10, 13, 18, 18, 19, 20, 21, 24, 25, 27, 28, 29, 29, 32, 39, 39, 41, 42, 43, 41, 43, 41, 47, 49, 50, 55, 65, 68, 68, 71, 79, 76, 83, 73, 80, 74, 66, 74, 82, 88, 89, 94, 95, 98, 100], AngularJS: [1, 1, 1, 2, 3, 3, 4, 4, 4, 6, 8, 9, 11, 13, 17, 17, 20, 22, 25, 24, 23, 30, 33, 35, 39, 41, 45, 49, 53, 53, 58, 51, 48, 52, 58, 61, 60, 61, 69, 74, 67, 67, 65, 58, 57, 53, 61, 62, 59, 59, 64, 56, 59, 51, 53, 51, 51, 54, 57, 62, 55, 55, 55, 51, 52, 47, 46, 41, 34, 37, 41, 38, 37, 38, 38, 35, 35], Angular: [2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 5, 5, 6, 6, 8, 7, 9, 9, 12, 12, 11, 13, 15, 15, 16, 17, 20, 20, 21, 20, 24, 22, 20, 24, 27, 26, 29, 30, 33, 35, 33, 34, 31, 30, 29, 29, 32, 36, 33, 37, 40, 35, 36, 38, 36, 38, 36, 42, 48, 52, 49, 49, 55, 48, 55, 49, 52, 51, 44, 48, 53, 53, 54, 56, 54, 60, 56], Vue: [1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 1, 2, 3, 3, 3, 3, 3, 4, 5, 5, 6, 6, 8, 8, 8, 9, 10, 10, 12, 12, 12, 12, 12, 14, 14, 16, 16, 16, 18, 18, 19] data: Object.keys(data).map((key, index) => ({ x: months, y: data[key], type: `scatter`, mode: `lines+markers`, name: key, marker: { color: colors[index] }, })), Next, let's aim a little higher and visualize the saddle point of x2−y2x^2 - y^2x2−y2 at (0,0)(0,0)(0,0). The surface x2−y2x^2 - y^2x2−y2 plotted over the domain [-10,10]^2. In your MDX, you again have import saddleProps from './saddle' ### Saddle <LazyPlot {...saddleProps} /> where saddleProps is exported as: saddle.js const [points, middle] = [21, 10] const range = Array.from(Array(points), (e, i) => i - middle) const z = range.map(x => range.map(y => x * x - y * y)) data: [ z, x: range, y: range, type: `surface`, contours: { z: { show: true, usecolormap: true, highlightcolor: `white`, project: { z: true }, style: { height: `30em` }, Trimodal Distribution Here's another surface, this one a tri-modal distribution consisting of three unnormalized Gaussians. \exp[-\frac{1}{20}(x^2 + y^2)] + \exp\{-\frac{1}{10}[(x-10)^2 + y^2]\} + \exp\{-\frac{1}{10}[(x-7)^2 + (y-7)^2]\}. This is generated by import triModalProps from './triModal' ### Trimodal Distribution <LazyPlot {...triModalProps} /> triModal.js const range = Array.from(Array(points), (e, i) => 0.5 * (i - middle)) const z = range.map(x => range.map( y => Math.exp(-0.05 * (x ** 2 + y ** 2)) + 0.7 * Math.exp(-0.1 * ((x - 10) ** 2 + y ** 2)) + 0.5 * Math.exp(-0.1 * ((x + 7) ** 2 + (y - 7) ** 2)) showscale: false, I'm still getting the hang of Plotly's APIs but the possibilities already seem endless. What a great time to be developing for the web! If you have questions or cool ideas for more plots to add to the demo, let me know in the comments! useDarkMode Training BNNs with HMC © 2020 - Janosh Riebesell RSSThis site is open source
Adding ceramic polishing waste as filler to reduce paste volume and improve carbonation and water resistances of mortar Leo Gu Li1,2, Yi Ouyang3, Zhen-Yao Zhuo4 & Albert Kwok Hung Kwan2 Advances in Bridge Engineering volume 2, Article number: 3 (2021) Cite this article The use of ceramic waste in concrete/mortar production as aggregate replacement or cement replacement has been under consideration in the last decade to find an effective way to tackle the growing hazard of ceramic waste disposal. In this study, the authors reutilize ceramic polishing waste (CPW) as a filler to replace an equal volume of cement paste in mortar while keeping the mixture proportions of the cement paste unchanged, i.e., in a new way as paste replacement. This mixture design strategy allows a larger amount of CPW to be added to substantially reduce the paste volume, cement and carbon footprint. The mortar mixes so produced had been subjected to carbonation and water absorption tests, and the results showed that as paste replacement, the CPW can significantly enhance the carbonation and water resistances, in addition to the environmental benefits of reducing waste, cement and carbon footprint. Regression analysis of test results indicated that for carbonation resistance, the cementing efficiency factor of the CPW was around 0.5, whereas for water resistance, the cementing efficiency factor was higher than 1.0 at low CPW content and lower than 1.0 at high CPW content. In 2019, waste recycling has once again become the focus of many heated discussions among environmentalists and politicians, not because the technology in this field has achieved major advancement, but because scandals have revealed that "recycled waste" in some developed countries has, in reality, ended up in poorly managed landfills in developing countries (Wang et al. 2017; Harrabin and Edgington 2019; Wang et al. 2020; Li et al. 2021a). Therefore, people are still facing grim prospect for genuine "sustainable development" if proper actions are not taken promptly. An ideal waste recycling process should involve effective reutilization of the waste as raw materials in manufacturing or construction. In this study, a new way of reutilizing ceramic waste in concrete/mortar production, which allows a greater consumption of the waste per volume of production, is explored. There are two main sources of ceramic waste (de Brito et al. 2005; Pacheco-Torgal and Jalali 2010): (a) from the production of red-paste-related products like bricks and roof tiles; and (b) from the production of ceramic products like wall tiles, floor tiles and sanitary wares. In previous studies, the ceramic waste has been used either as aggregate replacement (as shown in Fig. 1a) or cement replacement (as shown in Fig. 1b). When used as aggregate replacement, it was added to replace fine aggregate (Binici 2007; Lopez et al. 2007; Guerra et al. 2009; Torkittikul and Chaipanich 2010; Siddique et al. 2018) or coarse aggregate (Senthamarai and Manoharan 2005; Suzuki et al. 2009; Medina et al. 2012; Feng et al. 2013) or both (Halicka et al. 2013; Awoyera et al. 2018). Such usage may offer certain benefits to the performance of concrete. For instance, the addition of porous ceramic waste as aggregate can provide internal water curing for high performance concrete (Suzuki et al. 2009). 
When used as cement replacement, it has been shown that ceramic waste powders have certain pozzolanic reactivity and thus may be added to reduce the cement content, while still attaining satisfactory strength (Pereira-de-Oliveira et al. 2012; Heidari and Tavakoli 2013; Mas et al. 2015; Kannan et al. 2017; Aly et al. 2019). Meanwhile, very fine ceramic waste has also been used in cement production (Ay and Ünal 2000; García-Díaz et al. 2011), but this is outside the scope of the present research, which is on the direct reutilization of ceramic waste in concrete/mortar production. Ceramic waste replacement methods. a Aggregate replacement method. b Cement replacement method. c Paste replacement method The fresh properties of the concrete/mortar produced would be affected by the addition of ceramic waste. Medina et al. (2013a) discovered that the shear yield stresses of cement mixtures under fresh state would be lowered due to the addition of fine ceramic waste powder. De Matos et al. (2018) found that porcelain polishing residues having a mean particle size of around 10 μm could be used to produce self-compacting concrete with similar rheological properties but better passing ability if added to replace cement by not more than 20%. The hardened properties would also be affected, beneficially or adversely. Correia et al. (2006) conducted a series of tests on the abrasive and water resistances of concrete made with recycled ceramic coarse aggregate and noted that the abrasive resistance was improved by the use of recycled ceramic coarse aggregate, but the water resistance was substantially afflicted. Medina et al. (2013b) showed that the use of ceramic waste as coarse aggregate would enhance the resistance against freeze-thaw cycles. Similarly, Kuan et al. (2020) pointed out that using ceramic powder could improve the freeze–thaw cycle resistance of concrete. However, Senthamarai et al. (2011) observed that the use of ceramic waste as coarse aggregate would adversely affect both the water and chloride resistances of concrete. In this study, the authors focus on carbonation and water resistances of mortar containing fine ceramic polishing waste (CPW) powder. The mix design strategy adopted herein regarding the usage of CPW is neither as aggregate replacement nor as cement replacement, but as "paste replacement". The "paste replacement" method is to partially replace the cement paste volume (cement + water) in the mortar/concrete by an equal volume of solid powder (as shown in Fig. 1c). It should be noted that by using this method, whilst the cement paste volume is lowered, the mix proportions of the cement paste would not be changed (i.e. the water/cement ratio is unchanged). If the solids have some degree of pozzolanic reactivity, part of them will end up forming cementing compounds while the rest will become fillers. A series of previous studies conducted by the authors' research group have shown that this strategy of paste replacement can be applied to the addition of limestone fines (Li and Kwan 2015; Li et al. 2017a), marble dust (Li et al. 2018a; Li et al. 2019a), granite dust (Li et al. 2018b; Li et al. 2019b), clay brick waste (Li et al. 2019c) and ceramic waste (Li et al. 2019e; Li et al. 2020a; Li et al. 2020b) to mortar/concrete mixtures with satisfactory performances attained. 
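To make the idea concrete, a minimal illustrative calculation is given below; the 60%/40% paste-to-aggregate split and the 20% replacement level are simply assumed for this example (they match the nominal proportions adopted later in this study). Because the cement paste retains its original mix proportions, the cement and water contents scale directly with the remaining paste volume:

$$ \frac{m_{\mathrm{C},\,20\%\ \mathrm{CPW}}}{m_{\mathrm{C},\,\mathrm{control}}} = \frac{V_{\mathrm{paste},\,20\%\ \mathrm{CPW}}}{V_{\mathrm{paste},\,\mathrm{control}}} = \frac{0.40}{0.60} \approx 0.67 $$

In other words, filling 20% of the mortar volume with the waste powder cuts the cement (and water) content by roughly one third, while the W/C ratio of the remaining paste, and hence the quality of the paste itself, is left unchanged.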
The applicability of such strategy to the addition of CPW to mortar mixtures for reducing the paste volume and improving the carbonation and water resistances is to be investigated herein so as to extend this strategy to CPW. The raw materials of the mortar mixes consisted of water, cement, fine aggregate, ceramic polishing waste (CPW) and superplasticizer (SP), of which the technical details are as follows: (a) the cement was PO 42.5 ordinary Portland cement produced in compliance with the Chinese Standard GB 175–2007 (2007) and had a specific gravity of 3.10; (b) the fine aggregate was river sand having a maximum particle size of 1.18 mm, a specific gravity of 2.58, a water absorption of 1.10%, and a moisture content of 0.10%; (c) the CPW was a light-grey powder with a specific gravity of 2.43; and (d) the SP was a PC-based admixture with 20% solid by mass and a specific gravity of 1.05. The CPW was obtained from a ceramics factory in Foshan city, a famous powerhouse of ceramics production in China. The CPW was generated during polishing of ceramic tiles. The original CPW collected was wet with some debris inside. In order to dry and minimize variation in quality, the CPW was treated as follows: first, the CPW was heated with an oven at 105 °C for 8 h to remove the water and then mechanically sieved using a 1.18 mm sieve to remove the debris. After the treatment, the CPW was turned to a light-grey dry powder. The SEM image and XRD pattern of the CPW are shown in Fig. 2a and b, respectively. From the SEM image, it is clear that the CPW particles are angular in shape. From the XRD pattern, it is evident that the CPW is a typical ceramic material composed mainly of SiO2, Al2O3, Fe2O3 and CaO, indicating that the CPW should have certain pozzolanic properties. SEM image and XRD pattern of CPW. a SEM image. b XRD pattern The grading curves of the main ingredients are presented in Fig. 3. It is noted that compared to the cement, the CPW has a similar pattern of particle size distribution but a marginally larger mean particle size. In other words, the CPW was almost as fine as the cement. Grading curves of cement, CPW and fine aggregate Mix proportion A total of 20 mortar mixes in four groups, with each group having a different water/cement (W/C) ratio among the values of 0.40, 0.45, 0.50 and 0.55, were designed and tested. The information on the mix proportions is comprehensively depicted in Table 1. For each of the mortar mixes, the volume of the fine aggregate accounted for, nominally, 40% of the total mortar volume, with the remaining 60% being the cement paste (cement + water) only or the cement paste and CPW. In each group of the mixes, which had the same W/C ratio, the combinations of the "cement paste + CPW" in terms of nominal percentages of the total mortar volume are (60% + 0%), (55% + 5%), (50% + 10%), (45% + 15%) and (40% + 20%), as illustrated in the second and the third columns of Table 1. The identifications of the mixes follow the format of "X-Y" where the W/C ratio of the mix is placed at the position of X and the "CPW volume" (defined as the ratio of the CPW's volume to the total volume of mortar) at the position of Y. Table 1 Mixture proportions of mortar mixes The addition of SP was to control the mortar mixes to have similar workability, and let them all meet a certain workability requirement, which in the current study, was set as having a flow spread measured by the mini slump cone test (Okamura and Ouchi (2003) of 250 ± 50 mm, or in other words, 200 to 300 mm. 
Therefore, the SP dosages (mass of liquid SP as a percentage of combined mass of cement + CPW) of the mortar mixes varied from one another and were determined through the process of trial mortar mixing, in which the SP was dosed into the mortar mixture bit by bit until the measured value of flow spread was within the required range. The results in SP dosage obtained from the trials for all the mortar mixes were then applied during the final production of mortar samples for subsequent testing. Mixing procedure and testing methods A 10 Litre horizontal single-shaft mixer was used to mix each batch of mortar and the mixing procedure was presented as follows: (1) the cement, CPW, water and one half of SP were added and mixed for 60 s; (2) the fine aggregate and the remaining half of SP were added, and the mixing continued for 60 s. The details of the mini slump cone test introduced by Okamura and Ouchi can be found in Ref. (Okamura and Ouchi 2003). The carbonation resistance, which is an important aspect of durability, of the mortar was evaluated through the carbonation test implemented in compliance with Chinese Standard GB/T 50082–2009 (2009). During the carbonation tests, the mortar specimens were placed in a carbonation chamber, as shown in Fig. 4a, to allow intrusion of carbon dioxide through the exposed surfaces. The depths of the intrusion were then measured as the carbonation depths to reflect the carbonation resistances of the specimens. For details of the carbonation test, the following reference is referred to (Li et al. 2017b). Photographs of carbonation test and water absorption test. a Carbonation chamber. b Water absorption test The water resistance, which is another important aspect of durability, of the mortar can be reflected by the water absorption rates. In the current study, the tests to measure the initial/secondary water absorption rates of the mortar were carried out in compliance with American Standard ASTM C1585–04 (2004) (as shown in Fig. 4b). Since it is a common test, the testing procedures are not repeated herein. Lastly, in order to study the effect of CPW on the microstructure, micrographs of the hardened mortar samples were captured by using the Hitachi S-3400 N-II scanning electron microscope (SEM). SP dosage and flow spread The SP dosages added to achieve the target flow spread of within 200 to 300 mm are listed in the last column of Table 1, while the actual flow spread achieved, which were all within the target range, are tabulated in the second column of Table 2. From these results, it can be seen that as expected, the SP dosage was generally higher at a lower W/C ratio and lower at a higher W/C ratio. More importantly, at a fixed W/C ratio, the SP dosage increased significantly as the CPW volume increased. This was because of the decrease in water content of the mortar mix. Table 2 Test results of mortar mixes Carbonation depth Table 2 depicts the carbonation depths for all the mortar mixes. When the carbonation depths of all the mortar mixes are plotted against the CPW volume in Fig. 5, it is found that when the value of CPW volume was unchanged, a lower W/C ratio would result in a smaller carbonation depth of the mortar. For example, when the W/C ratio decreased from 0.55 to 0.40 while the CPW volume was kept as 0%, the carbonation depth would be reduced from 7.70 mm to 3.27 mm. 
Such variation of carbonation depth with the W/C ratio is expected, since the W/C ratio is well known to be the main factor affecting the carbonation resistance of mortar (Li et al. 2017b; Leemann et al. 2015).

Fig. 5 Carbonation depth versus CPW volume

More significantly, when the W/C ratio was fixed, the carbonation depth of the mortar would become smaller in the presence of higher CPW volume. For instance, when the CPW volume increased from 0% to 20% while the W/C ratio was maintained at 0.55, the carbonation depth would drop from 7.70 mm to 2.13 mm. Consequently, the presence of CPW as paste replacement has enhanced the carbonation resistance of the mortar. The probable reason is the densification of the microstructure after the addition of the CPW, as will be seen from the SEM images later.

Water absorption rates

As mentioned previously, the initial and the secondary water absorption rates of each mortar mix were measured through water absorption tests, with the values presented in Table 2. When the two types of water absorption rates for the mortar mixes with various W/C ratios are plotted against the corresponding CPW volumes in Figs. 6 and 7, respectively, it is found that, at a fixed CPW volume, a lower W/C ratio would lead to a lower initial/secondary water absorption rate. For instance, when the W/C ratio decreased from 0.55 to 0.40 while the CPW volume was kept at 0%, the initial water absorption rate would go down from 14.07 × 10⁻⁴ mm/s^1/2 to 6.42 × 10⁻⁴ mm/s^1/2, and the secondary water absorption rate would go down from 13.02 × 10⁻⁴ mm/s^1/2 to 3.70 × 10⁻⁴ mm/s^1/2. Such change of water absorption rate with the W/C ratio is reasonable, because similar results have been found in other studies (Li and Kwan 2015; Du et al. 2016).

Fig. 6 Initial water absorption rate versus CPW volume

Fig. 7 Secondary water absorption rate versus CPW volume

More importantly, when the W/C ratio was kept unchanged, increasing the CPW amount would result in a lower water absorption rate. For example, when the CPW volume was raised from 0% to 20% while the W/C ratio was maintained at 0.55, the initial water absorption rate would go down from 14.07 × 10⁻⁴ mm/s^1/2 to 3.71 × 10⁻⁴ mm/s^1/2, and the secondary water absorption rate would go down from 13.02 × 10⁻⁴ mm/s^1/2 to 1.50 × 10⁻⁴ mm/s^1/2. Therefore, it is evident that the paste replacement method is a very effective way of adding the CPW to enhance the water resistance of mortar.

Microstructure from SEM images

The SEM images of the mortar specimens 0.55–0 and 0.55–20 are presented in Fig. 8a and b, respectively. Through comparison, it is found that the microstructure was rather loose in specimen 0.55–0, which has no CPW content, but significantly more compact in specimen 0.55–20, which has 20% CPW volume. Therefore, the addition of CPW as paste replacement would densify the microstructure and reduce the voids inside, and thus should improve the impermeability to increase the carbonation and water resistances of mortar.

Fig. 8 SEM micrographs of mortar mixes at 28 days. a Mortar mix 0.55–0.
b Mortar mix 0.55–20 The explanations on the densification of microstructure by adding CPW may be given as follows: (1) adding CPW can fill into the voids between aggregate particles so as to reduce the volume of voids to be filled with cement paste, so as to improve the packing density of the particle system; (2) the CPW has certain pozzolanic reactivity and thus could participate in hydration reaction to produce more C-S-H gel to fill the voids; (3) the addition of more SP to achieve the required workability would help to better disperse the cement grains and CPW particles to allow more uniform mixing and better compaction during casting. Detailed analysis of test results Concurrent changes in cement content and durability performance From Table 1, it can be seen that the addition of CPW to replace an equal volume of cement paste would allow the consumption of waste up to 20% of the mortar volume and whittle down the cement content by up to 33%, while achieving higher carbonation and water resistances for improving the durability performance. To illustrate the concurrent reduction in cement content and improvement in durability, the carbonation depth, initial water absorption rate and secondary water absorption rate are plotted against the cement content for different CPW volumes and W/C ratios in Figs. 9, 10 and 11, respectively. It is evident that the conventional method of decreasing the W/C ratio to improve the durability would increase the cement content, whereas the proposed method of adding CPW as paste replacement to improve the durability would decrease the cement content for reducing carbon footprint and increase the waste consumption for reducing waste disposal. Carbonation depth versus cement content Initial water absorption rate versus cement content Secondary water absorption rate versus cement content Compared to the addition of solid waste as aggregate replacement, which does not reduce the cement content and might not improve the mortar performance, and the addition of solid waste as cement replacement, which often adversely affects the mortar performance and therefore imposes a severe limit on the amount of solid waste to be added, this paste replacement method offers the advantages in consuming a larger amount of waste, reducing a larger amount of cement and improving the mortar performance, all at the same time. However, this paste replacement method is applicable only if the solid waste to be added is almost as fine as the cementitious materials so that the solid waste would intermix with the cement paste to form a powder paste having the same volume as the original cement paste for filling up the voids between aggregate particles. In fact, such addition of the solid waste as paste replacement may also be viewed as an addition of the solid waste as a filler to fill up part of the voids between aggregate particles so that the cement paste volume needed may be reduced by the volume of solid waste added. This would increase the packing density of the solid waste plus aggregate mixture (Yu et al. 1997; Zhang et al. 2011; Cepuritis et al. 2014; Li et al. 2019d) and quite possibly, this is one of the root causes for the improvement in mortar performance. Cementing efficiency of CPW For evaluating the effectiveness of supplementary cementitious material (SCM) in performance attributes of cement-based material, it has been proposed to employ the concept of cementing efficiency (Hobbs 1988; Wong and Abdul Razak 2005; European Committee for Standardization 2013; Li et al. 
2021b). Basically, the cementing efficiency of an SCM is evaluated in terms of a cementing efficiency factor (CEF), defined as the mass of cement that is replaceable per mass of the SCM added without changing the performance. Smith (1967) first proposed this concept and suggested that a CEF of 0.25 for fly ash (FA) may be adopted for preliminary design of FA concrete. Papadakis and Tsimas (2002) tested the durability performance of low-calcium FA and high-calcium FA, and obtained their carbonation resistance CEFs as 0.5 and 0.7, respectively. Previous studies by other researchers (Pereira-de-Oliveira et al. 2012; Heidari and Tavakoli 2013; Mas et al. 2015; Kannan et al. 2017) have indicated that ceramic waste in powder form can have certain pozzolanic reactivity and thus a certain cementing property, but so far, such cementing property has never been quantified. Herein, it is proposed to evaluate the cementing property of the CPW in terms of the CEF, which is defined as the ratio of the equivalent mass of cement to the mass of CPW added (Hobbs 1988; Wong and Abdul Razak 2005). To evaluate the cementing efficiency of the CPW, an effective water to cementitious materials ratio (W/CMeff) is introduced, as given below:

$$ \mathrm{W/CM_{eff}} = \frac{m_\mathrm{W}}{m_\mathrm{C} + \alpha \times m_\mathrm{CPW}} $$ (1)

in which α is the CEF of CPW; and mW, mC and mCPW are the water, cement and CPW contents in kg/m³. It should be noted that the values of α for the carbonation depth, initial water absorption rate and secondary water absorption rate are not the same. For differentiation, the α for carbonation depth is denoted by αc, the α for initial water absorption rate by αw,I and the α for secondary water absorption rate by αw,II. To evaluate each CEF, the corresponding mortar performance is correlated to W/CMeff by regression analysis, and different values of α are tried until the highest R² value is obtained.

Fig. 12 presents the analysis results for the carbonation resistance, where the carbonation depth is correlated to W/CMeff using the following equation:

$$ \text{Carbonation depth} = 28.57 \times \mathrm{W/CM_{eff}} - 8.14 $$ (2)

in which the carbonation depth is in mm. Maximization of the R² value yielded αc = 0.51 and R² = 0.969. During the analysis, it appeared that the CEF αc is not sensitive to the CPW content. Hence, it may be said that, for evaluation of the carbonation resistance, the CPW may be treated as equivalent to 0.5 times its mass of cement. The R² value so obtained is very high, suggesting that the correlation equation above may be used to predict the carbonation depth of mortar made with CPW added as paste replacement.

Fig. 12 Carbonation depth versus effective W/CM ratio

Figs. 13 and 14 present the analysis results for the water resistance, where the initial water absorption rate and the secondary water absorption rate are correlated to W/CMeff using the following equations:

Fig. 13 Initial water absorption rate versus effective W/CM ratio

Fig. 14 Secondary water absorption rate versus effective W/CM ratio

$$ \text{Initial water absorption rate} = 112.55 \times \mathrm{W/CM_{eff}}^{3.31} $$ (3)

$$ \text{Secondary water absorption rate} = 123.42 \times \mathrm{W/CM_{eff}}^{3.89} $$ (4)

in which the two types of water absorption rates are both in × 10⁻⁴ mm/s^1/2.
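For readers who want to reproduce the fitting step described above, the sketch below illustrates one plausible way to do it; it is not the authors' code, and the mix records in it are placeholders rather than the actual values of Tables 1 and 2. For each candidate α, the effective W/CM ratio of every mix is computed with Eq. (1), a power-law curve of the form of Eqs. (3) and (4) is fitted by ordinary least squares on log-transformed data, and the α giving the highest R² is retained (for the carbonation depth, a linear fit as in Eq. (2) would be used instead).

```js
// Illustrative sketch only (not from the paper): grid-search the cementing
// efficiency factor alpha that maximises R² for a fit  rate = a * (W/CM_eff)^b.
// Placeholder records: replace with the water/cement/CPW contents (kg/m³) and
// the measured absorption rates from Tables 1 and 2 (20 mixes in this study).
const mixes = [
  { mW: 332, mC: 830, mCPW: 0, rate: 6.42e-4 },
  { mW: 278, mC: 695, mCPW: 243, rate: 5.0e-4 },
  { mW: 252, mC: 458, mCPW: 486, rate: 3.71e-4 },
  // ... one record per mortar mix
]

function rSquared(alpha) {
  // Least-squares fit of log(rate) = log(a) + b * log(W/CM_eff).
  const pts = mixes.map(({ mW, mC, mCPW, rate }) => ({
    x: Math.log(mW / (mC + alpha * mCPW)),
    y: Math.log(rate),
  }))
  const n = pts.length
  const xBar = pts.reduce((s, p) => s + p.x, 0) / n
  const yBar = pts.reduce((s, p) => s + p.y, 0) / n
  const sxy = pts.reduce((s, p) => s + (p.x - xBar) * (p.y - yBar), 0)
  const sxx = pts.reduce((s, p) => s + (p.x - xBar) ** 2, 0)
  const b = sxy / sxx
  const a = yBar - b * xBar
  const ssRes = pts.reduce((s, p) => s + (p.y - (a + b * p.x)) ** 2, 0)
  const ssTot = pts.reduce((s, p) => s + (p.y - yBar) ** 2, 0)
  return 1 - ssRes / ssTot
}

// Try alpha from 0 to 2 in steps of 0.01 and keep the best fit.
let best = { alpha: 0, r2: -Infinity }
for (let alpha = 0; alpha <= 2.0001; alpha += 0.01) {
  const r2 = rSquared(alpha)
  if (r2 > best.r2) best = { alpha, r2 }
}
console.log(best) // with the real data, best.alpha should land near the reported CEFs
```

As a rough sanity check of Eqs. (1) and (2), using contents back-calculated from the nominal volume fractions and specific gravities quoted earlier rather than the actual values in Table 1, mix 0.55–20 has roughly mW ≈ 252 kg/m³, mC ≈ 458 kg/m³ and mCPW ≈ 486 kg/m³; with αc = 0.51 this gives W/CMeff ≈ 252/(458 + 0.51 × 486) ≈ 0.36, and Eq. (2) then predicts a carbonation depth of about 28.57 × 0.36 − 8.14 ≈ 2.1 mm, close to the measured 2.13 mm reported above.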
However, the analysis revealed that the CEFs αw,I and αw,II are both dependent on the CPW content. The values of αw,I and αw,II so determined are given in the tables inserted in the two figures. For the initial water absorption rate, the value of αw,I varies from 1.20 at a CPW volume of 5% to 0.56 at a CPW volume of 20%. For the secondary water absorption rate, the value of αw,II varies from 1.45 at a CPW volume of 5% to 0.70 at a CPW volume of 20%. Roughly, for evaluation of the water resistance, the CPW may be treated conservatively as equivalent to 0.6 times its mass of cement. The R² values so obtained for Eqs. (3) and (4) are 0.979 and 0.963, respectively, which are both very high, suggesting that these two equations may be used to predict the two types of water absorption rates of mortar made with CPW added as paste replacement.

Conclusions

The feasibility of adding ceramic polishing waste (CPW) as paste replacement in mortar to reduce waste disposal and cement consumption, and also to improve durability, has been studied by making a series of mortar mixes with different CPW volumes (as % of mortar volume) and W/C ratios for testing of their carbonation and water resistances. Up to 20% CPW by volume of mortar was added and very promising results were obtained, from which the following conclusions are made.

1. The addition of up to 20% CPW to replace an equal volume of cement paste in mortar would reduce the paste volume and cement content by 33% without adversely affecting the performance of the mortar.

2. The addition of CPW up to 20% by volume of mortar as paste replacement would dramatically reduce the carbonation depth by more than 70% and the initial/secondary water absorption rates by also more than 70%. Densification of the microstructure, as revealed by SEM images, is the most probable cause of such dramatic improvements in durability.

3. Compared to the conventional method of decreasing the W/C ratio to improve durability, which is less effective in increasing the carbonation and water resistances and would increase the cement content, this method of adding CPW as paste replacement to improve durability is more effective and would at the same time reduce the cement content.

4. Compared to the method of adding CPW as aggregate replacement, which does not reduce the cement content, and the method of adding CPW as cement replacement, which may not improve the durability performance, this method of adding CPW as paste replacement is a much better method in terms of waste reutilization, carbon reduction and durability performance.

5. The pozzolanic reactivity of the CPW used has been quantified in terms of cementing efficiency factors. For the carbonation depth, the cementing efficiency factor has been found to be insensitive to the CPW volume and is generally around 0.5. For the initial and secondary water absorption rates, the cementing efficiency factors have been found to decrease with increasing CPW volume, but nevertheless remain at around 0.6 or higher even at a CPW volume of 20%.
CPW: Ceramic polishing waste
SP: Superplasticizer
W/C ratio: Water/cement ratio
SEM: Scanning electron microscope
W/CMeff: Effective water to cementitious materials ratio
α: Cementing efficiency factor of CPW
αc: Cementing efficiency factor of CPW for carbonation depth
αw,I: Cementing efficiency factor of CPW for initial water absorption rate
αw,II: Cementing efficiency factor of CPW for secondary water absorption rate
mW: Water content
mC: Cement content
mCPW: CPW content
Aly ST, El-Dieb AS, Taha MR (2019) Effect of high-volume ceramic waste powder as partial cement replacement on fresh and compressive strength of self-compacting concrete. J Mater Civ Eng 31(2):04018374
ASTM International (2004) ASTM C1585–04. Standard test method for measurement of rate of absorption of water by hydraulic-cement concretes
Awoyera PO, Akinmusuru JO, Dawson AR, Ndambuki JM, Thom NH (2018) Microstructural characteristics, porosity and strength development in ceramic-laterized concrete. Cem Concr Compos 86:224–237
Ay N, Ünal M (2000) The use of waste ceramic tile in cement production. Cem Concr Res 30(3):497–499
Binici H (2007) Effect of crushed ceramic and basaltic pumice as fine aggregates on concrete mortars properties. Constr Build Mater 21(6):1191–1197
Cepuritis R, Wigum BJ, Garboczi EJ, Mørtsell E, Jacobsen S (2014) Filler from crushed aggregate for concrete: pore structure, specific surface, particle shape and size distribution. Cem Concr Compos 54:2–16
Correia JR, De Brito J, Pereira AS (2006) Effects on concrete durability of using recycled ceramic aggregates. Mater Struct 39(2):169–177
de Brito J, Pereira AS, Correia JR (2005) Mechanical behaviour of non-structural concrete made with recycled ceramic aggregates. Cem Concr Compos 27(4):429–433
de Matos PR, de Oliveira AL, Pelisser F, Prudêncio LR Jr (2018) Rheological behavior of Portland cement pastes and self-compacting concretes containing porcelain polishing residue. Constr Build Mater 175:508–518
Du H, Gao HJ, Dai PS (2016) Improvement in concrete resistance against water and chloride ingress by adding graphene nanoplatelet. Cem Concr Res 83:114–123
European Committee for Standardization (2013) EN 206: 2013. Concrete - specification, performance, production and conformity
Feng D, Yi J, Wang D (2013) Performance and thermal evaluation of incorporating waste ceramic aggregates in wearing layer of asphalt pavement. J Mater Civ Eng 25(7):857–863
García-Díaz I, Palomo JG, Puertas F (2011) Belite cements obtained from ceramic wastes and the mineral pair CaF2/CaSO4. Cem Concr Compos 33(10):1063–1070
General Administration of Quality Supervision, Inspection and Quarantine, China (2007) GB 175–2007. Common Portland Cement. (in Chinese)
Guerra I, Vivar I, Llamas B, Juan A, Moran J (2009) Eco-efficient concretes: the effects of using recycled ceramic material from sanitary installations on the mechanical properties of concrete. Waste Manag 29(2):643–646
Halicka A, Ogrodnik P, Zegardlo B (2013) Using ceramic sanitary ware waste as concrete aggregate. Constr Build Mater 48:295–305
Harrabin R and Edgington T (2019) Recycling: Where is the plastic waste mountain? BBC Real Check (online). Available at: https://www.bbc.com/news/science-environment-46566795
Heidari A, Tavakoli D (2013) A study of the mechanical properties of ground ceramic powder concrete incorporating nano-SiO2 particles. Constr Build Mater 38:255–264
Hobbs DW (1988) Portland-pulverized fuel ash concretes: water demand, 28 day strength, mix design and strength development.
Proc Inst Civ Eng 85(2):317–331 Kannan DM, Aboubakr SH, El-Dieb AS, Taha MM (2017) High performance concrete incorporating ceramic waste powder as large partial replacement of Portland cement. Constr Build Mater 144:35–41 Kuan P, Hongxia Q, Kefan C (2020) Reliability analysis of freeze–thaw damage of recycled ceramic powder concrete. J Mater Civ Eng 32(9):05020008 Leemann A, Nygaard P, Kaufmann J, Loser R (2015) Relation between carbonation resistance, mix design and exposure of mortar and concrete. Cem Concr Compos 62:33–43 Li LG, Chen JJ, Kwan AKH (2017a) Roles of packing density and water film thickness in strength and durability of limestone fines concrete. Mag Concr Res 69(12):595–605 Li LG, Huang ZH, Tan YP, Kwan AKH, Chen HY (2019a) Recycling of marble dust as paste replacement for improving strength, microstructure and eco-friendliness of mortar. J Clean Prod 210:55–65 Li LG, Huang ZH, Tan YP, Kwan AKH, Liu F (2018a) Use of marble dust as paste replacement for recycling waste and improving durability and dimensional stability of mortar. Constr Build Mater 166:423–432 Li LG, Kwan AKH (2015) Adding limestone fines as cementitious paste replacement to improve tensile strength, stiffness and durability of concrete. Cem Concr Compos 60:17–24 Li LG, Lin ZH, Chen GM, Kwan AKH, Li ZH (2019c) Reutilization of clay brick waste in mortar: Paste replacement versus cement replacement. J Mater Civ Eng 31(7):04019129 Li LG, Wang YM, Tan YP, Kwan AKH (2019b) Filler technology of adding granite dust to reduce cement content and increase strength of mortar. Powder Technol 342:388–396 Li LG, Wang YM, Tan YP, Kwan AKH, Li LJ (2018b) Adding granite dust as paste replacement to improve durability and dimensional stability of mortar. Powder Technol 333:269–276 Li LG, Xiao BF, Fang ZQ, Xiong Z, Chu SH, Kwan AKH (2021a) Feasibility of glass/basalt fiber reinforced seawater coral sand mortar for 3D printing. Addit Manuf https://doi.org/10.1016/j.addma.2020.101684 Li LG, Zheng JY, Ng PL, Kwan AKH (2021b) Synergistic cementing efficiencies of nano-silica and micro-silica in carbonation resistance and sorptivity of concrete. J Build Eng 33:101862 Li LG, Zhu J, Huang ZH, Kwan AKH, Li LJ (2017b) Combined effects of micro-silica and nano-silica on durability of mortar. Constr Build Mater 157:337–347 Li LG, Zhuo HX, Zhu J, Kwan AKH (2019d) Packing density of mortar containing polypropylene, carbon or basalt fibres under dry and wet conditions. Powder Technol 342:433–440 Li LG, Zhuo ZY, Kwan AKH, Zhang TS, Lu DG (2020b) Cementing efficiency factors of ceramic polishing residue in compressive strength and chloride resistance of mortar. Powder Technol 367:163–171 Li LG, Zhuo ZY, Zhu J, Chen JJ, Kwan AKH (2019e) Reutilizing ceramic polishing waste as powder filler in mortar to reduce cement content by 33% and increase strength by 85%. Powder Technol 355:119–126 Li LG, Zhuo ZY, Zhu J, Kwan AKH (2020a) Adding ceramic polishing waste as paste substitute to improve sulphate and shrinkage resistances of mortar. Powder Technol 362:149–156 Lopez V, Llamas B, Juan A, Moran JM, Guerra I (2007) Eco-efficient concretes: impact of the use of white ceramic powder on the mechanical properties of concrete. Biosyst Eng 96(4):559–564 Mas MA, Reig Cerdá L, Monzó J, Borrachero MV, Payá J (2015) Ceramic tiles waste as replacement material in Portland cement. 
Adv Cem Res 28(4):221–232 Medina C, Banfill PFG, de Rojas MS, Frías M (2013a) Rheological and calorimetric behaviour of cements blended with containing ceramic sanitary ware and construction/demolition waste. Constr Build Mater 40:822–831 Medina C, de Rojas MI, Frías M (2013b) Freeze-thaw durability of recycled concrete containing ceramic aggregate. J Clean Prod 40:151–160 Medina C, Frías M, De Rojas MS (2012) Microstructure and properties of recycled concretes using ceramic sanitary ware industry waste as coarse aggregate. Constr Build Mater 31:112–118 Ministry of Housing and Urban-Rural Development, China (2009) GB/T 50082–2009. Standard for test methods of long-term performance and durability of ordinary concrete. (in Chinese). Okamura H, Ouchi M (2003) Self-compacting concrete. J Adv Concr Technol 1(1):5–15 Pacheco-Torgal F, Jalali S (2010) Reusing ceramic wastes in concrete. Constr Build Mater 24(5):832–838 Papadakis VG, Tsimas S (2002) Supplementary cementing materials in concrete: part I: efficiency and design. Cem Concr Res 32(10):1525–1532 Pereira-de-Oliveira LA, Castro-Gomes JP, Santos PM (2012) The potential pozzolanic activity of glass and red-clay ceramic waste as cement mortars components. Constr Build Mater 31:197–203 Senthamarai RM, Manoharan PD (2005) Concrete with ceramic waste aggregate. Cem Concr Compos 27(9–10):910–913 Senthamarai RM, Manoharan PD, Gobinath D (2011) Concrete made from ceramic industry waste: durability properties. Constr Build Mater 25(5):2413–2419 Siddique S, Shrivastava S, Chaudhary S (2018) Influence of ceramic waste as fine aggregate in concrete: Pozzolanic, XRD, FT-IR, and NMR investigations. J Mater Civ Eng 30(9):04018227 Smith IA (1967) The design of fly-ash concretes. Proc Inst Civil Eng 36(4):769–790 Suzuki M, Meddah MS, Sato R (2009) Use of porous ceramic waste aggregates for internal curing of high-performance concrete. Cem Concr Res 39(5):373–381 Torkittikul P, Chaipanich A (2010) Utilization of ceramic waste as fine aggregate within Portland cement and fly ash concretes. Cem Concr Compos 32(6):440–449 Wang D, Wang Q, Xue J (2020) Reuse of hazardous electrolytic manganese residue: Detailed leaching characterization and novel application as a cementitious material. Resour Conserv Recycl 154:104,645 Wang Q, Wang D, Chen H (2017) The role of fly ash microsphere in the microstructure and macroscopic properties of high-strength concrete. Cem Concr Compos 83:125–137 Wong HS, Abdul Razak H (2005) Efficiency of calcined kaolin and silica fume as cement replacement material for strength performance. Cem Concr Compos 35(4):696–702 Yu AB, Bridgwater J, Burbidge A (1997) On the modelling of the packing of fine particles. Powder Technol 92(3):185–194 Zhang T, Yu Q, Wei J, Zhang P, Chen P (2011) A gap-graded particle size distribution for blended cements: analytical approach and experimental validation. Powder Technol 214(2):259–268 The authors would like to acknowledge the financial support by the National Natural Science Foundation of China (Project Nos.: 51608131, 51678161 and 51808134), Colleges Innovation Project of Guangdong Province (Project No. 2017KTSCX061) and Pearl River S&T Nova Program of Guangzhou City (Project No. 201906010064). 
Guangdong University of Technology, Guangzhou, China: Leo Gu Li
The University of Hong Kong, Hong Kong, China: Leo Gu Li & Albert Kwok Hung Kwan
Mott Macdonald (Hong Kong) Ltd., Hong Kong, China: Yi Ouyang
Agile Property Holdings Ltd., Guangzhou, China: Zhen-Yao Zhuo
Contributions: Leo Gu Li: Conceptualization; Project administration; Methodology; Writing - original draft; Funding acquisition. Yi Ouyang: Methodology; Writing - Review & Editing. Zhen-Yao Zhuo: Investigation. Albert Kwok Hung Kwan: Validation; Writing - Review & Editing; Supervision. The author(s) read and approved the final manuscript.
Correspondence to Leo Gu Li.
Cite this article: Li, L.G., Ouyang, Y., Zhuo, ZY. et al. Adding ceramic polishing waste as filler to reduce paste volume and improve carbonation and water resistances of mortar. ABEN 2, 3 (2021). https://doi.org/10.1186/s43251-020-00019-2
Keywords: Carbonation resistance; Cementing efficiency
CommonCrawl
August 2011, 4(4): 851-864. doi: 10.3934/dcdss.2011.4.851
On a new kind of convexity for solutions of parabolic problems
Kazuhiro Ishige 1 and Paolo Salani 2
1 Mathematical Institute, Tohoku University, Aoba, Sendai 980-8578
2 Dipartimento di Matematica 'U. Dini', Viale Morgagni 67/A, 50137 Firenze, Italy
Received December 2009 Revised February 2010 Published November 2010
We introduce the notion of $\alpha$-parabolic quasi-concavity for functions of space and time, which extends the usual notion of quasi-concavity and the notion of parabolic quasi-concavity introduced in [18]. Then we investigate the $\alpha$-parabolic quasi-concavity of solutions to parabolic problems with vanishing initial datum. The results here obtained are generalizations of some of the results of [18].
Keywords: Parabolic equations, convexity.
Mathematics Subject Classification: Primary: 35K20; Secondary: 52A2.
Citation: Kazuhiro Ishige, Paolo Salani. On a new kind of convexity for solutions of parabolic problems. Discrete & Continuous Dynamical Systems - S, 2011, 4 (4) : 851-864. doi: 10.3934/dcdss.2011.4.851
1. C. Bianchini, M. Longinetti and P. Salani, Quasiconcave solutions to elliptic problems in convex rings, Indiana Univ. Math. J., 58 (2009), 1565. doi:10.1512/iumj.2009.58.3539.
2. C. Borell, Brownian motion in a convex ring and quasiconcavity, Comm. Math. Phys., 86 (1982), 143. doi:10.1007/BF01205665.
3. C. Borell, A note on parabolic convexity and heat conduction, Ann. Inst. H. Poincaré Probab. Statist., 32 (1996), 387.
4. H. J. Brascamp and E. H. Lieb, On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation, J. Functional Anal., 22 (1976), 366. doi:10.1016/0022-1236(76)90004-5.
5. P. Daskalopoulos, R. Hamilton and K. Lee, All time $C^\infty$-regularity of interface in degenerated diffusion: A geometric approach, Duke Math. Journal, 108 (2001), 295. doi:10.1215/S0012-7094-01-10824-7.
6. P. Daskalopoulos and K.-A. Lee, Convexity and all-time $C^\infty$-regularity of the interface in flame propagation, Comm. Pure Appl. Math., 55 (2002), 633. doi:10.1002/cpa.10028.
7. P. Daskalopoulos and K.-A. Lee, All time smooth solutions of the one-phase Stefan problem and the Hele-Shaw flow, Comm. in P.D.E., 12 (2004), 71.
8. J. I. Diaz and B. Kawohl, On convexity and starshapedness of level sets for some nonlinear elliptic and parabolic problems on convex rings, Preprint n. 393 (1986), 123.
9. J. I. Diaz and B. Kawohl, On convexity and starshapedness of level sets for some nonlinear elliptic and parabolic problems on convex rings, J. Math. Anal. Appl., 177 (1993), 263. doi:10.1006/jmaa.1993.1257.
10. E. Francini, Starshapedness of level sets for solutions of nonlinear parabolic equations, Rend. Ist. Mat. Univ. Trieste, 28 (1996), 49.
11. Y. Giga, S. Goto, H. Ishii and M.-H. Sato, Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains, Indiana Univ. Math. J., 40 (1991), 443. doi:10.1512/iumj.1991.40.40023.
12. A. Greco, Extremality conditions for the quasi-concavity function and applications, Arch. Math., 93 (2009), 389. doi:10.1007/s00013-009-0035-2.
13. A. Greco and B. Kawohl, Log-concavity in some parabolic problems, Electron. J. Differential Equations, 1999 (1999), 1.
14. P. Guan and Lu Xu, Extremality conditions for the quasi-concavity function and applications, eprint arXiv:1004.1187v2, (2010).
15. G. H. Hardy, J. E. Littlewood and G. Pólya, "Inequalities," Cambridge Univ. Press, (1934).
16. K. Ishige and P. Salani, Is quasi-concavity preserved by heat flow?, Arch. Math., 90 (2008), 455. doi:10.1007/s00013-008-2437-y.
17. K. Ishige and P. Salani, Convexity breaking of free boundary in porous medium equation, Interfaces Free Bound., 12 (2010), 75. doi:10.4171/IFB/227.
18. K. Ishige and P. Salani, Parabolic quasi-concavity for solutions to parabolic problems in convex rings, Math. Nachr., 283 (2010), 1526. doi:10.1002/mana.200910242.
19. S. Janson and J. Tysk, Preservation of convexity of solutions to parabolic equations, J. Differential Equations, 206 (2004), 182. doi:10.1016/j.jde.2004.07.016.
20. B. Kawohl, "Rearrangements and Convexity of Level Sets in PDE," Lecture Notes in Math. 1150, (1985).
21. A. U. Kennington, Convexity of level curves for an initial value problem, J. Math. Anal. Appl., 133 (1988), 324. doi:10.1016/0022-247X(88)90404-0.
22. N. J. Korevaar, Convex solutions to nonlinear elliptic and parabolic boundary value problems, Indiana Univ. Math. J., 32 (1983), 603. doi:10.1512/iumj.1983.32.32042.
23. O. A. Ladyženskaja, V. A. Solonnikov and N. N. Ural'ceva, "Linear and Quasilinear Equations of Parabolic Type," Amer. Math. Soc., (1968).
24. K.-A. Lee, Power-concavity on nonlinear parabolic flows, Comm. Pure Appl. Math., 58 (2005), 1529. doi:10.1002/cpa.20068.
25. K.-A. Lee and J. L. Vázquez, Geometrical properties of solutions of the porous medium equation for large times, Indiana Univ. Math. J., 52 (2003), 991. doi:10.1512/iumj.2003.52.2200.
26. P.-L. Lions and M. Musiela, Convexity of solutions of parabolic equations, C. R. Math. Acad. Sci. Paris, 342 (2006), 915.
27. M. Longinetti and P. Salani, On the Hessian matrix and Minkowski addition of quasiconcave functions, J. Math. Pures Appl., 88 (2007), 276. doi:10.1016/j.matpur.2007.06.007.
CommonCrawl
How to format your references using the ACM Transactions on Programming Languages and Systems citation style
This is a short guide on how to format citations and the bibliography in a manuscript for ACM Transactions on Programming Languages and Systems (TOPLAS). For a complete guide on how to prepare your manuscript, refer to the journal's instructions to authors.
Klaus Peters. 2005. Obituary: Saunders Mac Lane (1909-2005). Nature 435, 7040 (May 2005), 292.
Laty A. Cahoon and H. Steven Seifert. 2009. An alternative DNA structure is necessary for pilin antigenic variation in Neisseria gonorrhoeae. Science 325, 5941 (August 2009), 764–767.
Michael Petrides, Geneviève Cadoret, and Scott Mackey. 2005. Orofacial somatomotor responses in the macaque monkey homologue of Broca's area. Nature 435, 7046 (June 2005), 1235–1238.
Guo-Cheng Yuan, Yuen-Jong Liu, Michael F. Dion, Michael D. Slack, Lani F. Wu, Steven J. Altschuler, and Oliver J. Rando. 2005. Genome-scale identification of nucleosome positions in S. cerevisiae. Science 309, 5734 (July 2005), 626–630.
Michel Soustelle. 2013. Handbook of Heterogenous Kinetics. John Wiley & Sons, Inc., Hoboken, NJ USA.
Jean-François Boulicaut, Luc De Raedt, and Heikki Mannila (Eds.). 2006. Constraint-Based Mining and Inductive Databases: European Workshop on Inductive Databases and Constraint Based Mining, Hinterzarten, Germany, March 11-13, 2004, Revised Selected Papers. Springer, Berlin, Heidelberg.
Tao Jia, Wen Zhao, and Lifu Wang. 2007. Pr $\mathcal{SH}$: A Belief Description Logic. In Agent and Multi-Agent Systems: Technologies and Applications: First KES International Symposium, KES-AMSTA 2007, Wroclaw, Poland, May 31 to June 1, 2007. Proceedings, Ngoc Thanh Nguyen, Adam Grzech, Robert J. Howlett and Lakhmi C. Jain (eds.). Springer, Berlin, Heidelberg, 31–41.
Sometimes references to web sites should appear directly in the text rather than in the bibliography. Refer to the Instructions to authors for ACM Transactions on Programming Languages and Systems.
Elise Andrew. 2015. The Science Behind The Most Gruesome Deaths In Game of Thrones. IFLScience.
Government Accountability Office. 1977. Award of Grant by the San Francisco Operations Office of the U.S. Energy Research and Development Administration for the Energy Awareness Project. U.S. Government Printing Office, Washington, DC.
George Allen Hrivnak. 2009. Extending a model of leader-member exchange development: Individual and dyadic effects of personality, similarity and liking. George Washington University, Washington, DC.
Sophia Hollander. 2000. Romance Does Not Stop Even After the Race Begins. New York Times, D1.
In-text citations use bracketed reference numbers, for example: This sentence cites four references [3,4,6,8].
Journal name: ACM Transactions on Programming Languages and Systems (abbreviated ACM Trans. Program. Lang. Syst.)
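For manuscripts prepared in LaTeX, reference formatting in this style is usually produced automatically. The snippet below is a minimal sketch assuming the standard ACM workflow (the acmart document class together with the ACM-Reference-Format BibTeX style); the citation keys, the title and author placeholders, and the references.bib file name are illustrative assumptions, not part of the guide, and a real submission will need further metadata (affiliations, CCS concepts) to compile without warnings.

```latex
% Minimal ACM TOPLAS manuscript skeleton (assumed workflow, not taken from the guide)
\documentclass[acmsmall]{acmart}
\begin{document}
\title{Example Title}          % placeholder
\author{Example Author}        % placeholder
\maketitle
% Numbered in-text citations such as [3,4,6,8] are produced by \cite:
This sentence cites four references~\cite{petrides2005, yuan2005, boulicaut2006, hrivnak2009}.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}      % references.bib holds the BibTeX entries
\end{document}
```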
CommonCrawl
Voronin, Andrei Vladimirovich Total publications: 16 (16) in MathSciNet: 16 (16) in zbMATH: 13 (13) in Web of Science: 12 (12) in Scopus: 11 (11) Cited articles: 10 Citations in Math-Net.Ru: 9 Citations in MathSciNet: 3 Citations in Web of Science: 20 Citations in Scopus: 22 This page: 1150 Abstract pages: 1880 Full texts: 527 References: 174 Candidate of physico-mathematical sciences (1987) Speciality: 01.04.02 (Theoretical physics) Birth date: 1.09.1959 Keywords: operator algebras; Lie (super)algebras; Virasoro algebra; field algebras; representations; modules; cohomologies; quantum fields; conformal fields; observables; BRST symmetry. Main publications: Horuzhy S. S., Voronin A. V. Field algebras do not leave fields domains invariant // Commun. Math. Phys., 1986, 102, 687–692. Horuzhy S. S., Voronin A. V. Remarks on mathematical structure of BRST–theories // Commun. Math. Phys., 1989, 123, 677–685. Horuzhy S. S., Voronin A. V. BRST quantization of the Schwinger model // J. Math. Phys., 1992, 33 (8), 2823–2841. Horuzhy S. S., Voronin A. V. Representations of the BRST algebra and unsolvable algebraic problems // J. Math. Phys., 1997, 38 (8), 4301–4322. http://www.mathnet.ru/eng/person12468 List of publications on Google Scholar http://zbmath.org/authors/?q=ai:voronin.a-v https://mathscinet.ams.org/mathscinet/MRAuthorID/211312 http://elibrary.ru/author_items.asp?authorid=4624 Full list of publications: | by years | by types | by times cited in WoS | by times cited in Scopus | scientific publications | common list | 1. A. V. Voronin, S. S. Horuzhy, "Conformal Theories, BRST Formalism and Representations of the Lie Superalgebras", Proc. Steklov Inst. Math., 228 (2000), 145–157 2. S. S. Horuzhy, A. V. Voronin, "Representations of the BRST algebra and unsolvable algebraic problems", J. Math. Phys., 38:8 (1997), 4301–4322 3. A. V. Voronin, S. S. Horuzhy, "BRST quantization as a problem in the representation theory of Lie superalgebras", Proc. Steklov Inst. Math., 203 (1995), 43–51 4. S. S. Horuzhy, A. V. Voronin, "BRST and $l(1,1)$", Rev. Math. Phys., 5:1 (1993), 191–208 (cited: 1) (cited: 1) 5. S. S. Horuzhy, A. V. Voronin, "BRST quantization of the Schwinger model", J. Math. Phys., 33:8 (1992), 2823–2841 (cited: 3) (cited: 2) 6. S. S. Horuzhy, A. V. Voronin, "A new approach to BRST operator cohomologies: Exact results for the BRST-fock theories", Theoret. and Math. Phys., 93:2 (1992), 1318–1327 (cited: 1) (cited: 2) 7. A. V. Voronin, S. S. Horuzhy, "True BRST symmetry algebra and the theory of its representations", Theoret. and Math. Phys., 91:1 (1992), 327–335 8. A. V. Voronin, S. S. Horuzhy, "$\mathrm{Op}^*$ and $\mathrm{C}^*$ dynamical systems. II. Structural differences: Borchers anomaly", Theoret. and Math. Phys., 82:3 (1990), 225–230 9. A. V. Voronin, S. S. Horuzhy, "$\mathrm{Op}^*$ and $\mathrm C^*$ dynamical systems I. Structural parallels", Theoret. and Math. Phys., 82:2 (1990), 113–123 10. S. S. Horuzhy, A. V. Voronin, "Remarks on mathematical structure of BRST theories", Comm. Math. Phys., 123:4 (1989), 677–685 (cited: 8) (cited: 8) 11. S. S. Horuzhy, A. V. Voronin, "Field algebras do not leave field domains invariant", Comm. Math. Phys., 102:4 (1986), 687–692 (cited: 2) (cited: 7) 12. A. V. Voronin, "Discrete vacuum superselection rule in Wightman theory with essentially self-adjoint field operators", Theoret. and Math. Phys., 66:1 (1986), 8–19 13. A. V. Voronin, V. A. Shestakov, "Subspaces in $\phi(X_0,X_1)$", Problems in functional analysis, Petrozavodsk. Gos. 
Univ., Petrozavodsk, 1985, 16–24 (Russian) 14. S. S. Horuzhy, V. N. Sushko, A. V. Voronin, "$\mathrm{Op}^*$-algebras and vacuum superselection rules", Proceedings of the second international conference on operator algebras, ideals, and their applications in theoretical physics (Leipzig, 1983), Teubner-Texte Math., 67, Teubner, Leipzig, 1984, 96–102 15. A. V. Voronin, V. N. Sushko, S. S. Horuzhy, "Algebras of unbounded operators and vacuum superselection rules in quantum field theory II. Mathematical structure of vacuum superselection rules", Theoret. and Math. Phys., 60:3 (1984), 849–862 (cited: 2) (cited: 1) 16. A. V. Voronin, V. N. Sushko, S. S. Horuzhy, "Algebras of unbounded operators and vacuum superselection rules in quantum field theory. I. Some properties of Op*-algebras and vector states on them", Theoret. and Math. Phys., 59:1 (1984), 335–350 (cited: 3) (cited: 2) Steklov Mathematical Institute of Russian Academy of Sciences, Moscow V. A. Steklov Mathematical Institute, USSR Academy of Sciences
CommonCrawl
Spectrally resolved linewidth enhancement factor of a semiconductor frequency comb
Nikola Opačak,1,4 Florian Pilat,1 Dmitry Kazakov,1,2 Sandro Dal Cin,1 Georg Ramer,3 Bernhard Lendl,3 Federico Capasso,2 and Benedikt Schwarz1,2,*
1Institute of Solid State Electronics, TU Wien, Gusshausstrasse 25-25a, 1040 Vienna, Austria
2John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA
3Institute of Chemical Technologies and Analytics, TU Wien, Getreidemarkt 9/164, A 1060 Vienna, Austria
4e-mail: [email protected]
*Corresponding author: [email protected]
Nikola Opačak https://orcid.org/0000-0002-8262-1980
Dmitry Kazakov https://orcid.org/0000-0001-8769-0478
Benedikt Schwarz https://orcid.org/0000-0002-9513-2019
Citation: Nikola Opačak, Florian Pilat, Dmitry Kazakov, Sandro Dal Cin, Georg Ramer, Bernhard Lendl, Federico Capasso, and Benedikt Schwarz, "Spectrally resolved linewidth enhancement factor of a semiconductor frequency comb," Optica 8, 1227-1230 (2021). https://doi.org/10.1364/OPTICA.428096
Original Manuscript: April 15, 2021; Revised Manuscript: June 23, 2021; Manuscript Accepted: August 9, 2021
The linewidth enhancement factor (LEF) has recently moved into the spotlight of research on frequency comb generation in semiconductor lasers. Here we present a novel modulation experiment that enables direct measurement of the spectrally resolved LEF in a laser frequency comb. By utilizing a phase-sensitive technique, we are able to extract the LEF for each individual comb mode in any laser type. We first investigate and verify this universally applicable technique using Maxwell–Bloch simulations. Following, we present the experimental demonstration on a quantum cascade laser frequency comb, confirming the predicted key role of the LEF in frequency comb dynamics.
Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
Semiconductor lasers are compact and electrically pumped and provide substantial broadband gain. They are recently gaining vast attention due to a wide range of applications that utilize their coherence properties, such as high-precision spectroscopy [1]. Their asymmetric gain spectrum additionally sets them apart from other laser types, where the lasing transition takes place between two discrete levels. Following the Kramers–Kronig relations, an asymmetric gain shape results in a dispersion curve of the refractive index that has a non-zero value at the gain peak [2]. As a consequence, a remarkable property of semiconductor lasers is that both the refractive index and the optical gain change simultaneously with the varying carrier population [3].
This property was quantified with the linewidth enhancement factor (LEF), also called the $\alpha$-factor, defined by Henry as the ratio of changes of the modal index and gain [4]. Many unique properties of semiconductor lasers can be traced back to the non-zero value of this factor at the gain peak. The LEF was first introduced in the 1980s to describe the broadening of the semiconductor laser linewidth [4,5] beyond the Schawlow–Townes limit [6]. Furthermore, the LEF determines the dynamics of semiconductor lasers, as it describes the coupling between the amplitude and phase of the optical field [7,8]. In lasers with fast gain recovery times, the LEF was recently connected to the onset of a giant Kerr nonlinearity [9] and frequency modulated combs [10]. It was shown that the light amplitude–phase coupling, quantified by the LEF, can lead to a low-threshold multimode instability and frequency comb formation [11,12]. Appropriate values of the LEF were predicted to result in the emission of solitons in active media [13]. Precise knowledge of the LEF represents a key point in understanding many astonishing features of semiconductor lasers and subsequently controlling them. The physical origin of the non-zero LEF at the gain peak is explained through the asymmetric gain spectrum of semiconductor lasers. In interband lasers, gain asymmetry is due to the opposite curvatures of the valence and conduction bands in k-space [7], which yield LEF values ${\sim}{{2 - 7}}$ [3]. In intersubband lasers, such as quantum cascade lasers (QCLs), where the states have similar curvatures, the gain asymmetry originates from the subband non-parabolicity [14], counterrotating terms [15], and Bloch gain [9]. In QCLs, measured values of the LEF range from −0.5 to 2.5 [11,16–19]. Fig. 1. Intensity modulation (IM) and frequency modulation (FM) of a single-mode laser and the LEF extraction. (a) Modulation sidebands and their beating signals ${B_ \pm}$ with the center mode in the case of a pure IM or FM (top). The beatings are represented in the complex plane (red). A mixture of both IM and FM is analyzed for two values of the IM-FM phase shift $\theta$. (b) Simulated time traces of the laser intensity and instantaneous frequency. Analytical curves given by Eq. (1) (red dashed-dotted lines) are fitted to the numeric time traces (blue solid lines), obtained from simulation [10]. The frequency is modulated around the lasing frequency $f$, and the time is normalized to the modulation period ${T_{{\rm{mod}}}}$, with ${f_{{\rm{mod}}}} = 3 {\rm{GHz}}$. The phase shift between the frequency and intensity modulations is $\theta \approx \pi /2$. (c) Dependence of the calculated LEF on the lasing frequency of a simulated single-mode laser. Simulations are conducted for three values of the LEF at the gain peak ${\alpha _{{\rm{peak}}}}$. The lasing frequency is swept in discrete steps around the gain peak frequency ${f_{{\rm{peak}}}}$. The values extracted by using Eq. (4) are represented with symbols. The analytic model of the LEF, given by Eq. (S14) in Supplement 1, is plotted with solid lines. The two methods are in agreement. Download Full Size | PPT Slide | PDF An established technique for extracting subthreshold values of the LEF is the Hakki–Paoli method by measuring the peak gain and wavelength shift [20,21]. Above threshold values can be inferred from the measurements of the linewidth broadening compared to the Schawlow–Townes limit [4] and the phase noise [22]. 
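For orientation, Henry's definition can be written out explicitly. The LaTeX snippet below states the standard textbook form, which is an addition for reference and is not reproduced from the present paper; sign conventions differ between references.

```latex
% Henry's alpha-factor: n = refractive index, g = modal gain,
% N = carrier density, \lambda = vacuum wavelength.
\alpha_{\mathrm{LEF}} = -\frac{4\pi}{\lambda}\,
  \frac{\partial n / \partial N}{\partial g / \partial N}
```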
Other methods are based on analysis of the locking regimes induced by optical injection from a master laser [23], or on the optical feedback and characterization of the self-mixing signal [24,25]. Harder et al. provided a study of a single-mode laser's response under modulation of the injection current and were able to extract the LEF value [26]. In a modified experiment, heterodyne detection of a modulated single-mode QCL signal allowed a direct measurement of the LEF [16]. All of the mentioned techniques have one substantial limitation: they do not resolve the spectral dependence of the LEF. Most methods rely on single-mode operation, which is achieved either in Fabry–Pérot lasers slightly above lasing threshold or by using distributed feedback (DFB) lasers. In DFBs, the LEF is not measured at the exact position of the gain peak, since the lasing frequency is detuned. In a semiconductor optical amplifier, the LEF was measured spectrally resolved using a tunable single-mode laser [27]. However, measuring the LEF in an amplifier does not consider the impact of gain saturation, which has been shown to affect the LEF in an operating laser [9]. In this work, we introduce a novel measurement technique that enables the direct spectrally resolved measurement of the LEF of a semiconductor laser operating in the frequency comb regime. It extends the modulation experiment used by Harder et al. [26] and enables extraction of the LEF for each individual comb mode from a single measurement. Each comb mode, together with its neighboring sidebands, created by RF modulation of the laser bias current, produces two beatings. The knowledge of both the amplitude and phase of these beatings enables the extraction of the LEF for every mode across the entire comb spectrum. This is made possible by shifted-wave interference Fourier transform spectroscopy (SWIFTS) [28,29], as it allows to measure the spectrally resolved amplitudes and phases of all beatings in a single-shot measurement. We first lay down the theoretical foundations of our method. Subsequently, we compare the obtained analytic values of the LEF with those extracted from numerical simulations of a single-mode laser and a laser frequency comb. This is followed by an experimental demonstration, where the method is employed on a QCL frequency comb. In the presence of a non-zero value of the LEF, a sinusoidal modulation of the laser bias current results in both intensity modulation (IM) and optical frequency modulation (FM) of the laser output [7,26]. The optical field $E(t)$ of a modulated single-mode laser is given by (1)$$\begin{split}E(t)& = \sqrt {{I_0}}{\sqrt {1 + m\cos (2\pi\! {f_{{\rm{mod}}}}t + \phi)}}\\&\quad \times \cos (2\pi\! ft + \beta \sin (2\pi\! {f_{{\rm{mod}}}}t + \phi + \theta)),\end{split}$$ where ${I_0}$ is the average intensity, $f$ is the lasing frequency, ${f_{{\rm{mod}}}}$ is the modulation frequency, and $m = \Delta I/{I_0}$ and $\beta = \Delta f/{f_{{\rm{mod}}}}$ are the IM and FM indices, respectively, where $\Delta I$ and $\Delta f$ are the amplitudes of the modulation-induced variations of the intensity and optical frequency [30]. We include an additional arbitrary phase shift $\phi$ with respect to the current modulation and an FM-IM phase shift $\theta$. The modulation of the instantaneous optical frequency is given by ${f_i} = f + \beta {f_{{\rm{mod}}}}\cos (2\pi {f_{{\rm{mod}}}}t + \phi + \theta)$. Using the Jacobi–Anger expansion [31], we write Eq. 
(1) as a Fourier series (Supplement 1): (2)$$E(t) = \sqrt {{I_0}} \sum\limits_{n = - \infty}^{+ \infty} {E_n}\exp (2i\pi (f + n{f_{{\rm{mod}}}})t).$$ Under the assumption of weak modulation strength ($m,\beta \ll 1$), the complex beating signals ${B_ \pm}$ between the central mode ${E_0}$ and its first modulation sidebands ${E_ \pm}$ can be written as (3)$$\begin{split}&{{B_ +}}={ {E_ +}E_0^* = {{\rm{e}}^{i(\phi + \theta)}}\left(\frac{\beta}{2} + \frac{m}{4}{{\rm{e}}^{- i\theta}}\right)},\\&{{B_ -}}={ {E_0}E_ - ^* = {{\rm{e}}^{i(\phi + \theta)}}\left(- \frac{\beta}{2} + \frac{m}{4}{{\rm{e}}^{- i\theta}}\right).}\end{split}$$ The extraction of the modulation indices $m$ and $\beta$ is possible from Eq. (3) only if both the amplitudes and phases of ${B_ \pm}$ are known. Furthermore, this is also valid in the case of a multimode laser, where each mode $k$ produces beatings ${B_{k, \pm}}$ with its neighboring modulation sidebands. With knowledge of the modulation indices, the spectral LEF for each mode $k$ can be calculated directly [3,16,18,26], and the determination of its sign is explained in Supplement 1: (4)$${{\rm{LEF}}_k} = 2\frac{{{\beta _k}}}{{{m_k}}} = \left|\frac{{{B_{k, +}} - {B_{k, -}}}}{{{B_{k, +}} + {B_{k, -}}}}\right|.$$ The emission spectrum of a modulated single-mode laser is sketched in Fig. 1(a) (top) in the cases when only IM or FM is present. The corresponding beating signals ${B_ \pm}$ between the central mode and the sidebands are plotted below in the complex plane. They are in-phase with each other in the case of a pure IM, and anti-phase ($\pi$ phase-shifted) for a pure FM. In a modulated semiconductor laser, a mixture of IM and FM is always present, due to the coupling between the gain and the refractive index. The beating signals in this case are sketched in Fig. 1(a) for two exemplary values of the FM-IM phase shift $\theta = 0$ and $\pi /3$, which in a real experimental setting is unknown a priori. Fig. 2. LEF of a simulated multimode semiconductor laser frequency comb. The normalized intensity spectrum is depicted on the top. The spectrally resolved LEF is shown on the bottom. Red dots represent the LEF for each lasing mode, calculated using Eq. (4). The solid blue line depicts the LEF from the analytic model [Eq. (S14) in Supplement 1]. Fig. 3. Experimental data of a SWIFTS measurement with LEF evaluation. (a) Sketch of the experimental setup. FTIR, Fourier transform infrared spectrometer; BS, beam splitter; QCL, quantum cascade laser; DC, bias voltage; RF, RF generator; LO, local oscillator; QWIP, quantum well infrared photodetector; LNA, low-noise amplifier. (b) Measured intensity spectrum of the QCL and extracted LEF values for each mode (red dots) with a fit using Eq. (S14) in Supplement 1 (blue dashed line). (c) Intensity (top), beating intensity (middle), and beating phase spectrum as obtained from the SWIFTS measurement for the highlighted gray area in (b). In Fig. 1(b), we show the instantaneous intensity and frequency of a simulated semiconductor single-mode laser biased above threshold with a small superimposed sinusoidal modulation. We obtain them from a numerical spatiotemporal model of the laser based on the Maxwell–Bloch formalism [10,11], which additionally includes the LEF. The analytical model given by Eq. (1) is fitted to the numerical time traces of both the instantaneous intensity and frequency. The IM-FM phase shift $\theta$ approaches $\pi /2$ [Fig. 1(b)]. 
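As a quick numerical check of Eq. (4), one can synthesize the modulated field of Eq. (1), pick out the carrier and its first sidebands with an FFT, and form the two beatings. The Python sketch below uses illustrative parameter values rather than those of the experiment; for weak modulation the recovered value should approach 2β/m.

```python
import numpy as np

# Illustrative parameters (placeholders, not values from the experiment)
fs, T = 2000.0, 1.0                      # sampling rate [Hz] and record length [s]
t = np.arange(0, T, 1 / fs)
f, fmod = 400.0, 20.0                    # carrier and modulation frequency [Hz]
m, beta = 0.05, 0.04                     # IM and FM indices (weak modulation)
phi, theta = 0.3, 1.2                    # arbitrary phase and IM-FM phase shift

# Modulated field of Eq. (1)
E = np.sqrt(1 + m * np.cos(2 * np.pi * fmod * t + phi)) * \
    np.cos(2 * np.pi * f * t + beta * np.sin(2 * np.pi * fmod * t + phi + theta))

# Complex amplitudes of the carrier and its first sidebands
# (the common FFT scale factor cancels in the ratio of Eq. (4))
spec = np.fft.rfft(E)
freqs = np.fft.rfftfreq(len(E), 1 / fs)
amp = lambda nu: spec[np.argmin(np.abs(freqs - nu))]
E0, Ep, Em = amp(f), amp(f + fmod), amp(f - fmod)

# Beatings with the carrier and LEF via Eq. (4)
Bp, Bm = Ep * np.conj(E0), E0 * np.conj(Em)
lef = np.abs((Bp - Bm) / (Bp + Bm))
print(f"extracted LEF = {lef:.2f}, expected 2*beta/m = {2 * beta / m:.2f}")
```

In the experiment the same quantities are obtained mode by mode from the SWIFTS beating spectra rather than from a synthesized waveform.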
However, its value cannot be a priori assumed without measurement, since effects such as thermal and adiabatic chirp can have a large impact [18,32]. Therefore, our measurement technique removes this uncertainty by not depending on the value of $\theta$ (Eq. (4)). In Fig. 1(c), we show the extracted LEF of a simulated single-mode laser, whose lasing frequency was tuned in discrete steps around the gain peak frequency ${f_{{\rm{peak}}}}$. We extracted the LEF using Eq. (4). The simulations are conducted for three different values of the LEF at gain peaks ${\alpha _{{\rm{peak}}}} = 0.5$, 2 and 4, represented by green triangles, red squares, and blue circles, respectively. The extracted LEF values from the numerical model are in excellent agreement with the LEF obtained from the analytical model (solid lines) of the optical susceptibility [33,34] [Eqs. (S12) and (S14) in Supplement 1]. Now, we extend this technique to a laser frequency comb. The modulation of the laser current induces modulation sidebands around each comb mode. By finding the beatings ${B_{k, \pm}}$ of each mode with its sidebands, we can obtain the LEF of each comb mode. Figure 2 shows an intensity spectrum of a simulated laser in a multimode comb regime. The extracted spectrally resolved LEF is plotted with red dots below, together with the LEF from an analytical model of the laser gain medium [Eq. (S14) in Supplement 1], represented with the blue solid line. We attribute the slight deviations to coherent mechanisms that couple the frequency comb modes. Experimentally, the amplitudes and phases of the beatings ${B_{k, \pm}}$ can be measured most elegantly using SWIFTS [Fig. 3(a)]. SWIFTS employs a Fourier-transform infrared (FTIR) spectrometer to spectrally resolve all individual beatings. We measure the modulation of the laser output waveform using a fast photodetector and a lock-in amplifier to obtain the SWIFTS interferogram (Fig. 1 in Supplement 1). The beatings are then obtained using the Fourier transformation. A detailed derivation of the SWIFTS spectrum can be found in Supplement 1. We extended our existing SWIFTS setup [35] with a custom-built high-resolution FTIR spectrometer (${\sim}500 \;{\rm{MHz}}$) to resolve the narrowly spaced beatings. The setup consists of a Newport Optical Delay Line Kit (using DL325), a broadband mid-infrared beam splitter, a temperature stabilized He–Ne laser, and a Zurich Instruments HF2LI lock-in amplifier for acquisition of the intensity and SWIFTS interferograms of the measured laser, as well as the interferogram of the He–Ne laser. A more detailed explanation of the experiment can be found in Supplement 1. The QCL that we used in this measurement is a 3.5 mm long ridge laser emitting at around 8 µm and optimized for RF injection [36]. The laser was operated in a free-running frequency comb state with a repetition frequency of 12.2 GHz. The frequency of the weak modulation was chosen to be sufficiently lower at 9.593 GHz, and a demodulation frequency of 9.570 GHz was set on the local oscillator. For light detection, we used an RF-optimized quantum well infrared photodetector (QWIP) cooled to 78 K. The intensity spectrum [Fig. 3(b)] shows a frequency comb spanning over 75 modes. A zoom of the gray-shaded area is shown in Fig. 3(c). Although the weak modulation sidebands are not visible in the intensity spectrum (top), the amplitudes and phases of the beatings (middle and bottom, respectively) can be obtained with a high signal-to-noise ratio by using SWIFTS [Fig. 3(c)]. Using Eq. 
(4), we extract the LEF for each individual comb mode [Fig. 3(b), bottom]. The spectrally resolved LEF follows the expected shape [see Figs. 1(c) and 2]. Using Eq. (S14) to fit the extracted LEF, we can also infer the laser gain width above threshold. The extracted spectral LEF values match the prediction from a recent theoretical work [9], which pinpointed its origin in QCLs to the Bloch gain. The measured LEF furthermore supports its predicted key role in frequency modulated combs [10,37], as well as in ring QCLs emitting localized structures akin to dissipative Kerr solitons [11–13]. In conclusion, we present a novel technique to directly measure the spectrally resolved LEF of a running semiconductor laser frequency comb. The measurement concept is first verified using elaborate numerical simulations of a modulated semiconductor laser. There, an excellent agreement is observed with the expected spectral LEF from the analytical model. The experimental demonstration was performed on a QCL frequency comb, while the technique itself is universal. It will allow to extract the spectral LEF in frequency combs based on any type of a semiconductor laser. The LEF governs many coherent processes in a running semiconductor laser, including frequency comb operation. Its precise knowledge will provide a better fundamental understanding of light evolution, which will promote further technological advancements. European Union's Horizon 2020 research and innovation programme (871529); Austrian Science Fund (P28914); European Research Council (853014). Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Supplemental document See Supplement 1 for supporting content. 1. T. W. Hänsch, Rev. Mod. Phys. 78, 1297 (2006). [CrossRef] 2. A. Yariv, Quantum Electronics, 3rd ed. (Wiley, 1989). 3. M. Osinski and J. Buus, IEEE J. Quantum Electron. 23, 9 (1987). [CrossRef] 4. C. Henry, IEEE J. Quantum Electron. 18, 259 (1982). [CrossRef] 5. K. Vahala and A. Yariv, IEEE J. Quantum Electron. 19, 1096 (1983). [CrossRef] 6. A. L. Schawlow and C. H. Townes, Phys. Rev. 112, 1940 (1958). [CrossRef] 7. G. P. Agrawal and N. K. Dutta, Semiconductor Lasers (Springer, 1995). 8. G. Gray and G. Agrawal, IEEE Photon. Technol. Lett. 4, 1216 (1992). [CrossRef] 9. N. Opačak, S. Dal Cin, J. Hillbrand, and B. Schwarz, Phys. Rev. Lett. 127, 093902 (2021). [CrossRef] 10. N. Opačak and B. Schwarz, Phys. Rev. Lett. 123, 243902 (2019). [CrossRef] 11. M. Piccardo, B. Schwarz, D. Kazakov, M. Beiser, N. Opačak, Y. Wang, S. Jha, J. Hillbrand, M. Tamagnone, W. T. Chen, A. Y. Zhu, L. L. Columbo, A. Belyanin, and F. Capasso, Nature 582, 360 (2020). [CrossRef] 12. B. Meng, M. Singleton, M. Shahmohammadi, F. Kapsalidis, R. Wang, M. Beck, and J. Faist, Optica 7, 162 (2020). [CrossRef] 13. L. Columbo, M. Piccardo, F. Prati, L. Lugiato, M. Brambilla, A. Gatti, C. Silvestri, M. Gioannini, N. Opačak, B. Schwarz, and F. Capasso, Phys. Rev. Lett. 126, 173903 (2021). [CrossRef] 14. T. Liu, K. E. Lee, and Q. J. Wang, Opt. Express 21, 27804 (2013). [CrossRef] 15. M. F. Pereira, Appl. Phys. Lett. 109, 222102 (2016). [CrossRef] 16. T. Aellen, R. Maulini, R. Terazzi, N. Hoyler, M. Giovannini, J. Faist, S. Blaser, and L. Hvozdara, Appl. Phys. Lett. 89, 091121 (2006). [CrossRef] 17. L. Jumpertz, F. Michel, R. Pawlus, W. Elsässer, K. Schires, M. Carras, and F. Grillot, AIP Adv. 6, 015212 (2016). [CrossRef] 18. A. Hangauer and G. Wysocki, IEEE J. Sel. Top. 
Quantum Electron. 21, 74 (2015). [CrossRef] 19. J. von Staden, T. Gensty, W. Elsäßer, G. Giuliani, and C. Mann, Opt. Lett. 31, 2574 (2006). [CrossRef] 20. B. W. Hakki and T. L. Paoli, J. Appl. Phys. 46, 1299 (1975). [CrossRef] 21. I. Henning and J. Collins, Electron. Lett. 19, 927 (1983). [CrossRef] 22. C. Henry, IEEE J. Quantum Electron. 19, 1391 (1983). [CrossRef] 23. G. Liu, X. Jin, and S. Chuang, IEEE Photon. Technol. Lett. 13, 430 (2001). [CrossRef] 24. Y. Yu, G. Giuliani, and S. Donati, IEEE Photon. Technol. Lett. 16, 990 (2004). [CrossRef] 25. Y. Fan, Y. Yu, J. Xi, G. Rajan, Q. Guo, and J. Tong, Appl. Opt. 54, 10295 (2015). [CrossRef] 26. C. Harder, K. Vahala, and A. Yariv, Appl. Phys. Lett. 42, 328 (1983). [CrossRef] 27. N. Storkfelt, B. Mikkelsen, D. Olesen, M. Yamaguchi, and K. Stubkjaer, IEEE Photon. Technol. Lett. 3, 632 (1991). [CrossRef] 28. D. Burghoff, T.-Y. Kao, N. Han, C. W. I. Chan, X. Cai, Y. Yang, D. J. Hayton, J.-R. Gao, J. L. Reno, and Q. Hu, Nat. Photonics 8, 462 (2014). [CrossRef] 29. D. Burghoff, Y. Yang, D. J. Hayton, J.-R. Gao, J. L. Reno, and Q. Hu, Opt. Express 23, 1190 (2015). [CrossRef] 30. X. Zhu and D. T. Cassidy, J. Opt. Soc. Am. B 14, 1945 (1997). [CrossRef] 31. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables (Courier, 1965). 32. A. Hangauer, G. Spinner, M. Nikodem, and G. Wysocki, Opt. Express 22, 23439 (2014). [CrossRef] 33. F. Prati and L. Columbo, Phys. Rev. A 75, 053811 (2007). [CrossRef] 34. L. L. Columbo, S. Barbieri, C. Sirtori, and M. Brambilla, Opt. Express 26, 2829 (2018). [CrossRef] 35. J. Hillbrand, A. M. Andrews, H. Detz, G. Strasser, and B. Schwarz, Nat. Photonics 13, 101 (2018). [CrossRef] 36. M. R. St-Jean, M. I. Amanti, A. Bernard, A. Calvar, A. Bismuto, E. Gini, M. Beck, J. Faist, H. C. Liu, and C. Sirtori, Laser Photon. Rev. 8, 443 (2014). [CrossRef] 37. M. Dong, N. M. Mangan, J. N. Kutz, S. T. Cundiff, and H. G. Winful, IEEE J. Quantum Electron. 53, 1 (2017). [CrossRef]
Deep neural network approach to forward-inverse problems
Hyeontae Jo 1, Hwijae Son 1, Hyung Ju Hwang 1 and Eun Heui Kim 2
1 Department of Mathematics, Pohang University of Science and Technology, South Korea
2 Department of Mathematics and Statistics, California State University Long Beach, US
* Corresponding author: Hyung Ju Hwang
Networks & Heterogeneous Media, June 2020, 15(2): 247-259. doi: 10.3934/nhm.2020011
Received January 2020, Revised April 2020, Published April 2020
In this paper, we construct approximated solutions of Differential Equations (DEs) using the Deep Neural Network (DNN). Furthermore, we present an architecture that includes the process of finding model parameters through experimental data, the inverse problem. That is, we provide a unified framework of DNN architecture that approximates an analytic solution and its model parameters simultaneously. The architecture consists of a feed forward DNN with non-linear activation functions depending on DEs, automatic differentiation [2], reduction of order, and gradient based optimization method. We also prove theoretically that the proposed DNN solution converges to an analytic solution in a suitable function space for fundamental DEs. Finally, we perform numerical experiments to validate the robustness of our simplistic DNN architecture for 1D transport equation, 2D heat equation, 2D wave equation, and the Lotka-Volterra system.
Keywords: Differential equation, approximated solution, inverse problem, artificial neural networks, deep learning.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Hyeontae Jo, Hwijae Son, Hyung Ju Hwang, Eun Heui Kim. Deep neural network approach to forward-inverse problems. Networks & Heterogeneous Media, 2020, 15 (2): 247-259. doi: 10.3934/nhm.2020011
[1] W. Arloff, K. R. B. Schmitt and L. J. Venstrom, A parameter estimation method for stiff ordinary differential equations using particle swarm optimisation, Int. J. Comput. Sci. Math., 9 (2018), 419-432. doi: 10.1504/IJCSM.2018.095506.
[2] A. G. Baydin, B. A. Pearlmutter, A. A. Radul and J. M. Siskind, Automatic differentiation in machine learning: A survey, J. Mach. Learn. Res., 18 (2017), 43pp.
[3] J. Berg and K. Nyström, Neural network augmented inverse problems for PDEs, preprint, arXiv: 1712.09685.
[4] J. Berg and K. Nyström, A unified deep artificial neural network approach to partial differential equations in complex geometries, Neurocomputing, 317 (2018), 28-41. doi: 10.1016/j.neucom.2018.06.056.
[5] G. Chavet, Nonlinear Least Squares for Inverse Problems. Theoretical Foundations and Step-By-Step Guide for Applications, Scientific Computation, Springer, New York, 2009. doi: 10.1007/978-90-481-2785-6.
[6] N. E. Cotter, The Stone-Weierstrass theorem and its application to neural networks, IEEE Trans. Neural Networks, 1 (1990), 290-295. doi: 10.1109/72.80265.
[7] R. Courant, K. Friedrichs and H. Lewy, On the partial difference equations of mathematical physics, IBM J. Res. Develop., 11 (1967), 215-234. doi: 10.1147/rd.112.0215.
[8] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems, 2 (1989), 303-314. doi: 10.1007/BF02551274.
[9] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, 19, American Mathematical Society, Providence, RI, 2010. doi: 10.1090/gsm/019.
[10] G. E. Fasshauer, Solving partial differential equations by collocation with radial basis functions, Proceedings of Chamonix, 1997 (1996), 1-8.
[11] K. Hornik, M. Stinchcombe and H. White, Multilayer feedforward networks are universal approximators, Neural Networks, 2 (1989), 359-366. doi: 10.1016/0893-6080(89)90020-8.
[12] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980.
[13] I. E. Lagaris, A. Likas and D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Trans. Neural Networks, 9 (1998), 987-1000. doi: 10.1109/72.712178.
[14] I. E. Lagaris, A. C. Likas and D. G. Papageorgiou, Neural-network methods for boundary value problems with irregular boundaries, IEEE Trans. Neural Networks, 11 (2000), 1041-1049. doi: 10.1109/72.870037.
[15] K. Levenberg, A method for the solution of certain non-linear problems in least squares, Quart. Appl. Math., 2 (1944), 164-168. doi: 10.1090/qam/10666.
[16] L. Jianyu, L. Siwei, Q. Yingjian and H. Yaping, Numerical solution of elliptic partial differential equation using radial basis function neural networks, Neural Networks, 16 (2003), 729-734. doi: 10.1016/S0893-6080(03)00083-2.
[17] J. Li and X. Li, Particle swarm optimization iterative identification algorithm and gradient iterative identification algorithm for Wiener systems with colored noise, Complexity, 2018 (2018), 8pp. doi: 10.1155/2018/7353171.
[18] X. Li, Simultaneous approximations of multivariate functions and their derivatives by neural networks with one hidden layer, Neurocomputing, 12 (1996), 327-343. doi: 10.1016/0925-2312(95)00070-4.
[19] D. W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Indust. Appl. Math., 11 (1963), 431-441. doi: 10.1137/0111030.
[20] W. S. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biophys., 5 (1943), 115-133. doi: 10.1007/BF02478259.
[21] A. Paszke, et al., Automatic differentiation in PyTorch, Computer Science, (2017).
[22] M. Raissi, P. Perdikaris and G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys., 378 (2019), 686-707. doi: 10.1016/j.jcp.2018.10.045.
[23] S. J. Reddi, S. Kale and S. Kumar, On the convergence of ADAM and beyond, preprint, arXiv: 1904.09237.
[24] S. A. Sarra, Adaptive radial basis function methods for time dependent partial differential equations, Appl. Numer. Math., 54 (2005), 79-94. doi: 10.1016/j.apnum.2004.07.004.
[25] P. Tsilifis, I. Bilionis, I. Katsounaros and N. Zabaras, Computationally efficient variational approximations for Bayesian inverse problems, J. Verif. Valid. Uncert., 1 (2016), 13pp. doi: 10.1115/1.4034102.
[26] F. Yaman, V. G. Yakhno and R. Potthast, A survey on inverse problems for applied sciences, Math. Probl. Eng., 2013 (2013), 19pp. doi: 10.1155/2013/976837.
Figure 1. Network architecture
Figure 2. Experimental result for 1D transport equation
Figure 3. Experimental result for 2D heat equation with $ u(0,x,y) = x(1-x)y(1-y) $
Figure 4. Experimental result for 2D heat equation with $ u(0,x,y) = 1 $ if $ (x,y) \in \Omega $, $ 0 $ otherwise
Figure 5. Experimental result for 2D wave equation
Figure 6. Experimental result for Lotka-Volterra equation
Figure 7. Experimental result for CFL condition
Algorithm 1: Training
1: procedure train(number of epochs)
2: Initialize the neural network.
3: for number of epochs do
4: sample $ z^1, z^2,..., z^m $ from uniform distribution over $ \Omega $
5: sample $ z_I^1, z_I^2,..., z_I^m $ from uniform distribution over $ \{0\} \times \Omega $
6: sample $ z_B^1, z_B^2,..., z_B^m $ from uniform distribution over $ \partial\Omega $
7: sample k observation points $ z_O^1, z_O^2,..., z_O^k $
8: find the true value $ u_j = u_p(z_O^j) $ for $ j = 1,2,...,k $
9: update the neural network by descending its stochastic gradient:
$ \nabla_{w, b} \Big[ \frac{1}{m} \sum_{i = 1}^m \big[ L_p(u_N)(z^i)^2 + (u_N(z_I^i)-f(z_I^i))^2 + (u_N(z_B^i)-g(z_B^i))^2 \big] + \frac{1}{k}\sum_{j = 1}^k (u_N(z_O^j)-u_j)^2 \Big] $
10: end for
11: end procedure
Table 1. Information of grid and observation points
1D Transport: grid range $ (t,x) \in [0,1]\times[0,1] $; number of grid points $ 17 \times 100 $; number of observations 17
2D Heat: grid range $ (t,x,y) \in [0,1]\times[0,1]\times[0,1] $; number of grid points $ 100 \times 100 \times 100 $; number of observations 13
2D Wave: grid range $ (t,x,y) \in [0,1]\times[0,1]\times[0,1] $; number of grid points $ 100 \times 100 \times 100 $; number of observations 61
Lotka-Volterra: grid range $ t \in [0,100] $; number of grid points 20,000; number of observations 40
Table 2. Neural network architecture
1D Transport: fully connected layers 2(input)-128-256-128-1(output); activation functions ReLU; learning rate $ 10^{-5} $
2D Heat: fully connected layers 3(input)-128-128-1(output); activation functions Sin, Sigmoid; learning rate $ 10^{-5} $
2D Wave: fully connected layers 3(input)-128-256-128-1(output); activation functions Sin, Tanh; learning rate $ 10^{-5} $
Lotka-Volterra: fully connected layers 1(input)-64-64-2(output); activation functions Sin; learning rate $ 10^{-4} $
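The loss in step 9 of Algorithm 1 can be made concrete with a short script. The sketch below is not the authors' code: it is a minimal PyTorch-style rendering of the training loop for the 1D transport equation $ u_t + a u_x = 0 $, in which the transport speed $ a $ is a trainable parameter so that the solution and the model parameter are learned together (the forward-inverse setting). The class and function names (TransportPINN, pde_residual) and the synthetic observation data are illustrative assumptions, and the boundary term of the loss is omitted for brevity.

```python
import math
import torch
import torch.nn as nn

class TransportPINN(nn.Module):
    def __init__(self):
        super().__init__()
        # 2(input)-128-256-128-1(output) with ReLU, as listed in Table 2
        self.net = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 1))
        # unknown model parameter (inverse problem): the transport speed a
        self.a = nn.Parameter(torch.tensor(0.5))

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=1))

def pde_residual(model, t, x):
    # L_p(u_N): residual of u_t + a*u_x, computed by automatic differentiation
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = model(t, x)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    return u_t + model.a * u_x

torch.manual_seed(0)
m, k, a_true = 256, 17, 1.0
f = lambda x: torch.sin(2 * math.pi * x)            # initial condition u(0, x)
t_obs, x_obs = torch.rand(k, 1), torch.rand(k, 1)   # k observation points z_O
u_obs = f(x_obs - a_true * t_obs)                   # synthetic "experimental" data

model = TransportPINN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
for epoch in range(10000):
    t_c, x_c = torch.rand(m, 1), torch.rand(m, 1)   # interior collocation points z
    x_i = torch.rand(m, 1)                          # initial-condition points z_I (t = 0)
    loss = (pde_residual(model, t_c, x_c) ** 2).mean() \
         + ((model(torch.zeros(m, 1), x_i) - f(x_i)) ** 2).mean() \
         + ((model(t_obs, x_obs) - u_obs) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, the learned parameter model.a should move toward the true transport speed as training proceeds, which mirrors how the paper recovers model parameters from observation data.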
Optimization of the process parameters for reduction of gossypol levels in cottonseed meal by functional recombinant NADPH-cytochrome P450 reductase and cytochrome P450 CYP9A12 of Helicoverpa armigera
Cheng Chen†1, Yan Zhang†2, Wenhui Pi3, Wenting Yang1, Cunxi Nie1, 4, Jing Liang1, Xi Ma1, 4 and Wen-ju Zhang1
AMB Express 2019, 9:98
Gossypol is a toxic polyphenolic product that is derived from cotton plants. The toxicity of gossypol has limited the utilization of cottonseed meal (CSM) in the feed industry. The gene Helicoverpa armigera CYP9A12 is a gossypol-inducible cytochrome P450 gene. The objective of our study was to obtain the functional recombinant H. armigera CYP9A12 enzyme in Pichia pastoris and to verify whether this candidate enzyme could decrease gossypol in vitro. Free and total gossypol contents were detected in the enzyme solution and in CSM. The H. armigera CYP9A12 enzyme decreased the free gossypol concentration. After optimization by the single-factor test and the response surface method, the free gossypol content could be decreased to 40.91 mg/kg in CSM by the H. armigera CYP9A12 enzyme when the initial temperature was 35 °C, the enzymatic hydrolysis time lasted 2.5 h, the enzyme addition was 2.5 mL, and the substrate moisture was 39%.
Helicoverpa armigera CYP9A12
The presence of toxic free gossypol in cottonseed meal (CSM) has greatly limited its efficient use in animal feed (Matlin and Zhou 1984; Matlin et al. 1988; Yildirimaksoy et al. 2004). The toxicity of gossypol results from its two active aldehyde groups, which also have applications in pharmaceutical and therapeutic methods for diseases such as cancer (Dodou 2005). Our previous studies demonstrated that microbial detoxification of CSM can effectively eliminate its toxic effect (Zhang et al. 2006a, b), but the production efficiency was low because of the long fermentation period. The hypothesis was that gossypol-degrading enzymes could play a key role during gossypol detoxification. However, studies on gossypol-degrading enzymes and their functional genes are rare. As a result, the target enzyme protein could not be obtained from a heterologous expression system based on the functional gene of such a microbial enzyme. The gossypol-inducible H. armigera P450 monooxygenase genes CYP6AE14 and CYP9A12 (Mao et al. 2007; Celorio-Mancera et al. 2011; Zhou et al. 2010) are possibly involved in the resistance to and metabolism of gossypol (Jia et al. 2008; Kong et al. 2010; Krempl et al. 2016a, b). H. armigera CYP9A12 and CYP9A14 expressed in yeast can detoxify xenobiotics (Yang et al. 2008), but their metabolism of gossypol was not studied. The growth and development of H. armigera larvae were retarded after they were fed CYP6AE14 dsRNA transgenic plants (Mao et al. 2007, 2011). NADPH-cytochrome P450 reductase (CPR) is essential for helping cytochrome P450 monooxygenases detoxify substrates and xenobiotics (Guengerich et al. 2009), because the coexpression of house fly NADPH P450 reductase with H. armigera CYP6AE14 (Tao et al. 2012) and CYP6AE14 microsomes (Krempl et al. 2016a) resulted in epoxidation activity towards aldrin. In our previous study, functionally coexpressed H. armigera CYP9A12 and its CPR were obtained in the Pichia pastoris system.
The H. armigera CYP9A12 microsomal protein could significantly decrease the gossypol concentration in enzyme solution and accelerate oxidization of the free gossypol intermediate metabolites G1 (m/z 265) and G2 (m/z 293) to the final products G0 (m/z 209) and G0′ (m/z 249) (Chen et al. 2019). In this study, the effect of the H. armigera CYP9A12 microsomes on gossypol detoxification in vitro and in the CSM was validated. Optimization by the single-factor test and response surface methodology was used to determine the optimal conditions for gossypol enzymatic detoxification in CSM, so that the enzyme can be utilized in the feed industry in the future.
Construction and expression of the recombinant plasmids H. armigera CPR-2A-α-factor signal-CYP9A12
The H. armigera CPR (Accession Number: HM347785.1) and CYP9A12 (Accession Number: AY371318.1) were amplified from H. armigera midgut cDNA (Chen et al. 2019) with specific primers and then cloned into the pGEM-T easy vector (Promega, Madison, Wisconsin, USA). The 2A sequence from the foot-and-mouth disease virus (FMDV) (Donnelly et al. 2001) was inserted between the CPR and the α-factor secretion signal sequences so that each functional protein is obtained independently. The CPR, 2A-α-factor signal, and CYP9A12 fragments were combined with the linearized pPICZαA vector using Gibson assembly, as described in detail in our previous study (Chen et al. 2019). The constructed plasmid was linearized and subsequently transformed into P. pastoris GS115 competent cells by electroporation. The transformants were selected on Zeocin antibiotic plates. Buffered glycerol-complex medium (BMGY) and buffered methanol-complex medium (BMMY) were prepared and used for yeast growth and induction, respectively. The microsomes were isolated by differential centrifugation of the cell homogenate. The separated H. armigera monooxygenase and reductase were detected by SDS-PAGE, and the H. armigera CYP9A12 proteins were specifically detected with an anti-His-tag mouse polyclonal antibody (CST, USA) and an eECL Western blotting kit (CW Biotech, Beijing, China) (Chen et al. 2019).
Validation of the H. armigera CYP9A12 for the detoxification of gossypol from CSM
A total of 2.5 mL of H. armigera CYP9A12 enzyme (equivalent to an activity of 575 U/mg) was added to 50 g of CSM at an initial temperature of 30 °C and a substrate moisture of 50%, and the mixture was incubated for 4 h. The conditions of the Candida tropicalis group were fermentation time 48 h, fermentation temperature 30 °C, inoculum volume 5% (v/w), substrate moisture content 50%, natural pH, and three replicates per treatment (Zhang et al. 2006a, b). The control group (without any enzyme) was inoculated with the same volume of distilled water. After the reaction was finished, the samples were dried in a vacuum freezer and crushed through 60-mesh screens for detection (Firestone 2003). CSM or treated CSM (500 mg) was accurately weighed and ultrasonically extracted with 70% acetone and with butanone, respectively, at room temperature for 1 h. The total and free gossypol contents in the CSM were determined using HPLC as previously described (Chen et al. 2018).
Optimization of the H. armigera CYP9A12 enzymatic hydrolytic conditions for the CSM
Single-factor test
The detoxification conditions for the enzymatic hydrolysis of CSM primarily include temperature, time, the amount of enzyme added, and substrate moisture. Use of the proper enzymatic hydrolysis conditions could not only improve the quality of CSM products and save resources but also increase production efficiency.
The single-factor design was used to optimize the conditions of the H. armigera CYP9A12 enzyme in CSM enzymatic detoxification. The level conditions were based on the single-factor test. CSM (50 g) was weighed into numbered conical flasks. Sterilized water was added according to the material-to-water ratio (35%, 40%, 45% and 50%) (Table 1). The enzymatic hydrolysis temperature and time were set at the different levels in the table. The H. armigera CYP9A12 enzyme was added at 0.5 mL, 1.5 mL, 2.5 mL and 3.5 mL, corresponding to 1%, 3%, 5% and 7% of the CSM weight, respectively. The H. armigera CYP9A12 enzyme activities were 115 U/mg, 345 U/mg, and 575 U/mg, respectively. The concentration and activity of the crude enzyme protein were 1.05 mg/mL and 230 U/mg, respectively (Bradford 1976). The enzymatically hydrolyzed CSM was sampled after the reaction and dried in a vacuum freezer. Free and total gossypol contents were determined by HPLC after crushing, sieving through 60-mesh screens, and ultrasonic extraction with acetone and butanone. The same treatment was repeated in three replicates.
Table 1. Levels of the single-factor test: initial temperature (°C), enzymolysis time (h), enzyme amount (mL), and substrate moisture (%)
Box–Behnken experimental design
The Box–Behnken experimental design is based on the mathematical model:
$$ Y = \beta_{0} + \sum \beta_{i} x_{i} + \sum \beta_{ij} x_{i} x_{j} + \sum \beta_{ii} x_{i}^{2} $$
Y is the response value (gossypol content); β0 is a constant term; βi, βij and βii are regression coefficients; and xi and xj are coded variables (temperature, time, enzyme addition and substrate moisture) (Ferreira et al. 2007). The content of free gossypol in the detoxified CSM was used as the evaluation index in the Box–Behnken design. Each factor in the Box–Behnken design was coded at three levels, −1, 0, and +1 (Table 2). Quadratic regression fitting was conducted to obtain the quadratic equation with interaction terms and square terms using the corresponding codes. The primary effects and interaction effects of each factor were analyzed. Finally, the optimal value was obtained within the tested levels. One-way ANOVA of the multiple variables was carried out with SPSS 17.0.
Table 2. Box–Behnken experimental factors and coding levels (each factor coded at −1, 0 and +1; factors: initial temperature, enzymolysis time, enzyme additive amount (mL), and substrate moisture)
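To make the fitting step concrete, the following is a minimal sketch rather than the authors' workflow (which used Design-Expert and SPSS): it builds the 29-run, four-factor Box–Behnken design in coded units, simulates responses standing in for the measured free gossypol values of Table 4 (using the regression coefficients reported in the Results plus noise), and re-fits the full quadratic model by ordinary least squares. The variable names and the noise level are illustrative assumptions.

```python
from itertools import combinations
import numpy as np

def quadratic_features(X):
    # columns: 1, A, B, C, D, AB, AC, AD, BC, BD, CD, A^2, B^2, C^2, D^2
    A, B, C, D = X.T
    return np.column_stack([np.ones(len(X)), A, B, C, D,
                            A*B, A*C, A*D, B*C, B*D, C*D,
                            A**2, B**2, C**2, D**2])

runs = []
for i, j in combinations(range(4), 2):      # +/-1 factorial for every factor pair
    for a in (-1, 1):
        for b in (-1, 1):
            r = [0, 0, 0, 0]
            r[i], r[j] = a, b
            runs.append(r)
runs += [[0, 0, 0, 0]] * 5                  # five replicated centre points
design = np.array(runs, dtype=float)        # 29 x 4 coded design matrix

coef = np.array([37.74, -3.03, -5.40, -0.27, -0.83, 1.54, 0.73, 2.10,
                 -1.91, 8.50, 2.19, 7.32, 5.24, 8.41, 10.29])
rng = np.random.default_rng(0)
X = quadratic_features(design)
y = X @ coef + rng.normal(0.0, 2.0, len(design))   # synthetic responses (mg/kg)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # b0, linear, interaction and square terms
print(np.round(beta, 2))
```

The recovered coefficients should be close to the ones used for the simulation; on the real data of Table 4, this same least-squares step yields the regression equation reported in the Results below.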
Detoxification effect of the H. armigera CYP9A12 enzyme in enzyme solution
The H. armigera CYP9A12 was obtained as described in our previous study (Chen et al. 2019). To validate the detoxification effect of H. armigera CYP9A12 in the enzyme reaction solution, the free and total gossypol contents were determined in the control group (without enzyme), the endogenous group, and the H. armigera CYP9A12 enzyme group. The results are shown in Table 3. The total gossypol was significantly decreased by 22.5% and 14.8% in comparison with the control group, respectively. The free gossypol content in solution was decreased by 2.5% and 2.6% after the addition of the H. armigera CYP9A12 enzyme and the endogenous enzyme, respectively.
Table 3. Effect of different treatments on gossypol content in cottonseed meal: total gossypol (TG), detoxification rate (%), and free gossypol (FG)
Enzyme reaction solution: control 34.25a ± 3.35 (TG), 27.53 ± 2.80 (FG); endogenous group 29.16b ± 2.37; H. armigera CYP9A12
Enzymatic hydrolysis of cottonseed meal: 147.21a ± 0.88, 125.71c ± 12.6, 99.92a ± 6.7; Candida tropicalis 53.28b ± 3.65, 42.32c ± 5.52
Because gossypol is unstable and easily oxidized, the P450 enzyme co-factor NADPH-Na4 was added to initiate the reaction and stabilize the gossypol. The ability of the H. armigera CYP9A12 enzyme to degrade gossypol was defined as the amount of gossypol, in micromoles, degraded per minute per 1 mg of enzyme at 30 °C (pH 6). A unit of enzyme activity was expressed as U/mg. According to this definition, the concentration of H. armigera CYP9A12 enzyme protein in the supernatant of the cells disrupted after induction with methanol for 72 h was 1.05 mg/mL, and the enzyme activity was 230 U/mg.
Detoxification effect of H. armigera CYP9A12 enzyme on CSM
To validate the detoxification effect of the H. armigera CYP9A12 enzyme in CSM, the free and total gossypol contents were determined in the control, endogenous, H. armigera CYP9A12, and C. tropicalis groups. The results are shown in Table 3. The free gossypol content in the CSM decreased by 52.7% and 63.8% after the addition of the H. armigera CYP9A12 enzyme and the C. tropicalis yeast (p < 0.05), respectively. Total gossypol significantly decreased by 69.7% and 61.6% for H. armigera CYP9A12 and C. tropicalis, respectively, in comparison to the control group (p < 0.05).
Single-factor test results of CSM enzymatic hydrolysis and detoxification by the H. armigera CYP9A12 enzyme
To optimize the conditions of the H. armigera CYP9A12 enzyme on CSM, four different factors (initial temperature, enzymatic hydrolysis time, enzyme content, and substrate moisture) were examined, with each factor plotted on the abscissa and the free gossypol content on the ordinate for result analysis. Across the enzymatic hydrolysis temperatures, the highest gossypol concentration was observed at 40 °C and the lowest at 35 °C. The gossypol concentration was highest when the hydrolysis time lasted 12 h, while the lowest was observed when the reaction time was 2 h. The highest and lowest contents of free gossypol were observed when the amount of enzyme was 0.5 and 2.5 mL, respectively. When the substrate moisture was 45%, the gossypol content was the highest; when the substrate moisture was 50%, the gossypol concentration decreased to its lowest value. In the single-factor analysis, the optimal condition for CSM gossypol degradation by the H. armigera CYP9A12 enzyme was achieved with 2.5 mL of enzyme, enzymatic hydrolysis for 2 h at 35 °C, and a substrate moisture of 40% (Fig. 1).
Figure 1. The single-factor test of the effect of the H. armigera CYP9A12 enzyme on gossypol in cottonseed meal. The four factors are initial temperature, enzymatic hydrolysis time, enzyme content, and substrate moisture; each factor is plotted on the abscissa and the free gossypol content on the ordinate. The optimal condition was achieved with 2.5 mL of enzyme, enzymatic hydrolysis for 2 h at 35 °C, and a substrate moisture of 40%.
Box–Behnken test results of CSM enzymatic hydrolysis and detoxification by the H. armigera CYP9A12 enzyme
Model establishment and significance test analysis
Various factors do not exist independently in the process of enzymatic hydrolysis because they interact with each other.
The response surface method was used to optimize the H. armigera CYP9A12 enzyme application conditions. The Box–Behnken test was designed using Design-Expert software to quantify the effects of the key factors (initial temperature, enzymatic hydrolysis time, enzyme content, and substrate moisture) and the interactions among them. The free gossypol content was chosen as the response value Y for this test. The independent variables, codes, and levels are shown in Table 2. The results obtained from the 29 test points are shown in Table 4.
Table 4. Box–Behnken design and gossypol content Y (mg/kg). The table gives the experimental results of the Box–Behnken design and the corresponding schemes; the gossypol content is the response value Y, and the independent variables A, B, C and D correspond to initial temperature, enzymatic hydrolysis time, enzyme content, and substrate moisture.
The twenty-nine test points were divided into two categories. One category consisted of 24 factorial points, corresponding to combinations of the independent variables A, B, C, and D at the vertices of the three-dimensional design region. The others were zero points at the center of the region, repeated five times to estimate the test error. The results show the gossypol content of the CSM under the Box–Behnken experimental design (Table 4). The minimum and maximum gossypol contents were 36.54 and 73.18 mg/kg, respectively. The overall average value was 50.67 mg/kg; the overall standard deviation (SD) was 7.27, and the coefficient of variation (CV) was 7.35%, meeting the requirements of the Box–Behnken analysis. The results were analyzed using multivariable regression fitting to obtain the quadratic polynomial regression equation of Y on the coded independent variables A, B, C and D:
$$ Y = 37.74 - 3.03A - 5.40B - 0.27C - 0.83D + 1.54AB + 0.73AC + 2.10AD - 1.91BC + 8.50BD + 2.19CD + 7.32A^{2} + 5.24B^{2} + 8.41C^{2} + 10.29D^{2} $$
The results showed that the experimental data in Table 4 were statistically significant (p = 0.0411, R² = 0.7238), and the significant factor was the enzymatic hydrolysis time (B, p = 0.0221).
Response surface analysis of the effect of the H. armigera CYP9A12 enzyme on gossypol in CSM
To obtain the range of responses for the four factors studied, two of the variables were fixed at the central value, and the effects of the other two variables on the gossypol content in CSM were analyzed and evaluated based on the response surface diagram and contour plot of the multivariate quadratic equation. The shape of the contours reflects the intensity of the interaction effect: a circle indicates that the interaction between the two factors is not significant, whereas an ellipse indicates that the interaction is significant. The response surface graphs and contour plots showed that factor B (enzymatic hydrolysis time) had a significant effect on gossypol content (p = 0.02) and that the interaction between factors B and D was also significant (p = 0.034) (Fig. 2). The interactions among the other factors were not significant. The predicted minimum gossypol content was 36.73 mg/kg under the following conditions: initial temperature 35.44 °C, enzymatic hydrolysis time 2.5 h, enzyme addition 2.55 mL, and substrate moisture 39.23%.
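As an illustration of how the fitted model can be used, the short sketch below (not from the paper) evaluates the coded-variable regression equation and searches for its minimum over the coded cube [−1, 1]^4 with SciPy. Mapping the coded optimum back to natural units would require the coding levels of Table 2, which are only partially reproduced above, so that decoding step is omitted; the function name predicted_gossypol is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_gossypol(v):
    # fitted quadratic response-surface model in coded variables A, B, C, D
    A, B, C, D = v
    return (37.74 - 3.03*A - 5.40*B - 0.27*C - 0.83*D
            + 1.54*A*B + 0.73*A*C + 2.10*A*D - 1.91*B*C + 8.50*B*D + 2.19*C*D
            + 7.32*A**2 + 5.24*B**2 + 8.41*C**2 + 10.29*D**2)

# numerical search for the factor combination minimizing predicted free gossypol
result = minimize(predicted_gossypol, x0=np.zeros(4), bounds=[(-1.0, 1.0)] * 4)
print("coded optimum (A, B, C, D):", np.round(result.x, 2))
print("predicted minimum free gossypol (mg/kg):", round(float(result.fun), 2))
```

Decoding the coded optimum with the Table 2 levels would give operating conditions of the kind reported above.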
The predicted value was higher than the actual minimum value in the Box–Behnken experiment.
Figure 2. Contour plots and response surface diagram of the effect of reaction time (B) and cottonseed meal moisture (D) on gossypol content. The shape of the contours reflects the intensity of the interaction effect; the ellipse indicates that the interaction between reaction time (B) and cottonseed meal moisture (D) on gossypol content is significant.
To verify the accuracy of the model, three experiments were conducted using the optimal conditions. The minimum gossypol contents were 42.34 mg/kg, 40.72 mg/kg, and 39.68 mg/kg, with an average of 40.91 mg/kg. The results showed that the model could predict the optimal conditions for the H. armigera CYP9A12 enzymatic detoxification. After the optimization by the single-factor test and the Box–Behnken experimental design, the free gossypol content of the CSM decreased by 88.4%, from 352.94 to 40.91 mg/kg.
Gossypol is a natural phenolic compound that is derived from cotton plants. The toxicity of gossypol results from its two active aldehyde groups, which are toxic to most organisms (Nomeir and Abou-Donia 1985; Eisele 1986; Brocas et al. 1997; Chenoweth et al. 2000). The H. armigera CYP6 and CYP9 family genes CYP321A1, CYP9A12, CYP9A14, CYP6AE11, CYP6B6 and CYP6B7 are gossypol-inducible, which probably explains the gossypol resistance of H. armigera larvae (Yang et al. 1999; Celorio-Mancera et al. 2011; Mao et al. 2007, 2011; Tao et al. 2012; Zhou et al. 2010; Tian et al. 2017) or Helicoverpa zea larvae (Stipanovic et al. 2006). Insect P450 enzymes are usually nonfunctional when isolated from insect tissue (Andersen et al. 1994), so their effects on xenobiotics are usually studied after expression in E. coli (Andersen et al. 1994; Guzov et al. 1998; Hayashi et al. 2003; Kaewpa et al. 2007; Liu et al. 2012), yeast (Pompon et al. 1996; Dietrich et al. 2005; Mirzaei et al. 2010) or insect Sf9 cells (Krempl et al. 2016a, b). H. armigera CYP6AE14 expressed alone in Sf9 cells had no gossypol metabolic activity (Krempl et al. 2016a). However, the CYP6AE14 microsomes (Krempl et al. 2016a) and the coexpression of the NADPH P450 reductase from houseflies with H. armigera CYP6AE14 (Tao et al. 2012) had epoxidation activity towards aldrin. Presumably, the assistance of NADPH-cytochrome P450 reductase (CPR) in donating electrons was required. H. armigera CYP9A12 and CYP9A14 enzymes obtained from yeast have the ability to detoxify xenobiotics (Yang et al. 2008), but there have been no further metabolic studies related to gossypol. After the H. armigera CYP9A12 or endogenous enzyme treatment, free and total gossypol were extracted with 70% aqueous acetone and with butanone, respectively, and prepared for HPLC analysis. NADPH-Na4 was added to initiate the P450 enzyme reaction (Yang et al. 2004). The total gossypol content was equal to the bound gossypol plus the free gossypol; bound gossypol is formed by free gossypol covalently binding to proteins in the reaction. Therefore, the endogenous group was included to remove this error. Total gossypol content decreased significantly from 34.25 to 29.16 and 26.53 μg/mL in the endogenous group and the H. armigera CYP9A12 group, respectively. In addition, the free gossypol concentration decreased significantly from 27.53 to 26.81 and 20.59 μg/mL in the endogenous group and the H. armigera CYP9A12 group, respectively (Table 3).
This value refers to the free gossypol that was metabolized by the H. armigera CYP9A12 enzyme rather than bound to the proteins. Gossypol was metabolized to gossypolone, gossypolonic acid, and demethylated gossic acid in pig livers (Abou-Donia and Dieckert 1975). The suggested oxidation pathway of gossypol (m/z 517) was first the formation of gossypolone (m/z 545) and subsequently gossypolonic acid (m/z 577), which was cleaved and oxidized to demethylated gossic acid (m/z 265) (Abou-Donia and Dieckert 1974; Liu et al. 2014). Based on our previous study, gossypol with an ion mass of 517.1910 would spontaneously degrade to the gossypol metabolites G1 and G2, with ion masses of 265.0411 and 293.1123. In addition, the two gossypol metabolites G1 and G2 would be degraded by the H. armigera CYP9A12 enzyme into G0 and G0′, with ion masses of 209.0833 and 248.9578, respectively (Chen et al. 2019). Thus, the accumulation of the products G0′ and G0 was proposed to indirectly accelerate the metabolism of gossypol by the recombinant H. armigera CYP9A12 enzyme (Fig. 3).
Figure 3. Extracted LC–MS ion chromatograms of the gossypol metabolites in negative ion mode and representative mass spectra of the respective chromatograms for quantification of gossypol and its metabolites. a The control group without enzyme (blue): the free gossypol spontaneously degraded to compounds G1 (m/z 265) and G2 (m/z 293); b the recombinant H. armigera CPR and CYP9A12 enzyme (black) was capable of degrading free gossypol by decarboxylation of G1 (m/z 265) and G2 (m/z 293) to compounds G0 (m/z 209) and G0′ (m/z 249). The endogenous enzyme is shown in blue (Chen et al. 2019).
Currently, the effective method of gossypol detoxification in CSM is microbial solid-state fermentation (Zhong and Wu 1989; Zhang et al. 2006a, b). However, the addition of corn flour, bran and other auxiliary materials and the high-pressure sterilization during the drying process would also result in a reduction of the free gossypol in solid-state processed CSM. The traditional solid-state fermentation of CSM usually lasts for at least 2 days (Zhang et al. 2006b). It is difficult and costly to avoid bacterial contamination caused by the long fermentation time. If the recombinant H. armigera CYP9A12 enzyme could be used in CSM, it could rapidly degrade gossypol and greatly save time. To validate the detoxification effect of the H. armigera CYP9A12 enzyme in CSM, free and total gossypol contents were determined in the control group, the endogenous group, the H. armigera CYP9A12 enzyme group, and the C. tropicalis group. Before the experiment, an ultraviolet lamp was used to sterilize the CSM instead of autoclaving it. In addition, the experimental CSM was dried by vacuum freeze drying to avoid a thermal influence on the gossypol content in CSM. The decrease of free gossypol in the CSM resulted from degradation by microbial fermentation rather than from binding with proteins to form bound gossypol (Wei et al. 2011). The free and total gossypol contents in the CSM significantly decreased after the H. armigera CYP9A12 reaction and the C. tropicalis fermentation (Table 3). The optimal conditions of the four factors (temperature, time, enzyme addition and substrate moisture) were determined using the single-factor (Fig. 1) and response surface methods, respectively. Because gossypol is thermally stable only below 40 °C, the temperature range for the H. armigera CYP9A12 enzymatic detoxification of CSM was selected as 25 to 40 °C. However, in this experiment, the gossypol content in the CSM after processing was the highest when the temperature was 40 °C.
We hypothesized that if the CSM in the enzymatic hydrolysis test was not sterilized by autoclaving, the growth of residual bacteria might lead to an increase in free gossypol content. We also found that the free gossypol content in the CSM was higher when the enzymatic hydrolysis temperature was 40 °C or the enzymatic hydrolysis time had lasted 12 h. A temperature of 37 °C and a growth incubation time of 12 h are optimal conditions for Bacillus subtilis and E. coli growth, respectively. The optimal degradation time of the H. armigera CYP9A12 enzyme in vitro was 30 min. Considering the existence of free gossypol in the CSM, the H. armigera CYP9A12 enzyme required full contact with the gossypol to degrade it. Therefore, the enzymatic hydrolysis time was set to 1, 2, 4, 6, and 12 h. The optimal enzymatic hydrolysis time for the H. armigera CYP9A12 enzyme in CSM was 2 h. The coenzyme NADPH-Na4 was added to ensure the H. armigera CYP9A12 enzyme functioned. The free gossypol concentration in the CSM did not change significantly when the H. armigera CYP9A12 enzyme was added between 0.5 mL and 2.5 mL. The optimal CYP9A12 enzyme addition was determined to be 2.5 mL. When the substrate moisture was 50%, the gossypol level was the lowest, which could effectively increase the contact area between the H. armigera CYP9A12 enzyme and the gossypol in the CSM. Low moisture content does not help to decrease the gossypol, and excessive moisture would lead to additional drying and cost. In the Box–Behnken response surface analysis, the p values of the primary and quadratic terms were less than 0.05, indicating that the primary, quadratic and interaction terms of the model equation were significant. Additionally, the CV value of this experimental design was 7.35%, which reflects the confidence in the model and the model's ability to accurately reflect real test data. Based on the response surface diagram and the contour plot of the multivariate quadratic equation, factor B (enzymatic hydrolysis time) had a significant effect on gossypol content (p = 0.02), and the interaction between factors B and D (substrate moisture) (p = 0.034) also influenced gossypol content (Fig. 2). After the predictive model and validation tests, the gossypol content could reach a minimum of 40.91 mg/kg in CSM when the initial temperature was 35 °C, the enzymatic hydrolysis time lasted 2.5 h, the enzyme addition was 2.5 mL, and the substrate moisture was 39%. At present, the detoxification effect of recombinant CYP450s on cottonseed meal is still limited by the cost of the co-factor NADPH and by the enzyme yield. If the enzyme could be expressed at higher levels in a heterologous system and produced at industrial scale, it could greatly increase the rate of gossypol degradation and provide a new strategy for cottonseed meal detoxification.
The recombinant H. armigera CYP9A12 and its reductase were successfully expressed in the P. pastoris system. The CYP9A12 was able to accelerate metabolism of the gossypol intermediate metabolites. Treatment of CSM with the H. armigera CYP9A12 enzyme significantly degraded free and total gossypol. After optimization by the single-factor test and response surface method, the free gossypol content could decrease to 40.91 mg/kg in the CSM when the initial temperature was 35 °C, the enzymatic hydrolysis time lasted 2.5 h, the enzyme addition was 2.5 mL, and the substrate moisture was 39%.
Cheng Chen and Yan Zhang contributed equally to this work G0′ to G2: gossypol metabolite 0′ to gossypol metabolite 2 CPR: NADPH-cytochrome P450-reductase CYP9A12: cytochrome P450-monooxygenase CYP9A12 CSM: cottonseed meal H. armigera : P. pastoris : Pichia pastoris GA: Gibson assembly The authors would like to thank professor Lee J. Johnston from University of Minnesota for reviewing an early draft of this article. This study was funded by National Natural Science Foundation of China (Grant no. 31860660). WZ and WP designed research; CC, YZ and WY performed research; JL, CN and XM discussed results and provided advice; CC and WY analyzed data; and CC and YZ wrote the paper. All authors read and approved the final manuscript. This article does not contain any studies with human participants or animals performed by any of the authors. College of Animal Science and Technology, Shihezi University, Shihezi, 832000, Xinjiang, China School of Chemistry and Chemical Engineering, Shihezi University, Shihezi, 832000, Xinjiang, China State Key Laboratory for Sheep Genetic Improvement and Healthy Production, Xinjiang Academy of Agricultural and Reclamation Sciences, Shihezi, 832000, Xinjiang, China State Key Laboratory of Animal Nutrition, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China Abou-donia MB, Dieckert JW (1974) Urinary and biliary excretion of 14C-gossypol in swine. J Nutr 104:754–760. http://jn.nutrition.org/content/104/6/754.long View ArticleGoogle Scholar Abou-Donia MB, Dieckert JW (1975) Metabolic fate of gossypol: the metabolism of 14C-gossypol in swine. Toxicol Appl Pharm 5:32–46View ArticleGoogle Scholar Andersen JF, Utermohlen JG, Feyereisen R (1994) Expression of house fly CYP6A1 and NADPH-cytochrome P450 reductase in Escherichia coli and reconstitution of an insecticide-metabolizing P450 system. Biochemistry 33:2171–2177View ArticleGoogle Scholar Bradford MMA (1976) A rapid and sensitive method for the quantitation on microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem 72:248–254View ArticleGoogle Scholar Brocas C, Rivera RM, Paulalopes FF, Mcdowell LR, Calhoun MC, Staples CR, Wilkinson NS, Boning AJ, Chenoweth PJ, Hansen PJ (1997) Deleterious actions of gossypol on bovine spermatozoa, oocytes, and embryos. Biol Reprod 57:901View ArticleGoogle Scholar Celorio-Mancera ML, Ahn SJ, Vogel H, Heckel DG (2011) Transcriptional responses underlying the hormetic and detrimental effects of the plant secondary metabolite gossypol on the generalist herbivore Helicoverpa armigera. BMC Genomics 12:575. https://doi.org/10.1186/1471-2164-12-575 View ArticlePubMed CentralGoogle Scholar Chen C, Nie CX, Liang J, Wang YQ, Liu YF, Ge WX, Zhang WJ (2018) Validated method to determine (±)-gossypol in Candida tropicalis culture by high-performance liquid chromatography. Acta Chromatogr 30(4):269–273. https://doi.org/10.1556/1326.2018.00420 View ArticleGoogle Scholar Chen C, Pi WH, Zhang Y, Nie CX, Liang J, Ma X, Wang YQ, Ge WX, Zhang WJ (2019) Effect of a functional recombinant cytochrome P450 enzyme of Helicoverpa armigera on gossypol metabolism co-expressed with NADPH-cytochrome P450 reductase in Pichia pastoris. Pestic Biochem Phys. https://doi.org/10.1016/j.pestbp.2019.01.003 View ArticleGoogle Scholar Chenoweth PJ, Chase CC Jr, Risco CA, Larsen RE (2000) Characterization of gossypol-induced sperm abnormalities in bulls. Theriogenology 53:1193–1203. 
https://doi.org/10.1016/S0093-691X(00)00264-8 View ArticlePubMedGoogle Scholar Dietrich M, Grundmann L, Kurr K, Valinotto L (2005) Recombinant production of human microsomal cytochrome P450 2D6 in the methylotrophic yeast Pichia pastoris. ChemBioChem 6:2014. https://doi.org/10.1002/cbic.200500200 View ArticlePubMedGoogle Scholar Dodou K (2005) Investigations on gossypol: past and present developments. Expert Opin Invest Drugs 14:1419–1434. https://doi.org/10.1517/13543784.14.11.1419 View ArticleGoogle Scholar Donnelly ML, Hughes LE, Luke G, Mendoza H, Ten DE, Gani D, Ryan MD (2001) The 'cleavage' activities of foot-and-mouth disease virus 2A site-directed mutants and naturally occurring '2A-like' sequences. J Gen Virol 82:1027–1041. https://doi.org/10.1099/0022-1317-82-5-1027 View ArticlePubMedGoogle Scholar Eisele GR (1986) A perspective on gossypol ingestion in swine. Vet Hum Toxicol 28:118PubMedGoogle Scholar Ferreira SLC, Bruns RE, Ferreira HS, Matos GD, David JM, Brandão GC, Silva EGP, DaPortugal LA, Reis PS, DosSouza AS (2007) Box–Behnken design: an alternative for the optimization of analytical methods. Anal Chim Acta 597(2):179–186. https://doi.org/10.1016/j.aca.2007.07.011 View ArticlePubMedGoogle Scholar Firestone D (2003) Official methods and recommended practices of the AOCS, 5th edn. Champaign, AOCSGoogle Scholar Guengerich FP, Martin MV, Sohl CD, Cheng Q (2009) Measurement of cytochrome P450 and NADPH-cytochrome P450 reductase. Nat Protoc 4(9):1245–1251. https://doi.org/10.1038/nprot.2009.121 View ArticlePubMedGoogle Scholar Guzov VM, Unnithan GC, Chernogolov AA, Feyereisen R (1998) CYP12A1, a mitochondrial cytochrome P450 from the house fly. Arch Biochem Biophys 359:231–240. https://doi.org/10.1006/abbi.1998.0901 View ArticlePubMedGoogle Scholar Hayashi S, Omata Y, Sakamoto H, Hara T, Noguchi M (2003) Purification and characterization of a soluble form of rat liver NADPH-cytochrome P450 reductase highly expressed in Escherichia coli. Protein Expr Purif 29:1View ArticleGoogle Scholar Jia L, Coward LC, Kerstnerwood CD, Cork RL, Gorman GS, Noker PE, Kitada S, Pellecchia M, Reed JC (2008) Comparison of pharmacokinetic and metabolic profiling among gossypol, apogossypol and apogossypol hexaacetate. Cancer Chemother Pharmacol 61:63–73View ArticleGoogle Scholar Kaewpa D, Boonsuepsakul S, Rongnoparut P (2007) Functional expression of mosquito NADPH-cytochrome P450 reductase in Escherichia coli. J Econ Entomol 100:946–953View ArticleGoogle Scholar Kong G, Daud MK, Zhu S (2010) Effects of pigment glands and gossypol on growth, development and insecticide-resistance of cotton bollworm (Heliothis armigera (Hübner)). Crop Prot 29:813–819View ArticleGoogle Scholar Krempl C, Heidel-Fischer HM, Jiménez-Alemán GH, Reichelt M, Menezes RC, Boland W, Vogel H, Heckel DG, Joußen N (2016a) Gossypol toxicity and detoxification in Helicoverpa armigera and Heliothis virescens. Insect Biochem Mol Biol 78:69–77. https://doi.org/10.1016/j.ibmb.2016.09.003 View ArticlePubMedGoogle Scholar Krempl C, Sporer T, Reichelt M, Ahn S-J, Heidel-Fischer H, Vogel H, Heckel DG, Joußen N (2016b) Potential detoxification of gossypol by UDP-glycosyltransferases in the two Heliothine moth species Helicoverpa armigera and Heliothis virescens. Insect Biochem Mol Biol 71:49–57. https://doi.org/10.1016/j.ibmb.2016.02.005 View ArticlePubMedGoogle Scholar Liu X, Zhang L, Zhang X, Xi WG (2012) Molecular cloning and recombinant expression of cytochrome P450 CYP6B6 from Helicoverpa armigera in Escherichia coli. 
Mol Biol Rep 40:1211–1217. https://doi.org/10.1007/S11033-012-2163-1 View ArticlePubMedGoogle Scholar Liu H, Sun H, Lu D, Zhang Y, Zhang X, Ma Z, Wu B (2014) Identification of glucuronidation and biliary excretion as the main mechanisms for gossypol clearance: in vivo and in vitro evidence. Xenobiotica 44:696–707. https://doi.org/10.3109/00498254.2014.891780 View ArticlePubMedGoogle Scholar Mao YB, Cai WJ, Wang JW, Hong GJ, Tao XY, Wang LJ, Huang YP, Chen XY (2007) Silencing a cotton bollworm P450 monooxygenase gene by plant-mediated RNAi impairs larval tolerance of gossypol. Nat Biotechnol 25:1307–1313. https://doi.org/10.1038/nbt1352 View ArticlePubMedGoogle Scholar Mao YB, Tao XY, Xue XY, Wang LJ, Chen XY (2011) Cotton plants expressing CYP6AE14 double-stranded RNA show enhanced resistance to bollworms. Transgenic Res 20:665. https://doi.org/10.1007/s11248-010-9450-1 View ArticlePubMedGoogle Scholar Matlin SA, Zhou R (1984) Resolution of gossypol: analytical and preparative HPLC. J Sep Sci 7:629–631Google Scholar Matlin SA, Zhou RH, Belenguer A, Tyson RG, Brookes AN (1988) Large-scale resolution of gossypol enantiomers for biological evaluation. Contraception 37:229View ArticleGoogle Scholar Mirzaei SA, Yazdi MT, Sepehrizadeh Z (2010) Secretory expression and purification of a soluble NADH cytochrome b5 reductase enzyme from Mucor racemosus in Pichia pastoris based on codon usage adaptation. Biotechnol Lett 32:1705–1711. https://doi.org/10.1007/s10529-010-0348-z View ArticlePubMedGoogle Scholar Nomeir AA, Abou-Donia MB (1985) Toxicological effects of gossypol. In: Lobl TJ, Hafez ESE (eds) Male fertility and its regulation. Advances in reproductive health care, vol 5. Springer, DordrechtGoogle Scholar Pompon D, Louerat B, Bronine A, Urban P (1996) Yeast expression of animal and plant P450s in optimized redox environments. Method Enzymol 272:51–64View ArticleGoogle Scholar Stipanovic RD, Lopez JD, Dowd MK, Puckhaber LS, Duke SE (2006) Effect of racemic and (+)- and (−)-gossypol on the survival and development of Helicoverpa zea larvae. J Chem Ecol 32:959–968. https://doi.org/10.1603/0046-225X(2008)37%5b1081:EORAGO%5d2.0.CO;2 View ArticlePubMedGoogle Scholar Tao X, Xue X, Huang Y, Chen X, Mao YB (2012) Gossypol-enhanced P450 gene pool contributes to cotton bollworm tolerance to a pyrethroid insecticide. Mol Ecol 21:4371–4385. https://doi.org/10.1111/j.1365-294X.2012.05548.x View ArticlePubMedGoogle Scholar Tian K, Liu D, Yuan Y, Li M, Qiu X (2017) CYP6B6 is involved in esfenvalerate detoxification in the polyphagous lepidopteran pest, Helicoverpa armigera. Pestic Biochem Phys. https://doi.org/10.1016/j.pestbp.2017.02.006 View ArticleGoogle Scholar Wei E, Zhang W, Liu D, Yu L, Shi G (2011) Effect of mixed culture fermentation on the contents of free and bound gossypol in cottonseed cake. Feed Ind 32(20):34–37 (In Chinese) Google Scholar Yang WH, Li HM, Zhu HQ (1999) Effects of different enantiomers of gossypol on the growth and development of cotton bollworm and Fusarium wilt. Acta Gossypii Sinica 11(1):34–37 (In Chinese) Google Scholar Yang Y, Wu Y, Chen S, Devine GJ, Denholm I, Jewess P, Moores GD (2004) The involvement of microsomal oxidases in pyrethroid resistance in Helicoverpa armigera from Asia. Insect Biochem Mol Biol 34:763–773. https://doi.org/10.1016/j.ibmb.2004.04.001 View ArticlePubMedGoogle Scholar Yang Y, Yue L, Chen S, Wu Y (2008) Functional expression of Helicoverpa armigera CYP9A12 and CYP9A14 in Saccharomyces cerevisiae. Pestic Biochem Phys 92(2):101–105. 
https://doi.org/10.1016/j.pestbp.2008.07.001 View ArticleGoogle Scholar Yildirimaksoy M, Lim C, Wan P, Klesius PH (2004) Effect of natural free gossypol and gossypol-acetic acid on growth performance and resistance of channel catfish (Ictalurus punctatus) to Edwardsiella ictaluri challenge. Aquac Nutr 10:153–165View ArticleGoogle Scholar Zhang WJ, Xu ZR, Sun JY, Yang X (2006a) Effect of selected fungi on the reduction of gossypol levels and nutritional value during solid substrate fermentation of cottonseed meal. J Zhejiang Univ Sci B 7:690–695View ArticleGoogle Scholar Zhang WJ, Xu ZR, Zhao SH, Jiang JF, Wang YB (2006b) Optimization of process parameters for reduction of gossypol levels in cottonseed meal by Candida tropicalis ZD-3 during solid substrate fermentation. Toxicon 48:221–226View ArticleGoogle Scholar Zhong YC, Wu LJ (1989) Detoxification of gossypol in cottonseed by microorganisms. Acta Sci Nat Univ Sunyatseni 3:67–72 (In Chinese) Google Scholar Zhou X, Ma C, Li M, Sheng C, Liu H, Qiu X (2010) CYP9A12 and CYP9A17 in the cotton bollworm, Helicoverpa armigera: sequence similarity, expression profile and xenobiotic response. Pest Manag Sci 66:65–73. https://doi.org/10.1002/ps.1832 View ArticlePubMedGoogle Scholar
New paradigms in economic theory? Not so fast. I've recently read two big-think piece about new paradigms for economic theory - this one by evolutionary biologist David Sloan Wilson, and this one by venture capitalist and activist Nick Hanauer and speechwriter Eric Liu. Is economic theory due for a paradigm shift? Maybe, but I don't think we know what the new paradigm will be yet. Wilson's piece - grandiosely titled "Economic Theory Is Dead. Here's What Will Replace It." - claims that evolutionary theory will be the magic bullet that will breathe life into econ. The piece doesn't explain how to incorporate evolutionary thinking into econ, but it does link to a 2013 special issue of JEBO that Wilson edited along with Barkley Rosser about incorporating evolution into economics. In a paper in that volume, Wilson lays out his ideas slightly more explicitly. But only slightly. Wilson's paper does three things: 1) it references economists who have suggested making use of evolutionary ideas in the past, 2) it discusses some arguments against using evolutionary theory in econ, and 3) it lays out some broad general principles of evolutionary theory. Concrete examples are left to the other papers in the volume (which are all sadly paywalled). In principle I think evolutionary theory might add a lot to economics, because economies obviously involve things like birth and death of firms, competition, predation, and other features similar to natural ecosystems. But the case for using evolutionary theory in econ is not yet a slam-dunk. The big reason is that we don't have much evidence that inheritance of traits occurs in economies. In biological evolution, we have many clear examples of heredity. In econ, to my knowledge, we have none. Evolution needs heredity, so evolutionary theorists who want to change the econ world should focus on demonstrating the existence of traits that are passed from company to company, or person to person, or industry to industry, within economies. Or, alternatively, they should show that companies and/or individuals have traits that change over time in a way similar to the way that biological traits change between generations. They should be very concrete and consistent about how to measure these traits. Then we can start talking about using evolution to create a new economics. Would-be evolutionary economists should realize that the measure of their success will be quantitative prediction. Get some numbers right out of sample, and they will win. What won't be useful is for them to simply point at various economic phenomena and say "Hey, this looks kind of like it conforms qualitatively to one or more general principles of evolution!" That sort of vague hand-waving does not really generate any progress in humanity's understanding of our world - it merely creates a feel-good sense of "truthiness" that makes for some good hypey media articles but little else. Evolutionary theorists, like all other researchers in all fields, should focus on predictive power and leave the hand-wavey just-so stories to a minimum. Hanauer and Liu's piece - similarly titled "Traditional Economics Failed. Here's a New Blueprint." - is, in my opinion, much more promising, if also pretty vague. It is also much more diverse, specifying many different sweeping changes that they believe need to be made in economic theory. 
Among these are:
1) The replacement of reductionist models with ones based on "complex adaptive systems"
2) The use of network models
3) The use of disequilibrium models
4) The use of nonlinear models
5) The replacement of "mechanistic" theories with "behavioral" ones
6) The replacement of optimization with something resembling "satisficing"
7) The replacement of forward-looking agents with adaptive agents in econ theories
8) Modeling people as interdependent instead of independent
9) Modeling people as irrational approximators instead of rational calculators
10) Modeling people as caring about reciprocity
11) Modeling win-win situations instead of environments characterized by rivalry
12) Replacing models of competition with models of cooperation
There is a lot to digest here. I'd split these points into three categories:
Category 1: Good points. These include (2), (4), (6), (7), (8), (9), (10), and (12). Hanauer and Liu identify networks, nonlinearity, incomplete optimization, incomplete forward-looking-ness, externalities, behavioral heuristics, social preferences, and cooperative games as areas needing more attention in economics. I agree with all of this. Good job, guys!
Category 2: Points that misunderstand the current state of economics research. These include (3) and (11). Regarding "equilibrium", economists don't use it in the physics sense that Hanauer and Liu cite. Instead, they have redefined the word "equilibrium" to mean "any solution to a system of equations in any model." The term is thus now meaningless. Many, many mainstream economic models include "transition dynamics" or "short-run equilibrium" that is exactly the same as what Hanauer and Liu call "disequilibrium". As for "win-win" models, most existing mainstream econ models are all about win-win situations. This is the concept of Pareto Efficiency.
Category 3: Points that are not very well-defined. These include (1) and (5). "Complex adaptive systems" is a term that gets thrown around a lot but rarely gets a concrete definition. In computer science research, "complexity" basically just means "displaying emergent properties", and progress in that field has been rather halting. I'm still not sure what the term "complex adaptive systems" means in terms of economics. Hanauer and Liu assert that "We understand now how whirlpools arise from turbulence, or how bubbles emerge from economic activity." The latter is not the case; we do not actually know what bubbles are, or even whether they are a single phenomenon or several similar-looking phenomena. Are bubbles based on "greater fool" speculation? Rational mispricing? Emotion-based irrational mispricing? Information cascades? Other forms of herd behavior? Bayesian "information overshoot"? Some combination of these? None of these? We just don't know.
So I agree with about two-thirds of Hanauer and Liu's points. The others need tightening up, but not bad overall. The question is whether these ideas, together, represent a new paradigm in economic theory. Hanauer and Liu argue that they do, but I am not so sure. There seem to be three mini-paradigms here: bounded rationality, interdependence, and holistic analysis. The first two have already been making inroads in economics, though I think they should make more inroads.
The latter is kind of an older idea that doesn't seem to have panned out as well as many hoped - there isn't actually going to be a Second Enlightenment replacing reductionist science with holism.

But I think that more important than any of these theoretical changes - or the evolutionary theory suggested by Wilson - is the empirical revolution in econ. Ten million cool theories are of little use beyond the "gee whiz" factor if you can't pick between them. Until recently, econ was fairly bad about agreeing on rigorous ways to test theories against reality, so paradigms came and went like fashions and fads. Now that's changing. To me, that seems like a much bigger deal than any new theory fad, because it offers us a chance to find enduringly reliable theories that won't simply disappear when people get bored or political ideologies change. So the shift to empiricism away from philosophy supersedes all other real and potential shifts in economic theory. Would-be econ revolutionaries absolutely need to get on board with the new empiricism, or else risk being left behind.

David Sloan Wilson replies. He laughs at economics for being very, very late to embrace empiricism. While the rebuke is well deserved, it's also true that good data is a lot harder to gather in econ, and that technology has made this a lot easier in recent decades. Wilson also writes:

But there is more to Science 101 than the need to test theories. Let's imagine that there were ten million cool theories out there. How long would it take to test them? Hundreds of millions of years. ...Does Smith really believe that any old idea that comes into the head of an economist is equally worthy of attention?

Answer: Of course not, but this is another reason empirical results are important. They typically leave a trail of clues that help guide scientists toward good theories. You see electrons making a sort of wavey pattern, and you invent quantum mechanics to explain that - and luckily it turns out to explain a lot of other stuff too. That's a typical progression - you start out with some fact or phenomenon, then you make a theory to fit it, then you test that theory on different phenomena to see if you've got something structural. Of course, sometimes pure intuition gets things right the first time around - general relativity and auction theory are examples of this - but usually we're not that smart, and theorists have to follow the empirical bread crumbs.

The main reason that the so-called orthodox school of economics achieved its dominance is because it seemed to offer a grand unifying theoretical framework. Too bad that its assumptions were absurd and little effort was made to test its empirical predictions.

Well, there has definitely been some of that going on. But "orthodox" econ has achieved a lot of solid results, like auction theory, random utility discrete choice models, and a number of other models in tax, labor, and other areas. Nor are orthodox assumptions always absurd, since some of them hold up well in lab experiments and other micro-level studies. I think Wilson would enjoy learning about these successes, in addition to the well-publicized failures.

I like that economics is shifting in the direction of empiricism, but I think philosophy is still indispensable. For example, in his interview with Ezra Klein, World Bank President Jim Yong Kim made references to Marx, Derrida, Freire, Post-Modernism, and Liberation Theology as totally relevant to his work.
All the data and analytics in the world won't help you if you don't have an idea of what you want to do with it, or how it relates to other people.

Britonomist 5:16 PM
For a second I thought these guys managed to acquire the domain economics.com, and used it to propagate blatantly non-neutral articles in a deeply cynical move; then I looked at the spelling of the domain more closely. Still, this site is clearly, once again, overly adversarial and overly sectarian - it's like these guys go out of their way to hold the discipline back rather than move things forward.

Frog 12:10 AM
Well, they aren't anywhere near part of the discipline, so they only hold it back in the sense that they contribute to filling the sea of economic misinformation on the internet.

Pablo 1:26 PM
Just wanted to upvote. Realizing Noah Smith spends time reading 'evonomic' articles made me lose a little respect for him (sorry Noah). The only thing evonomics is concerned with is creating its own macroeconomic school of thought. Which, apparently, is all that any of the scientists on there know about economics.

The Category 1 items are mostly examples of replacing simple models with more complex ones having more free parameters. These are exponentially more difficult to estimate and verify empirically without succumbing to problems like over-fitting. It really only makes sense to dig into the more complex models once you have settled the relative accuracy of the simpler summarizing ones. If you have difficulty getting enough good data to determine which simpler models are more correct, it's a good sign you won't be able to prove or disprove the more detailed ones. Increasing the abstraction and complexity of your model and claiming you have a new, distinct answer is also a common fallacy (see https://xkcd.com/1318/ ). This type of confusion is rooted in the reification fallacy or the "mind projection fallacy". Different maps of the same place having different levels of detail do not show something radically different, and the more detailed ones are more difficult to maintain, more likely to be outdated, and more likely to have errors. It is good to aspire to the most complete models of things you can reach with the data available, but it is crucial to always be aware of the limits of what can be validated given the amount and the cleanliness of the data available. See also http://omega.albany.edu:8008/JaynesBook.html Chapter 24

Noah Smith 5:47 PM
While you're right about the danger of overfitting and the need for low dimension, you're very wrong that most of the things in Category 1 increase the number of free parameters in models. Many of these models reduce the parameter space instead of expanding it. They look at different things, not more things.

Tom Brown 6:32 PM
Benoit, here's a bit on overfitting you may like (or not).

The problem with economics these days is that it's no longer about economics. The economics space is dominated by physicists and mathematicians. Obviously career prospects in these fields are not very good while economics is an attractive milch cow. Perhaps, even, there's too much competition in science or mathematics and they're tough games to make a name in, particularly when it is more difficult in these fields to originate a new theory and have it empirically validated or at least not falsified. So we now have a plethora of new economic paradigms which are just neoclassical economics in entropic drag or ecological drag or whatever.
Strangely, it seems, I had always thought that economic behaviour was about human beings, not atoms and amoebas. Maybe I should just dust off the old microscope and go and sample some murky pond water.

Humans aren't special. We're just robots controlled by modest-sized meat computers. We're even less special in large groups.

That meat computer, I would argue, does not behave in ways analogous to the one in your iPod. It is a great deal more complex. "We're even less special in large groups." It doesn't stop large groups behaving in unpredictable ways or ways not amenable to stochastic modelling. Show me the stochastic model which explains 2008. Henry.

Huhuhu 9:53 PM
"Explaining" an economic event doesn't do much to show the value of stochastic models vs. any other methods. With such a vague goal, you could validate your explanation by matching it to your "definition" of whatever you are purporting to explain.

""Explaining" an economic event doesn't do much to show the value of stochastic models vs. any other methods." I wasn't suggesting that be done. I was asking whether there was any stochastic model that even came close. I'm not suggesting other model types were up to explaining 2008 either.

Henry, we don't have any models that predict the precise timing of earthquakes yet. That doesn't mean that the field of plate tectonics is without valid mathematical models. There are other predictions that the theory makes (e.g. the "ring of fire") which can be checked against the data (both old and new).

I am not even necessarily talking about prediction. Let's start with explanation. I can't see how an economic model based on entropy or amoeba ethology would be able to explain 2008. But then again, there's plenty of things I don't understand.

Tom Brown 3:53 AM
The entropy model (if you mean Jason's) is that something like 2008 represents a loss of entropy due to non-ideal information transfer (I think). In such cases the problem becomes more or less intractable in that framework. I don't think he has any strong claim of being able to predict or explain why the mob's actions suddenly become coordinated (rather than so complex as to look random and uncoordinated). The tractable problems are when information transfer is ideal (information equilibrium (IE)), and people look to be behaving randomly (or so complex so as to be indistinguishable from randomly). Jason might concede that 2008 may be a problem for behaviorists. However the liquidity trap itself (post 2008) does have an ITM explanation. I'm going off my understanding, so I could be wrong about any or all of that. Recall, it's supposed to be a low order model. Behaviorally sorting out why a mob panics is probably going to be higher order and thus more complicated. But you've got to crawl before you can walk, right? ... although in terms of explanations (rather than predictions) there is a tie-in with Friedman's "plucking model." Actual deviations of expectations (which turn out to vary somewhat ... it turns out not very much) from reality can cause that. It's the extremely rare case of an expectation that nails it which can result in better than random performance.

"Recall, it's supposed to be a low order model." Given Jason's high-handedness you would think he was offering the full monty.

"The tractable problems are when information transfer is ideal (information equilibrium (IE)), and people look to be behaving randomly (or so complex so as to be indistinguishable from randomly)."
And then it just becomes the facsimile of the good ol' neoclassical paradigm.

Sine ira 9:50 PM
Or maybe having theoretical people waving hands and speaking loudly is a waste of everybody's time? When you treat it as a real science you need people who can do Science, not witch doctors. Sorry if you have been out-evolved.

Frog 7:04 PM
Right, people who can do science, like a venture capitalist and a speechwriter, and a very crusty evolutionary biologist. Gimme a break.

Henry, well like I said: that's my limited understanding. You might want to go to the source. Jason has hinted at building on what he has in the future... perhaps a higher order model? But of course that's totally useless until you can show you do a good job (as good as the order allows) with a low order model.

"You might want to go to the source." Apparently the source will only treat with people of a similar mind.

I think that post was part joke, part frustration and part hyperbole... he's dropped moderation since, and Ramanan and Brian Romanchuk have commented there again recently... in fact he did another couple of SFC posts, one trying to cast it in an IT framework. Plus he and John Handley have been having some interesting disagreements (I couldn't replicate either of their results, in the case of Japan's Phillips curve). ... also, Cameron Murray (commenting below) left a note of encouragement (especially regarding the Gary Becker angle), and I guess he's an econ educator, so I found that interesting. I'm still a tad mystified by the SFC thing... and as you know I spent a lot of time digging into it, even adding a bit yesterday. The fact that what I came up with (seemingly contradicting the point of that post) only works over a relatively narrow range of parameters I think has something to do with it.

Henry: my idea for a gameshow: take normal people off the street and read them some work from arrogant-sounding mainstream, heterodox, and insane economics enthusiasts (professionals & amateurs)... and ask them to identify the crackpots. Trouble is, nobody wins... not for 20 years, when the crackpots are finally committed and/or receive their Nobel prizes. If nothin' else, I learned how to leave snazzy-looking comments! e.g. $$(3)\:\alpha_{2} = \frac{1 - \alpha_{1}(1 - \theta) - \theta}{K_{H}\theta}$$ (probably will look like crap here... the blogger needs to do this before we can arrogantly abuse them with perfectly typeset math)

Gameshow? That sounds like reality.

"Would-be econ revolutionaries absolutely need to get on board with the new empiricism, or else risk being left behind." I agree with that. Also, the 1st part of Hanauer and Liu's 1st point sounds OK to me: "1) The replacement of reductionist models..." If left there, or completed differently, that could just be read as an acknowledgement of appropriate scale. It's not appropriate to attempt to understand a natural structure on our scale (i.e. within a few orders of magnitude of the size of a human) by reducing it to elementary particles like quarks. However, for Hanauer and Liu to then go on, in the very next phrase, and mention "complex adaptive systems" seems to contradict that. Oh well. I think this ties in with a sense of optimism and pessimism regarding economics being a science. From a scientific perspective, I think the following are deeply pessimistic attitudes: 1. Math cannot be useful in economics. 2. Economic data is not and cannot be informative enough to falsify models or theories. 3.
Economics is so complex that it cannot be modeled successfully, or it requires no less than hyper-complex multi-million agent models, each with thousands of parameters.

Now, if science isn't your thing, you might have the opposite view: those attitudes might be OPTIMISTIC from the perspective of a professional derp peddler.

Cameron Murray 6:57 PM
I can't help but feel you hold new approaches to economics to a higher standard than standard approaches. If you started an economics undergraduate degree today with your current philosophical view on science and the progress of knowledge, what would you think? I could see you writing the same sorts of articles about the standard model - "Rationality, representative agents and partial equilibrium have made some inroads into economics, but really, unless they pass muster with out-of-sample prediction in the new empirical world of econ, they risk being left behind." I mean, any model with K (aggregate physical capital) in it can't actually inform empirical approaches, because we know that it is impossible to determine a single measure of K in any unit of measurement. Yet, on we go pretending that we can, telling policy-makers about productivity, indoctrinating the next generation, and pretending this issue has never crossed our minds. As a final point, there are plenty of people researching cultural and economic traits of firms, individuals, etc. I mean, any time series aggregate is essentially a measure of the frequency of a trait in a population over time.

"any model with K ... can't actually inform empirical approaches" ... here's one (two equations & five parameters total).

Noah Smith 4:14 AM
I can't help but feel you hold new approaches to economics to a higher standard than standard approaches.

Well...probably not, no. If anything, the opposite, because cool new stuff is cool and new. Well, the fact is, those things all do risk falling behind. All theories are in danger from the new empirical revolution. This paragraph shows just how much people are used to thinking about econ in terms of theory rather than evidence. What you're talking about is a theoretical argument against a modeling convention, not an empirical argument against a model. If models that use K can predict the dynamics of K (and/or other stuff), then who cares about that theoretical argument?? It's a model. It predicts something.

Cameron Murray 4:18 AM
How did you measure the dynamics of K that you are now successfully predicting without a theory of what K is?

You measure the number of dollars spent on capital goods and divide by a price index of capital goods to adjust for changes in the prices of said goods. Voila!

Without a theory of K, which goods are capital and how do you create a quantity index?

Doesn't matter. Again you're going back to *theoretical* criticisms. But empirically, if the thing you decide is K can be described by models, and if it helps you make models that predict other stuff you care about, that's all that matters. Who cares whether your "theory of K" is good or bad in someone's opinion? It works. That's all that really matters.

There are implicit theories in all empirical work, and even more so when you translate "this seems to predict something I measured" into anything of use to anybody (except others who want to predict what you measured).

"If models that use K can predict the dynamics of K (and/or other stuff), then who cares about that theoretical argument?? It's a model. It predicts something."
But then you have to admit (and I think that this is a good thing) that you are not predicting the dynamics of capital but rather the dynamics of a specific measure, which is a more or less imperfect proxy for capital. My point is that, as serious scientists, economists should avoid using general terms for their predictions of specific measures. E.g. capital and labour inputs are actually gross operating surplus and compensation of employees; these are social, not technological, categories. It is often misleading to talk in such general terms.

'It works. That's all that really matters.' I'd go even further and say if it works and works best, it is a good theory.

"Would-be econ revolutionaries absolutely need to get on board with the new empiricism, or else risk being left behind." Empiricism may be on the rise; however, a hypothesis or theory is still needed for data to be tested against. Leamer may want the "con" removed from "econometrics"; unfortunately it still leaves the "tric" behind.

"Empiricism may be on the rise; however, a hypothesis or theory is still needed for data to be tested against." Agree. If left unstated, then no doubt there's an implicit theory sneaking in anyway. Why not just be explicit about it?

Lord 7:19 PM
So, neoclassicism is dead? No, it is just one tool in the toolbox.

JJF 7:55 PM
I'll foolishly stick my neck into the hand-wavey just-soification of evolution interjecting itself into economics, on the strength of my former career in corporate IT and having read way too many books by Dan Dennett and Richard Dawkins. The big problem with your hand-wavey dismissal of evolutionary applicability to econ is "Evolution needs heredity". This doesn't really reflect the way Dawkins and his school think about evolution. The definition they prefer is something more like "entities capable of imperfect self-reproduction constrained in their growth by limited resources". So strict genetic heredity with limited mutation, as is seen in eukaryotes and multicellulars with germ cell lines, is one variety of evolution, but the biology of viral evolution, in which reproduction and mutation happen several orders of magnitude faster, is sufficiently different that the terms "heredity" and "species" strain meaning. With bacterial evolution, where horizontal gene transfer between discrete "species" is the rule rather than the exception, the concepts are even more strained. Dawkins' positing of memes behaving like viruses as the foundational point of cultural evolution is probably flawed inasmuch as ideas reproduce and proliferate like bacteria instead of viruses, with idea "genes" jumping between wholly different and intuitively incompatible hereditary lineages in the connections a given human mind puts on them. Say, for example, a maverick economics blogger mashing up the sententious, utopian ponderings of Eliezer Yudkowsky with the image of the King of Town from "Homestar Runner" or utilizing the lyrics of "Particle Man" as a metaphorical backbone to a story about American Libertarianism's contempt for the rights of individuals other than themselves.
Once you've acknowledged that ideas work that way, and you know the distinction in programming between genetic algorithms and "centrally planned" software solutions, and you become familiar with the very big problems in scaling up the latter to ever more complex systems, I think understanding markets as large-scale "genetic" algorithms for organizing efficient organizations of economic production becomes sufficiently compelling to not be just a metaphor. It works at the level of the individual manager. She's always on the lookout for ideas about organizing, motivating or otherwise improving productivity. Since this pattern seeking is intelligent and not rational, you get about as much witch doctoring as medicine. One of my former employers, DaVita, had a CEO laboring under the delusion that the company was saved by his dressing as one of Dumas' Three Musketeers. http://www.cbsnews.com/news/meet-the-ceo-who-makes-his-staff-sing-the-company-song/ I rather suspect that the company's emergence from near bankruptcy around 1999-2001 had more to do with the long-term drop in real interest rates making the debt load it had more sustainable. My suspicion is that Jack Welch's success in turning the cash-rich GE into a financial company in the early 1980s had to do with the opposite trend at that time. In this understanding of markets, the importance of biodiversity and exogamy in robust ecologies and populations becomes a fairly exact analogue for the importance of competition in markets and a constant inflow of new ideas in firms. So how does one go about empiricizing that? To quote the late lamented Terry Pratchett: "sodomy non sapiens" (buggered if I know). A lot of my thinking along these lines has been aimed at popularizing Brookings Institution-style macro through entertaining sketch comedy in the Monty Python tradition. (Another form of intellectual miscegenation that would lead at best to sterile mules in a universe where intellectual heredity followed a strict genealogy.) And the problem with that is that although biological evolution may be a fertile source for your own understanding of micro and macro econ, if you plan on selling it in Tulsa, you then have to go to the immense trouble of weeding out ALL reference to biological evolution.

Tom Barson 8:44 PM
"Not so fast" seems like an understatement. The evolutionary stuff makes no sense. It's like the AI guys talking about the singularity. The H&L list is, shall we say, ambitious? There are ideas worth discussing here, but remodeling economics along the lines of even one of them would be an enormous undertaking. Meanwhile, as you say, economists' current willingness to do a bit of measurement-without-theory seems to be paying dividends.

Is this comment actually in response to mine? I didn't say anything about "measurement-without-theory". And I assume the "H&L" list referred to is the numbered list in Noah's post attributed to Hanauer & Liu above? I don't mind being told I'm making intuitive leaps beyond reasonable bounds, but I generally prefer the criticism to actually be in response to my argument. :)

Huhuhu 10:17 PM
It's not an important tangent, but it's difficult to see how interest rates, real or otherwise, were important to Davita's turnaround from '99 to '01. The notable decline in real interest rates occurred in the subsequent boom period; nominal rates are more indicative of financial stability in a corporate-level analysis; Davita's revenues and operating income significantly grew in that period.
Even with the same debt load, and using 1999's nominal spread on 2006's CPI growth, Davita would have delevered due to very strong operating results. This is a textbook case of an actual turnaround, and not a real rates thing.

JJF 10:34 PM
It was an intuition on relative timing rather than something based on looking at the actual numbers, and you're right it wasn't important. I'm reasonably sure that the costumes and singing were NOT the cause of the turnaround, and it was supposed to be an example of a management strategy that's clearly voodoo-based.

Sorry, John. I was making a general comment and got in the wrong tree.

Hmmm, I guess I should reply to this given that my name has been brought up in regard to coediting the special issue of JEBO on evolutionary economics cited above (and I am sorry about the paywalls, but that is Elsevier; the journal I edit now, The Review of Behavioral Economics, is not published by them). I shall just note a few things. First, I might concede that Noah makes a valid point that it will be hard to justify the evolutionary approach by empirical methods, although I do not say impossible. Indeed, part of the problem is pinning down what is the meme or object of evolution. In biology this is usually viewed as being the gene, but its equivalent in economics is not obvious. Many people think about natural selection as occurring in the competition between firms, a view that Armen Alchian argued for in 1950, but that is sort of like arguing in biology that it is about the competition between individual organisms. Most evolutionary economists either follow Schumpeter and say it is about technology - and this can be empirically studied, the diffusion of new technologies and all that - or focus more on such things as behavioral practices (yes, evolutionary economics is consistent with behavioral economics, as Herbert Simon certainly argued), with people like Nelson and Winter prominently arguing this, although this view can be traced back to Veblen. Obviously this sort of thing is harder to estimate empirically than measuring the use of technology, although I will say that I do not think it is impossible. I would note, beyond what Wilson and Gowdy argue, that in fact the link between evolutionary theory and economics runs very deep and is a two-way street. In spite of the dissing of Adam Smith, he made evolutionary arguments and influenced Darwin. Another who crucially influenced Darwin was Malthus, from whom Darwin directly got the idea of natural selection (I could cite some of my own work on this, but will not do so tendentiously, and suggest people read Geoff Hodgson on this stuff instead). Darwin then strongly influenced several important economists before Veblen put out his call for economics to become an "evolutionary science," among them both Karl Marx and Alfred Marshall, with Marshall famously arguing that "biology is the Mecca of economics" and specifically quoting Darwin on several points, even if he arguably followed physics models more than biology in developing his version of neoclassical theory. (Ironically, even though Schumpeter is generally viewed as being an evolutionary economist, he himself rejected biological approaches.) One problem is that there are many strands of evolutionary economics, most of them around for some time, obviously. So it is not clear which, if any, might be the key part of the paradigm that replaces standard modern economics.
I note that, among others, there are old institutionalist strands coming from Veblen; there is the whole neo-Schumpeterian path, which includes the Nelson and Winter group (even if they focus on behaviors rather than technologies); and there is evolutionary game theory, which, like Darwinian natural selection itself, involved a back-and-forth between economists like Selten and biologists like Maynard Smith. Then there are the modern complexity evolutionists like Kauffman at the Santa Fe Institute, who upset many conventional biological evolutionary theorists. Coming out of that are such things as agent-based models that depend on genetic algorithms (evolutionary biology for sure), which have been jockeying recently for attention, if not sweeping the board. I would close this by noting that the rather eclectic list of Hanauer and Liu includes items that fit into this, an evolutionary economics that is consistent with complexity and behavioralism, which I think is where we are heading, even if it is a matter of infiltration of the existing paradigm rather than a total paradigm shift.

I meant to add a signature to this last post, for anybody who does not know that it was by

Many people think about natural selection as occurring in the competition between firms

If so, it's an interesting, new kind of evolution. Potentially immortal organisms that can change in response to stimuli, rather than heredity between finitely lived organisms. That's cool. Here is some evolutionary econ research I found interesting: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.8.6243&rep=rep1&type=pdf

[email protected] 12:55 AM
Noah, I was not able to download that link. Who wrote the paper and what is its title, please?

"Karl Marx and Alfred Marshall being among them, with Marshall famously arguing that "biology is the Mecca of economics" and specifically quoting Darwin on several points" One wonders whether this influence was just faddy, evolution being the latest scientific development to grab the imagination of the day?

Try this one: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1495497 "On the Evolution of Overconfidence and Entrepreneurs", by Antonio Bernardo and Ivo Welch

Thanks. Will check out.

Maybe, but in the case of Marx, he jumped on the bandwagon almost the minute it went out the door with the publication of the Origin of Species, before it had become well known. As a completely distinctive uber-heterodox, his view was serious, and he in fact took a mixed view, praising Darwin's view of biological evolution and natural selection (partly because it overturned the religious orthodoxy of the day), while rejecting it for human societies, arguing that a planned socialist society would move beyond natural selection. The argument might be made more seriously for Marshall, although by the time he published the first edition of his Principles of Economics in 1890, Darwin was no longer all that in fashion, with Herbert Spencer and Lamarckism more fashionable and prominent. It would take advances in genetics and probability theory over the next several decades to really put Darwin back on top, this not happening until the 1930s with the neo-Darwinian synthesis of Fisher, Haldane, and Wright. So, I do not think it applies to Marshall either.

"even if it is a matter of infiltration of the existing paradigm rather than a total paradigm shift."
" Barkly, I can begin to see how the sorts of models that are being proposed might apply to aggregate microeconomic behaviour (if that is not contradiction, some if not most would probably think it is). Personally, I think what is more of interest is how these models might explain aggregate macroeconomic behaviour. You mightn't agree there is a the difference, I don't know. (Personally, I don't think models like the New Classical say much about macroeconomics.) I wonder if you have any thoughts on the matter. Basically the more aggregated one gets, the less the evolutionary approach is relevant. "we don't have much evidence that inheritance of traits occurs in economies" We do for one aspect - the traits of the human agents. Good point. Also I suppose you could argue it from a "memes" perspective, but that would probably just lead to more trouble. A lot of what "empirical" means to economists is running through large numerical data sets through computers. OK, that is part of what it is. But it is also qualitative. Like an historian or any other investigator you have to find documentation that says "X has done or is doing this because of Y". And you need a lot of documentation from the contradictory mess of reality to make the case. And you do not make up stories for things that are too complex or for which satisfactory evidence has not been found to get a mathematical model. Rather you point out where the the things are for which we do not yet have a satisfactory answer and say, " but we DO know this...." Models have a role for forecasting. And ultimately they should be judged on how well they forecast. But they should not be used to explain society. Mathematical models can't possibly be expected to do that. Economics has a long way to go before it can be considered an empirical social science. >Rather you point out where the the things are for which we do not yet have a satisfactory answer and say, " but we DO know this...." Well, this goes on. >Models have a role for forecasting. And ultimately they should be judged on how well they forecast. But they should not be used to explain society. Mathematical models can't possibly be expected to do that. Why not? Do you think you're such a special unique snowflake that you can't be boiled down to a regression? Well, you're right. It takes a mixed logit. I have never understood the multitudinous existence of these sorts of internet commenters who seem to both have strong opinions about how economics should be done and painfully weak knowledge about how it *is* done. Unlearningecon 2:46 PM I don't see how anon #1s comment reveals any sort of 'weak knowledge' about economics. It is true that the vast majority of empirical research in economics is done by running regressions on datasets from the comfort of an office computer. I read their comment as arguing that, while useful, this cannot substitute for an in-depth, qualitative understanding of the reality the datasets are supposed to approximate. I agree - economic practice would benefit from more of a systemic appreciation and assimilation of this type of evidence. rayward 8:03 AM Shifting the means of production from economies/cultures with a history of technology and economic growth to economies/cultures that don't and measuring the difference in outcome would be an example of evolutionary economics. "2) The use of network models" I wonder for a long time why Econ make so few use of Graph Theory. Safe for a paper by Acemoglu, I could not find so much Econ paper using graphs. 
Even the link you provided, although discussing networks, seems to ignore them. Does such underutilisation come from a too physics-related use of maths in econ, or are there other reasons? Ludovic Coval.

Helen Jackson 6:46 AM
You might be interested in the work of UK economist Paul Ormerod, who has made quite frequent use of network models: http://www.paulormerod.com/

I think the most important idea that is missing in all of these kinds of discussions about economics is a definition of the goal. This is why microeconomics has such a better reputation than macroeconomics: the goal of microeconomics tends to be implied, universally understood, and generally agreed upon, whereas the goals of macroeconomics are not implied, not universally understood, and not generally agreed upon. Anyone suggesting that evolution is important seems to have some implied goals in mind: 1) optimization of something or other and 2) how optimization algorithms change over time. Optimization of what? This is the obvious question. GDP? Efficiency? Equality? Inequality? Dominance? Cooperation? Optimization of what, I ask again! In our world today, the goals seem intimately linked to politics. Rich people want to be richer, but they don't want to cause a revolution that would undermine their position. People who are reasonably paid for their passions feel things are great and operate out of fear of change. The status quo might be the goal here. Poor people want to alleviate their stressors in life, like finding food, clothes and shelter. Their goals are of progress, but defining that progress is hard and their motivations easily hijacked in the name of GDP gains. What is the goal? It depends upon who you ask, obviously. In macro, I'd suggest that predictability is the best goal, because if the result is predictable, honest politicians can use the model to best weigh the differing goals and come to the optimum policy prescriptions. Predictability in macro is severely lacking. I consider this the antithesis of evolutionary theory, which will never be predictable in my opinion. Despite our best efforts, evolution controls our very thoughts that contemplate the idea. Good luck beating that truth... I consider the best goal of macro to be providing predictable results while offering policy prescriptions that, if understood by everyone, would be agreed to be fair and desirable.

Evolution does imply some near-term optimization simply due to natural selection: that which does best (firms that make more profits) in the near term will survive. But it is a mistake to declare that there must be "goals." There are people who have posed teleological views of evolution, that it moves to higher levels of complexity or consciousness or even an Omega Point, to take the extreme example of the Catholic theologian and anthropologist Pierre Teilhard de Chardin. But most evolutionary theorists, including Darwin, and even including such evolutionary economists as Veblen, neither impose nor posit any long-run goals or direction. Evolution simply goes wherever it goes as it involves a co-evolution between the species and their environments.

Agreed that evolution needs no goal. But I don't see an analogy to economics in general. This sounds like a confusion of capitalism with economics. The goal of capitalism is profits, and profits sometimes equate to survival, but do profits serve the motives of the actors? Not necessarily.
I don't think evolution needs a goal, but I do believe that the pursuit of economic study needs a goal other than arbitrary outcomes. Humans have essentially conquered the environment in our current context. We have enough food, clothing and shelter that it isn't necessary for anyone to go without these essentials. Nobody on the planet needs to go without these. Instead, we've turned to social Darwinism as an argument for hierarchy and dominance. The unending goal of evolution is survival. Is that the same as the goal of economics? Has it shifted from the survival of the individual to the survival of the species? And if so, do you feel comfortable sacrificing the survival of individuals for the survival of the species? No matter the viewpoint, humans have a goal. Natural processes don't need a goal, but humans all have goals. So the goal is necessary if there is a reason to study economics.

I'm a latecomer to the problems we have. I defer to people like Mark Wojahn: http://www.markwojahn.com who are more in touch with this than I. It is shocking how similar our views are. Identical. I'm not worthy of Mark's dedication to the welfare of others. But can I bring some attention to it? Perhaps. He's just as good as he seems. I've known him since high school, and he's the real thing.

Noah, you have a response from John Handley.

"ten million cool theories" This is not a good thing. In critiques of Rodrik - who says everything in economics post-financial-crisis is all well and good - people have called this "the smorgasbord of models" problem.

AXEC / E.K-H 3:52 AM
Vain hopes in the ruins of economics
Comment on Noah Smith on 'New paradigms in economic theory? Not so fast.'
Economics is a failed science and the ultimate cause is the proven multi-generation scientific incompetence of economists. Since Adam Smith economists have not grasped what science is all about — despite the fact that it is unambiguously defined: "Research is in fact a continuous discussion of the consistency of theories: formal consistency insofar as the discussion relates to the logical cohesion of what is asserted in joint theories; material consistency insofar as the agreement of observations with theories is concerned." (Klant, 1994, p. 31) It is always BOTH, logical AND empirical consistency, and NOT either/or. This is the critical hazard: instead of keeping the balance on the high methodological tightrope, the incompetent researcher tumbles down either on the side of vacuous deductivism or on the side of blind empiricism. What the history of economic thought clearly shows is a pointless flip-flop between fact-free model bricolage and theory-free application of statistical tools or, worse, commonsensical stylized-facts storytelling. So it comes as no surprise that, after the proven failure of maximization-and-equilibrium economics, it is again the turn of the 'empirical revolution' (see intro). Needless to emphasize that this is just another instance of scientific incompetence, because methodologically the theoretical revolution must precede any empirical revolution: "The moral of the story is simply this: it takes a new theory, and not just the destructive exposure of assumptions or the collection of new facts, to beat an old theory." (Blaug, 1998, p. 703) Walrasianism, Keynesianism, Marxianism, and Austrianism are logically inconsistent or empirically inconsistent or both (2015). Whenever economics is in open crisis, four reactions are to be observed: (i) self-delusional denial, (ii) backpedaling and relativization, e.g.
'economics is not a Science with a capital S' (Solow), (iii) admission of the most noticeable flaws with the reassurance that our best brains are already working on them, (iv) clueless actionism and innovation showbiz. The two main pseudo-innovations consist of borrowing either from evolution theory or from the latest vintages of physics (complexity, networks, chaos, non-linearity, thermodynamics, disequilibrium, information, emergence, etc.). The actual confused state of these misdirected approaches may be gleaned from The Journal of Evolutionary Economics and from the EconoPhysics blog. Mindless copying/borrowing is the characteristic of what Feynman famously characterized as cargo cult science. Neither evolution theory nor EconoPhysics is the way forward. Economics has to redefine itself in a genuine paradigm shift. In very general terms, the methodological revolution consists in the switch from behavior-centered bottom-up, i.e. subjective microfoundation, to structure-centered top-down, i.e. objective macrofoundation (2014). First of all, the orthodox set of axioms (Weintraub, 1985, p. 109) has to be fully replaced.* Nothing short of a theoretical revolution, a.k.a. paradigm shift, will do. Needless to stress that the superior paradigm has to be logically AND empirically consistent. After more than 200 years of failure and Noah Smith's latest methodological wind egg, economics needs the true theory — fast. Egmont Kakarot-Handtke
References see http://axecorg.blogspot.de/2016/03/vain-hopes-in-ruins-of-economics.html

What empirical evidence will indicate that your theory is wrong? Please be specific.

Specifically, I propose to test the structural-axiomatic employment equation/Phillips curve (33) of the working paper 'Keynes' Employment Function and the Gratuitous Phillips Curve Disaster' http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2130421 because of its overriding importance for improving the actual employment situation.

Egmont, have you shared your views with Scott Sumner at www.themoneyillusion.com? Scott welcomes all in his comments section and would probably enjoy delving into your ideas. The sole exception (after 7 years of blogging) being a gentleman (or gentlewoman... hard to tell which) commenter referring to themselves as "Shmebulock, Crusher of Pussy."

Some observations:

1. Quantitative validation rather than quantitative prediction would be a hallmark of success. It may turn out that certain variables are inherently impossible to predict, for example because they behave chaotically. 'Prediction' could give the impression to policy-makers and central bankers that economic variables can be accurately forecast from period to period – let's face it, what they really want – rather than shown to follow a particular statistical distribution or pattern of behaviour. The latter is a much tougher sell in policy circles.

2. Items 10-12 in Hanauer and Liu's list sound normative, therefore not scientific.

3. Economics would benefit from a more meaningful concept of equilibrium, more akin to the concept of dynamic equilibrium in the natural sciences and dynamical systems theory. Its use in meaning "the solution to a set of equations", often a set of solutions with no concept of time, is portentous and obfuscating to people from other disciplines. Why not just call that a solution?

4. I'd say a "complex adaptive system" is a system of inter-related parts which is capable of change over time both through internal interaction of those parts and changes in exogenous factors.
These changes may be inherently unpredictable due to the configuration or dynamics of the system. They may also affect different parts of the system at different times.

5. Perhaps a more important distinction than "mechanistic" vs "behavioral" is that between "unfounded but nicely mathematically tractable behavioural assumptions" and observed ones.

6. I totally agree about the need for empiricism in economics, which really should be a given. However, it would concern me if empirical macro relationships were still seen as needing to be derived from microeconomic foundations based on unrealistic assumptions. An empirical finding should be seen as valid even if we don't ever fully understand its underlying causation in terms of a straightforward mathematical model.

7. "good data is a lot harder to gather in econ, and that technology has made this a lot easier in recent decades". Absolutely, economics needs to harness the data revolution. In the UK, GDP is still calculated through company surveys – not even using electronic tax records – which I find incredible. If anyone's interested, this is coming onto the policy radar. A recent government review of economic statistics in the UK has recommended that national statistics need to catch up with data science: https://www.gov.uk/government/publications/independent-review-of-uk-economic-statistics-final-report

Dr. Smith, here's some data to help you pick a theory: If your culture has deadly relationships with the sky and ocean, your cultural genome sucks. We absolutely need a physics/evolution-based economic theory. Physicist David Deutsch: "And since inductivism is false, empiricism must be as well." "Empiricism never did achieve its aim of liberating science from authority." David Deutsch, "Beginning of Infinity". "The story of human intelligence starts with a universe that is capable of encoding information." Ray Kurzweil, "How to Create a Mind". Here's a working hypothesis re code in a physics/evolution context, including monetary code, then add exponential complexity: http://postgenetic.com/Postgenetic/Culture,_Complexity_%26_Code.html

Well, now that David has provided his own reply I think I shall add just a bit more. I guess I find Noah's view that the choice is either philosophy or some sort of crude predictive empiricism to be a false one, or at least a very limiting one. Really, can't we do better methodologically than Milton Friedman circa 1953? And that is from someone who fully welcomes and supports the greater emphasis on all sorts of empirical testing that has been going on now in economics for a good three decades or so. Let me note that even in biology, evolutionary law (a better term than "theory," which has opened the door to too many creationist wackos) does not provide neat predictions in the sense of forecasting. However, there have been many chances of refuting it, and none have happened. It is a fact or a law that organizes and integrates biology as a whole, thus possibly falling into that useless category of "philosophy" as Noah poses it. It may be similar in economics. I think the economy is an evolving complex system, to quote the titles of some old SFI volumes, and that this is an overarching fact; the deep interaction between economics and biology regarding evolution, going back centuries, is just part of this, and I think it is worth keeping in mind, even if it is just useless philosophy.
I have muddied the waters by noting the many strands of evolutionary economics, but I do like to think that the ABM complexity version of this offers a way forward and may be useful even empirically, even if it is still being kept at bay among most macroeconomists. I guess I shall let this stop here, other than to reinforce that I think the philosophy side of this is still important and useful, whatever comes out on the empirical side in the end.

I guess I find Noah's view that the choice is either philosophy or some sort of crude predictive empiricism to be a false one, or at least a very limiting one.

This does not characterize my views at all, actually. Though it does seem to characterize the views of people who troll my comments section and Twitter feed! What I think really works is the combination of theory and empirics. Use empirics to give you clues as to which theories to make, and to test theories after you make them. The theories represent our understanding of the world, but empirics are there to make sure we're not just theorizing about a made-up world in our heads. That is my view, rather than what you said. As for evolution, it's possible the whole economy evolves like a single organism, I suppose, but that's going to be hard to test. The kinds of evolution I was talking about in my post - evolution of firms, or of strategies - seem much more promising in terms of testability, for obvious reasons. As for "complexity", I'd encourage you to read the blog post I linked to about the progress of complex-systems research in computer science.

Fair enough, Noah, although I would say that your post certainly looked a lot like you were making the sharp dichotomy I posed, even if I imagine what you have stated here is really what you think. On the link, I shall note that quite a few of us (I will not cite my own work, but it has done it) have been making this distinction between complicated and complex for quite a long time. Although this post by this Ben is getting a lot of airplay, maybe because of the term "complexicated," which may be useful, it certainly has been known for quite a while by most complexity theorists that most systems are both complicated and complex. BTW, not everybody does or has agreed that these are to be distinguished, with two exceptions being the very late, but important, John von Neumann, and the still alive (and now a star in the movie "Bridge of Spies") Fred Pryor.

Fair enough, Noah, although I would say that your post certainly looked a lot like you were making the sharp dichotomy I posed

You're certainly not the first to have said this, though I've taken great pains to try to disabuse people of this. "Theory vs. data" is a false dichotomy.

I think some of the confusion arises due to the use of words - semantics. There is no meaningful distinction between "empiricism" and "philosophy". Rather it's a classic discussion in the philosophy of science as to what sort of methodology or epistemology should be employed in the pursuit of knowledge. The two main camps are traditionally rationalism and empiricism. Rationalism typically employs some sort of logical deductive method of enquiry, rooted in pure thinking and the creative use of the faculty of mind. Empiricism has traditionally been contrasted with it, with sense experience seen as the main source of knowledge. But today this is really regarded as a false dichotomy, since Kant demonstrated back in the late 18th century that we employ a combination of the two.
Something that has been aptly confirmed by recent research in cognitive science. Economics as a science has just for some time strayed off into the realm of theory building, employing the axiomatic-deductive method typical in mathematics. But a change is already taking place and has been so for some time. The complex problem is really detailing what weight should be given to the two faculties involved. Sloan Wilson is therefore constructing a strawman when he claims that economics is not an empirical science.

Don Coffin 4:37 PM
I would note that the idea of using evolutionary approaches to economic theory is hardly new; the problem is that making progress in using concepts from evolution in analyzing economic systems is fairly difficult. Among economists who have explored evolutionary approaches, two stand out: Ken Boulding and Nicholas Georgescu-Roegen (whose name I have probably misspelled), whose work dates to the late 1960s/early 1970s.

I did not mention this strand, which I would label as being an ecological economics strand. This is what would probably be relevant to Noah's suggestion that the economy as a whole is an evolving organism. Given that most views of evolution involve natural selection amongst competing entities (of some sort), which would rule out a single organism "evolving," the only way to make this sensible is to view the economy "as a whole" as being the entire world economy, in some sense co-evolving with the global environment.

You are, of course, right. I was working from memory rather than checking... I will say, though (having checked Boulding's essay on economics as an evolutionary science), that Boulding did seem to mean it as an evolutionary process. I think he meant, and intended to mean, that evolution could work at the societal level.

How to get out of the morass of ignorance
The natural cognitive state vis-à-vis reality — and by implication vis-à-vis the economy — is this: "We are lost in a swamp, the morass of our ignorance. ... We have to find the roots and get ourselves out! ... Braids or bootstraps are necessary for two purposes: to pull ourselves out of the swamp and, afterwards, to keep our bits and pieces together in an orderly fashion." (Schmiechen, 2009, p. 11) How to get out of the swamp has been known for more than 2000 years as the scientific method: "When the premises are certain, true, and primary, and the conclusion formally follows from them, this is demonstration, and produces scientific knowledge of a thing." (Aristotle) Orthodoxy has followed this method and laid down its hard-core propositions: "HC1 economic agents have preferences over outcomes; HC2 agents individually optimize subject to constraints; HC3 agent choice is manifest in interrelated markets; HC4 agents have full relevant knowledge; HC5 observable outcomes are coordinated, and must be discussed with reference to equilibrium states." (Weintraub, 1985, p. 147) Orthodoxy is a failed approach because this axiom set contains nonentities, i.e. HC2, HC4, HC5 cannot by any stretch of the imagination be taken to be true. Clearly, when the premises are not 'certain, true, and primary' the whole theoretical superstructure falls apart. Exactly this happened with maximization-and-equilibrium economics. In this situation an 'empirical revolution' is pointless. Propositions that contain nonentities like utility, equilibrium, or the Easter Bunny are not testable to begin with. What instead has to be done is to replace the orthodox set of foundational propositions with a new set. J. S.
Mill identified the very first question of methodology: "What are the propositions which may reasonably be received without proof? That there must be some such propositions all are agreed, since there cannot be an infinite series of proof, a chain suspended from nothing. But to determine what these propositions are, is the opus magnum of the more recondite mental philosophy." The current state of economics is that Heterodoxy, too, has failed at the opus magnum. It is not sufficient to throw in any number of unrelated concepts like evolution, complexity, networks, nonlinearity, incomplete optimization, incomplete forward-looking-ness, externalities, behavioral heuristics, social preferences, cooperative games. This only proves the utter confusion about the subject matter. Economics is about the properties of the monetary economy. Because of this, economic analysis has to start with the objective system component of reality. The necessary paradigm shift requires the replacement of the false orthodox axiom set by an entirely new one. The most elementary configuration of the economy consists of the household and the business sector, which in turn consists initially of one giant fully integrated firm, and is given by these three objective structural axioms:

A1. Yw = WL: wage income Yw is equal to wage rate W times working hours L,
A2. O = RL: output O is equal to productivity R times working hours L,
A3. C = PX: consumption expenditure C is equal to price P times quantity bought/sold X.

These premises are certain, true, and primary, and therefore satisfy all methodological requirements. The paradigm shift consists of the move from HC1–HC6 to A1–A3. Everything else is frog quacking in the morass of ignorance.

Schmiechen, M. (2009). Newton's Principia and Related 'Principles' Revisited, volume 1. Norderstedt: Books on Demand, 2nd edition.
Weintraub, E. R. (1985). Joan Robinson's Critique of Equilibrium: An Appraisal. American Economic Review, Papers and Proceedings, 75(2): 146–149.

Egmont, what a joke. Your supposed axioms are nothing more than accounting identities, things that are true by definition. They lead nowhere at all other than to help in a "measurement without theory" sort of empirical investigation, which you claim is "pointless." But your axioms are good for nothing more, and not even all that useful for even that. Sorry, this is a joke, and a pretty pathetic one, especially coming from somebody as totally pompous as you seem to be.

AXEC / E.K-H 4:58 PM
(i) As you should know from econ101, accounting identities consist exclusively of nominal magnitudes. The three structural axioms consist of nominal AND real variables. Therefore, they are NO accounting identities, stupid. (ii) The three structural axioms constitute the formal backbone of economics. Models that do not consistently fit into this elementary mathematical framework, e.g. real models or Keynesian models, are out of economics for good. (iii) The three structural axioms define the elementary consumption economy, which every economist should thoroughly understand.* Because whoever does not understand the most elementary case has no chance at all of understanding anything. (iv) Your problem is that you are not even aware that you never understood what profit is. For your overdue enlightenment see the Palgrave Dictionary: "A satisfactory theory of profits is still elusive." (Desai). What do you call an economist who cannot tell what profit is? Clearly, incompetent would be a euphemism.
(v) From a deeper analysis of the consumption economy follows the most elementary version of the Profit Law.** Every economist — even you — now has a chance to understand the basics of economics.
(vi) From the differentiated structural axiom set follows the employment equation, see eq. (33) of the working paper 'Keynes' Employment Function and the Gratuitous Phillips Curve Disaster' http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2130421. Either you empirically refute this equation or simply get out of the way.

To recall, science is about formal and material proof. Blather does not count for much.

* See 'Toward the true economic axioms' http://axecorg.blogspot.de/2016/03/toward-true-economic-axioms.html
** See 'How the intelligent non-economist can refute every economist hands down' http://axecorg.blogspot.de/2015/12/how-intelligent-non-economist-can.html

[email protected] 11:12 PM
Oh gosh, Egmont, I take it all back. I now understand the true nature of profit thanks to you informing me that the wage bill equals the number of hours worked times labor productivity! I had never known that the wage bill equaled labor hours worked times labor productivity. All these years I thought that the wage bill was one half of the hours worked times labor productivity. Now that I realize the truth, mine eyes have been opened and now I see the glory of the Lord, not to mention what profits REALLY are. Wow! Danke schoen!

You say: "Your supposed axioms are ... true by definition." Yes, this is exactly what I assert. So we have common ground. Not only this, we are in perfect accordance with what Aristotle defined as the scientific method: "When the premises are certain, true, and primary, and the conclusion formally follows from them, this is demonstration, and produces scientific knowledge of a thing."

Having established a rock-solid starting point we can now advance: "The object of reasoning is to find out, from the consideration of what we already know, something else which we do not know." (Peirce) From the axioms we have agreed to be true it follows that Walrasianism and Keynesianism are provably false. This, in a nutshell, is the straightforward application of the axiomatic-deductive method to economics. The method has been known for more than 2000 years and you can look it up in Wikipedia: "Euclid's method consists in assuming a small set of intuitively appealing axioms, and deducing many other propositions (theorems) from these."

One very important theorem is the Profit Law, which says in its most elementary form (i) Qm = -Sm (Qm monetary profit, Sm monetary saving). This is the beauty of the scientific method: to 'find out, from the consideration of what we already know, something else which we do not know.' What theorem (i) tells you is that all I=S models are false and by implication that both Walrasianism and Keynesianism are as dead as a doornail. I am pretty sure that you did not know this until now.

You say about the axioms A1 to A3: "They lead to nowhere at all other than to help in a 'measurement without theory' sort of empirical investigation, which you claim is 'pointless.' But your axioms are good for nothing more, and not even all that useful for even that." It cannot be said that my axioms 'lead to nowhere.' At minimum they have led to the incontrovertible conclusion that you have been hanging around for too long in the scientific Neanderthal.

Egmont, your concept of science sounds more like mathematics: start with axioms and deduce conclusions from those.
Mathematics doesn't necessarily have anything to do with reality. Science makes use of mathematics, but does something entirely different: it proposes hypotheses and then tests those against nature (i.e. reality) to see if they're false and/or useless. If the hypotheses pass these empirical tests, then we are justified in incrementally increasing our confidence in them... and the testing never stops, until the hypotheses fail our tests, or until a better, simpler, more explanatory hypothesis comes along which is equally successful empirically.

Science produces a series of improving approximations to reality. At any one time in this process, the most accepted approximation has a region of validity over which it can be used: thus Newtonian mechanics is a justifiable simpler approximation over certain "everyday" conditions, while general relativity must be used outside of those conditions (at the cost of a bit of extra complexity). Meanwhile Aristotelian physics has been completely invalidated. There's no region of validity left for it: other approximations do a better job in all circumstances.

It remains a fact that your "axioms" are just accounting identities. Here is another fact: in the National Income and Product Accounts I = S is an accounting identity. They are equal by definition. You can disagree with their definitions and say that you have the REAL definition, which has savings equal to profits. But that is what the official accountants of GDP do: they define investment to equal savings. If you want to play with your own definitions, claimed to be a "theorem," fine, but nobody will pay any attention to you. Nobody.

"I think Wilson would enjoy learning about these successes, in addition to the well-publicized failures." Maybe, but you know he won't!

You say: "Your concept of science sounds more like mathematics." This is perhaps because you have a wrong idea of what science is all about. Science was already well established when Adam Smith declared that economics, too, is a science. Economics defends this claim to this day but has never delivered anything fitting the description of science. So, it is not "my" concept or "your" concept. Science is well-defined and economists either stick to the rules or they will be thrown out of science.

According to the criteria of formal and material consistency (Klant, 1994, p. 31), economics is indisputably a failed science. In methodological terms this means that the old paradigm is dead and a new paradigm is urgently needed. Because a paradigm is defined by its foundational propositions, a.k.a. axioms, a paradigm shift means practically to fully replace the old axiom set, i.e. HC1 to HC6 above, by a new one. This has nothing to do with mathematics as such. Newton put his PHYSICAL axioms on the first pages of the Principia.* In methodological analogy, ECONOMIC axioms have to be laid down by economists. This defines the subject matter.

The actual situation is this: Orthodoxy clings to a thoroughly refuted axiom set and Heterodoxy so far has failed to formulate a new one. As Keynes famously put it: "Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics." (Keynes, 1973, p. 16) As a matter of fact, Keynes's paradigm shift (= overthrow of axioms) failed. This means in the strict sense that economics has no scientifically valid axiomatic foundations at all. Walrasian and Keynesian economics is what Feynman famously called cargo cult science.
In other words, economics is de facto OUT of science. Economists violate well-defined scientific standards on a daily basis. To recall: "In economics we should strive to proceed, wherever we can, exactly according to the standards of the other, more advanced, sciences, where it is not possible, once an issue has been decided, to continue to write about it as if nothing had happened." (Morgenstern, 1941, pp. 369-370) In the proto-science of economics it is indeed possible to teach falsified theories like supply-demand-equilibrium generation after generation 'as if nothing had happened'.

Therefore, in economics the task is NOT to replace true 'Newtonian' axioms with true but more general 'Einsteinian' axioms but to replace the neoclassical axioms which are KNOWN to be FALSE. There is not "my" or "your" concept of science or different concepts in mathematics, physics, the so-called social sciences, or economics. There is only ONE way to build up a valid theory: "The basic concepts and laws which are not logically further reducible constitute the indispensable and not rationally deducible part of the theory. It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." (Einstein, 1934, p. 165)

The methodological term for 'basic concepts and laws which are not logically further reducible' is axioms. As long as economic theory is not based on a consistent set of axioms it is out of science. This has been the case since Adam Smith. There is no use wishy-washing around this embarrassing fact.

Einstein, A. (1934). On the Method of Theoretical Physics. Philosophy of Science, 1(2): 163–169. URL http://www.jstor.org/stable/184387
Keynes, J. M. (1973). The General Theory of Employment Interest and Money. London, Basingstoke: Macmillan.
Klant, J. J. (1994). The Nature of Economic Thought. Aldershot, Brookfield, VT: Edward Elgar.
Morgenstern, O. (1941). Professor Hicks on Value and Capital. Journal of Political Economy, 49(3): 361–393. URL http://www.jstor.org/stable/1824735.

Having a consistent set of axioms is not a guarantee that you're doing science (try finding a use for the field of "nondefinable numbers"). But let's get straight to it: can you use the models you derive from your theory to forecast the inflation rate (core CPI or core PCE), price level, exchange rates, monetary base, interest rates, or NGDP for a couple of dozen large-population countries around the world, maybe two years out? If not, what can we do with it? How does it stack up against the NY Fed DSGE? Please share with us. Maybe some charts like this and a place to download your source code so we can all give it a go.

(i) You say: "Having a consistent set of axioms is not a guarantee that you're doing science." Yes, indeed, because science requires formal AND material consistency. THIS is the guarantee: "Research is in fact a continuous discussion of the consistency of theories: formal consistency insofar as the discussion relates to the logical cohesion of what is asserted in joint theories; material consistency insofar as the agreement of observations with theories is concerned." (Klant) The fact of the matter is that orthodox economics lacks BOTH formal AND material consistency. This in turn is the guarantee that economics is NOT a science. In order to refute a theory it SUFFICES to prove EITHER logical or material inconsistency.
Walrasian and Keynesian economics is logically defective and the proof has been carried out by using the axiomatic-deductive method. No scientist will ever accept Walrasian or Keynesian economics.
(ii) Walrasian and Keynesian economics is refuted. Economists either do not know it, do not understand it, or ignore it. Either way, they are violating the well-defined standards of science. This is inexcusable.
(iii) It is a silly game to challenge a scientist by asking him to predict the future. He will simply tell you "The future is unpredictable." (Feynman)* Predicting the future is the business of imbeciles.
(iv) Because of this, economics from Jevons/Walras/Menger onward to DSGE is NOT dismissed because it has not predicted crises. Neoclassics is unacceptable because it is logically and empirically inconsistent.
(v) From the structural axioms follows the elementary version of the Profit Law.** You, or a competent economist for that matter, are invited to test it against the DSGE profit law. This test, though, is an entirely separate issue and should not distract from the fact that neoclassical economics has been axiomatically false for more than 140 years.
(vi) That the FED uses DSGE does not speak for DSGE but against the FED.

* For details see the posts 'Scientists do not predict' http://axecorg.blogspot.de/2016/02/scientists-do-not-predict.html or 'Prediction does not work? Try retrodiction first' http://axecorg.blogspot.de/2015/11/prediction-does-not-work-try.html
** See 'The three fundamental economic laws' http://axecorg.blogspot.de/2016/03/the-three-fundamental-economic-laws.html
The complete formula for the simulation of the elementary structural axiomatic consumption economy is given here https://commons.wikimedia.org/wiki/File:AXEC25.png You are certainly in the position to produce the source code yourself.

"The future is unpredictable." And yet predictions are made: about the climate, about gravitational waves being detected, about where Jupiter will be a year from now, about the death of stars, about what kinds of fossils will be found in specific geological layers, about where earthquakes are most likely to take place, about radioactive decay, about chemical reactions, etc., etc., etc. I'm not asking you to predict a lottery winner, I'm asking for what you say the trends will be for some very broad market indicators. Even a zeroth-order model will do (up or down). Is your theory useful? I'd like to see the evidence that it is. Make them conditional if you like: tell me what things cannot happen and what's more likely to happen. Tell us the kinds of states we'll likely find the economy in, and those that are excluded by your theory (past, present and future). I'm not asking you for anything unusual: other people make exactly the kinds of forecasts I requested based on theory. Are you willing to put your theory up against theirs?

Egmont, I take it all back. You win. I'd love to see you talk some sense into this guy now.

We have two SEPARATE issues: (i) orthodox economics is false and thoroughly refuted according to well-defined scientific criteria, and (ii), given that Orthodoxy is dead, what does the new paradigm look like? The problem with Noah Smith and you is that you have not yet realized (i). Because of this you are hopelessly locked in at the proto-scientific stage. With regard to Orthodoxy or so-called mainstream economics this is the situation: "... we may say that ...
the omnipresence of a certain point of view is not a sign of excellence or an indication that the truth or part of the truth has at last been found. It is, rather, the indication of a failure of reason to find suitable alternatives which might be used to transcend an accidental intermediate stage of our knowledge." (Feyerabend)

Therefore, for every economist there is but ONE worthwhile task: to contribute to the NEW paradigm. All the rest is pointless behind-the-curve blather. Instead of doing your scientific homework you argue with regard to (ii): "Even a zeroth order model will do (up or down). Is your theory useful? I'd like to see the evidence that it is. Make them conditional if you like: tell me what things cannot happen and what's more likely to happen. Tell us the kinds of states we'll likely find the economy in, and those that are excluded by your theory (past, present and future)." Obviously you cannot read. With regard to the issues of scientific prediction and testable economic laws I have referred you above to the pertinent posts which are in turn backed up by working papers. You have not realized this either. Economics could make much faster progress toward science if failed mainstream economists could simply get out of the way — NOW.

...preaching to the choir, Egmont. It's "Major-Freedom" that desperately needs your insights. (link above)

Science is not about preaching but proof. You are provably false and are defending the indefensible. Thank you for the link to TheMoneyIllusion. See my comment there 'The futility of testing economics blather' http://axecorg.blogspot.de/2016/03/the-futility-of-testing-economics.html

"Evolution needs heredity, so evolutionary theorists who want to change the econ world should focus on demonstrating the existence of traits that are passed from company to company, or person to person, or industry to industry, within economies." That's actually not the key point of evolution. The key point is variation and selection. You don't need heredity, you simply need to define how variation and selection occur. This is simple in econ - once a company tries an idea, that idea will be tried out by various other companies in slightly different ways. New companies will form based on some amalgam of existing ideas and (possibly) novel ones (these are akin to random mutations). You don't need an explicit form of mating and heredity to generate evolutionary effects - just a way to create variations of existing entities, then a way to select among those variations which ones persist to influence the next crop of variations.

Noah writes: The lesson appears clear: Bubbles exist. Investors aren't just rational, patient, well-informed, emotionless calculators of risk and return. Didn't we already know that from Keynes?
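A minimal numerical sketch of the "elementary Profit Law" quoted in this thread (Qm = -Sm), written in Python. It assumes nothing beyond the three structural axioms A1-A3 stated above plus two definitions that are not spelled out verbatim in the thread: household monetary saving Sm := Yw - C and business-sector monetary profit Qm := C - Yw (pure consumption economy, no investment, no government, no foreign trade). The numerical values are arbitrary illustrations.

import math  # not strictly needed; kept for easy extension

# Axioms of the elementary consumption economy (as quoted in the thread):
W, L = 25.0, 40.0        # wage rate, working hours          (A1: Yw = W * L)
R = 2.5                  # productivity                      (A2: O  = R * L)
P, X = 10.0, 90.0        # price, quantity bought and sold   (A3: C  = P * X)

Yw = W * L               # wage income
O = R * L                # output
C = P * X                # consumption expenditure

# Assumed definitions (not given verbatim in the thread):
Sm = Yw - C              # monetary saving of the household sector
Qm = C - Yw              # monetary profit of the business sector

print(f"Yw={Yw}, O={O}, C={C}, Sm={Sm}, Qm={Qm}")
assert Qm == -Sm         # the elementary Profit Law quoted above: Qm = -Sm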
CommonCrawl
Midterm 1 Practice Exam 1 Solutions
Stony Brook Physics, PHY132 Studio, Fall 2015

Question 1 Solution

Two blocks of mass $m_{1}$ and $m_{2}$ are connected by a rope which runs over a frictionless pulley of negligible mass. One block hangs from the rope, while the other rests on a plane with coefficients of static and kinetic friction $\mu_{s}$ and $\mu_{k}$ inclined at an angle of $\theta$ to the horizontal.

A. (5 points) Add arrows indicating the direction of all of the forces acting on both $m_{1}$ and $m_{2}$ to the diagram.

B. (5 points) Assuming that the incline is sufficiently steep that the blocks move when they are released from rest, find an expression for the downwards acceleration, $a$, in terms of $m_{1}$, $m_{2}$, $g$, $\mu_{s}$ or $\mu_{k}$ and $\theta$.

$m_{1}a=m_{1}g-T$

$m_{2}a=T+m_{2}g\sin\theta-\mu_{k}m_{2}g\cos\theta$

Adding the two equations eliminates the tension:

$(m_{1}+m_{2})a=m_{1}g+m_{2}g\sin\theta-\mu_{k}m_{2}g\cos\theta$

$a=\frac{m_{1}g+m_{2}g\sin\theta-\mu_{k}m_{2}g\cos\theta}{m_{1}+m_{2}}$

For parts C-G consider a case where $m_{1}=$ 4 kg, $m_{2}=$ 1 kg, $\mu_{s}=$ 0.2 and $\mu_{k}=$ 0.15.

C. (5 points) If $\theta=30^{o}$ what is the velocity and displacement of $m_{1}$, 0.5 s after the system has been released from rest?

$a=\frac{4+1\sin 30^{o}-0.15\cos30^{o}}{4+1}\times9.8=8.57\mathrm{\ ms^{-2}}$

$v=at=8.57\times0.5=4.28\mathrm{\ ms^{-1}}$ down

$y=\frac{1}{2}at^{2}=0.5\times8.57\times0.5^{2}=1.07\mathrm{\ m}$ down

D. (5 points) What is the magnitude of the tension in the rope during the motion in part (C)?

$T=m_{1}g-m_{1}a=4\times(9.8-8.57)=4.92\mathrm{\ N}$

E. (5 points) How much work is done by gravity on the two block system during the motion in part (C)?

$W_{g}=m_{1}gd+m_{2}gd\sin 30^{o}=4\times9.8\times1.07+1\times9.8\times1.07\times\sin 30^{o}=47.19\mathrm{\ J}$

F. (5 points) How much work is done by friction during the motion in part (C)?

$W_{Fr}=-\mu_{k}m_{2}g\cos\theta\, d=-0.15\times 1 \times 9.8\times\cos30^{o}\times 1.07=-1.36\mathrm{\ J}$

G. (5 points) How much work is done by the normal force on $m_{2}$ during the motion in part (C)?

$W_{N}=0\mathrm{\ J}$

Question 2 Solution

A 200 kg rocket is fired straight up from the ground with an initial velocity of 100 $\mathrm{ms^{-1}}$, after which it is subject to a constant gravitational acceleration of 9.8 $\mathrm{ms^{-2}}$ down. A 15 kg projectile is fired from a cannon 1 km away at the same time as the rocket is launched. 10 seconds later the projectile hits the rocket.

A. (5 points) At what height above the ground does the collision take place?

Solve using the rocket's equation of motion:

$y=v_{0}t-\frac{1}{2}gt^{2}=100\times10-0.5\times9.8\times10^{2}=510\mathrm{\ m}$

B. (5 points) At what angle $\theta$ above the horizontal must the cannon be fired in order to hit the rocket?

Equations of motion of the projectile:

$x=v_{0}\cos\theta\, t$

$y=v_{0}\sin\theta\, t -\frac{1}{2}gt^{2}$

Requiring $x=1000\mathrm{\ m}$ and $y=510\mathrm{\ m}$ at $t=10\mathrm{\ s}$ (the $-\frac{1}{2}gt^{2}$ term contributes the same $-490\mathrm{\ m}$ to both projectile and rocket), these give

$1000=v_{0}\cos\theta \times 10$

$1000=v_{0}\sin\theta \times 10$

$\theta=45^{o}$

C. (5 points) What is the initial velocity of the projectile launched from the cannon?

$v_{0x}=100\mathrm{\ ms^{-1}}$

$v_{0}=\frac{v_{0x}}{\cos45^{o}}=141\mathrm{\ ms^{-1}}$

D. (5 points) What is the kinetic energy of the rocket when it is hit by the projectile?

Can use either conservation of mechanical energy,

$KE=\frac{1}{2}mv_{0}^{2}-mgh=0.5\times200\times100^{2}-200\times9.8\times510=400\mathrm{\ J}$

or find the velocity of the rocket when it is hit,

$v=100-9.8\times10=2\mathrm{\ ms^{-1}}$

$KE=\frac{1}{2}mv^{2}=400\mathrm{\ J}$
E. (5 points) What is the kinetic energy of the projectile when it hits the rocket?

$KE=\frac{1}{2}mv_{0}^{2}-mgh=0.5\times15\times100^{2}\times2-15\times9.8\times510=75030\mathrm{\ J}$

F. (10 points) If instead of starting with the velocity found in part B the projectile was fired with a velocity of 500 ms$^{-1}$, find the required angle $\theta$ above the horizontal for the projectile to hit the rocket and the height at which the collision occurs in this case.

Equations of motion for the projectile are now:

$x=500\cos\theta\, t$

$y=500\sin\theta\, t-\frac{1}{2}gt^{2}$

The angle can be found by considering when the y coordinates of the two objects are the same:

$500\sin\theta\, t-\frac{1}{2}gt^{2}=100t-\frac{1}{2}gt^{2}$

$\sin\theta=\frac{1}{5}$

$\theta=11.54^{o}$

The time at which the collision takes place can now be found using the equation for the x motion of the projectile:

$t=\frac{1000}{500\cos\theta}=2.04\mathrm{\ s}$

The height can now be found by substituting back into the y equation of motion:

$y=500\times\frac{1}{5}\times2.04-0.5\times9.8\times2.04^{2}=183.6\mathrm{\ m}$

Question 3 Solution

A 500 kg car is traveling with a constant speed of 50 $\mathrm{ms^{-1}}$ in a circular path around the inside edge of a cylinder. The radius of the curve is 40 m. The coefficient of static friction between the road and the tires is $\mu=0.2$.

A. (5 points) How long does it take for the car to go around the cylinder once?

$T=\frac{2\pi r}{v}=\frac{2\pi \times40}{50}=5.03\mathrm{\ s}$

B. (5 points) What is the magnitude of the normal force exerted by the wall of the cylinder on the car?

$F_{N}=\frac{mv^{2}}{r}=\frac{500\times50^{2}}{40}=31250\mathrm{\ N}$

C. (5 points) What is the magnitude and direction of the frictional force on the car? Give the direction as one of the following: up, down, towards the center of the cylinder, radially outwards, in the direction the car is moving, opposite to the direction the car is moving.

$F_{Fr}=mg=500\times9.8=4900\mathrm{\ N}$ up

D. (5 points) How much work is done by the normal force in one complete circuit of the track?

$0\ \mathrm{J}$

E. (5 points) How much work is done by the frictional force in one complete circuit of the track?

$0\ \mathrm{J}$ (the friction force is vertical while the displacement is horizontal)

F. (5 points) The car starts to slow down. At what speed does it start to slip down the wall of the cylinder?

The car starts to slip when the maximum frictional force is less than the weight of the car. The minimum velocity for this not to happen can be found from

$\mu\frac{mv^{2}}{r}=mg$

$v^{2}=\frac{rg}{\mu}=\frac{40\times9.8}{0.2}$

$v=44.3\mathrm{\ ms^{-1}}$
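A quick numerical cross-check of the worked answers above, written as a minimal Python sketch. It uses only the values given in the problems; small differences from the quoted numbers are rounding.

import math

g = 9.8

# Question 1, parts B-F (m1 = 4 kg, m2 = 1 kg, mu_k = 0.15, theta = 30 deg, t = 0.5 s)
m1, m2, mu_k, theta, t = 4.0, 1.0, 0.15, math.radians(30), 0.5
a = (m1*g + m2*g*math.sin(theta) - mu_k*m2*g*math.cos(theta)) / (m1 + m2)  # ~8.57 m/s^2
v = a*t                                                                    # ~4.28 m/s
d = 0.5*a*t**2                                                             # ~1.07 m
T = m1*(g - a)                                                             # ~4.9 N
W_grav = m1*g*d + m2*g*d*math.sin(theta)                                   # ~47.2 J
W_fric = -mu_k*m2*g*math.cos(theta)*d                                      # ~-1.36 J
print(a, v, d, T, W_grav, W_fric)

# Question 3 (m = 500 kg, v = 50 m/s, r = 40 m, mu = 0.2)
m, v_car, r, mu = 500.0, 50.0, 40.0, 0.2
period = 2*math.pi*r/v_car          # ~5.03 s, time for one circuit
F_N = m*v_car**2/r                  # 31250 N, normal force from the wall
F_fr = m*g                          # 4900 N, friction force directed up
v_slip = math.sqrt(r*g/mu)          # ~44.3 m/s, below this speed the car slips
print(period, F_N, F_fr, v_slip)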
CommonCrawl
The GL(m|n) type quantum matrix algebras II: the structure of the characteristic subalgebra and its spectral parameterization Dimitri Gurevich USTV, Université de Valenciennes, 59304 Valenciennes, France Pavel Pyatov Max Planck Institute for Mathematics, Vivatsgasse 7, D-53111 Bonn, Germany Bogoliubov Laboratory of Theoretical Physics, JINR, 141980 Dubna, Moscow region, Russia Pavel Saponov Division of Theoretical Physics, IHEP, 142284 Protvino, Russia @thsun1.jinr.ruPavel.S In our previous paper [GPS2] the Cayley-Hamilton identity for the GL(m|n) type quantum matrix algebra was obtained. Here we continue investigation of that identity. We derive it in three alternative forms and, most importantly, we obtain it in a factorized form. The factorization leads to a separation of the spectra of the quantum supermatrix into the "even" and "odd" parts. The latter, in turn, allows us to parameterize the characteristic subalgebra (which can also be called the subalgebra of spectral invariants) in terms of the supersymmetric polynomials in the eigenvalues of the quantum supermatrix. For our derivation we use two auxiliary results which may be of independent interest. First, we calculate the multiplication rule for the linear basis of the Schur functions sλ(M) for the characteristic subalgebra of the Hecke type quantum matrix algebra. The structure constants in this basis are the Littlewood-Richardson coefficients. Second, we derive a series of bilinear relations in the graded ring Λ of Schur symmetric functions in countably many variables (see [Mac] ). 2 Structure of the characteristic subalgebra 2.1 Littlewood-Richardson multiplication formula for sλ(M) 2.2 Bilinear relations 3 Various presentations of the Cayley-Hamilton identity 3.1 Separation of "even" and "odd" spectral values 3.2 Cayley-Hamilton identities for skew-symmetric and symmetric matrix powers 4 Spectral parameterization of the characteristic subalgebra 4.1 Parameterization of the single column and the single row Schur functions 4.2 Parameterization of the Schur function s[m|n](M) In the present paper, we continue the investigation of the supersymmetric GL(m|n) type quantum matrix (QM) algebras initiated in [GPS2] . Let us recall briefly the history of the subject. The first examples of the QM algebras were considered in the seminal papers of V. Drinfel'd [D] and L. Faddeev, N. Reshetikhin and L. Takhtajan [RTF] . There, a particular family of QM algebras — the algebras of quantized functions on the groups, shortly called the RTT algebras, were defined. Soon after, another important subclass of QM algebras — the reflection equation (RE ) algebras, were introduced into consideration (see, e.g., [KSkl, KSas] ). The general definition of the QM algebras was found by L.Hlavaty who aimed at giving a unified description for RTT and RE algebras [H] . This idea might seem quite strange at first glance (the representation theories of the RTT and the RE algebras are very different). At the same time, the structure investigations carried out separately for the RTT [EOW, Zh, IOPS] and the RE algebras [NT, PS, GPS1] reveal a remarkable similarity of both the algebras to the classical matrix algebra. Namely, it turns out that the RE and the RTT families admit a noncommutative generalization of the Cayley-Hamilton theorem and for the matrices of generators in both the cases a noncommutative analogue of their spectra can be constructed. 
Having this in mind, the general definition of the QM algebras was independently reproduced in [IOP2] and the noncommutative version of the Cayley-Hamilton theorem was derived for the QM algebras of the general linear type (see [IOP1, IOP2, IOP3] ). The family of GL(m) type QM algebras was a good case to start with. An investigation of the other classical series of the QM algebras falls into two cases — the case of the Hecke type QM algebras and the case of the Birman-Murakami-Wenzl (BMW) type QM algebras. The difference is in the choice of a quotient of the group algebra of the braid group which enters (through its R-matrix representation) into the QM algebra definition. The Hecke case contains the general linear type and its supersymmetric generalization — the GL(m|n) type QM algebras. The BMW case includes orthogonal- and symplectic- type QM algebras and their supersymmetric analogues. An investigation of the BMW case was started in [OP2] , where the Cayley-Hamilton identity and the spectra of the orthogonal- and symplectic- type QM algebras were identified. The supersymmetric GL(m|n) type QM algebra was studied in our previous paper [GPS2] . In that paper, we gave a proper definition of the family of the GL(m|n) type QM algebras and proved the Cayley-Hamilton identity for them. Our work may be viewed as a generalization of both the results by I. Kantor and I. Trishin on the Cayley-Hamilton equation for the supermatrices [KT1, KT2] (the invariant Cayley-Hamilton equation in their terminology), and by P.D. Jarvis and H.S. Green on the characteristic identities for the general linear Lie superalgebras [JG] . Still lacking in the GL(m|n) case is the identification of the spectrum of the quantum supermatrices.111 Here we put the problem for the generic QM algebra. For the subfamily of RE algebras and at the level of finite dimensional representations it was considered in [GL] . Alternatively, one can ask for a proper parameterization of the characteristic subalgebra of the GL(m|n) type QM algebra (the abelian subalgebra of the QM algebra which the coefficients of the Cayley-Hamilton identity belong to). This problem is addressed in the present work. First, we investigate in detail the structure of the characteristic subalgebra in the Hecke case. Then, we derive a series of bilinear relations in the graded ring Λ of Schur symmetric functions in countably many variables (for the definition see [Mac] ). These combinatorial relations may be of independent interest. The structure of the paper is as follows. In the next section, subsection 2.1, we derive the multiplication rule for the set of linear basic elements of the Hecke type characteristic subalgebra — the so-called Schur functions sλ(M) (the notation is explained below). The structure constants in this basis are just the Littlewood-Richardson coefficients. In other words, we define the homomorphic map from the ring of symmetric functions Λ to the characteristic subalgebra of the Hecke type QM algebra. To efficiently apply this map in the GL(m|n) case, we need a series of bilinear relations for the Schur symmetric functions sλ∈Λ.222 There should be no confusion between the elements sλ∈Λ and their homomorphic images sλ(M) in the characteristic subalgebra. The argument in the latter notation is used for distinguishing purposes. It refers to the matrix of generators of the QM algebra. They are proved in subsection 2.2. For derivation we use the Jacobi-Trudi formulas for the Schur functions sλ and apply the Plücker relations. 
The same method was used in [LWZ, Kleb] for the derivation of different bilinear relations for the Schur functions. We also remark that our bilinear relations certainly have common roots with the factorization formula for the supersymmetric functions [BR, PT] . In section 3, we derive three alternative expressions for the Cayley-Hamilton identity for the GL(m|n) type QM algebra. In subsection 3.1, the bilinear identities of subsection 2.2 are used to factorize the GL(m|n) type characteristic identity into a product of two terms. Let us stress that the factorization is achieved without extending the algebra by the eigenvalues of the quantum supermatrix. To the best of our knowledge, this fact has not been observed before even in the classical supermatrix case. The factorization allows us to separate "even" and "odd" eigenvalues of the quantum supermatrix in a covariant manner. That is, we do not specify explicitly the Z2-grading for the components of the quantum supermatrix. Instead, we observe the "manifestation of even and odd variables" in the factorization property of the characteristic polynomial. Two more versions of the Cayley-Hamilton theorem are presented in subsection 3.2. They are given in terms of the (skew-)symmetric powers of the quantum matrices333 The notion of the skew-symmetric power of the matrix was suggested by A.M. Lopshits (see [G.G] , p.342.) and generalize the corresponding results of [KT2, T] to the case q≠1. Yet another series of bilinear relations for the Schur symmetric functions sλ is used here for derivations (see lemma 8). These relations are also applied in the last section for parameterization of the Schur functions sλ(M). Finally, in section 4, we compute expressions for the coefficients of the GL(m|n) type Cayley-Hamilton identity in terms of the quantum matrix eigenvalues. The resulting parameterization is given in terms of the supersymmetric polynomials [Stem] (see also [Mac] , section 1.3, exercises 23 and 24). It is worth mentioning that the supersymmetric polynomials were originally introduced by F. Berezin [B1, B2] for a description of invariant polynomials on the Lie superalgebra gl(m|n) (see also [S1] and references therein). Some auxiliary q-combinatorial formulae which we need for derivations in section 2.1 are proved in the appendix. Throughout this text we keep the notation of the paper [GPS2] . When referring to formulae from that paper we use the shorthand quotation, e.g., symbol (I-3.21) refers to formula (21) from section 3 of [GPS2] . For reader's convenience in the rest of the introduction we collect a list of notation, definitions and results mainly from [GPS2] . Let V be a finite dimensional C-linear space, dimV=N. Consider a pair of elements R,F∈Aut(V⊗2). Fixing some basis {vi}Ni=1 in the space V we identify operators R and F with their matrices in that basis. We use the shorthand matrix notation of [RTF] . I.e., we write Ri (or, sometimes, more explicitly Rii+1) for the matrix of the operator Id⊗(i−1)⊗R⊗Id⊗(k−i−1) acting in the space V⊗k. Here Id∈Aut(V) denotes the identity operator. The integer k is not shown in the matrix notation. In each particular formula the actual value of k can be easily reconstructed. Few more conventions: I is the identity matrix; P∈Aut(V⊗2) is the permutation automorphism (P(u⊗v)=v⊗u). The pair of operators R and F can be used as an initial data set for the QM algebra, provided they satisfy the following conditions The matrices of both operators R and F are strict skew invertible. 
The skew invertibility means, say for R, the existence of an operator ΨR∈End(V⊗2) such that Tr(2)R12ΨR23=P13, where the subscript in the notation of the trace shows the number of the space V, where the trace is evaluated (here we adopt labelling V⊗k:=V1⊗V2⊗⋯⊗Vk). The strictness condition implies invertibility of an element DR1:=Tr(2)ΨR12: DR∈Aut(V). With the matrix DR one then defines the R-trace operation TrR:MatN(W)→W TrR(X):=N∑i,j=1DRjiXij,X∈MatN(W), where W is any linear space (in considerations below W is the space of the QM algebra). The operators R and F are the R-matrices, that means they satisfy the Yang-Baxter equations R1R2R1=R2R1R2,F1F2F1=F2F1F2. The operators R and F form a compatible pair {R,F} (the order of operators in this notation is essential) R1F2F1=F2F1R2,F1F2R1=R2F1F2. Given the pair {R,F} satisfying conditions i)–iii) the quantum matrix algebra M(R,F) is defined as a unital associative algebra which is generated by N2 components of the matrix ∥Mij∥Ni=1 subject to the relations444 In [GPS2] we also demand skew invertibility of an operator Rf:=F−1R−1F in the definition of the QM algebra. As is proved in [OP2] (see lemma 3.6), the latter condition is a consequence of i)–iii). R1M¯1M¯2=M¯1M¯2R1. (1.1) Here we used the iterative procedure M¯1=M,M¯¯¯¯¯¯¯¯k+1=FkM¯¯¯kF−1k (1.2) for the production of copies M¯¯¯k of the matrix M. The defining relations (1.2) then imply the same type relations for any consecutive pair of the copies of M (see lemma I-4) RkM¯¯¯kM¯¯¯¯¯¯¯¯k+1=M¯¯¯kM¯¯¯¯¯¯¯¯k+1Rk. (1.3) Imposing additional conditions on the R-matrix R we then extract specific series of the QM algebras. Demanding R to be the Hecke type R-matrix, that means its minimal polynomial to be of the second order (R+q−1I)(R−qI)=0,q∈{C∖0}, (1.4) we specify to the Hecke type QM algebra. The C-number q becomes the parameter of the algebra. Given a Hecke type R-matrix (1.4), one can construct a series of R-matrix representations of the Hecke algebras555 A brief description of the Hecke algebras, their R-matrix representations, the primitive idempotents and the basis of matrix units is given in [GPS2] , sections 2 and 3. For a more detailed exposition of the subject the reader is referred to [R, OP1] and to references therein. Hp(q) ρR:Hp(q)→End(V⊗p),p=2,3,…. (1.5) Let us impose an additional restriction on the parameter q q2k≠1,k=2,3,…, (1.6) which ensures the algebras Hp(q), p=2,3,…, to be semisimple. Then we can further specify to a series of the GL(m|n) type QM algebras. For their definition we use a set of the primitive idempotents Eλα∈Hp(q) labelled by the standard Young tableaux {λα}, where λ⊢p is a partition of p, and index α enumerates different standard tableaux corresponding to the partition λ (see section I-2). The GL(m|n) type QM algebra is characterized by the following conditions the representations ρR (1.5) are faithful for all p<(m+1)(n+1); for p≥(m+1)(n+1) the kernel of ρR is generated by (any one of) the primitive idempotents E((n+1)(m+1))α corresponding to the rectangular Young diagram ((n+1)(m+1)); the Schur function s(nm)(M) (see definition below) corresponding to the rectangular Young diagram (nm) is an invertible element of the QM algebra.666 This condition was not imposed in [GPS2] . We will need it now for the spectral parameterization of the characteristic subalgebra (see eqs.(3.5), (3.6)). 
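A standard consequence of the Hecke condition (1.4) is worth recalling here as a side remark (it is not spelled out in the text, but it underlies the iterative construction of the idempotents $E_{(1^k)}$ in (2.4) below): for $q$ subject to (1.6) the R-matrix resolves into a pair of complementary idempotents,

$$P_{\pm}=\frac{q^{\mp 1}I\pm R}{q+q^{-1}},\qquad P_{+}+P_{-}=I,\qquad P_{\pm}^{2}=P_{\pm},\qquad P_{+}P_{-}=P_{-}P_{+}=0,\qquad R=q\,P_{+}-q^{-1}P_{-},$$

and in particular $\rho_R\bigl(E_{(1^2)}\bigr)=\frac{qI-R}{q+q^{-1}}=P_{-}$, the q-antisymmetrizer.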
Two comments are in order: It is well known that the classical (q=1) GL(m|n) type supermatrices satisfy properties v)-a) and v)-b) (for a proof see [S2] , theorem 2, and [KT1] , theorem 1.2). Hence, using the deformation arguments we can make sure that these properties remain valid for a variety of the QM-algebras related to the standard GL(m|n) type R-matrices described, e.g., in [DKS, I] . For our applications (at least) it is convenient to use v)-a) and v)-b) as defining conditions for the GL(m|n) type QM-algebras. It seems verisimilar that (for q generic) any Hecke type QM-algebra is of GL(m|n) type (for some values of integers m and n). We are going to further argue this point in a separate work. Notice that relation m+n=N≡dimV between the algebra parameters m, n and N is not assumed in the definition. Although it is indeed satisfied in many examples (say, for the QM algebras constructed by the standard GL(m|n) type R-matrices), there are known exceptions from this rule. A series of counter-examples was constructed in [G.D] . From now on we restrict ourselves to considering the Hecke type QM algebras with the parameter q satisfying condition (1.6). The characteristic subalgebra Char(R,F) of the QM algebra M(R,F) is a linear span of the set of Schur functions sλ(M) where Eλα is any one of the primitive idempotents corresponding to the partition λ (the expression in (1.7) does not depend on α). As was shown in [IOP1] , Char(R,F) is an abelian algebra with respect to the multiplication in M(R,F). Consider a subspace Pow(R,F)⊂MatN(M(R,F)) which is spanned linearly by the elements Ich(M) ∀ch(M)∈Char(R,F),and (1.8) M(x(k)):=TrR\scriptsize(2…k)(M¯1…M¯¯¯kρR(x(k))) ∀x(k)∈Hk(q),k=1,2,…. (1.9) In what follows elements of the space Pow(R,F) will be shortly called the quantum matrices. In [GPS2] it was shown that the space of the quantum matrices carries the structure of the right Char(R,F)-module and as a Char(R,F)-module it is spanned by a series of quantum matrix powers of M M¯0:=I,M¯1:=M,M¯¯¯k:=TrR\scriptsize(2…k)(M¯1…M¯¯¯kRk−1…R1),k=2,3,…. (1.10) In section 4.4 of [OP2] an analogue of the matrix multiplication was introduced for the space Pow(R,F). It was shown there that the quantum matrix multiplication agrees with the right Char(R,F)-module structure; it is associative (see proposition 4.12) and, moreover it is commutative (see propositions 4.13, 4.14). The latter result should not be surprising as all the elements of Pow(R,F) are descendants of the only quantum matrix M.777 There should be no confusion between the quantum matrix product and the multiplication in M(R,F). The latter one is the product of the matrix components, while the first one is the product of the quantum matrices. For our purposes in this paper it is enough knowing formulae M¯¯¯k=M∗M∗⋯∗M\smallk times,(Ich(M))∗M¯¯¯k=M¯¯¯k∗(Ich(M))∀ch(M)∈Char(R,F), (1.11) where by symbol "∗" we denote the quantum matrix product. We also notice that for the family of the RE algebras the product ∗ reduces to the usual matrix product. For the detailed description of the quantum matrix multiplication the reader is referred to [OP2] . The main result of our previous paper [GPS2] is the Cayley-Hamilton theorem for the GL(m|n) type QM algebras (see theorem I-10). For its compact formulation and for later convenience we introduce a shorthand notation for the following Young diagrams (partitions) Here the indices k and l take values l=0,…,r, k=0,…,p. 
If one of the indices k or l takes zero value, we will omit it in the notation, e.g., [r|p]0k=[r|p]k. (Cayley-Hamilton identity) In the setting i)–iv) and v)–a,b) the quantum matrix M composed of the generators of the GL(m|n) type QM algebra M(R,F) fulfills the characteristic identity n+m∑i=0M¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯m+n−imin{i,m}∑k=max{0,i−n}(−1)kq2k−is[m|n]ki−k(M)≡0. (1.13) The authors are grateful to Alexei Davydov, Nikolai Iorgov, Alexei Isaev, Issai Kantor, Hovhannes Khudaverdian, Dimitry Leites, Alexander Molev, Vladimir Rubtsov, and Vitaly Tarasov for numerous fruitful discussions and valuable remarks. PP and PS gratefully acknowledge the support from RFBR grant No. 05-01-01086. The work of PP was also partly supported by the grant of the Heisenberg-Landau foundation. Consider the graded ring Λ of symmetric functions in countably many variables. A Z-basis of Λ is given by the Schur symmetric functions sλ, λ⊢n, for n≥0 (we adopt definitions and notation of ref. [Mac] , sections 1.2 and 1.3). It is not accidental that the similar notation sλ(M) is assigned to the elements (1.7) of the characteristic subalgebra of the Hecke type quantum matrix algebra M(R,F). Indeed, consider the additive map Λ∋sλ↦sλ(M)∈Char(R,F)⊂M(R,F)\em(Hecke type). (2.1) Our first main result is as follows. In the setting i)–iv) and (1.6) the additive map (2.1) naturally extends to the homomorphism of rings. A proof of the theorem is given in the subsection 2.1. In the subsection 2.2 we derive some bilinear relations for the Schur symmetric functions sλ∈Λ. These relations are necessary for the derivations in section 3. We will prove the theorem 2 by a direct calculation. To this end we adopt its alternative formulation ′ Let M(R,F) be a Hecke type QM algebra generated by the components of matrix M. Assume that condition (1.6) on the algebra parameter q is satisfied. Then, the multiplication in the corresponding characteristic subalgebra Char(R,F) is described by the relations sλ(M)sμ(M)=∑ν⊢(k+n)cνλμsν(M), (2.2) where sλ(M),sμ(M)∈Char(R,F) are the Schur functions (1.7), and cνλμ are the Littlewood-Richardson coefficients (see, e.g., [Mac] , section 1.9). Proof. Since the cases m=0 or k=0 in (2.2) are trivial, we assume m≥1 and k≥1. Let us first prove the relation (2.2) for the case μ=(1k) is a single column diagram. In that case it reads sλ(M)s(1k)(M)=∑ν⊃λν⊢(k+n)′sν(M). (2.3) Here ⊃ denotes the inclusion relation on the set of standard Young tableaux (see section I.2.1) and the summation ∑′ is taken only over those diagrams ν whose set theoretical difference with λ is a vertical strip (for terminology see [Mac] , section 1.1). For single column diagrams (1k), k=2,3,…, their corresponding primitive idempotents E(1k) satisfy the well known iterative relations (see, e.g. [TW] , lemma 7.2, or [GPS1] , section 2.3) E(1)=1,E(1k)=(k−1)qkqE(1k−1)(qk−1(k−1)q1−σk−1)E(1k−1), (2.4) where we use notation of the section I.2.2. We shall apply these relations for a derivation of eq.(2.3). Consider the following chain of transformations =1kqTrR\scriptsize(1…n+k)[ρR(Eλα)(qI−Rn+1)…(qk−1(k−1)qI−Rn+k−1)M¯1…M¯¯¯¯¯¯¯¯¯n+k]. (2.5) Here in the first line we substitute definition (1.7) for the Schur functions and use eq.(I.3.19) for s(1k)(M) (the notation Eμ↑nβ is described in lemma I.6). We remind that this expression is independent of the choice of index α labelling the primitive idempotents Eλα∈Hn(q). In the second line we apply formula (2.4) (recall that Ri=ρR(σi)). 
In the third line we use relations (1.3) to permute the term ρR(E(1k−1)↑n) with the product of matrices M, then apply cyclic property of the R-trace to move ρR(E(1k−1)↑n) to the leftmost position, and take into account the commutativity of the idempotents E(1k−1)↑n and Eλα. Repeating these transformations (k−1) times we eventually obtain the last line expression. Let us denote the argument of the R-traces in (2.5) as Q(R) := ρR(Eλα)Xn+1, (2.6) Xi := (qi−n(i−n)qI−Ri)(qi−n+1(i−n+1)qI−Ri+1)…(qk−1(k−1)qI−Rn+k−1). (2.7) We notice that in view of relations (1.3) and the cyclic property of the R-trace one can perform cyclic permutations of factors in Q(R) without altering the expression (2.5). We shall use this cyclic invariance in order to transform Q(R) to a suitable form. The strategy of the transformation is as follows. We use a sequence of resolutions of the idempotent Eλα∈Hn(q) (λ⊢n) in terms of idempotents Eνβ∈Hn+i(q) (ν⊢(n+i), i≥1) described in (I.2.21) Eλα=∑ν⊃λν⊢(n+i)∑β:β⊃αEνβ. (2.8) We successively increase i in (2.8) from 2 to k and evaluate the factors (qi−1/(i−1)qI−Rn+i−1) in Q(R) on the idempotents ρR(Eνβ) ρR(Eνβ)(qi−1(i−1)qI−Rn+i−1)\mathchar13320\relax=(ℓn+i−1+i−1)q(i−1)q(ℓn+i−1)qρR(Eνβ). (2.9) Here ℓj:=c(j)−c(j+1) denotes the difference of the contents of boxes with numbers j and (j+1) in the standard tableau {νβ} (for definitions see section I.2.1); the symbol "\mathchar13320\relax=" means equality modulo cyclic permutation of factors. The evaluation rule can be argued as follows. Observe that the relations Eνβσj≡Eνβ(σj+q−ℓj(ℓj)q1)−q−ℓj(ℓj)qEνβ=(ℓj+1)q(ℓj)qEνβπj(β)−q−ℓj(ℓj)qEνβ,1≤j≤n+i−1, (2.10) are satisfied in the algebra Hn+i(q) (see (I.2.16)). Here the symbol Eνβπj(β) stands for the off-diagonal matrix unit labelled by the pair of standard Young tableaux {νβ} and {νπj(β)}, where the tableau {νπj(β)} is obtained from the tableau {νβ} by the permutation πj of boxes j and (j+1). If {νπj(β)} is non-standard the term with Eνβπj(β) is absent in (2.10). Now, transform the expression ρR(Eνβ)Rn+i−1=ρR(Eνβσn+i−1) in the left hand side of (2.9) with the use of eq.(2.10). In Q(R) the contribution of the off-diagonal matrix unit ρR(Eνβπj(β)) vanishes by virtue of the cyclic invariance. Indeed, ρR(Eνβπj(β))Xn+i=ρR(Eν′β′Eνβπj(β))Xn+i\mathchar13320\relax=ρR(Eνβπj(β))Xn+iρR(Eν′β′) =ρR(Eνβπj(β)Eν′β′)Xn+i=0. (2.11) Here the idempotent Eν′β′ corresponds to the standard tableau {ν′β′} obtained from the tableau {νβ} by removing the box with the number (n+i). The first and the last equalities in (2.11) are consequences of eq.(2.8) and the multiplication table for the matrix units (I.2.7). In the second equality we made the cyclic permutation of terms which is allowed in Q(R). The factors ρR(Eν′β′) and Xn+i are built of the mutually commuting R-matrices wherefrom the third equality in (2.11) follows. Eventually, collecting the coefficients at the diagonal matrix unit ρR(Eνβ) in Q(R) results in the right hand side of eq.(2.9). So, we begin the transformation of Q(R). Setting i=2 in (2.8) we come to the expression Q(R)=∑ν⊃λν⊢(n+2)∑β:β⊃αρR(Eνβ)(qI−Rn+1)Xn+2. (2.12) For our calculation we have to specify an explicit way of enumeration of the Young tableaux. For a given tableau {λα}, λ⊢n, we take the index α:={a1,a2,…an} to be an ordered set of pairs of integers ai:={xi,yi}, where xi and yi are, respectively, the number of column and row where the i-th box stands. Recall that the content of the i-th box is c(i)=xi−yi (see sec.I.2.1). 
In the summation index β in eq.(2.12) only the last two components vary. We shortly denote them as a and b, that is β={…,a,b}. For a and b in the summation (2.12) we have following three possibilities. i) a={x,y},b={x+1,y}. In this case ℓn+1=c(n+1)−c(n+2)=−1. Hence, due to relation (2.9) such tableaux do not contribute to Q(R). ii) a={x,y},b={x,y+1}. In this case ℓn+1=c(n+1)−c(n+2)=1. Hence, due to relation (2.9) the contributions of such tableaux in (2.12) equal 2qρR(Eν{…,a,b})Xn+2. (2.13) iii) a={x,y},b={¯x,¯y}, such that x≠¯x and y≠¯y. In this case we combine contributions coming from two tableaux of the same shape with indices β={…a,b} and πn+1(β)={…b,a}. Taking into account eq.(2.9) we get (ρR(Eν{…,a,b})(ℓn+1+1)q(ℓn+1)q+ρR(Eν{…,b,a})(ℓn+1−1)q(ℓn+1)q)Xn+2 (2.14) for the corresponding summands in (2.12). Noticing that the term (2.13) fits the form (2.14) with ℓn+1=1 we can rewrite (2.12) as Q(R)\mathchar13320\relax=∑ν⊃λν⊢(n+2)(a,b)′(ρR(Eν{…,a,b})(ℓn+1+1)q(ℓn+1)q+ρR(Eν{…,b,a})(ℓn+1−1)q(ℓn+1)q)Xn+2, (2.15) where the summation goes over different shape diagrams ν⊢(n+2) which are counted by unordered pairs (a,b), a={x,y} and b={¯x,¯y}. There is an additional condition y≠¯y which means that in the diagram ν the boxes with numbers (n+1) and (n+2) can not appear in the same row. It is this restriction which the summation symbol ∑′ refers to (c.f. (2.3)). For what follows it is suitable to change our notation for ℓn+1. We substitute ℓn+1=c(n+1)−c(n+2)⟶ℓab=(x−y)−(¯x−¯y) to manifest clearly the dependence on the summation variables a and b. We now proceed to the next step of the transformation. Substituting (2.8) for i=3 into eq.(2.15) and noticing ℓab=−ℓba we obtain aQ(R)\mathchar13320\relax=∑τ⊃λτ⊢(n+2)(a,b)′∑ν⊢(n+3):c=ν∖τ(ρR(Eν{…,a,b,c})(ℓab+1)q(ℓab)q+ρR(Eν{…,b,a,c})(ℓba+1)q(ℓba)q)(q22qI−Rn+2)Xn+3, (2.16) where c labels all possible complements of the diagram τ⊢(n+2) by the (n+3)-th box. Applying relation (2.9) we reduce this expression to the form aQ(R)\mathchar13320\relax=∑τ⊃λτ⊢(n+2)(a,b)′∑ν⊢(n+3):c=ν∖τ(ρR(Eν{…,a,b,c})(ℓab+1)q(ℓab)q(ℓbc+2)q2q(ℓbc)q+ρR(Eν{…,b,a,c})(ℓba+1)q(ℓba)q(ℓac+2)q2q(ℓac)q)Xn+3, (2.17) Next, we observe that the idempotents ρR(Eν{…,a,b,c}) and ρR(Eν{…,b,a,c}) in the expression above can be identified. Indeed, denoting σi(ℓ):=(σi−qℓ/ℓq1) we have ρR(Eν{…,b,a,c})Xn+3\mathchar13320\relax=ρR(σn+1(ℓab)Eν{…,b,a,c})Xn+3ρR(σn+1(ℓab))−1 =ρR(Eν{…,a,b,c}σn+1(−ℓab)(σn+1(ℓab))−1)Xn+3\mathchar13320\relax=ρR(Eν{…,a,b,c})Xn+3, (2.18) where the cyclic invariance together with relations (I.2.13), (I.2.10) and (2.11) were taken into account. Thus, from now on the order of labels a and b makes no difference in the notation Eν{…,a,b,c} and we simplify it to Eν{…,c}. Then, the expression (2.17) reduces to Q(R)\mathchar13320\relax=∑τ⊃λτ⊢(n+2)(a,b)′∑ν⊢(n+3):c=ν∖τρR(Eν{…,c})(ℓac+1)q(ℓac)q(ℓbc+1)q(ℓbc)qXn+3. (2.19) Here, noticing that ℓab=ℓac−ℓbc, we have transformed the coefficients at ρR(Eν{…,c}) using the q-combinatorial formula (A.3) for k=2 and b1=ℓac, b2=ℓbc (see Appendix). The double summation is carried out with the restriction that boxes (n+1), (n+2) and (n+3) which are labelled by a, b and c must be placed in different rows of the diagram ν. Finally, we prepare the expression (2.19) for the next step calculation by collecting the summands which correspond to tableaux of the same shape Q(R)\mathchar13320\relax=∑ν⊃λν⊢(n+3)(a,b,c)′(ρR(Eν{…,a,b,c})(ℓac+1)q(ℓac)q(ℓbc+1)q(ℓbc)q+ρR(Eν{…,b,c,a})(ℓba+1)q(ℓba)q(ℓca+1)q(ℓca)q +ρR(Eν{…,c,a,b})(ℓcb+1)q(ℓcb)q(ℓab+1)q(ℓab)q)Xn+3. 
(2.20) where the summation goes over different shape diagrams ν⊢(n+3) counted by unordered triples (a,b,c) such that neither pair of boxes a, b and c is placed at the same row of ν. Repeating the transformations described in eqs.(2.16)–(2.20) successively for i=4,…,k and using q-combinatorial relations (A.3), we eventually obtain Here the unordered (k−1)-tuples (a1,…ak−1) counting different shape diagrams τ⊢(n+k−1) are subject to restriction that τ∖λ is a vertical strip. The summation variable ak labels all possible complements of the diagram τ⊢(n+k−1) by the (n+k)-th box. Formula (2.21) is the i=k step analogue of the relation (2.19). An important difference is the absence of the X-term in the right hand side of the expression (one can say that Xn+k=1). Therefore, in the final expression for Q(R) we have no need to distinguish between the different idempotents ρR(Eν{…,ak,…}) (ak taking various positions) corresponding to the same shape diagram ν⊢(n+k). Thus, the analogue of eq.(2.20) reads Here by Eν{…} an arbitrary primitive idempotent corresponding to Young diagram ν is understood, the summation ∑′ goes over all diagrams ν⊢(n+k) such that ν∖λ is a vertical strip, and in the last equality we used q-combinatorial formula (A.2) setting ℓaiaj=bi−bj. Substituting expression (2.22) for Q(R) in eq.(2.5) we derive formula (2.3), which is a particular example of the Littlewood-Richardson rule. Now we are ready to prove the general case. To this end, let us argue that elements s(1k)(M), k=0,1,…, form a Z-basis of generators for the set of Schur functions. Indeed, with the help of eqs.(2.3) it is easy to see that s(2k1m)(M)=s(1(k+m))(M)s(1k)(M)−s(1(k+m+1))(M)s(1(k−1))(M)∀k≥1,m≥0. Then, using eqs.(2.3), elements s(3k,2m,1n)(M) can be expressed as linear combinations of monomials of the type s(2l1p)(M)s(1r)(M). Etc. Repeating this procedure finitely many times one can express any Schur function sλ(M) as a polynomial in generators s(1k)(M), k=0,1,… . The explicit expressions are given by famous Jacobi-Trudi identities (see [Mac] , section 1.3). At last, since the product of generators s(1k)(M) is described by the specification (2.3) of the Littlewood-Richardson formula, the product of two arbitrary Schur functions sλ(M) and sμ(M) is to be given by eq.(2.2). In this subsection we derive a series of bilinear relations for the Schur symmetric functions sλ∈Λ. By the homomorphic map (2.1) one can translate them to the characteristic subalgebra of the Hecke type quantum matrix algebra. These relations are used in section 3.1 to split the characteristic identity in the GL(m|n) case into the product of two factors and, thereby, to separate "even" and "odd" parts of the spectra of quantum matrices. Our derivation is based on the use of the Plücker relations and we start from their short reminding (for details see [Stur] ). Consider a pair of n×n matrices A=∥aij∥n1 and B=∥bij∥n1. We denote the i-th row of the matrix A as ai∗ and introduce notation detA:=[A],A:=(a1∗…ai∗…an∗1…i…n), (2.23) where the latter symbol contains a detailed information on the row content of A. Namely, it says that the row ai∗ appears in the matrix A at the i-th place (counting downwards). Let us fix a set of integer data {k,r1,r2,…,rk} such that 1≤k≤n and 1≤r1<⋯<rk≤n. Given these data the Plücker relation reads [A][B]=∑1≤s1<⋯<sk≤n [a1∗…bs1∗…bs2∗…bsk∗…an∗1…r1…r2…rk…n]× (2.24) [b1∗…ar1∗…ar2∗…ark∗…bn∗1…s1…s2…sk…n], where the sum is taken over all possible sets {k,s1,…,sk}. 
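As an illustration of the notation in (2.23), (2.24) (a worked example added for the smallest nontrivial case): take n=2, k=1, r_1=1. The sum then runs over s_1 ∈ {1,2} and (2.24) reduces to the classical two-term exchange identity

$$[A]\,[B]=\begin{vmatrix}b_{1*}\\ a_{2*}\end{vmatrix}\begin{vmatrix}a_{1*}\\ b_{2*}\end{vmatrix}+\begin{vmatrix}b_{2*}\\ a_{2*}\end{vmatrix}\begin{vmatrix}b_{1*}\\ a_{1*}\end{vmatrix},$$

which is readily verified by expanding the 2x2 determinants on both sides.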
We now apply the Plücker relations for the proof of Proposition 3 Let us fix four integers r, p, l and k, such that 1≤l≤r and 1≤k≤p. Then in the ring Λ of symmetric functions the following bilinear relations are satisfied (for the notation see (1.12)) s\raisebox−0.227622pt$[r|p]lk$s\raisebox−0.227622pt$[r|p]$=
CommonCrawl
New Gmat Format Home » GMAT Exam Help » New Gmat Format New Gmat Format The Gmat Format is a format of recording and marking, commonly used in house recording studios. It is widely used in artists' and record producers' studio, among them Brian Eno (for their credits), John Mayer (for their early projects), and Michael Wiederschneider (for their commercials). Definition and concept Gmat format Gmat formats are a collection of recording or marking, commonly associated with the recording or mixing of an artist. As such a format does not comprise solely a recording of the artist's music, but rather also gives some control over the recording process, producing additional material and providing a formal indication of control over the recording process. Some artists define a Gmat format as providing the way of recording and keeping the artist a part of the process; the exact terms vary depending on distribution and styles of recording. Gmat formats can also be defined as aspects of their documentation, meaning that it is possible to read a Gmat format and it represents the artist's work. Artists that describe their work may also claim ownership of the artist's materials but at the very least they can give a full and full description of the artwork and the documents under which the artist is writing. Examples Gmat used to be common for artists of recording work, such as Keith Richards, Bob Dylan and Ray Davies, but these days I am mainly focusing on Keith Richards, Bob Dylan, and Ray Davies in modern music – now to be published together with the rest of the artists and photographers present to the world over and beyond the Gmat format in their individual and collective best days. In the early days of recording I reviewed the group with a couple of musicians in New York City, John Johnston & Mike Rees and John Deere in Chicago, including Mike Spyer and Ray Davies. Here's the Gmat format we are using at the time. He is at least imp source same as the others around. It's the first format produced, but used in the artists' studio and is almost a work of art. In several cases in music publishing processes, such as the late 1970's, it is designed to take the place of the other types of music production. The concept of a Gmat format would indicate whether the artist intended to have it in their home studio, or not. Gmat format has a special meaning in the contemporary music format as well as a unique value from an art model, because it simply shows the artist living their artistic lives, or at least the artist's work. This type of artwork can be found in different genres of music, both metal and rock. In many examples, a Gmat file might be a B&R of some kind, many artists not being aware of the differences between a Gmat format look at more info a B&R. For example, the Gmat Format was used in the film "TakeOver", featured in B&R 2001's "All the Lights Have Fallen". This style of music started as early as the 1980s. Performances The composition, expression, timbre, words, and music can be interpreted in any way—especially narrative, even if "tone-replay". Help Write My Assignment One of the most effective compositions is featured in music publishers' music events and concerts, being an example of what helpful hints go into an event, a painting or song in any way possible. So, sometimes you will find a musician, composer, or other recording artist as an check my blog for the song, but the song may even be composed in a song format. 
Often, a song being composed, would create an interest in an out-of- tune song. Types and Discover More Here Gmat formats may currently come together in several different music forms, including recording, with special focus on three types, namely: Vinyl/EP (also known as Digital Grambarga) A vinyl recording, typically used at a museum or museum artifact, or at a recording studio or public exhibition A vinyl/EP recording featuring a music composition that closely mimics the recording of an artist's work, so there are two main methods for obtaining this type of recording, the first being a "vibro record" (an artifact of a period of musical movement) produced for a "live" album and then taken for audio recording, often by the recording artist with music, or the artist as the recording artist ANew Gmat Format Algebra (8C6:4FM) Lectures on the Structure Problem of Real Types and Fundamental Groups. Springer-Verlag, 2018. 1. Introduction. 1.. 2. Main Example of Spherical Spline Representation Algebra Hyliet van Oumen, Hyliet Witte Rijtgen. Untersuchungen zur Algebra-Notes des Ausführen der derischen Institut für Mathematik der Könige 1110 N 543 (2011). Introduction ============ In this paper we shall concentrate on two main aspects of Spherical Spline Representation Algebra, i.e., what we here mean by the general term. Here the number 1110 is the first (there is a link between these two part of the paper and the relevant parts in the Spherical Representation Algebra) that we shall use. Let us briefly provide briefly some items on these two notions, with some definitions and facts. Sspline Representation Algebra is the derived form of Spherical Spline Representation Algebra. The Definition ============= It is from Theorem \[th:sphericalsum\] that we intend to prove that the $6$-dimensional (very-) real Spherical Algebra consists of three real points $Q_1$ and $Q_2$ with Cartesian points of infinity $a_1$ and $a_2$ respectively. There is a Cartesian middle point $b_1$ on the plane with $a_1,a_2$ tangencies to the plane and a linear connection $$\label{eq:q1} \nabla : \left\{ Q_1a_1 \le 0 \le b_1 \le Q_2 \le a_1 \le b_2\right\} \longrightarrow \mathbb{R}. Pay Someone To Do My Homework Cheap $$ Note that the cartesian middle point $b_1$ is tangent to the plane and $a_1$ is the point in the real plane described above. We recall $$\mbox{Cartesian :} \quad \mathbb{R}^3\times \mathbb{R}^3 \longrightarrow \mathbb{C}.$$ The Cartesian center of $\mathbb{C}$ is the Cartesian middle point $b_0^N$ with respect to the Cartesian center of $H$. $$\mbox{Cartemd :} \quad \mathbb{R}^3 \cong \mathbb{C} \cong H \cong \mathbb{R}.$$ The Cartesian middle of $\mathbb{R}^{n-3} \times \mathbb{R}^n$ consists of two Cartesian middle points $Q_1$ and $Q_2$ (depending on the orientation) of complex plane $$\mbox{Conal :} \quad \mathbb{C} \cong \mathbb{C} \cong q_1 \times q_2,$$ where $q_1(q_2)$ is the Cartesian middle point (as an element in $\mathbb{X}(\mathbb{R}^3)$) and $q_i \in \{0,1,2,3\}$. In the spherical case, we note that $q_i \in \{0,1,2,3\}$ are equal to the Cartesian middle points of the real Weyl bundle, and the Cartesian middle, c.f. R. Abraham [@Ab], in the following, $a_1 \in \mathbb{Z}$, have all Cartesian middle points zero. Regarding one-dimensional representations, we note that we can realize (for a Cartesian middle point) $Q_{2,1} \in \mathbb{C}$ as any part of a real Weyl bundle whose cartesian middle points are given by $Q_2$ and $Q_1$. 
Go Here general, we know that any real $x \in H$ in a projection $J$ of a complex plane $\mathbb{P}^1$New Gmat Format and the First Three Years in A The history of the Gmat Format has been used here in a series of posts, with a lot of responses from readers or a few specific Gmat products. Read them if you want to know more about what's going on at the Gmat format. And of course, if these are not comments from fans, then let me know. Gmat Format | 6 It's important to remember that both the format and go to this web-site format of the Internet are tightly controlled. An online blog is designed in an online system consisting of a computer-interface and more than 400,000 square feet. As a simple reference, which can be found here, the Gmat is an online system composed of 10 main components: 1. a server which only delivers explanation from in-house databases (queries) 2. a database manager which displays (real-time) find here via OpenOffice.org (server-specific) files, read by an RTF parser 3. a database loader which processes the data in a format similar to the format available in a Gmat browser 4. I Will Do Your Homework For Money a database interface running on a Gmat browser instead of on separate systems 5. the machine that the Gmat format is based(es) on 6. the database system Gmat Format and The Datacenters – In Review Since 2005 The Gmat formats are designed to keep track and to improve user experience for their users and to provide access to more information. The format is designed to keep users thinking in a generally relaxed and very accurate manner because of the fact it uses more memory than a database. A Gmat Dfiller contains all of the core data, mostly stored in a distributed database system. The format will keep all of the database in memory for as long as the format or query fails. During frequent updates its Dfiller is also faster and more precise to it, but it also has enough memory in Gmat that Dfiller can be used anytime anytime. However, when updates are at bay that may cause memory exhaustion in some cases, the Dfiller is optimized for those situations. In addition, the format only has one file system, the memory pages (read/write/read/write elements in the contents of Gmat file). The only time you can use a Dfiller in a Gmat format is when you're running a Gmat client application built with Emacs. The RTF Parser for Dfiller is available as a free extension to your Gmat Dfiller. While the Gmat Dfiller is able to extract all of its files from a database (which is usually something very simple and easy — it works with a couple of database tools and a database format), it is best to see the RTF files simply for you (it simply means, you know, getting it all bundled, all of it. The RTF is available here for free with the help of Free RTF support documentation. If you need more help or suggestions, please call us at 800-662-4958 or email us @rfmat.io). The RTF Parser involves about 4 (hits =.gifs per line, in small order) files. The first round is where the Dfiller's focus moves. In order to get read review file, you first have to start the RTF Parser. That time, you need to click on the open the RTF file. Then you set a key in your command line, like this: To get one file, make sure you've not marked your RTF file name with /home/xyz/./cdb/tmp/rtf/tmp/rtf.RTF/. The command you're running in the Gmat client will put the RTF file in the desired file structure. Then you enter an RTF checkbox. Do not exceed those arguments. 
Remember: the file format should be the same as the RTF size above. The next step is to create the jQuery module. You will see that, once you can execute the File.getFiles method, you get the jQuery file name. The Modules file uses the jQuery module, and you can load the jQuery file with File.get.
CommonCrawl
Automated Quantification of Early Bone Alterations and Pathological Bone Turnover in Experimental Arthritis by in vivo PET/CT Imaging Bianca Hoffmann1,2 na1, Carl-Magnus Svensson ORCID: orcid.org/0000-0002-9723-90633 na1, Maria Straßburger4, Björn Gebser1, Ingo M. Irmler5, Thomas Kamradt5, Hans Peter Saluz1,2 & Marc Thilo Figge2,3 Scientific Reports volume 7, Article number: 2217 (2017) Cite this article The assessment of bone damage is required to evaluate disease severity and treatment efficacy both in arthritis patients and in experimental arthritis models. Today there is still a lack of in vivo methods that enable the quantification of arthritic processes at an early stage of the disease. We performed longitudinal in vivo imaging with [18F]-fluoride PET/CT before and after experimental arthritis onset for diseased and control DBA/1 mice and assessed arthritis progression by clinical scoring, tracer uptake studies and bone volume as well as surface roughness measurements. Arthritic animals showed significantly increased tracer uptake in the paws compared to non-diseased controls. Automated CT image analysis revealed increased bone surface roughness already in the earliest stage of the disease. Moreover, we observed clear differences between endosteal and periosteal sites of cortical bone regarding surface roughness. This study shows that in vivo PET/CT imaging is a favorable method to study arthritic processes, enabling the quantification of different aspects of the disease like pathological bone turnover and bone alteration. Especially the evaluation of bone surface roughness is sensitive to early pathological changes and can be applied to study the dynamics of bone erosion at different sites of the bones in an automated fashion. Rheumatoid Arthritis (RA) is one of the most common autoimmune diseases with a prevalence of up to 1% in developed countries1. Main characteristics of RA include synovitis and painful joint swelling in the early stages followed by subsequent bone erosion, which leads to loss of joint function and life quality. Early diagnosis is crucial for the success of disease-suppressing anti-inflammatory treatments and there is a window of opportunity for early therapeutic intervention2,3,4. Consequently, sensitive diagnostic methods are inevitable to increase treatment success and to monitor treatment efficacy. Besides early diagnosis and continued monitoring, it is also important to understand the underlying mechanisms that lead to progressive, erosive RA. Animal models have not only provided insights into molecular mechanisms but also allow the development and preclinical studying of new diagnostic methods and therapeutic approaches5,6,7. For a long time plain radiography (x-ray) in humans and histopathological examination in animals have been the gold-standards to assess arthritic processes. Their major drawbacks are insufficient resolution, poor visualization of complex structures and in case of histopathological analysis the high number of laboratory animals to be sacrificed for tissue analysis. In contrast, ultrasound, magnetic resonance imaging (MRI), computed tomography (CT) and positron emission tomography (PET) are minimal-invasive and allow, similar to x-ray, for longitudinal in vivo studies, whereas CT and MRI yield much higher resolution and PET offers insights into metabolic processes8. 
We have shown earlier that the PET tracers [18F]-fluorodeoxyglucose ([18F]-FDG) and [18F]-fluoride are feasible to assess joint inflammation and pathological bone metabolism in a model of glucose-6-phosphate isomerase (G6PI)–induced arthritis9,10,11. CT has also widely been used to quantify pathological changes of the bones in experimental models as well as RA patients12,13,14,15,16. Bone erosion occurs already early in the course of RA3, 12 and, therefore, many studies aim to quantify this erosive process. Silva et al.17 proposed an approach based on the quantification of bone surface roughness. As the erosion and, thereby, the increasing roughness of the surface should be a precursor of visually detectable lesions, this method seems to be very promising for early and sensitive quantification of bone destruction. Nevertheless, this approach has never been used for further studies of experimental arthritis models, which could be due to a crucial scale parameter that has to be defined for each study and can potentially influence the results to a great extent. In this work, we used in vivo [18F]-fluoride PET/CT imaging to quantify pathological bone metabolism and bone destruction in a model of G6PI-induced arthritis in mice. For the first time, we evaluated the bone surface roughness longitudinally in this experimental model and compared these results to non-immunized control animals. In contrast to Irmler et al.10, we used a roughness quantification based on surface triangulation, which is minimally affected by bone growth and yielded a high sensitivity to bone destruction in the early stage of the disease. These two aspects were lacking in the surface area representation of roughness used by Irmler et al.10. Additionally, we investigated the progression of bone surface roughness at the endosteal and periosteal sites of cortical bone separately as well as different spatial scales on which the roughness occurs. This concept allowed us to detect unequal impairment of endosteal and periosteal surfaces by experimental arthritis without injuring the animals. The calculation of bone surface roughness was implemented as a fully automated image analysis pipeline, which also includes automated segmentation of the regions of interest and therefore allows for high-throughput studies that are free of user bias. Finally, quantitative results from PET and CT as well as clinical scoring were compared to each other by correlation analysis in the present study. Glucose-6-phosphate isomerase–induced arthritis Female inbred DBA/1 mice (weight 16.2–19.5 g; University Hospital, Jena, Germany) were housed under standard conditions (temperature: 20 ± 2 °C, humidity: 50 ± 10%, light/dark cycle: 12/12 h) in groups of 3 to 4 animals in individually ventilated cages, and fed with normal mouse chow and water ad libitum. All animals were cared for in accordance with the principles outlined by the European Convention for the protection of vertebrate animals used for experimental and other scientific purposes. Experiments were in compliance with the German animal protection law and were approved by the Federal State Authority of Thuringia and ethics committee (permit Reg.-Nr. 02–001/14). Experimental arthritis was induced as described earlier11 at an age of 10 to 13 weeks in 10 animals that were picked at random. In brief, mice were immunized subcutaneously with 400 μg of recombinant human G6PI emulsified in complete Freund's adjuvant (Sigma-Aldrich, Taufkirchen, Germany). 
Clinical scoring of arthritis manifestation was performed macroscopically. Swelling and erythema of wrist and ankle joints, metacarpophalangeal (MCP) and metatarsophalangeal (MTP) joints in each paw was graded from 0 to 3 as established by Irmler et al. 10. Representative images of mice paws for the respective scores are shown in Supplementary Fig. S1. The cumulative clinical score for each paw was calculated as the sum of the scores for wrist/ankle joint, MCP/MTP joint and digits/toes. Positron emission tomography/Computed tomography in vivo imaging In vivo imaging was performed with a multimodal Siemens Inveon Small Animal PET/CT system (Siemens Healthcare Medical Imaging). The PET modality has radial, axial and transaxial resolutions of 1.5 mm at the center of the field of view18. We performed PET acquisitions with a coincidence timing window of 3.4 ns and an energy window of 350–650 keV. PET acquisitions, each with duration of 20 minutes, were started 35 minutes after injection of [18F]-fluoride in 0.9% sodium chloride solution with an activity of approximately 10 MBq into the tail vein. 3D PET images were reconstructed with three-dimensional ordered subset expectation maximization/maximum a priori algorithm and CT-based attenuation correction. The CT modality consists of a cone beam x-ray micro–CT (µCT) source with a focal spot size of 50 μm and a built-in 0.8 mm carbon fiber filter and a 3,072 × 2,048–pixel x-ray detector. The μCT acquisition protocol for high resolution scans of hind paws used 2,048 × 2,048–pixel axial-transaxial resolution, magnification parameter med-high, 80 kV at 500 μA, 3500 ms exposure time, total rotation of 360° and 360 projections per scan. CT images were reconstructed using a Shepp–Logan filter and cone-beam filtered backprojection with a pixel resolution of 14.609 µm. The animals were anesthetized with 3% isoflurane (Deltaselect, Dreieich, Germany) vaporized in oxygen (1.5 l/min) in an external anesthesia chamber prior to the PET/CT scans and kept under anesthesia throughout the scans with 1.5% isoflurane vaporized in oxygen (1.5 l/min) to prevent animal movement. Anesthesia was monitored by measuring the respiratory frequency and the body temperature was kept at 37 °C by using a heating pad. Imaging was performed longitudinally before immunization (day −4) and at different time points (days 10, 14, 18/20, 24 and 35) of acute and chronic arthritis with immunized mice (n = 10) and non-immunized (n = 6) controls. The overall scanning time for one animal at one time point was approximately 55 min. Quantification of pathophysiological bone metabolism The quantitative uptake of [18F]-fluoride as a measure of pathophysiological bone metabolism in fore and hind paws of the mice was calculated with Siemens Inveon Research Workplace 4.0 software based on fused PET/CT images. Guided by the CT images spherical (ellipsoids) volumes of interest (VOIs) were placed manually around the bone and joint structures of fore and hind paws, respectively. A representative cross-sectional slice with overlaid VOI boundaries is shown in Supplementary Fig. S2. 
Each VOI was then thresholded at 40% of its maximum standard uptake value (SUV; g/ml), retaining only the pixels above this cutoff. The SUV describes the incorporation of [18F]-fluoride into the bone and is defined as: $$\mathrm{SUV}=\frac{\text{average activity in VOI}}{\text{injected activity}}\cdot \text{body weight}$$ Reasonable threshold values were found to range from approximately 30% to 50%, and the results were not very sensitive to changes within this range (see Supplementary Fig. S2). Based on this segmentation, the mean SUV and standard deviation in each VOI were calculated automatically by the software. Automated preparation of VOIs based on μCT images For further analysis, high-resolution μCT images were automatically cut to VOIs including only the left and right hind paw, respectively (Fig. 1). To this end, each μCT image stack (Fig. 1a) was converted from 16-bit to 8-bit grayscale with intensity values between 0 and 255 and down-sampled by a factor of 4 in order to speed up the algorithm. The image stack was binarized by thresholding at an intensity value of 75 into foreground (bony/cartilage structures) and background (surrounding tissue and air) pixels. The fairly low threshold value was chosen so as to include only high-density pixels representing bone material while ensuring that anatomically connected regions are not split. Based on this binarization, 8-connected foreground pixels in the 3D space of the image stack were found and these objects were subsequently represented by the coordinates and size of their surrounding bounding boxes (Fig. 1b). Smaller objects like pelvic bones were excluded from further processing by considering exclusively bounding boxes containing more than 85,000 foreground pixels as potential candidates to represent the left or right hind paw (Fig. 1c). Furthermore, all bounding boxes that did not appear in a predefined section in z-space (z min < 300 or z max − z min < 300) were excluded (Fig. 1d). The bounding boxes that represent the left and right hind paw were found based on their coordinates in x-, y- and z-space, as the two paws usually appear in the lower left and right regions of the image stack for our image acquisition protocol. The endpoint of the hind paws was defined where the tibia enters the tarsocrural joint and the calcaneus starts to appear in the image stack. This position was found by the typical pattern of the number of connected objects in each slice in z-direction (Supplementary Fig. S3). According to this new endpoint the bounding boxes were shrunk in z-direction (Fig. 1e), then upsampled by a factor of 4 and some additional space was added in each direction (x: 2 × 20 pixels, y: 2 × 20 pixels, z: 2 × 10 pixels). The resulting region was cut out of the original high-resolution CT image (Fig. 1f). The automated VOI preparation algorithm was implemented in Java using the ImageJ library v1.49b19. Preparation of defined μCT VOIs containing left and right hind paws. 16-bit image stacks (a) were converted to 8-bit grayscale, down-sampled by a factor of 4 and binarized via intensity thresholding into foreground and background pixels. Bounding boxes represent individual, connected structures of foreground pixels, which are shown here as volume renderings (b). All bounding boxes containing fewer foreground pixels than a certain threshold were discarded (c), as well as bounding boxes that are not located at the leftmost and rightmost regions of the image stack (d).
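The bounding-box selection described above (panels (b)–(d) of Fig. 1) can be illustrated compactly. The plain-Java sketch below is not the authors' ImageJ implementation: it filters candidate boxes by foreground-voxel count and by the z-space criterion and then picks the leftmost and rightmost survivors as the two hind paws. The 85,000-voxel cutoff and the z thresholds are taken from the text, while the class layout, the simplification to x-extremes and the toy values in main are assumptions made purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (not the published ImageJ pipeline): select the two bounding boxes that
// represent the left and right hind paw from a list of connected-component boxes.
public class PawBoundingBoxFilter {

    static class BoundingBox {
        int xMin, xMax, yMin, yMax, zMin, zMax;
        long foregroundPixels;   // number of 8-connected foreground voxels inside

        BoundingBox(int xMin, int xMax, int yMin, int yMax, int zMin, int zMax, long fg) {
            this.xMin = xMin; this.xMax = xMax; this.yMin = yMin; this.yMax = yMax;
            this.zMin = zMin; this.zMax = zMax; this.foregroundPixels = fg;
        }
    }

    /** Keep boxes with more than minPixels foreground voxels and a plausible z-extent. */
    static List<BoundingBox> candidatePaws(List<BoundingBox> boxes, long minPixels) {
        List<BoundingBox> out = new ArrayList<>();
        for (BoundingBox b : boxes) {
            boolean bigEnough = b.foregroundPixels > minPixels;            // e.g. 85,000
            boolean inZRange = !(b.zMin < 300 || (b.zMax - b.zMin) < 300); // z-space criterion
            if (bigEnough && inZRange) out.add(b);
        }
        return out;
    }

    /** Of the remaining candidates, pick the leftmost and rightmost boxes along x. */
    static BoundingBox[] leftAndRightPaw(List<BoundingBox> candidates) {
        BoundingBox left = null, right = null;
        for (BoundingBox b : candidates) {
            if (left == null || b.xMin < left.xMin) left = b;
            if (right == null || b.xMax > right.xMax) right = b;
        }
        return new BoundingBox[]{left, right};
    }

    public static void main(String[] args) {
        List<BoundingBox> boxes = new ArrayList<>();
        boxes.add(new BoundingBox(10, 200, 400, 600, 320, 700, 120_000));  // toy left paw
        boxes.add(new BoundingBox(500, 700, 400, 600, 330, 710, 115_000)); // toy right paw
        boxes.add(new BoundingBox(300, 350, 100, 150, 310, 900, 40_000));  // small object, discarded
        BoundingBox[] paws = leftAndRightPaw(candidatePaws(boxes, 85_000));
        System.out.println("left paw xMin = " + paws[0].xMin + ", right paw xMax = " + paws[1].xMax);
    }
}
```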
The two bounding boxes representing left and right hind paw were shrunk to a defined region, which was determined by the end of the calcaneus bone (e) and the obtained VOIs were cut out of the original stack (f, shown here: left hind paw). μCT-based assessment of VOI roughness Based on the prepared μCT VOIs triangulated surface meshes were reconstructed with the marching cubes (MC) algorithm17, 20. The intensity threshold for the MC algorithm was obtained individually for each VOI by Otsu's method21 to compensate for variations in the image intensity, caused by the imaging modality itself. In order not to lose resolution, the grid size for MC was chosen to be equal to one pixel. Additionally, we corrected the surface meshes for ambiguities according to Cignoni et al.22. Potentially duplicated vertices and remaining artifacts, which were not representing parts of the hind paws, were removed automatically from surface meshes by a script using MeshLab software 64-bit v1.3.323. For the removal of artifacts a cutoff value of 1,900,000 triangles was chosen as the surface meshes of the hind paws comprised 2,320,000 triangles on average, ranging from 1,930,000 to 3,194,000 triangles. For following comparison, three different surface reconstructions of each VOI were created: complete surface (cs): comprises the whole surface, as described above outer surface (os): comprises only the periosteal bone surface inner surface (is): comprises only the endosteal bone surface These three surface reconstructions are individually visualized in Fig. 2. The roughness of the three surface reconstructions of each paw was in principal calculated as described by Silva et al.17. In brief, for each triangle in a surface mesh a circular vicinity of adjacent triangles was defined by a roughness radius r (Supplementary Fig. S4). The angles between each adjacent triangles' normal vector and the current triangle's normal vector was computed and the mean angle was obtained. The angle information of all surface meshes of the control data (before immunization) was compiled into a composite histogram. A threshold angle, which discriminates between smooth and rough regions was obtained by fitting a probability density function (PDF) to this histogram. While Silva et al. used a gamma PDF, we decided to use a lognormal PDF (equation (2)), as it is computationally more efficient and fits better to the composite histogram of our VOIs (Supplementary Figs S5, S6 and S7). $$p(x)=\frac{1}{x\sigma \sqrt{2\pi }}{e}^{-\frac{{(\mathrm{ln}x-\mu )}^{2}}{2{\sigma }^{2}}}$$ Visualization of complete, outer and inner cortical bone surface meshes. (a) The complete cortical bone surface mesh comprises the periosteal surface of the cortical bone and the endosteal surface which is separating the cortical substance from the medullary cavity and can be seen in the cross-sectional view at the bottom. (b) The surface mesh of the outer cortical bone surface only consists of the periosteal surface of the cortical substance. The endosteal surface is not part of this mesh, as can be seen in the corresponding cross-sectional view. (c) The surface mesh of the inner cortical surface only comprises the endosteal surface that separates the cortical substance from the medullary cavity. The threshold angle, T, was calculated as described by Silva et al. 
as the mean plus 2 standard deviations of the PDF: $${\mu }_{p(x)}={e}^{\mu +\frac{{\sigma }^{2}}{2}}\,{\rm{and}}\,{\sigma }_{p(x)}=\sqrt{({e}^{{\sigma }^{2}}-1){e}^{2\mu +{\sigma }^{2}}}$$ $$T={\mu }_{p(x)}+2{\sigma }_{p(x)}$$ Roughness for the complete surface, R VOI,cs , the outer surface, R VOI,os , and the inner surface, R VOI,is , was calculated for each VOI by adding up the frequencies of all angles of the respective surface mesh that were greater than T. To assess the scale of roughness on which bone erosion occurs, we calculated R VOI,cs , R VOI,os and R VOI,is for 6 different roughness radii r: 1, 3, 5, 10, 15 and 20, which correspond to ~15, 44, 73, 140, 219 and 292 μm, respectively. The values of r were chosen so that they scan from the lowest reasonable radius in agreement with the approximate pixel size resolution to one that covers a big area without wrapping around any bones. μCT-based assessment of VOI volume Based on the μCT image stacks and corresponding thresholds that were used for the MC algorithm, we computed the VOI volume V VOI using Fiji software v1.49b and the included 3D Objects Counter plugin24. V VOI is the number of connected foreground pixels of the biggest identified 3D object multiplied by the image voxel size of 3.11∙10−6 mm3. Differences between groups were evaluated with the non-parametric Mann-Whitney U test (two-tailed) and significant differences were accepted for p < 0.05. The results are reported as mean ± standard deviation. For comparison of different roughness radii r the ratio of roughness (μ AR/CO , equation (5)) was calculated by dividing the mean roughness of the arthritic group, μ AR , by the mean roughness of the control group, μ CO . The standard deviation σ AR/CO for μ AR/CO was calculated according to propagation of uncertainty: $${\mu }_{AR/CO}=\frac{{\mu }_{AR}}{{\mu }_{CO}}$$ $${\sigma }_{AR/CO}=|{\mu }_{AR/CO}|\sqrt{{(\frac{{\sigma }_{AR}}{{\mu }_{AR}})}^{2}+{(\frac{{\sigma }_{CO}}{{\mu }_{CO}})}^{2}}$$ Statistical differences between μ AR/CO for different groups were evaluated by bootstrapping and significance was assumed when the 95% confidence intervals of μ AR/CO of two groups did not overlap. In the correlation analysis the Spearman correlation coefficient was calculated and significance was accepted for p < 0.05. Statistical analysis was performed in R25 using the stats package for correlation analysis. An overview of the study design and image analysis pipeline, including the relevant processing steps and software used are given in the Supplementary Fig. S8. The Java implementation of the image analysis pipeline and raw data is available from the authors upon request. The number of data points used for statistical analysis for each experiment can be found in the Supplementary Table S1. Evaluation of clinical score in progressing murine arthritis The immunization of DBA/1 mice with G6PI led to severe symmetric polyarthritis of the small joints of fore and hind paws. First macroscopic signs of the disease could be observed at day 9 after immunization and most animals showed severe inflammation at day 10, indicated by marked swelling and redness. Throughout the acute stage from day 14 to day 18 the clinical score remained high, followed by subsequent decrease until day 35 (Fig. 3). Especially MCP and MTP joint regions showed severe signs of G6PI-induced arthritis, while wrist and ankle joints yielded overall lower clinical scores. 
At day 35 clinical signs of inflammation were not visible anymore in wrist/ankle joints and digits/toes. Course of macroscopically visible arthritis before and after immunization with G6PI. The maximum of redness and swelling was observed at day 10 and the clinical score remained at a high level until day 18. In the remitting phase the clinical score subsequently declined. The visible inflammation mainly manifested in MCP and MTP joint regions. Error bars denote the standard error of the mean clinical score. Quantification and localization of pathophysiological bone metabolism by [18F]-fluoride PET Pathophysiological bone metabolism was quantified by the calculation of the SUV in fore and hind paws, describing the amount of incorporated tracer in these VOIs. The measured amount of overall injected [18F]-fluoride was 10.56 ± 1.85 MBq and PET image acquisition was started after 34.92 ± 0.57 min. Some animals had to be excluded from further analysis at some time points due to partially paravenous injected tracer, resulting in 6 to 14 fore and hind paws that could be examined in each group (control, arthritic) for each time point. Before immunization mean SUVs in fore and hind paws were 2.82 ± 0.58 and 1.68 ± 1.14, respectively. Fore paws of arthritic animals showed significantly increased tracer uptake starting at day 14 after immunization (p = 0.002, SUV 4.34 ± 0.79) with a maximum increase of 93% compared to day −4 at day 24 and declining tracer incorporation at day 35 (Fig. 4a). Hind paws of arthritic animals showed significantly increased tracer uptake already at day 10 after immunization (p = 0.049, SUV 2.78 ± 0.83). The highest average uptake in arthritic hind paws could also be observed at day 24 after immunization with an increase of 207% compared to day −4 (Fig. 4b). While tracer incorporation in arthritic fore and hind paws follows the same trend, significantly increased tracer uptake of healthy control animals could only be observed in hind paws starting at day 20 (p = 0.002, SUV 2.89 ± 0.62) with a maximum increase of 122% at day 35. In general, fore paws in both groups showed higher [18F]-fluoride uptake than hind paws, while the onset of increased tracer uptake appeared earlier in hind paws and reached higher percental increase. Although [18F]-fluoride is a bone seeking tracer that is incorporated also into the bones of healthy animals, arthritic animals showed considerably higher tracer uptake in fore and hind paws after arthritis onset. Quantification and localization of pathophysiological bone metabolism in G6PI-induced arthritis. (a) Compared to healthy non-arthritic animals the SUV revealed significant increase of [18F]-fluoride uptake in fore paws of arthritic mice starting at day 14 after immunization while tracer uptake of healthy animals did not change compared to day −4. (b) Tracer uptake in arthritic hind paws was significantly increased already at day 10 after immunization and healthy animals also showed increased uptake in the hind paws starting at day 20 compared to day −4. (c) While [18F]-fluoride uptake in the hind paws was evenly distributed at day −4, the tracer accumulated predominantly in tarsometatarsal and MTP joint regions at later time points in both, healthy and arthritic animals. The exact localization of [18F]-fluoride uptake in hind paws revealed that the tracer is uniformly distributed before immunization, but accumulates especially in tarsometatarsal and MTP joint regions at later time points in control and arthritic animals (Fig. 4c). 
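To make the SUV values quoted above easier to reproduce, the short Java sketch below applies the SUV definition from the Methods (average activity concentration in the VOI scaled by body weight and injected activity) together with the 40%-of-maximum thresholding. Variable names, units (kBq/ml, kBq, g) and the toy numbers are illustrative assumptions, not values from the study.

```java
// Minimal sketch (not the authors' software): voxel-wise SUV and 40%-of-maximum
// thresholding inside a VOI, following the SUV definition given in the Methods.
public class SuvQuantification {

    /** SUV = activity concentration * body weight / injected activity (g/ml). */
    static double suv(double activityConcKBqPerMl, double injectedKBq, double bodyWeightG) {
        return activityConcKBqPerMl * bodyWeightG / injectedKBq;
    }

    /** Mean SUV of all voxels whose SUV exceeds `fraction` of the VOI maximum. */
    static double meanSuvAboveFractionOfMax(double[] voxelActivityKBqPerMl,
                                            double injectedKBq, double bodyWeightG,
                                            double fraction) {
        double max = 0.0;
        double[] suvs = new double[voxelActivityKBqPerMl.length];
        for (int i = 0; i < suvs.length; i++) {
            suvs[i] = suv(voxelActivityKBqPerMl[i], injectedKBq, bodyWeightG);
            max = Math.max(max, suvs[i]);
        }
        double cutoff = fraction * max;   // e.g. fraction = 0.40
        double sum = 0.0;
        int n = 0;
        for (double s : suvs) {
            if (s > cutoff) { sum += s; n++; }
        }
        return n > 0 ? sum / n : 0.0;
    }

    public static void main(String[] args) {
        double[] voi = {120.0, 340.0, 510.0, 90.0, 475.0};   // toy activity values, kBq/ml
        double meanSuv = meanSuvAboveFractionOfMax(voi, 10_000.0, 18.0, 0.40);
        System.out.printf("mean SUV above 40%% of max: %.2f g/ml%n", meanSuv);
    }
}
```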
Accuracy of µCT VOI preparation The automated preparation of VOIs based on µCT images was performed for 95 datasets, each containing the two hind paws, and yielded an accuracy of 92.1%. VOIs were identified and cut correctly for 175 of the 190 paws. In 5 cases the paw could not be identified and in another 10 cases the VOI was cut falsely. For these datasets VOIs were created manually if the image was free of motion artifacts. Quantification of VOI roughness in progressing murine arthritis Bone destruction caused by G6PI-induced arthritis was evaluated by calculation of the VOI roughness, which captures erosive processes at the surface of bony and other high density structures of the hind paws. VOI roughness of the complete surface (R VOI,cs ) of arthritic animals was significantly increased compared to healthy animals at days 10 to 24 after immunization (Fig. 5a). The maximum increase was reached at day 18 after immunization with an increase of 113% compared to the control animals and 135% compared to day −4. R VOI,cs of control animals was significantly decreased at day 10 after immunization in comparison to the measurements of all animals at day −4 (p = 0.01). In the past, histological studies and manual assessment of radiographs revealed that the endosteal and periosteal surfaces of cortical bone are impaired unequally by experimental arthritis and that bone formation and bone erosion are not uniformly distributed throughout these surfaces26,27,28,29. In order to investigate these differences by non-invasive µCT imaging we calculated the bone surface roughness separately for the inner (endosteal) bone surface and the outer (periosteal) bone surface for different roughness radii r, under the assumption that bone erosion and bone formation cause roughness on different spatial scales. For roughness of the inner surface, R VOI,is , the ratio μ AR/CO was significantly increased for almost all examined roughness radii r at all time points after immunization compared to day −4, but mostly pronounced between r = 10 and r = 20 (Fig. 5b). The maximum increase of 351% was reached at day 18 for r = 15. In contrast, μ AR/CO of the outer surface, R VOI,os , was predominantly increased for smaller roughness radii in the range of r = 1 to r = 5 at days 10 to 24 and larger roughness radii in the range of r = 5 to r = 20 at day 35 after immunization (Fig. 5c). The maximum increase of 107% was also found at day 18 after immunization, but for r = 1. The time course of roughness progression in arthritic and non-arthritic animals is shown in Fig. 6 by color-coded surface meshes of representative hind paws. VOI roughness of arthritic and healthy non-arthritic animals for different time points and roughness radii. (a) The hind paws of arthritic animals showed significantly increased VOI roughness of the complete surface at days 10, 14, 18 and 24 after immunization compared to healthy animals (roughness radius r = 3). For the group of healthy mice roughness was significantly decreased at day 10 compared to the control measurements of all animals before immunization. (b) The VOI roughness of the inner surface was more pronounced for roughness radii r = 10 and r = 15, while the VOI roughness of the outer surface occurred predominantly on a small spatial scale for roughness radii r = 1 to r = 5 (c). The increase of VOI roughness of arthritic animals reached its peak at day 18 after immunization and subsequently declined until day 35 in all cases. 
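The roughness values shown in Fig. 5 follow directly from the quantities defined in equations (2) through (6). As an illustration, and not the published pipeline, the plain-Java sketch below evaluates the threshold angle T from fitted lognormal parameters μ and σ, computes the roughness as the normalized frequency of per-triangle mean angles above T, and forms the arthritic-to-control ratio μ_AR/CO with its propagated uncertainty σ_AR/CO. The lognormal fit itself and all numerical values in main are placeholders.

```java
// Minimal sketch of equations (3)-(6): lognormal-based threshold angle T,
// surface roughness as the normalized frequency of angles above T, and the
// arthritic-to-control ratio of roughness with propagated uncertainty.
public class RoughnessStatistics {

    /** Mean of a lognormal distribution with parameters mu, sigma. */
    static double lognormalMean(double mu, double sigma) {
        return Math.exp(mu + sigma * sigma / 2.0);
    }

    /** Standard deviation of a lognormal distribution with parameters mu, sigma. */
    static double lognormalSd(double mu, double sigma) {
        double s2 = sigma * sigma;
        return Math.sqrt((Math.exp(s2) - 1.0) * Math.exp(2.0 * mu + s2));
    }

    /** Threshold angle T = mean + 2 standard deviations of the fitted PDF. */
    static double thresholdAngle(double mu, double sigma) {
        return lognormalMean(mu, sigma) + 2.0 * lognormalSd(mu, sigma);
    }

    /** Roughness, interpreted here as the fraction of per-triangle mean angles above T. */
    static double roughness(double[] meanAngles, double T) {
        int rough = 0;
        for (double a : meanAngles) if (a > T) rough++;
        return (double) rough / meanAngles.length;
    }

    /** Ratio mu_AR/CO and its propagated standard deviation sigma_AR/CO. */
    static double[] roughnessRatio(double muAr, double sdAr, double muCo, double sdCo) {
        double ratio = muAr / muCo;
        double sd = Math.abs(ratio)
                * Math.sqrt(Math.pow(sdAr / muAr, 2) + Math.pow(sdCo / muCo, 2));
        return new double[]{ratio, sd};
    }

    public static void main(String[] args) {
        double T = thresholdAngle(-2.5, 0.6);            // placeholder fit parameters
        double[] ratio = roughnessRatio(0.12, 0.03, 0.05, 0.01);
        System.out.printf("T = %.4f rad, ratio = %.2f +/- %.2f%n", T, ratio[0], ratio[1]);
    }
}
```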
Visualization of bone surface roughness in arthritic (top row) and non-arthritic (bottom row) hind paws over time. The roughness of the bone surface is visualized by a color gradient ranging from blue (smooth surface) to red (rough surface) and shown here for representative hind paws for the different time points. First signs of erosion get visible at day 10 after immunization near to the MTP joints in arthritic animals. The roughness is further increasing until day 18 where large parts of the bone surface are affected which is indicated by the orange color. At later time points the roughness is declining again and increased surface roughness is restricted to the regions near to the MTP joints at day 35. In contrast, the bone surface of non-arthritic animals is constantly smooth over time. Quantification of VOI volume in progressing murine arthritis The change of VOI volume in the course of G6PI-induced arthritis for arthritic and healthy animals is shown in Fig. 7. Compared to day −4 the arthritic animals showed a statistically significant increase in V VOI at day 18 after immunization with a total increase of 6% (p = 0.003). Such a difference could not be observed for healthy animals. In general, arthritic animals had greater V VOI at each time point compared to healthy animals. V VOI of arthritic animals was calculated for various intensity thresholds in the range of 80% to 120% of the threshold calculated by Otsu's method, which was used to segment the VOI. The variation of the threshold had no impact on the increase of V VOI of arthritic animals at day 18 after immunization (Supplementary Fig. S9). VOI volume for arthritic and healthy non-arthritic animals. The VOI volume of the group of arthritic animals was overall larger compared to the group of healthy animals, but a significant increase of VOI volume over time could only be observed for arthritic animals at day 18 after immunization. Correlation of quantitative measurements of experimental arthritis severity We performed a correlation analysis for the results of clinical scoring, PET measurements and CT roughness values. To test the correlation between clinical score and PET SUV we used the cumulative clinical scores from fore and hind paws of arthritic animals at days 10 to 35 after immunization. Statistically significant correlations were found at days 10 and 14 with ρ = 0.87 (p < 0.001) and ρ = 0.59 (p = 0.002), respectively. For correlation between clinical score and VOI roughness we only considered the cumulative clinical scores of the hind paws of arthritic mice at days 10 to 35 after immunization, because roughness analysis was only carried out for the hind paws. We found statistically significant correlations between the clinical score and R VOI,is for roughness radii r = 1 and r = 10 (ρ = 0.65 with p = 0.032 and ρ = 0.65 with p = 0.032) and R VOI,os for r = 10 (ρ = 0.69 with p = 0.018) at day 14 after immunization. For the correlation analysis between PET and CT we used the SUVs and VOI roughness values of arthritic and healthy animals from all investigated time points. We found positive correlations between SUV and R VOI,cs , R VOI,is as well as R VOI,os for different roughness radii at days 14, 24 and 35 after immunization (Fig. 8). Correlation matrix for PET SUV and VOI roughness for different roughness radii r and time points. Statistically insignificant correlations are marked with a black X. 
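The correlations reported in this study were computed as Spearman rank correlations with R's stats package. As a generic illustration of the computation only, the self-contained Java sketch below ranks both samples (assigning average ranks to ties) and takes the Pearson correlation of the ranks; the toy SUV and roughness values are invented for demonstration.

```java
import java.util.Arrays;

// Generic Spearman rank correlation: rank both samples (ties receive average
// ranks) and compute the Pearson correlation of the ranks.
public class SpearmanCorrelation {

    static double[] ranks(double[] x) {
        Integer[] idx = new Integer[x.length];
        for (int i = 0; i < x.length; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> Double.compare(x[a], x[b]));
        double[] r = new double[x.length];
        int i = 0;
        while (i < x.length) {
            int j = i;
            while (j + 1 < x.length && x[idx[j + 1]] == x[idx[i]]) j++;
            double avgRank = (i + j) / 2.0 + 1.0;        // average rank for tied values
            for (int k = i; k <= j; k++) r[idx[k]] = avgRank;
            i = j + 1;
        }
        return r;
    }

    static double pearson(double[] x, double[] y) {
        double mx = Arrays.stream(x).average().orElse(0);
        double my = Arrays.stream(y).average().orElse(0);
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < x.length; i++) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    /** Spearman's rho of two equally long samples, e.g. SUV vs. VOI roughness. */
    static double spearman(double[] x, double[] y) {
        return pearson(ranks(x), ranks(y));
    }

    public static void main(String[] args) {
        double[] suv = {2.8, 4.3, 5.1, 3.9, 4.7};                // toy values
        double[] roughness = {0.05, 0.11, 0.14, 0.09, 0.12};
        System.out.printf("Spearman rho = %.3f%n", spearman(suv, roughness));
    }
}
```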
The direction of correlation is identified by a color gradient, ranging from red color (negative correlation) to blue color (positive correlation). Statistically significant correlations between SUV and VOI roughness were found at days 14, 24 and 35 after immunization, while at days 24 and 35 only small roughness radii r ranging from 1 to 5 yielded significance. In this study we examined the feasibility of combined in vivo PET/CT imaging to quantify metabolic and anatomic changes in bones caused by experimental arthritis in mice. The immunization of DBA/1 mice with G6PI induced a symmetrical polyarthritis that was marked by swelling and redness of the paws. The onset of the disease could visually be detected at day 9 after immunization and longitudinal imaging was started at day 10 to ensure unimpeded manifestation of arthritic processes. Already in the early acute stage of the disease arthritic animals showed significantly increased uptake of [18F]-fluoride, whereby enhanced tracer uptake could be first observed in the hind paws. After a peak at day 24 after immunization the uptake of [18F]-fluoride declined. These results are consistent with earlier findings10 and demonstrate that static PET imaging with [18F]-fluoride after a short incubation time of 35 min is able to capture pathological bone turnover in experimental arthritis. Furthermore, untreated control animals showed increased uptake of [18F]-fluoride in the hind paws towards the end of the experiment. Chronic stressful conditions can cause alterations of bone turnover, associated with bone loss and osteoporosis30, leading to increased mineral-binding capacity. Therefore the increased tracer uptake in control animals can, for example, be explained by mental and physical stress due to repeated narcosis and injection procedures. Nevertheless, the tracer uptake was constantly higher for arthritic animals compared to controls. Bone erosion is a major characteristic of RA in humans as well as in experimental arthritis models and is often used to grade the severity of the disease31. Many preclinical studies aimed to quantify the size of such erosions, either manually or in an automated fashion5, 16, 32. So far, only few studies focused on the bone surface roughness, which should be a much more sensitive measure as it is a precursor of measurable erosions of the bone17, 33, 34. We quantified the roughness of bony structures of the entire hind paws of mice and called this roughness the VOI roughness, because several factors, such as CT image resolution, bones in close vicinity and the partial volume effect, make it impossible to find the true boundary between bone and surrounding tissue. Our VOIs are a representation of the bony structures which may contain dense material that is in close proximity to the real bones surface. Our image analysis pipeline of µCT images is completely automatic and comprises three main steps: (i) automated preparation of VOIs, (ii) reconstruction of 3D VOI surface meshes and (iii) the calculation of VOI roughness. The automated fashion of our image analysis pipeline ensures that the quantification of VOI roughness is fast, free of user bias and easily applicable also without expert knowledge in image segmentation or mouse anatomy. Nevertheless, in a few cases the first step produced faulty VOIs, which was mostly caused by untypical projections of the paws or motion artifacts. 
Therefore, the user has to evaluate the proper VOI representation and to decide if datasets have to be excluded from further analysis, for example, images showing motion artifacts. We made the observation that especially after repeated narcosis within a short period of time it became difficult to maintain a stable depth of narcosis for individual animals. To ensure animal welfare, we preferred less deep narcosis in these cases, accepting an increased number of images containing motion artifacts towards the end of the experiment. For the first time, we evaluated the progression of bone surface roughness at periosteal and endosteal sites of the cortical bone and resolved the spatial scale on which this roughness occurs. Therefore we calculated the VOI roughness for three different regions (complete cortical bone surface, outer cortical bone surface and inner cortical bone surface) and different roughness radii r ranging from r = 1 to r = 20. The VOI roughness of arthritic animals was significantly increased for all regions already at day 10 after immunization, but periosteal and endosteal sites of the bones showed differences with respect to r. While endosteal roughness of arthritic animals is increased for nearly all r, the periosteal roughness could only be detected between r = 1 and r = 5 in the earlier phase of the disease and only at a scale of r = 5 to r = 20 in the late remitting phase. In general, the roughness occurred on a larger spatial scale on the endosteal surface and is also more pronounced there compared to the periosteal surface. In both cases the roughness reaches its maximum at day 18 and then declines at later time points. These results suggest that bone erosion is not uniformly distributed over the bones surfaces and that the roughness measurements proposed here are able to capture different aspects of the disease, i.e., bone erosion and bone formation. Several studies could show that bone marrow infiltrates lead to increased bone formation at endosteal surfaces close to erosions26, 27. In addition, the osteoclastic activity seems to be increased at periosteal surfaces compared to endosteal surfaces28 and in adjuvant-induced arthritis new bone formation was observed at the periosteal bone surface29. Taken together, it is reasonable to assume that bone erosion can be measured as roughness on a smaller spatial scale, while bone formation is marked by increased roughness on a larger spatial scale and that these two processes may be evaluated simultaneously by in vivo µCT imaging. This would be of great advantage in drug evaluation studies that are focused on reduction of erosive processes and induction of bone repair as the quantitative methodology proposed here is non-invasive and can be applied longitudinally without the need to sacrifice test animals at each measurement time point. However, to prove this hypothesis more investigations are required in the future and the correlation between bone surface roughness, the scales on which it occurs and the processes of bone erosion and formation have to be evaluated first. This includes histological analysis and should be part of future studies on experimental arthritis models. In an earlier study the quantitative assessment of PET tracer uptake was clearly superior of µCT-based measurements with regard to detecting early alterations of the bones in experimental arthritis10. 
PET imaging would therefore be the means of choice when sensitivity is indispensable, even though it does cause additional costs, a logistic overhead and additional radiation dose compared to CT imaging alone. In contrast, our results demonstrate that VOI roughness is much more sensitive to detect early bone erosion than bone volume or surface area measurements, which have been performed previously10 and which have frequently been used to quantify arthritic processes in animal models13, 14, 32, 35, 36. In fact, we found no delay between the onset of increased PET tracer uptake and increased bone surface roughness after arthritis induction, emphasizing the capability of bone surface roughness analysis to quantify pathological alterations of the bones immediately after arthritis onset. However, our correlation analysis showed that PET SUV and VOI roughness are not necessarily correlated throughout the course of experimental arthritis (see Fig. 8). Statistically significant correlations were only found at days 14, 24 and 35 after immunization. At the onset of arthritis manifestation at day 10 as well as at the peak of bone surface roughness at day 18 the two measurements did not correlate with each other. At days 24 and 35, only statistically significant correlations were found for small roughness radii r in the range of 1 to 5. This observation is further supporting our hypothesis that bone erosion can be measured as roughness on a small spatial scale because the erosion of the bone causes an increasing surface area and is therefore directly connected to the mineral-binding capacity of the surface and also to the uptake of [18F]-fluoride. While PET SUV and VOI roughness were mainly correlated at the later stages of the disease, statistically significant correlations between PET SUV and clinical score or VOI roughness and clinical score could only be found in the early acute stage at days 10 and 14. This result indicates that macroscopical signs of experimental arthritis are closely connected to the onset of arthritic processes but do not represent the severity of bone surface roughness and pathological bone turnover. An important therapeutic aim in the treatment of arthritis patients is to stop bone erosion or overshooting. To assess the efficacy of new drugs, sensitive diagnostic methods are needed and combined PET/CT imaging as shown here would assist this process by providing a non-invasive tool that allows the quantification of bone metabolism, bone erosion and formation. While µCT-based assessment of bone surface roughness turned out to be very sensitive towards early alterations of the bones, our correlation analysis indicates that PET and CT imaging capture different aspects of the disease and may augment the knowledge gained in experimental studies when used in combination. Scott, D. L., Wolfe, F. & Huizinga, T. W. J. Rheumatoid arthritis. Lancet 376, 1094–1108 (2010). Catrina, A. I., Joshua, V., Klareskog, L. & Malmström, V. Mechanisms involved in triggering rheumatoid arthritis. Immunol. Rev. 269, 162–174 (2016). McInnes, I. B. & Schett, G. The pathogenesis of rheumatoid arthritis. N. Engl. J. Med. 365, 2205–2219 (2011). van der Linden, M. P. M. et al. Long-term impact of delay in assessment of patients with early arthritis. Arthritis Rheum. 62, 3537–3546 (2010). van den Berg, W. B. Lessons from animal models of arthritis over the past decade. Arthritis Res. Ther. 11, R250 (2009). Kobezda, T., Ghassemi-Nejad, S., Mikecz, K., Glant, T. T. & Szekanecz, Z. 
Of mice and men: how animal models advance our understanding of T-cell function in RA. Nat. Rev. Rheumatol. 10, 160–170 (2014). Sardar, S. & Andersson, Å. Old and new therapeutics for rheumatoid arthritis: in vivo models and drug development. Immunopharmacol. Immunotoxicol. 38, 2–13 (2016). McQueen, F. M. Imaging in early rheumatoid arthritis. Best Pract. Res. Clin. Rheumatol. 27, 499–522 (2013). Irmler, I. M. et al. In vivo molecular imaging of experimental joint inflammation by combined 18F-FDG positron emission tomography and computed tomography. Arthritis Res. Ther. 12, R203 (2010). Irmler, I. M. et al. 18F-Fluoride PET/CT for non-invasive in vivo quantification of pathophysiological bone metabolism in experimental murine arthritis. Arthritis Res. Ther. 16, R155 (2014). Schubert, D., Maier, B., Morawietz, L., Krenn, V. & Kamradt, T. Immunization with glucose-6-phosphate isomerase induces T cell-dependent peripheral polyarthritis in genetically unaltered mice. J. Immunol. 172, 4503–4509 (2004). Backhaus, M. et al. Arthritis of the finger joints: a comprehensive approach comparing conventional radiography, scintigraphy, ultrasound, and contrast-enhanced magnetic resonance imaging. Arthritis Rheum. 42, 1232–1245 (1999). Barck, K. H. et al. Quantification of cortical bone loss and repair for therapeutic evaluation in collagen-induced arthritis, by micro-computed tomography and automated image analysis. Arthritis Rheum. 50, 3377–3386 (2004). Yang, S. et al. Quantification of bone changes in a collagen- induced arthritis mouse model by reconstructed three dimensional micro-CT. Biol. Proced. Online 15, 8 (2013). Salaffi, F. et al. Validity of a computer-assisted manual segmentation software to quantify wrist erosion volume using computed tomography scans in rheumatoid arthritis. BMC Musculoskelet. Disord. 14, 265 (2013). Stach, C. M. et al. Periarticular bone structure in rheumatoid arthritis patients and healthy individuals assessed by high-resolution computed tomography. Arthritis Rheum. 62, 330–339 (2010). Silva, M. D. et al. Application of surface roughness analysis on micro-computed tomographic images of bone erosion: examples using a rodent model of rheumatoid arthritis. Mol. Imaging 5, 475–484 (2006). Visser, E. P. et al. Spatial resolution and sensitivity of the Inveon small-animal PET scanner. J. Nucl. Med. 50, 139–147 (2009). Rasband, W. S. ImageJ, National Institutes of Health, USA, http://imagej.nih.gov/ij/ (1997–2016). Lorensen, W. E. & Cline, H. E. Marching cubes: A high resolution 3D surface construction algorithm. ACM SIGGRAPH Comput. Graph. 21, 163–169 (1987). Otsu, N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man and Cybernetics 9, 62–66 (1999). Cignoni, P., Ganovelli, F., Montani, C. & Scopigno, R. Reconstruction of topologically correct and adaptive trilinear isosurfaces. Comput. Graph. 24, 399–418 (2000). MeshLab, Visual Computing Lab – ISTI – CNR, http://meshlab.sourceforge.net (2014). Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012). R Development Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org (2014). Görtz, B. et al. Arthritis induces lymphocytic bone marrow inflammation and endosteal bone formation. J. bone Miner. Res. 19, 990–998 (2004). Hayer, S. et al. B-cell infiltrates induce endosteal bone formation in inflammatory arthritis. J. Bone Min. Res. 
23, 1650–1660 (2008). Keller, K. K. et al. Bone formation and resorption are both increased in experimental autoimmune arthritis. PLoS One 7, 1–7 (2012). Yu, Y. et al. In vivo evaluation of early disease progression by X-ray phase-contrast imaging in the adjuvant-induced arthritic rat. Skeletal Radiol. 35, 156–164 (2006). Azuma, K., Furuzawa, M., Fujiwara, S., Yamada, K. & Kubo, K. Y. Effects of active mastication on chronic stress-induced bone loss in mice. Int. J. Med. Sci. 12, 952–957 (2015). Devauchelle-Pensec, V., Saraux, A. & Alapetite, S. Diagnostic value of radiographs of the hands and feet in early rheumatoid arthritis. Jt. Bone Spine 434–441 (2002). Töpfer, D., Finzel, S., Museyko, O., Schett, G. & Engelke, K. Segmentation and quantification of bone erosions in high-resolution peripheral quantitative computed tomography datasets of the metacarpophalangeal joints of patients with rheumatoid arthritis. Rheumatology 53, 65–71 (2014). Silva, M., Savinainen, A. & Kapadia, R. Quantitative analysis of micro-CT imaging and histopathological signatures of experimental arthritis in rats. Mol. Imaging 3, 312–318 (2004). Healy, A. M. et al. PKC-θ-deficient mice are protected from Th1-dependent antigen-induced arthritis. J. Immunol. 177, 1886–1893 (2006). Proulx, S. T. et al. Longitudinal assessment of synovial, lymph node, and bone volumes in inflammatory arthritis in mice by in vivo magnetic resonance imaging and microfocal computed tomography. Arthritis Rheum. 56, 4024–4037 (2007). Sevilla, R. S. et al. Development and optimization of a high-throughput micro-computed tomography imaging method incorporating a novel analysis technique to evaluate bone mineral density of arthritic joints in a rodent model of collagen-induced arthritis. Bone 73, 32–41 (2014). The authors thank the Bundesministerium für Bildung und Forschung for funding of this work (grant numbers: 0316040A and 03ZZ0803A). Bianca Hoffmann and Carl-Magnus Svensson contributed equally to this work. Author affiliations: Department of Cell and Molecular Biology, Leibniz-Institute for Natural Product Research and Infection Biology, Hans-Knöll-Institute, Beutenbergstr. 11a, 07745, Jena, Germany (Bianca Hoffmann, Björn Gebser & Hans Peter Saluz); Friedrich Schiller University, Fürstengraben 1, 07743, Jena, Germany (Bianca Hoffmann, Hans Peter Saluz & Marc Thilo Figge); Applied Systems Biology, Leibniz-Institute for Natural Product Research and Infection Biology, Hans-Knöll-Institute, Beutenbergstr. 11a, 07745, Jena, Germany (Carl-Magnus Svensson & Marc Thilo Figge); Transfer Group Anti-infectives, Leibniz-Institute for Natural Product Research and Infection Biology, Hans-Knöll-Institute, Beutenbergstr. 11a, 07745, Jena, Germany (Maria Straßburger); Institute of Immunology, Jena University Hospital, Leutragraben 3, 07743, Jena, Germany (Ingo M. Irmler & Thomas Kamradt). Author contributions: B.H., C.M.S., M.S., I.M.I., T.K. and H.P.S. conceived and designed the study. B.H., M.S. and B.G. performed the data acquisition. B.H., C.M.S. and M.T.F. analyzed the data and performed statistical analysis. B.H. and C.M.S. prepared the figures. All authors revised the manuscript critically and finally approved the version of the article to be published. Correspondence to Hans Peter Saluz or Marc Thilo Figge.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Hoffmann, B., Svensson, C., Straßburger, M. et al. Automated Quantification of Early Bone Alterations and Pathological Bone Turnover in Experimental Arthritis by in vivo PET/CT Imaging. Sci Rep 7, 2217 (2017). https://doi.org/10.1038/s41598-017-02389-6
CommonCrawl
Optimization of conformal whispering gallery modes in limaçon-shaped transformation cavities Jung-Wan Ryu1, Jinhang Cho2, Inbo Kim2 & Muhan Choi2,3 Scientific Reports volume 9, Article number: 8506 (2019) Cite this article Transformation optics Directional light emission from high-Q resonant modes without significant Q-spoiling has been a long standing issue in deformed dielectric cavities. In limaçon-shaped gradient index dielectric cavities recently proposed by exploiting conformal transformation optics, the variation of Q-factors and emission directionality of resonant modes was traced in their system parameter space. For these cavities, their boundary shapes and refractive index profiles are determined in each case by a chosen conformal mapping which is taken as a coordinate transformation. Through the numerical exploration, we found that bidirectionality factors of generic high-Q resonant modes are not directly proportional to their Q-factors. The optimal system parameters for the coexistence of strong bidirectionality and a high Q-factor was obtained for anisotropic whispering gallery modes supported by total internal reflection. Whispering gallery modes (WGMs) are high-Q resonant modes supported in spherical and circular dielectric cavities, where corresponding light rays are trapped inside the cavities because the incident angle of the light rays circulating along the curved boundary are larger than the critical angle for total internal reflection (TIR)1,2. However, the rotational symmetry causes isotropic light emission, i.e., the isotropic evanescent field outside cavities, which is a considerable disadvantage for applications in optical communication and integrated photonic circuits3. Various methods have been proposed to obtain directional light emissions by breaking rotational symmetry while minimizing the spoiling of high Q-factors, for example, deformed microcavities4,5,6,7, annular microcavities8,9, coupled microcavities10,11, and microcavities with defects at their boundaries12. Although many deformed cavity shapes that support high-Q modes with directional emissions have been reported for practical applications, it has been a long-standing issue over the last two decades to find a general method to obtain a highly directional light emission solely from high-Q WGMs without Q-spoiling. Recently, deformed gradient index microcavities designed by transformation optics13,14, which are named transformation cavity, have been proposed to obtain directional light emission while simultaneously maintaining the nature of high-Q WGMs15. The cavity boundary shapes and corresponding refractive index profiles of the transformation cavities were designed utilizing conformal transformation optics16. Transformation cavities have attracted considerable attention not only in resonator optics as they combine optical microcavities with transformation optics, but also in applications requiring high-Q modes with unidirectional light emission. The designing scheme can be applicable in various frequency regimes and the transformation cavity with inhomogeneous refractive index profile can be implemented effectively by drilling subwavelength-scale air holes in a dielectric slab or by arranging dielectric posts with high refractive indices exploiting metamaterial concept17,18,19,20. For example, a triangular transformation cavity was already implemented and its associated high-Q mode was experimentally observed at microwave frequencies15. 
There have been, however, few systematic studies of resonant mode properties in the system parameter space. Numerical investigation of how optical mode properties change across a system parameter space is important from a practical point of view for obtaining an optimal design, because the mode characteristics change substantially as the boundary shapes and corresponding refractive index profiles of the cavities vary. In this work, we studied the optical properties of resonant modes in limaçon-shaped transformation cavities. Variations in Q-factors, near-field intensity patterns, and the directionality of the far fields of resonant modes were numerically investigated as functions of the system parameters. This paper is organized as follows. We first introduce a limaçon-shaped transformation cavity with an inhomogeneous refractive index profile. The variation of high-Q and low-Q resonant modes with the system parameters is then traced by numerical calculation. Based on these results, the optimal system parameters for a resonant mode having both a high Q-factor and strong bidirectional emission are obtained. Finally, we summarize our results.

Limaçon-shaped transformation cavity

If we consider an infinite cylindrical dielectric cavity with translational symmetry along the z-axis, the Maxwell equations reduce to a 2-dimensional scalar wave equation. In this case, one can use an effective 2-dimensional dielectric cavity model, where optical modes are described by resonances or quasibound modes obtained by solving the scalar wave equation
$$[\nabla^{2}+n^{2}(\mathbf{r})\,k^{2}]\,\psi(\mathbf{r})=0,$$
where n(r) is the refractive index function and r = (x, y). The resonant modes should satisfy the outgoing-wave boundary condition
$$\psi(\mathbf{r}) \sim h(\varphi,k)\,\frac{e^{ikr}}{\sqrt{r}}\quad\text{for}\quad r\to\infty,$$
where h(ϕ, k) is the far-field angular distribution of the emission. In a conventional deformed cavity with a homogeneous refractive index profile, n(r) is n0 inside the cavity and 1 outside the cavity. In the case of transverse magnetic (TM) polarization, the wave function ψ(r) corresponds to Ez, the z component of the electric field21. In the case of transverse electric (TE) polarization, the wave function ψ(r) corresponds to Hz, the z component of the magnetic field. For TM polarization, both the wave function ψ(r) and its normal derivative ∂νψ are continuous across the cavity boundary. For TE polarization, the wave function ψ(r) is continuous across the cavity boundary and, instead of its normal derivative, n(r)^{−2}∂νψ is continuous across the boundary. The real part of the complex wave number k is equal to ω/c, where ω is the frequency of the resonant mode and c is the speed of light. The imaginary part of k is equal to −1/(2cτ), where τ is the lifetime of the mode. The quality factor Q of the mode is defined as Q = 2πτ/T = −Re(k)/(2 Im(k)). As an example, we consider a transformation cavity whose boundary is given by a limaçon shape, one of the most widely studied shapes in the fields of quantum billiards22 and optical microcavities6. The corresponding conformal mapping from the unit circle in Fig. 1(a) to the limaçon in Fig. 1(b) is given by
$$\zeta =\beta(\eta +\varepsilon \eta^{2}),$$
where η = u + iv and ζ = x + iy are complex variables denoting positions in the original virtual space (see Fig. 1(a)) and the physical space (see Fig.
1(b)), respectively; ε is a deformation parameter and β is a positive size-scaling parameter, which together set the cavity boundary shape and the refractive index profile of the cavity,
$$n(x,y)=n_{0}\,|d\zeta/d\eta|^{-1}=\frac{n_{0}}{\beta\,\bigl|\sqrt{1+4\varepsilon\zeta/\beta}\,\bigr|},$$
where n0 is the refractive index of the unit disk cavity in the original virtual space. The refractive index outside the cavity is set equal to 1. In this work, we focus on TM polarization modes without loss of generality, since TE polarization modes can be treated similarly. In the following sections, we numerically investigate the variation of resonant modes as a function of ε and β using the boundary element method23,24.

Figure 1. (a) Circular dielectric cavity with homogeneous refractive index, n0 = 1.8, in η-space (original virtual space). Straight gray lines denote grids of Cartesian coordinates. (b) Limaçon-shaped transformation cavity in ζ-space (physical space) with inhomogeneous refractive index, obtained by the conformal mapping given by Eq. (3) with ε = 0.2 and β = 0.75. Note that the curved gray lines are not coordinate grids but the transformed image of the straight grid lines in η-space under the conformal mapping. The transformed curved grid is encoded in the spatially varying refractive index inside the cavity, denoted by scaled colors.

Variations of resonances according to the system parameters: high-Q modes

First, we consider the high-Q WGM in a homogeneous circular dielectric cavity with n0 = 1.8, whose mode number (m, l) is (14, 1), where m and l are the azimuthal and radial mode numbers, respectively, and whose Q-factor is about 2174. The variation of optical mode properties, including the Q-factor, emission directionality, and Husimi functions, is obtained as a function of the system parameters ε and β. To measure the degree of bidirectional emission, defined as the ratio of the intensity emitted into the windows centered at ϕ = ±π/2 with an angular width of π/2 to the total emitted intensity (see the inset in Fig. 2(d)), we define the bidirectionality factor B as
$$\mathrm{B}=\frac{\int_{\pi/4}^{3\pi/4} I(\varphi)\,d\varphi}{\int_{0}^{\pi} I(\varphi)\,d\varphi},$$
where I(ϕ) is the far-field intensity at the angle ϕ (see the inset in Fig. 2(d)); the integration range can be restricted to [0, π] since the distribution has mirror symmetry with respect to the horizontal axis (x-axis)11. If a mode exhibits completely isotropic emission, the B-factor is equal to 0.5. A B-factor greater than 0.5 implies bidirectional emission in the vertical directions. When the B-factor is less than 0.5, the mode exhibits unidirectional or bidirectional emission in the horizontal directions. It should be noted that directionality factors can be defined in other ways25,26, or by the angular variance of the far-field intensity, which yield qualitatively similar results.

Figure 2. (a) Q-factor and (b) B-factor as functions of ε and β. Solid and dashed curves represent βmax and βsat as functions of ε, respectively. Horizontal (red dashed) and vertical (blue dashed) lines represent β = 0.75 and ε = 0.1, respectively. Dots represent six selected resonant modes (A-F). (c) Q-factor (black curve) and B-factor (red curve) as functions of ε with β = 0.75, corresponding to the red horizontal dashed lines in (a,b). Three blue dashed lines denote ε = 0.01, 0.1, and 0.2. (d) Q-factor (black curve) and B-factor (red curve) as functions of β with ε = 0.1, corresponding to the blue vertical dashed lines in (a,b). Three blue dashed lines denote β = 0.6, 0.8, and 1.0.
The inset represents the far-field angle ϕ and the B-factor, where the shaded region indicates the range of bidirectional emission.

Q-factor and B-factor as functions of ε and β

Figure 2 shows the Q-factor and B-factor as functions of the deformation parameter ε and the size-scaling parameter β in limaçon-shaped transformation cavities. As the deformation parameter ε becomes larger under a fixed β, the Q-factor typically degrades; however, the degree of degradation is much smaller compared with the severe Q-spoiling in the corresponding homogeneous cavity. On the other hand, as the size-scaling parameter β decreases with ε fixed, the Q-factor increases as shown in Fig. 2(a,d), since the confinement becomes stronger through enhancement of the TIR mechanism. The Q-factor of a mode is closely related to βmax = 1/(1 + 2ε), the largest value of β that supports the so-called conformal WGMs (cWGMs); βmax can be obtained from the condition |dζ/dη|^{−1} ≥ 1 necessary for TIR in transformation cavities with outside refractive index nout = 115. In cases where β ≤ βmax, cWGMs with high Q-factors can be supported in transformation cavities. When β > βmax, only relatively short-lived resonances can be formed since cWGMs are no longer supported. In Fig. 2(b), for any given β value, one can notice that the B-factor is maximized at around ε = 0.1. On the other hand, both the Q-factor and the B-factor become higher as β becomes smaller, as shown in Fig. 2(a,b,d). Also, as one can see in the Q-factor curve in Fig. 2(d), when β is smaller than βsat, a chosen value around which the Q-factor starts to saturate, the slope of the curve becomes significantly reduced. βsat is related to the ratio of the wavelength of the mode to the characteristic length scales of the system and is different for each individual mode.

Change of a resonance according to ε variation

We investigate the variation of resonant modes as a function of ε for fixed β = 0.75. As can be seen in Fig. 2(c), the Q-factor curve shows distinctive slope changes at two ε values associated with βsat and βmax. The reason for the slope change at ε > 0.167 is that β starts to violate the TIR condition (β > βmax(ε) = 0.75) when ε > 0.167. We will discuss similar behavior in the next section on the β parameter variation. The B-factor has a maximum value around ε = 0.1. The reason for the change of emission directionality with deformation can be easily understood by plotting the Husimi function27,28, one of the widely used phase-space representations of the intracavity wave intensity. The Husimi function at a dielectric interface is defined by the overlap of the boundary wave function of an optical mode with a Gaussian wave packet on the cavity boundary in a reduced phase space, and it is useful when exploring ray-wave correspondence in optical dielectric cavities. For our transformation cavities, the Husimi function can be calculated in the reciprocal virtual space, which is obtained by an inverse conformal mapping from the physical space29. Figure 3 shows refractive index profiles, near-field intensity patterns, far-field intensity patterns, and Husimi functions for the modes with ε = 0.01, 0.1, and 0.2 (A, B, and C marked in Fig. 2, respectively). In the case of ε = 0.01 (very small deformation), the range of the refractive index profile is narrow as shown in Fig. 3(A-1), so the near-field and far-field intensity patterns of mode-A depicted in Fig.
3(A-2,A-3) are very similar to those of the corresponding WGM with isotropic emission in a uniform circular cavity. Also, in the Husimi function depicted in Fig. 3(A-4), the upper and lower bands above the critical line, which represent the intensities of the counterclockwise (CCW) and clockwise (CW) traveling-wave components, respectively, are only slightly changed from the straight, uniform bands of the Husimi function of a WGM in a uniform circular cavity. The distance between the intensity tails of the Husimi bands and the critical line is nearly the same at all positions along the cavity boundary, so the mechanism of the almost isotropic emission is direct tunneling.

Figure 3. In the first column are refractive index profiles in limaçon-shaped transformation cavities when ε is equal to (A-1) 0.01 (mode-A in Fig. 2), (B-1) 0.1 (mode-B), and (C-1) 0.2 (mode-C), respectively. In the second column are near-field intensity patterns when ε is equal to (A-2) 0.01, (B-2) 0.1, and (C-2) 0.2 with β = 0.75, respectively. In the third column are far-field intensity patterns when ε is equal to (A-3) 0.01, (B-3) 0.1, and (C-3) 0.2, respectively. In the fourth column are Husimi functions when ε is equal to (A-4) 0.01, (B-4) 0.1, and (C-4) 0.2, respectively. The yellow curves represent the critical lines for total internal reflection. The inset in the last Husimi function shows the arc length s and incident angle χ of the ray trajectory in the reciprocal virtual space29.

When ε becomes 0.1, the far-field intensity pattern of mode-B exhibits a pronounced bidirectional emission, as shown in Fig. 3(B-3), while the Q-factor remains sufficiently high, as shown in Fig. 2. The band-type intensities of the Husimi function become a little more distorted because of the breaking of rotational symmetry, but are still very similar to those of a WGM in a uniform circular cavity, unlike the case of conventional deformed cavities with a homogeneous refractive index. On the other hand, the critical lines for TIR bend from straight lines into curved ones, as shown in Fig. 3(B-4). The dominant emission leaks out in two opposite tangential directions slightly off the cavity boundary position (s = 0) where the intensity tails of the CW and CCW wave bands of the Husimi function are closest to the critical lines (i.e., where the refractive index ratio between the inside and outside of the cavity is lowest). Since the distance between the intensity bands of the Husimi function and the critical lines is still sufficiently large, the emission mechanism remains direct tunneling. The main reason that the far-field intensity pattern changes drastically from almost isotropic (mode-A) to bidirectional (mode-B), even though the near-field intensity pattern of each cWGM is similar to that of a WGM in a uniform-index circular cavity, is the bending down and up of the critical lines, i.e., the symmetric rise and fall of the in/out index ratio along the cavity boundary, not variations in the band structure of the cWGM's Husimi function. When ε = 0.2, mode-C is not a cWGM since β is larger than βmax, as shown in Fig. 2, and the Q-factor becomes lower since the distance between the intensity bands of the Husimi function and the critical lines is smaller at s = 0 than in the previous case, as can be noticed in Fig. 3(C-4). In spite of the lower Q-factor, the far-field intensity pattern of mode-C still exhibits a bidirectional feature, as shown in Fig. 3(C-3).
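The geometric origin of this emission point can be checked with a few lines of code. The sketch below is only an illustration of the index profile and TIR condition quoted above (mode-B parameters ε = 0.1, β = 0.75 are assumed); it is not the boundary-element computation used for the results reported here.

```python
import numpy as np

# Boundary refractive index and local critical line for the limaçon
# transformation cavity; a toy check of the quoted formulas only.
n0, eps, beta = 1.8, 0.1, 0.75          # mode-B parameters (assumed)

theta = np.linspace(-np.pi, np.pi, 721)  # position along the boundary (arc length s ~ theta)
eta = np.exp(1j * theta)                 # boundary of the unit disk in virtual space
n_b = n0 / (beta * np.abs(1 + 2 * eps * eta))   # n = n0 * |d(zeta)/d(eta)|^(-1)
sin_chi_c = 1.0 / n_b                    # local TIR condition at the dielectric interface

print(f"n along the boundary: {n_b.min():.2f} (theta = 0) to {n_b.max():.2f} (theta = pi)")
print(f"critical line peaks at theta = {theta[np.argmax(sin_chi_c)]:+.2f} rad, i.e. at s = 0")
```

The critical line rises closest to the mode bands at s = 0, where the in/out index ratio is lowest, consistent with the tangential emission described above.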
To summarize, in a limaçon-shaped transformation cavity, as the deformation parameter ε starts to increase at a fixed β value, the high-Q WGM with isotropic emission in a circular dielectric cavity changes into a high-Q cWGM with bidirectional emission. As ε increases further and breaks the TIR condition, the high-Q cWGM transforms into a low-Q mode with somewhat degraded bidirectionality.

Change of a resonance according to β variation

Next, we investigate the variation of resonant modes as a function of β for fixed ε = 0.1. As shown in Fig. 2(d), as β decreases, the Q-factor of the mode increases monotonically through two characteristic β values where the slope of the Q-factor curve changes significantly. Approaching the first point, βmax = 0.833, the Q-factor steeply reaches a very high value because the mode enters the parametric domain satisfying the TIR condition, β ≤ βmax ≈ 0.833. This tendency is also seen in Fig. 2(c), where the Q-factor begins to increase from ε ≈ 0.167 when β = 0.75. Reaching the second point, βsat ≈ 0.63, the Q-factor starts to saturate because most of the plane-wave components of the incident wave faithfully undergo TIR. This means that in order to keep the Q-factor of the mode high, β should be set sufficiently smaller than βmax. Figure 4 shows refractive index profiles, near-field intensity patterns, far-field intensity patterns, and Husimi functions for the modes with β = 0.6, 0.8, and 1.0 (D, E, and F marked in Fig. 2), respectively. Modes D and E with β < βmax are cWGMs, and their far-field intensity patterns exhibit bidirectional emission as shown in Fig. 4(D-3,E-3). Mode-F with β > βmax is not a cWGM, and its bidirectionality of emission is considerably spoiled as shown in Fig. 4(F-3). The B-factor decreases monotonically as β increases in Fig. 2(d), unlike the case of ε variation shown in Fig. 2(c). As shown in Fig. 4(D-4), when β = 0.6, the upper and lower bands of the Husimi function of mode-D are far from the critical lines. When β = 0.8, the lines come closer to the bands as shown in Fig. 4(E-4). When β = 1.0, violating the TIR condition, the critical lines for mode-F overlap with the bands over a fairly large region centered around s = 0, as depicted in Fig. 4(F-4).

Figure 4. In the first column are refractive index profiles in limaçon-shaped transformation cavities when β is equal to (D-1) 0.6 (mode-D in Fig. 2), (E-1) 0.8 (mode-E), and (F-1) 1.0 (mode-F), respectively. In the second column are near-field intensity patterns when β is equal to (D-2) 0.6, (E-2) 0.8, and (F-2) 1.0 with ε = 0.1, respectively. In the third column are far-field intensity patterns when β is equal to (D-3) 0.6, (E-3) 0.8, and (F-3) 1.0, respectively. In the fourth column are Husimi functions when β is equal to (D-4) 0.6, (E-4) 0.8, and (F-4) 1.0, respectively.

Through the above investigation of the change in mode characteristics, we see that, as β crosses the βmax of the TIR condition, the mode can change between a low-Q mode and a cWGM. Also, a higher Q-factor of the cWGM is accompanied by better bidirectional emission as β varies with ε fixed. Additionally, a smaller β requires the transformation cavity to have a wider range of refractive index with a higher maximum index value.
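To make these constraints concrete, a short illustrative calculation follows; the helper function index_range below is introduced only for illustration, and the parameter values correspond to modes D, E, and F. It evaluates βmax = 1/(1 + 2ε) together with the index range implied by the quoted profile n = n0/(β|1 + 2εη|) over the unit disk.

```python
# Illustrative check of the TIR bound and the attainable index range for the
# limaçon transformation cavity; parameter values follow the modes in the text.
def index_range(n0, eps, beta):
    beta_max = 1.0 / (1.0 + 2.0 * eps)       # largest beta that supports cWGMs
    n_min = n0 / (beta * (1.0 + 2.0 * eps))  # |1 + 2*eps*eta| <= 1 + 2*eps on the unit disk
    n_max = n0 / (beta * (1.0 - 2.0 * eps))  # |1 + 2*eps*eta| >= 1 - 2*eps on the unit disk
    return beta_max, n_min, n_max

n0, eps = 1.8, 0.1
for beta in (1.0, 0.8, 0.6):                 # modes F, E, D
    beta_max, n_min, n_max = index_range(n0, eps, beta)
    label = "cWGM possible" if beta <= beta_max else "no cWGM"
    print(f"beta = {beta:.1f}: n in [{n_min:.2f}, {n_max:.2f}], "
          f"beta_max = {beta_max:.3f} -> {label}")
```

Smaller β indeed pushes the maximum required index up, which is the fabrication constraint discussed next.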
The range of the refractive index profile is an important restriction in the actual fabrication of a transformation cavity, so the optimal β should be selected appropriately within the attainable range of refractive index. Figure 5 shows a diagram representing the positions of the resonant modes depicted in Fig. 2 in Q-factor vs. B-factor space. In this diagram, one can see that the parameters for resonant mode-D are nearly optimal for a high Q-factor and strong bidirectional emission in a limaçon-shaped transformation cavity with 0 ≤ ε < 0.25 and 0.5 < β < 1.0. In the case of a cWGM with a sufficiently high Q-factor, the major emission occurs in two opposite tangential directions from the one small spot slightly off the cavity boundary position (s = 0) where the in/out refractive index ratio is smallest. Therefore, in a limaçon-shaped transformation cavity, tunneling emission occurs only at one position (s = 0) of the cavity boundary for all cWGMs, i.e., the emission directionality of a high-Q cWGM is universal. However, in contrast to high-Q cWGMs, the far-field intensity patterns of low-Q modes are not universal, but instead reflect specific properties of individual modes, which will be dealt with briefly in the following section.

Figure 5. Diagram of the Q-factor vs. B-factor space of resonant modes (brown dots) in the region 0 ≤ ε < 0.25 and 0.5 < β < 1.0. The large black dots represent the six resonant modes (A–F).

Variations of resonances depending on the system parameters: low-Q modes

In this section, we consider a low-Q mode in a circular dielectric cavity with n0 = 1.8, whose mode number (m, l) is (8, 3) and whose Q-factor is about 84. Just as in the case of the high-Q cWGM, we study the variation of the mode and its optical properties, such as the Q-factor, emission directionality, and Husimi function, as functions of the system parameters in a limaçon-shaped transformation cavity. For this mode, as the deformation parameter increases, we obtain unidirectional emission, which is a specific property of an individual mode. To measure the degree of unidirectional emission, defined as the ratio of the intensity emitted into a window of angular width π/2 centered around ϕ = 0 to the total emitted intensity (see the inset in Fig. 6(d)), similarly to the B-factor in the previous section, we define a unidirectionality factor U as
$$\mathrm{U}=\frac{\int_{0}^{\pi/4} I(\varphi)\,d\varphi}{\int_{0}^{\pi} I(\varphi)\,d\varphi},$$
where I(ϕ) is the far-field intensity at the angle ϕ; the integration range can again be restricted to [0, π] since the wave function has mirror symmetry with respect to the horizontal axis (x-axis). The U-factor is equal to 0.25 when a mode exhibits completely isotropic emission. Thus, when the U-factor is larger than 0.25, the mode exhibits unidirectional emission in the horizontal direction.

Figure 6. (a) Q-factor and (b) U-factor as functions of ε and β. Solid curves represent βmax as a function of ε. Horizontal (red dashed) and vertical (blue dashed) lines represent β = 0.8 and ε = 0.2, respectively. Dots represent six selected resonant modes (A-F). (c) Q-factor (black curve) and U-factor (red curve) as functions of ε with β = 0.8, corresponding to the red horizontal dashed lines in (a,b). Three blue dashed lines denote ε = 0.01, 0.1, and 0.24. (d) Q-factor (black curve) and U-factor (red curve) as functions of β with ε = 0.2, corresponding to the blue vertical dashed lines in (a,b). Three blue dashed lines denote β = 0.6, 0.8, and 1.0. The shaded region of the inset indicates the range of unidirectional emission.
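Both directionality measures are simple ratios of integrated far-field intensity, so they are straightforward to evaluate from sampled data. The short sketch below only illustrates Eqs. (4) and (5); the function names and the test intensity profile are our illustrative choices, not data from this study.

```python
import numpy as np

def b_factor(phi, intensity):
    """Eq. (4): fraction of intensity emitted into pi/4 <= phi <= 3*pi/4."""
    window = (phi >= np.pi / 4) & (phi <= 3 * np.pi / 4)
    return np.trapz(intensity[window], phi[window]) / np.trapz(intensity, phi)

def u_factor(phi, intensity):
    """Eq. (5): fraction of intensity emitted into 0 <= phi <= pi/4."""
    window = phi <= np.pi / 4
    return np.trapz(intensity[window], phi[window]) / np.trapz(intensity, phi)

phi = np.linspace(0.0, np.pi, 1801)            # half range, using mirror symmetry
isotropic = np.ones_like(phi)                  # perfectly isotropic emission
peaked = 1.0 + 2.0 * np.exp(-(phi - np.pi / 2) ** 2 / 0.1)  # toy profile peaked near pi/2

print(f"isotropic: B = {b_factor(phi, isotropic):.2f}, U = {u_factor(phi, isotropic):.2f}")
print(f"peaked:    B = {b_factor(phi, peaked):.2f}")
```

For an isotropic profile this reproduces the reference values B = 0.5 and U = 0.25 quoted above, while a profile peaked near ϕ = π/2 yields B > 0.5.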
Q-factor and U-factor as functions of ε and β

Figure 6 shows the Q-factor and U-factor as functions of ε and β. As β decreases, the Q-factor, of course, increases due to the overall rise of the refractive index, as mentioned above. In contrast to the high-Q cWGM, the Q-factor of this mode is not strongly related to βmax because low-Q modes do not satisfy the TIR condition. The U-factor variation as a function of ε and β differs from the Q-factor variation. As β increases, the U-factor of the mode increases in the region β < 0.8 but decreases in the region β > 0.8. This means that there is a critical value of β for the unidirectionality of emission, as shown in Fig. 6(d). The highest U-factor is obtained near ε = 0.2 and β = 0.8, as shown in Fig. 6. We now investigate the variation of resonant modes as a function of ε when β = 0.8. As ε increases, the Q-factor decreases, and the U-factor increases for ε ≲ 0.2 but decreases gradually for ε ≳ 0.2. Figure 7 shows the near-field intensity patterns and far-field intensity patterns, as well as the Husimi functions, when ε = 0.01, 0.1, and 0.24. The refractive index profiles are similar to those in Figs 3 and 4. When ε = 0.01, the near-field intensity patterns are nearly the same as those of a low-Q mode with mode number (m, l) = (8, 3) in a homogeneous circular cavity, and the far-field intensity patterns are nearly isotropic. When ε = 0.1 and 0.24, the U-factor is large because of unidirectional emission, in contrast to the universal bidirectional emission of a high-Q cWGM. As a result, the far-field intensity pattern shows clear unidirectional emission, as shown in Fig. 7(B-2,C-2). It should be noted that the U-factor as a function of ε and β in Fig. 6(b) is a specific property of the optical mode with mode number (8, 3); if we obtained the U-factor as a function of (ε, β) for optical modes with different mode numbers, the characteristics of the U-factor variation would differ from those in Fig. 6(b). The Husimi functions in Fig. 7(A-3,B-3,C-3) show that the Q-factor of the modes becomes lower as the CW/CCW wave intensities of the Husimi functions move below the critical lines for TIR. It should also be noted that there are regions with fluctuating U-factors in Fig. 6(b,c) for 0.14 ≲ ε ≲ 0.18, where the Q-factor changes smoothly but the far-field intensity pattern changes rapidly.

Figure 7. In the first column are near-field intensity patterns in a limaçon-shaped transformation cavity when ε is equal to (A-1) 0.01 (mode-A in Fig. 6), (B-1) 0.1 (mode-B), and (C-1) 0.24 (mode-C), with β = 0.8, respectively. In the second column are far-field intensity patterns when ε is equal to (A-2) 0.01, (B-2) 0.1, and (C-2) 0.24, respectively. In the third column are Husimi functions when ε is equal to (A-3) 0.01, (B-3) 0.1, and (C-3) 0.24, respectively. Yellow curves represent the critical lines for total internal reflection.

We investigate the variation of resonant modes as a function of β when ε = 0.2. As β increases, the Q-factor decreases, but the U-factor increases when β is less than 0.8 and decreases when β is greater than 0.8. Figure 8 shows the near-field intensity patterns and far-field intensity patterns, as well as the Husimi functions, when β = 0.6, 0.8, and 1.0, for ε = 0.2. They are similar to those shown in Fig. 7.

Figure 8. In the first column are near-field intensity patterns in a limaçon-shaped transformation cavity when β is equal to (D-1) 0.6 (mode-D in Fig.
6), (E-1) 0.8 (mode-E), and (F-1) 1.0 (mode-F) when ε = 0.2, respectively. In the second column are far-field intensity patterns when β is equal to (D-2) 0.6, (E-2) 0.8, and (F-2) 1.0, respectively. In the third column are Husimi functions when β is equal to (D-3) 0.6, (E-3) 0.8, and (F-3) 1.0, respectively.

We studied the optical properties of resonant modes in limaçon-shaped transformation cavities with inhomogeneous refractive index profiles by changing the system parameters. From numerical calculations of the Q-factors, the directionality factors of the far fields, and the Husimi functions of resonant modes as functions of the system parameters, we found that generic high-Q resonant modes exhibit bidirectional emission, but that the bidirectionality factors of the modes are not directly proportional to their Q-factors. In contrast to the universal bidirectional emission of a high-Q cWGM, the directionality of a low-Q mode is a specific property of the individual mode. In the implementation of a transformation cavity, the maximum and minimum values of the refractive index inside the cavity are limited by the lack of naturally occurring materials with very high refractive index. We demonstrated that limaçon-shaped transformation cavities supporting an optimal cWGM with a high Q-factor and strong bidirectional emission can be designed within an attainable range of refractive index. We expect that our approach and results for transformation cavities will be useful for the design of advanced optical devices.

References

McCall, S. L., Levi, A. F. J., Slusher, R. E., Pearton, S. J. & Logan, R. A. Whispering-gallery mode microdisk lasers. Appl. Phys. Lett. 60, 20 (1992). Yamamoto, Y. & Slusher, R. E. Optical Processes in Microcavities. Physics Today 46, 66 (1993). Optical Processes in Microcavities, edited by Chang, R. K. & Campillo, A. J. (World Scientific, Singapore, 1996). Nöckel, J. U. & Stone, A. D. Ray and wave chaos in asymmetric resonant optical cavities. Nature 385, 45–47 (1997). Gmachl, C. et al. High-Power Directional Emission from Microlasers with Chaotic Resonators. Science 280, 1556–1564 (1998). Wiersig, J. & Hentschel, M. Combining Directional Light Output and Ultralow Loss in Deformed Microdisks. Phys. Rev. Lett. 100, 033901 (2008). Cao, H. & Wiersig, J. Dielectric microcavities: Model systems for wave chaos and non-Hermitian physics. Rev. Mod. Phys. 87, 61–111 (2015). Wiersig, J. & Hentschel, M. Unidirectional light emission from high-Q modes in optical microcavities. Phys. Rev. A 73, 031802(R) (2006). Preu, S., Schmid, S. I., Sedlmeir, F., Evers, J. & Schwefel, H. G. L. Directional emission of dielectric disks with a finite scatterer in the THz regime. Opt. Exp. 21, 16370–16380 (2013). Ryu, J.-W., Lee, S.-Y. & Kim, S. W. Coupled nonidentical microdisks: Avoided crossing of energy levels and unidirectional far-field emission. Phys. Rev. A 79, 053858 (2009). Ryu, J.-W. & Hentschel, M. Designing coupled microcavity lasers for high-Q modes with unidirectional light emission. Opt. Lett. 36, 1116–1118 (2011). Wang, Q. J. et al. Whispering-gallery mode resonators for highly unidirectional laser action. Proc. Natl. Acad. Sci. USA 107, 22407–22412 (2010). Leonhardt, U. Optical conformal mapping. Science 312, 1777–1780 (2006). Pendry, J. B., Schurig, D. & Smith, D. R. Controlling electromagnetic fields. Science 312, 1780–1782 (2006). Kim, Y. et al. Designing whispering gallery modes via transformation optics. Nat. Photonics 10, 647–652 (2016). Xu, L. & Chen, H. Conformal transformation optics. Nat. Photonics 9, 15–23 (2015).
Valentine, J., Li, J., Zentgraf, T., Bartal, G. & Zhang, X. An optical cloak made of dielectrics. Nat. Mater. 8, 568–571 (2009). Gabrielli, L. H., Cardenas, J., Poitras, C. B. & Lipson, M. Silicon nanostructure cloak operating at optical frequencies. Nat. Photonics 3, 461–463 (2009). Vasić, B., Isić, G., Gajić, R. & Hingerl, K. Controlling electromagnetic fields with graded photonic crystals in metamaterial regime. Opt. Exp. 18, 20321–20333 (2010). Gao, H., Zhang, B., Johnson, S. G. & Barbastathis, G. Design of thin-film photonic metamaterial Lüneburg lens using analytical approach. Opt. Exp. 20, 1617–1628 (2012). Jackson, J. D. Classical Electrodynamics, 2nd Edition (John Wiley and Sons, New York, 1975). Robnik, M. Classical dynamics of a family of billiards with analytic boundaries. J. Phys. A 16, 3971 (1983). Wiersig, J. Boundary element method for resonances in dielectric microcavities. J. Opt. A: Pure Appl. Opt. 5, 53 (2003). Ryu, J.-W. et al. Boundary integral equation method for resonances in gradient index cavities designed by conformal transformation optics. (in preparation). Song, Q. H. et al. Directional Laser Emission from a Wavelength-Scale Chaotic Microcavity. Phys. Rev. Lett. 105, 103902 (2010). Shu, F.-J., Zou, C.-L. & Sun, F.-W. An Optimization Method of Asymmetric Resonant Cavities for Unidirectional Emission. Journal of Lightwave Technology 31, 2994–2998 (2013). Hentschel, M., Schomerus, H. & Schubert, R. Husimi functions at dielectric interfaces: Inside-outside duality for optical systems and beyond. Europhys. Lett. 62, 636 (2003). Lee, S.-Y., Ryu, J.-W., Kwon, T.-Y., Rim, S. & Kim, C.-M. Scarred resonances and steady probability distribution in a chaotic microcavity. Phys. Rev. A 72, 061801(R) (2005). Kim, I. et al. Husimi functions at gradient index cavities designed by conformal transformation optics. Opt. Exp. 26, 6851–6859 (2018).

Acknowledgements: This work was supported by the Institute for Basic Science of Korea (IBS-R024-D1) and a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2017R1A2B4012045 and No. 2017R1A4A1015565).

Author affiliations: Center for Theoretical Physics of Complex Systems, Institute for Basic Science, Daejeon, 34126, Republic of Korea (Jung-Wan Ryu); Digital Technology Research Center, Kyungpook National University, Daegu, 41566, Republic of Korea (Jinhang Cho, Inbo Kim & Muhan Choi); School of Electronics Engineering, Kyungpook National University, Daegu, 41566, Republic of Korea (Muhan Choi).

Author contributions: J.-W.R., J.C., I.K. and M.C. conceived the original idea. J.-W.R. and J.C. performed numerical simulations. J.-W.R., J.C., I.K. and M.C. analyzed the data and discussed the results. All authors wrote the manuscript and provided feedback. Correspondence to Muhan Choi.
Ryu, J., Cho, J., Kim, I. et al. Optimization of conformal whispering gallery modes in limaçon-shaped transformation cavities. Sci Rep 9, 8506 (2019). Received: 08 January 2019. Accepted: 21 May 2019. DOI: https://doi.org/10.1038/s41598-019-44768-1
Forgetting as "Interference" during Consolidation and Retrieval

"The singular exception is provided by the human ability to remember past happenings. When one thinks today about what one did yesterday, time's arrow is bent into a loop. The rememberer has mentally traveled back into her past and thus violated the law of the irreversibility of the flow of time. She has not accomplished the feat in physical reality, of course, but rather in the reality of the mind, which, as everyone knows, is at least as important for human beings as is the physical reality." -Endel Tulving

Forgetting is probably the scariest thing I can think of short of death itself (indeed, perhaps even scarier). The two phenomena seem to have a lot in common if you think about it! Both are inexorable, incompletely understood, terrifying natural destructions of the self. Both, also, seem to be necessary side-effects of the very processes that enable life and our ability to remember it. Life is the constant (and ultimately losing?) struggle of processes creating order against the prevailing thermodynamic tendency toward disorder (entropy), made possible by a continual flux of energy from the environment. Memory, too, is a way of imposing temporary order on fleeting perceptions, allowing our sensory experience to temporarily persist in the form of impressions which, if attended to and encoded, become memories… at least, until we forget them. Just imagine "you" without your memories, without the impressions of your past experiences… there's really no such thing! Without memories of the past serving to verify our continued existence, we lose the concept of time altogether and are consigned to exist forever in the present moment. Less existentially dramatic are everyday cases of forgetting, like losing access to knowledge you worked hard to acquire or skills you've trained painstakingly to master. Needless to say, I have a morbid fascination with forgetting (and remembering, which we will see is part and parcel). Like those latter-day immortality-seekers, in my own way I too care deeply about such preservation… not of life per se (though I do think I support these efforts), but of my memories of my own life as-lived; unadulterated access to my past experiences as-encoded. I'm not asking for superhuman mnemonic powers or an eidetic memory; no, I humbly desire continued access to all the ideas _I've had_ and the things _I've tried consciously to remember_. It's scary when you realize you don't know what you no longer know… because you've necessarily forgotten about all the things you've forgotten, all that you once knew. At the same time, even in the face of its many foibles, human memory is an astounding innovation! An apparatus shaped by natural forces to serve only a few basic needs (food/water, shelter/safety, mates…) is currently being used to perform all sorts of amazing feats in a world it simply was not evolved for. But more about memory later; this post is about forgetting. Forgetting occurs in at least two ways. One is active around the time of initial memory formation (encoding), and the other is active as you recall information from memory (retrieval). I'll give you a foretaste of both phenomena, to be savored at length in due time.
The first type of forgetting has to do with the fact that recently formed memories take time to fully consolidate, and during this time they are vulnerable to interference by other mental activity, which can weaken and even eliminate incipient memories! We will talk about how sleep and alcohol actually improve memory for previously learned material by reducing the amount of mental activity in the intervening period, which would otherwise interfere with the consolidation process (e.g. Gais, Lucas, & Born, 2006; Bruce & Pihl, 1997; Lamberty et al., 1990). We will also talk about how neuroscience bears this out; recently induced long-term potentiation (LTP) - the neural mechanism thought to underlie memory formation - is inhibited by subsequent LTP, even if the tasks are unrelated (Xu et al., 1998; Abraham, 2003). This damaging effect that new learning has on prior learning becomes less and less pronounced as the delay between new and prior learning is increased. This is super fascinating, so stick around; we'll save the best bit for last. The second way forgetting can occur is through memory retrieval; that is, remembering one thing may necessarily make it more difficult to remember other things. When you go to retrieve a memory, something has cued you to search for that memory. If many different memories are associated with the same cue, then they "compete" for access to conscious awareness during the memory search; the more of these cue-to-memory associations there are, the more difficult it will be to recall the desired information. Furthermore, increasing the strength of one cue-to-memory association necessarily weakens the associations of all other memories to that cue: this is why it is hard to remember all of the things in a given category, like the wives of Henry VIII. As you recall each additional wife, the relative strengths of the others in memory are thereby diminished, and thus the remaining wives become harder to recall. "Ah, I can never remember that last one!" Selectively retrieving an item from memory, though bolstering future recall for that item, harms our later recall of similar items. As we will see, this negative effect of retrieval may even extend beyond related items; much evidence exists to suggest that retrieval of information is impaired by the previous retrieval of other, unrelated items. This can be viewed as a basic consequence of increasing non-specific mental activity during the retention interval, when presumably the original memories for the to-be-remembered material are still in a state of consolidation. Incredibly, recent research suggests retrieving a well-remembered item from memory may put that item in a state of "reconsolidation," thus re-opening even well-remembered, durable memories to the interfering effects of mental activity (Dudai, 2004). That is, recently activated memories may be just as vulnerable to vitiation by subsequent mental activity as recently formed memories are, though this is still the subject of much debate. Since these processes of encoding and retrieval are going on literally all of the time in each and every sober, waking one of us, so too is forgetting. I have now given you a relatively complete overview; next, we look into both retrieval-based and encoding-based forgetting phenomena in greater detail. In this post, I rely heavily on two review papers (Anderson & Neely, 1996; Wixted, 2004) both for ideas and sources; I highly recommend reading them both in full, especially if this post leaves you with lingering questions.
"Interference" during Memory Retrieval An old, solid theory of forgetting was based on various, often related memories interfering with each other. The topic of interference generated loads of research from 1900-1970 and led to many important discoveries in the study of human memory. Though it is no longer alone completely satisfactory as a unified theory of forgetting, it is extremely robust in many respects and it continues to shed light on our understanding of cognition. In this context, "interference" is the impaired ability to remember information that is similar to other information in memory. It is a theory of forgetting, which itself has several proposed mechanisms. Interference occurs when newly acquired information impairs our ability to retrieve previously stored memories (retroactive interference), or conversely, when what you already know makes it harder to learn something new (proactive interference). It happens when new learning crowds out old learning, or vice versa. Imagine you get a new telephone number; at first, when people ask you for your number, you may give them your old one by accident (proactive interference). However, after a year's experience using your new phone number, you may find your old number difficult to recall (retroactive interference). But this is not the usual way! Typically, the more you know about something, the more readily you will be able to associate incoming information with stuff you've already stored in memory; indeed, high levels of domain-specific knowledge can enable otherwise low-aptitude people to perform at the same level as their high-aptitude counterparts (Walker, 1987). By virtue of these additional associations and interconnections, the new memory will be more durable and accessible via multiple pathways. And I've talked before about the testing effect: the fact that retrieving a given memory makes that memory easier to retrieve in the future. So what gives? The trouble starts when the same retrieval cues are related to multiple items in memory. Retrieval cues can be attributes of the target memory or just incidental contextual factors that were present at the time of encoding. For example, when you park your car at the store, aspects of your parking experience are encoded into a mental representation of the event (store, time of day, the fact that you drove a car, the type of car you drove, your internal state…). To the extent that your other parking experiences are similar, they will also contain these characteristics. If these serve as the primary cues which you use to recall your car's location, other memories sharing these features will also be evoked. Interference increases with the number of competing memories associated with a given cue or set of cues; thus, going from retrieval cue to target memory depends not only on the strength of cue-target association, but also on whether the cue is related to other items in memory. Early research into these phenomena was done using the something called the A-B, A-D paradigm. Participants would be instructed to learn random pairs of words (A-B), such as dog-boat, desk-rock, etc. Once they had learned these associations, they would be given another list (A-D) where the first word in each pair was the same, but the word it was paired with was different (dog-sky, desk-egg). 
Various methods of testing show that memory performance on the first list (A-B) suffers when a second list of responses (A-D) must be learned, presumably because people show mutually incompatible responses to the common cue "A" (McGeoch, 1942). Observations such as this led to the ratio rule: the probability that you recall B given cue A is equal to the strength of the A-B association relative to the strengths of all other associations involving cue A. That is, $$ P(B|A) = \frac{strength_{AB}}{strength_{AB}+strength_{AD}+...} $$ Further findings emerged from a paradigm known as Part-Set Cuing. Slamecka (1968) had people study six words from each of five different categories (e.g., types of trees, birds, etc) and tested them later by giving them each category and having them recall the 6 items that went with each. Crucially, the experimental group was given a couple of items from each category as cues (they were cued with part of each set) to help them recall the remaining items, while the control group was given no cues. It turned out that, when you count the number of non-cued target items recalled in both groups, the people who received the cues performed significantly worse than those who got no cues. Roediger (1973) showed that this deficit in recall for remaining items increases as a function of the number of part-set cues given. The crucial factor is that the cues and the targets have a common retrieval cue: the category. However, while part-set cuing hinders recall of remaining items in the set, it has no effect on people's ability to recognize those remaining items in a list of distractors. Dewey Rundus (1973) claimed that part-set cuing induces "retrieval competition" between cued items and non-cued target items. In terms of the ratio rule described above, presentation of non-target items strengthens the association of these items to their category cue, which reduces the relative strength of target items. Like, if the category was fruits and you had been given 6 fruits to remember, then providing you with the cue 'orange' at the time of recall strengthens the fruit-orange association while weakening the association of the 5 others, thus making them more difficult to recall. So increasing the strength of one cue-item association necessarily weakens the association of all other items with that cue. As if that wasn't bad enough, it turns out that a shared cue may not even be required for such "retrieval-induced forgetting". Many studies have demonstrated Output Interference, the finding that recall of a target item is impaired by previous retrieval of other items regardless of whether or not the target item shares cues with those retrieved items! This observation is extremely important and has helped to shape recent theory about forgetting. In one early study, AD Smith (1971) had people study 7 items from 7 unrelated categories; the final test was simply cuing people with the category names in different orders. He found that the average number of items recalled dropped significantly with each additional category, from 70% for the first to 45% for the seventh. The same thing was found when examining paired associates (Roediger & Schmidt, 1980): participants studied 20 A-B pairs and were then given A-____ as the test cue and asked to recall the target (B). Performance was found to decrease as a function of previous retrievals; across 5 sequential blocks (of 4 questions each), there was a systematic decline in the average proportion correctly recalled (.85, .83, .80, .76, and .73).
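Here's a tiny toy calculation of the ratio rule and the part-set cuing account (the strength values are made up, nothing here is fit to data): strengthening the cued items' associations to the shared category cue mechanically lowers the recall probability of everything else attached to that cue.

```python
# Toy illustration of the ratio rule: P(recall target | cue) is the cue-target
# strength divided by the summed strengths of everything attached to that cue.
def recall_probability(strengths, target):
    return strengths[target] / sum(strengths.values())

# Six fruits studied, all weakly and equally linked to the cue FRUIT.
fruit = {item: 1.0 for item in
         ["orange", "apple", "kiwi", "plum", "mango", "pear"]}
print(f"P(recall 'plum' | FRUIT), no cuing:    {recall_probability(fruit, 'plum'):.2f}")

# Part-set cuing: presenting 'orange' and 'apple' as cues strengthens their
# associations to FRUIT, which lowers the relative strength of the rest.
fruit["orange"] += 1.0
fruit["apple"] += 1.0
print(f"P(recall 'plum' | FRUIT), after cuing: {recall_probability(fruit, 'plum'):.2f}")
```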
The decline in successful future retrieval caused by prior retrieval, then, does not depend on the category. The findings hold for recognition tests, too. Smith repeated the previous experiment but gave a test that listed 7 items alongside 7 unrelated distractors for each category, requiring subjects to pick the 7 that appeared on the initial study list; very similar decreases were observed. The finding appears to be quite robust (Ratcliff, Clark, & Shiffrin, 1990). Other studies have shown that both general output interference and cue/category-specific retrieval competition operate separately and simultaneously (Neely, Schmidt, & Roediger, 1983). Even controlling for the passage of time (and thus any residual effects of working memory), output interference persists. As we will see, this distinction between cue-dependent forgetting and forgetting that results from nonspecific mental activity is important because it implies (at least) two different processes.

The Paradox of Interference

Any cognitive act that involves representations stored in memory requires the process of retrieval. If retrieval itself is a source of interference, then accessing what we already know contributes to forgetting, independent of any new learning (Roediger, 1974). If semantic memory (memory for facts) were as susceptible to interference as episodic memory, then this would lead to the paradox of interference (Smith, Adams, & Schorr, 1978). That is, as an expert learned more and more facts about a specific topic/subject/category, he or she should develop more and more difficulty in remembering any one of them (i.e., by the ratio rule). But thankfully, this does not seem to happen. Why? John Anderson (1974) performed an interesting study where people were asked to study various "facts" of the basic form "a person is in the place", for instance, "a hippie is in the park" or "a lawyer is in the church". Then, subjects were given test items where they had to verify whether or not certain statements were true (eg, "a hippie is in the school"). The crucial finding was that the greater the number of facts learned about a person or location, the longer it took for subjects to verify a statement about that person or location. Assuming, as we have been, that facts are stored in memory as a network of associations, we can imagine that a cue like "a hippie is in the park" causes memory nodes "hippie" and "park" to become activated, and that this activation spreads to other nodes that they are linked to in memory. If activation from one node intersects with the activation from another node (i.e., if they are associatively linked), a person would say "true". Under these assumptions, the speed at which activation spreads down an associative link coming from a node is thought to be slower the greater the number of other associative links there are fanning out from that same node. Thus, the more different facts a person learns about someone (hippie, lawyer, etc) the longer the verification times become. This "fan effect" is not just an artifact of using these questionable pseudo-facts. In a follow-up study addressing these concerns, students studied true facts about famous people (eg, "Teddy Kennedy is a liberal senator") alongside up to 4 fantasy facts about them. Even when people knew they were being tested only on the true facts, a fan effect occurred as the number of fantasy facts learned about the famous person increased (Lewis & Anderson, 1976).
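To make the logic concrete, here's a toy spreading-activation sketch of my own (it is only loosely in the spirit of Anderson's account, and the constants and the function verification_time are arbitrary illustrations, not a published fit): a fixed pool of activation gets split among a node's outgoing links, so the activation reaching any one studied fact, and hence the speed of saying "true", drops as the fan grows.

```python
# Toy fan-effect model: activation is divided among a node's outgoing links,
# so more facts about a person means less activation per fact and a slower
# verification response.  All numbers are made up for illustration.
def verification_time(fan_person, fan_location, base=400.0, scale=300.0):
    activation = 1.0 / fan_person + 1.0 / fan_location  # activation reaching the probed fact
    return base + scale / activation                     # less activation -> slower "true"

for fan in (1, 2, 3, 4):
    t = verification_time(fan_person=fan, fan_location=1)
    print(f"{fan} fact(s) about the person -> simulated RT {t:.0f} ms")
```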
Two studies helped to resolve this paradox (McCloskey & Bigler, 1980; Reder & Anderson, 1980), and they both depend on category hierarchies: if memory search can be restricted to subcategories, they found, then only the facts within that subcategory affect search time. For example, an expert on Richard Nixon has many different categories of information stored about him (his foreign policy, his family life, Watergate…). When asked a question about Nixon's wife, the expert can limit the search to the family subcategory; crucially, only facts within this subcategory will produce interference effects. When retrieving information from memory, people first select the relevant subcategory, and the time it takes to do this is affected by the number of irrelevant subcategories BUT NOT by the number of facts within those irrelevant subcategories. Once the relevant category is selected, only the facts within that category show a fan effect. Thus, interference effects occur in both semantic and episodic memory. But with semantic memory, memories may be more easily compartmentalized into subcategories that allow for a focused search that restricts the source of interference to items within that category. The new perspective on interference (first hinted at by Roediger, 1974) holds that our tendency to forget is intimately bound to the very mechanisms that allow memory retrieval to occur. It has come to be known as Retrieval-induced Forgetting (RIF), and it has as its counterpart the Testing Effect (aka Retrieval Practice), which is the happier finding that retrieving information from memory enhances future retrieval of the same information, above and beyond simply restudying the information. A definitive experimental demonstration of both effects can be seen in Goodmon & Anderson (2011), summarized in the graph above. Essentially, participants study pairs of words that consist of a category name and then an item from the category, such as METAL-iron, TREE-birch, FRUIT-orange, METAL-silver, TREE-elm… There are usually 5-10 categories and 5-10 items per category. After studying the initial list, participants are then given a fill-in-the-blank test over some of the items they saw. They might get METAL-i_____ and TREE-e_____, for example, with the initial letter provided as a cue. After this, there will be items that were practiced (**Rp+**, such as _METAL-iron, TREE-elm_), items that were not practiced but related to practiced items by category (**Rp-**, such as _METAL-silver, TREE-birch_), and unpracticed, unrelated items (**NRp**, such as _FRUIT-orange_). After this test phase which provides retrieval practice for certain items, participants are given a final test in which they are asked to remember all studied items! The goal is to see how retrieval practice affects participants' recall for unpracticed-related words (Rp-) compared to unpracticed-unrelated words (NRp). The results of these studies follow the same general trend, with one example being given in the graph above. Notice three things: (1) the items people had practiced retrieving (Rp+, like METAL-iron) are remembered best, (2) unpracticed items in the same category (Rp-, like METAL-silver) are remembered worst, and (3) unpracticed items from unpracticed categories (NRp, like FRUIT-orange) are in the middle.
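(As a quick aside before summing up, here is a minimal sketch, my own, of how items in such an experiment get sorted into the three conditions just listed; the category and item lists are invented in the style of the examples above, not actual study materials.)

```python
# Classify studied items into Rp+, Rp-, and NRp given which items were practiced.
studied = {
    "METAL": ["iron", "silver", "copper"],
    "TREE":  ["birch", "elm", "oak"],
    "FRUIT": ["orange", "plum", "kiwi"],
}
practiced = {("METAL", "iron"), ("TREE", "elm")}   # items given retrieval practice

def condition(category, item):
    if (category, item) in practiced:
        return "Rp+"                               # practiced
    if any(cat == category for cat, _ in practiced):
        return "Rp-"                               # unpracticed, same category as a practiced item
    return "NRp"                                   # unpracticed, unrelated category

for cat, items in studied.items():
    for item in items:
        print(f"{cat}-{item}: {condition(cat, item)}")
```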
Thus, in a single experiment, you can demonstrate the beneficial effects of retrieval practice on the practiced items, and the detrimental effects of retrieval practice on items related to the practiced items (that is, retrieval-induced forgetting). Selectively retrieving items from memory harms our later recall of similar items by some kind of suppression of the latter. This helps to overcome competition by these related associations when we are interested in retrieving one of them in particular, but it has the significant downside of impairing retrieval of those other items were they to again become relevant.

"Interference" during Memory Encoding

In 1924, Jenkins and Dallenbach demonstrated that everyday experiences can interfere with memory. Their study showed that recall of information was better after a period of sleep than after an equal period spent awake (controlling for time of day, etc). The findings are robust and have been replicated many times. For example, high school students' ability to remember new vocabulary terms is enhanced when sleep follows within a few hours of learning (Gais, Lucas, & Born, 2006). In the late 19th century, observations of patients with brain damage leading to retrograde amnesia (memory loss for prior events) revealed that the degree of forgetting was greater for more recently acquired memories than it was for older memories; this came to be known as Ribot's Law in honor of one of its earliest discoverers. In retrograde amnesia, more recent memories are more likely to be lost than more remote ones. Because the effects of forgetting occur on a temporal gradient, this phenomenon is appropriately called "temporally graded retrograde amnesia". It can be induced by electroshock therapy and is seen in many neurological disorders including Alzheimer's disease. All this is to suggest that older memories are somehow strengthened against degeneration while newer memories are not. This is consistent with Jost's second law (1897) and the fact that forgetting data are not well fit by an exponential function (that is, a function with a constant decay rate), but are fit much better by functions with ever-decreasing proportional rates of decay! This finding of a temporal gradient in interference is very suggestive of the idea that memories consolidate over time. This would imply that retroactive interference would be stronger the closer it is to the original learning; that is, interference should affect newer memories more than it affects older ones. You could test this in the laboratory by having people learn something, and then interfere with this learning at differing intervals by having different groups of people study something else after different amounts of time had elapsed. Amazingly, a laboratory study testing these ideas, conducted in 1900 by Muller and Pilzecker in Germany, produced findings absolutely consistent with this hypothesis. Imposing mental exertion earlier in the retention interval resulted in poorer performance when subjects were called upon to remember the original information. And it need not be list-memorization, either! Early research began to uncover temporal gradients when the interfering mental exertion was solving algebra problems (Skaggs, 1925), reading a newspaper (Robinson, 1920), etc. Why would this happen?
One leading hypothesis that has received neuroscientific support is that the resources required to consolidate memories are themselves limited, and so any subsequent learning takes away from resources that would have otherwise been used to consolidate the original learning. As discussed above, an A-B, A-D paired associates learning paradigm is where participants learn a list of items A-B, and then some time later a list of items A-D, and then later are given a cued recall test for their memory of the original list (prompted with A, they have to produce B). The period from original learning of the A-B list until the final test is called the retention interval, and the learning of list A-D produces retroactive interference on participants' memories for the A-B list. Later studies using this paradigm to search for a temporal gradient of retroactive interference found an inverted U-shape. Poor recall of A-B was observed if A-D was learned soon after; much better recall of A-B was found if A-D was learned in the middle of the retention interval; poor recall of A-B was observed if A-D was learned right before the cued-recall test. This finding is quite common (Wixted, 2004) and suggestive of both forms of forgetting discussed in this post: interference during consolidation, and interference due to retrieval competition.

Retrograde Facilitation?

If increasing mental exertion during the retention interval results in poorer recall, then reducing mental exertion as much as possible after learning should improve recall! Unfortunately, a waking brain is almost always aflutter with activity… but sleep, alcohol, and benzodiazepines all reduce this activity, inducing a temporary state of retrograde amnesia and closing the brain (specifically, the hippocampus) to much new input. By limiting this new input, recently formed memories should be protected from the retroactive interference they would otherwise encounter from waking mental activity. Let's talk about alcohol first, because it's fun. As you may have experienced, alcohol causes a certain degree of anterograde amnesia for materials studied (and events experienced) while under the influence. It is less widely known that alcohol actually results in improved memory for materials studied just prior to consumption (Bruce & Pihl, 1997; Lamberty et al., 1990; Mann et al. 1984; Parker et al. 1980, 1981). Not that this is a good study strategy… (though who's to say). The prevailing explanation for this finding is that alcohol facilitates recent memories because it prevents the formation of new memories that would otherwise cause retroactive interference (Mueller et al. 1983). The same thing is observed with benzodiazepines (Coenen & Van Luijtelaar, 1997). All of this is entirely consistent with the idea that ordinary forgetting is caused by the retroactive effects of subsequent memory formation that accompanies ordinary mental activity. No direct tests for a temporal gradient have been done with alcohol or benzos in the retention interval, but some studies a la Muller & Pilzecker (1900) have revealed such an effect for sleep. For example, Ekstrand (1972) used a 24-hour retention interval and had subjects sleep 8 hours either right after the original learning occurred or right before the recall test; people in the immediate-sleep condition recalled 81% of the items, whereas people in the delayed-sleep condition recalled only 66%.
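Before turning to mechanism, the earlier point about the shape of forgetting curves is easy to check with a quick calculation of my own (the parameter values are arbitrary and not fit to any data): an exponential loses a constant proportion of whatever remains per unit time, whereas a power function loses proportionally less and less as the memory ages, which is just what Jost's second law describes.

```python
import numpy as np

# Toy comparison of forgetting-curve shapes (not fit to any data).
t = np.arange(6)                         # equally spaced retention intervals

exponential = np.exp(-0.2 * t)           # constant proportional loss per step
power = (1.0 + t) ** -0.5                # proportional loss slows as the memory ages

def proportional_loss(retention):
    """Fraction of what remained that is lost over each successive interval."""
    return 1.0 - retention[1:] / retention[:-1]

print("exponential:", np.round(proportional_loss(exponential), 3))  # ~0.181 every step
print("power law:  ", np.round(proportional_loss(power), 3))        # 0.293, 0.184, 0.134, ...
```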
Neural Mechanism As I've talked about in other posts, long-term potentiation (LTP) in the hippocampus is the leading explanation for how memories are initially formed (Martin et al., 2000). LTP is a long-lasting enhancement of synaptic transmission (the "receiving" neuron becomes more sensitive to the "sending" neuron) brought about by high-frequency stimulation from the sender. Interestingly (!), alcohol and benzodiazepines are both known to block LTP in the hippocampus (Del Cerro et al., 1992; Evans & Viola-McCabe, 1996; Givens & McMahon, 1995; Roberto et al., 2002). Furthermore, alcohol does NOT impair the maintenance of hippocampal LTP induced PRIOR to consumption—indeed, consistent with the memory findings, it enhances prior LTP! With respect to sleep-related brain activity, it is known that LTP can be induced during REM sleep but not during non-REM sleep. This may account for the fact that during REM sleep, people are often able to remember mental activity (i.e., dreams), while during non-REM sleep people cannot remember any mental activity taking place (and thus do not experience dreams). Indeed, it has been shown that non-REM sleep protects previously established memories from interference far better than does REM sleep (Plihal & Born, 1999); further, REM sleep was found to interfere with prior memories just as much as an equal period of intervening wakefulness! All of this is consistent with the observation that, while many prescription antidepressants greatly reduce REM sleep, they are not known to cause memory problems (Vertes & Eastman, 2000). Weirdly, REM sleep does appear to be very important for the consolidation of procedural memories, which are non-hippocampus-dependent (Karni et al., 1994). All this is to suggest that when the demands placed on the hippocampus are reduced, it is better able to coordinate memory consolidation. Cells in the hippocampus that fired together during waking experience were shown to be reactivated together during non-REM sleep in rats (Wilson & McNaughton, 1994). Also, coordinated firing between different areas of the neocortex has been shown to replay itself during quiet, unstimulated wakefulness in monkeys (Hoffman & McNaughton, 2002). Instead of relying on sleep or alcohol to inhibit LTP, it is much more precise to administer a drug that selectively targets and prevents induction of hippocampal LTP. These drugs, known as NMDA antagonists, prevent LTP and thus prevent learning of hippocampus-dependent tasks. Experiments using these drugs show that, when administered after a learning task or after LTP is artificially induced in the lab by direct neuronal stimulation, they block all subsequent LTP that would otherwise interfere with the original learning; thus, memories for the originally learned information (or the strength of the artificially induced LTP) are enhanced when NMDA antagonists are administered during the retention period. That is, these LTP-blocking drugs produce all the same effects we've seen above, but allow us to draw more specific conclusions about the underlying mechanism. In one great example of this research, Izquierdo et al. (1999) had rats learn a hippocampus-dependent task and then exposed them to a novel, stimulating environment either 1 hour or 6 hours later.
These researchers observed a temporal gradient: rats forgot more of the original learning when exposed to the novel environment 1 hour after learning, again suggesting that recently established LTP is more vulnerable to disruption by subsequent mental activity than LTP established longer ago. To investigate whether this interference was caused by new LTP associated with exposure to the novel environment, they administered an LTP-blocking drug to a group of rats prior to exposure to the novel environment (1 hour after original learning). These drugs prevented any retroactive interference effects; in this group of rats, memory for the original learning was unimpaired by exposure to the novel environment! The same findings are observed if you artificially induce LTP and measure its decay over time. Abraham et al. (2002) induced LTP in the hippocampus of rats using electrical stimulation, and these animals were then returned to their usual "stimulus-poor" home cage environments. In this low-interference environment, LTP decayed very slowly. In the experimental condition, some of the rats were exposed to a complex, stimulating environment (a larger cage, new objects, and other rats). Exposure to this environment caused the originally induced LTP to decay much more rapidly, and this interference was a function of the delay between exposure to the new environment and the induction of LTP. Thus, regardless of whether the original learning is "actual" or artificially induced LTP, subsequent interfering learning (either actual or artificially induced) interferes with the original learning, and this interference is more pronounced the smaller the delay between original and interfering learning. The central message here, and indeed the main argument made by Wixted (2004), from which this post draws shamelessly, is this: the hippocampus is extremely important in consolidating newly formed memories, and ordinary mental activity (particularly subsequent memory formation) interferes with that process. Thus, biological memory appears to be self-limiting; new memories are created at the expense of partially damaging other memories, especially if those other memories haven't had enough time to consolidate. Even more spooky is the somewhat new idea of reconsolidation: that it is recently activated memories, not just recently formed ones, that are vulnerable to interference (Dudai, 2004). According to this theory, even if a memory has been completely consolidated, reactivation of that memory makes it just as vulnerable to interference as if it were newly formed; thus accessing consolidated memories might simply restart the consolidation process! AHH!
Survey of Temporal Information Extraction Chae-Gyun Lim*, Young-Seob Jeong**, and Ho-Jin Choi* Corresponding Author: Young-Seob Jeong** ([email protected]) Chae-Gyun Lim*, School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, Korea, [email protected] Young-Seob Jeong**, Dept. of Big Data Engineering, Soonchunhyang University, Asan, Korea, [email protected] Ho-Jin Choi*, School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, Korea, [email protected] Received: January 3, 2019 Abstract: Documents contain information that can be used for various applications, such as question answering (QA) systems, information retrieval (IR) systems, and recommendation systems. To use this information, it is necessary to develop methods of extracting it from documents written in natural language. There are several kinds of information (e.g., temporal information, spatial information, semantic role information), and different kinds of information are extracted with different methods. In this paper, the existing studies on methods of extracting temporal information are reported and several related issues are discussed. The issues concern the task boundary of temporal information extraction, the history of the annotation languages and shared tasks, open research issues, the applications using temporal information, and evaluation metrics. Although the history of the task of temporal information extraction is not long, there have been many studies that tried various methods. This paper indicates which approach is known to be the better way of extracting a particular part of the temporal information, and also provides future research directions. Keywords: Annotation Language, Temporal Information, Temporal Information Extraction Documents are used to deliver information to the readers. In the past, readers were human, but computers are becoming a new class of readers. Computers can collect information much faster than humans can, and are capable of storing much more information than humans are. To realize these strengths of computers, it is necessary to develop techniques for extracting information from documents, because such documents are usually unstructured text. The techniques can be thought of as converters that take unstructured texts as input and output the information in a particular format more favorable for computers. Due to the exponentially increasing number of unstructured documents available on the web and from other sources, developing such techniques is becoming more important. Among the many aspects of extracting information from documents, the extraction of temporal information has recently drawn much attention. This is because documents usually contain temporal information that is useful for further applications such as knowledge base (KB) construction, information retrieval (IR) systems, and question answering (QA) systems. Given a simple question, "Who was the president of South Korea eight years ago?", for example, a QA system may have difficulty finding the right answer without correct temporal information about when the question was posed and what 'eight years ago' refers to. In this paper, studies related to extraction of temporal information are discussed. The relevant studies are summarized in chronological order, and the history of the annotation languages and shared tasks is described. Answers to the following questions are provided in this paper.
What is temporal information? Is there a structured way to describe the task boundary of temporal information extraction? What is the history of the annotation languages? What is the history of related studies? Are there shared tasks related to temporal information extraction? Which applications can benefit from temporal information? What are the research issues? How might a system of temporal information extraction be evaluated? The rest of this paper is organized as follows. Section 2 provides the definition of temporal information, and describes how to represent temporal information. It also gives the definition of the task of temporal information extraction. Section 3 introduces the task boundary of temporal information extraction, and Section 4 gives the history of the annotation languages and shared tasks. Section 5 provides the history of related studies in chronological order, and in Section 6, the research issues are discussed. Some metrics are provided in Section 7, which could be used to evaluate a system of temporal information extraction, along with some issues related to the evaluation process. Finally, Section 8 concludes the paper. 2. Temporal Information 2.1 What Is Temporal Information? Information can be defined as data endowed with meaning and purpose [1]. Information is inferred from data and is differentiated from data in that it is useful. From a practical point of view, data is a set of raw observations, and information is something useful extracted or inferred from the observations. Information is used to construct knowledge, while wisdom is defined in terms of knowledge. The relationship between these four concepts is depicted in Fig. 1, where the upper concepts are more meaningful and useful than the lower concepts. If information is poorly extracted from data, then it will harm the quality of knowledge and eventually harm the quality of wisdom. Therefore, it is important to develop an effective method for information extraction. Time can be defined as a measure in which events can be ordered from the past through the present into the future, and also as the measure of the durations of events and the intervals between them [2]. Based on the definition of information and the definition of time, temporal information can be defined as information that can be used to order events and to measure the durations or intervals of events. It is obvious that, to order the events or to measure the durations of the events, it is necessary to take the information about the events into account. That is, the temporal information includes not only the temporal points and durations, but also the information about the events themselves. Furthermore, it is also necessary to consider the relation between the temporal points (or durations) and the events, because such a relation plays a crucial role in ordering the events or measuring the durations or intervals of the events. Fig. 1. DIKW (data-information-knowledge-wisdom) pyramid [1]. To make the concept of temporal information easier to understand, Fig. 2 shows simple examples. A green shape with a dashed line denotes time information (i.e., a temporal point or duration), an orange shape with a solid line denotes an event, a double-headed arrow denotes a temporal relationship between the other entities, and an underscored word (or phrase) denotes a connective viewed from the temporal perspective. In the first example, the verb 'study' and the adverb phrase 'three days' are the event and the time information in the given sentence, respectively.
The connective 'for' establishes a temporal relationship indicating that the 'study' event continues for 'three days'. In the other example, there are two events—the verb 'eats' and the noun phrase 'the final exam'—and a temporal relationship between them derived from the connective 'after'. This relationship means that the 'eats' event will start after the other event. Fig. 2. Examples of temporal information. Formally, temporal information can be represented as {T, E, R}, where T denotes the temporal points, durations, or intervals, E denotes the events, and R represents the temporal relation. The relation R can be either $R_{TT}$, $R_{EE}$, or $R_{TE}$, where $R_{TT}$ denotes the relation between two temporal points (or durations), $R_{EE}$ denotes the relation between two events, and $R_{TE}$ denotes the relation between a temporal point (or duration) and an event. Of course, there can be situations in which there is no relation even though some T and E are present. 2.2 Temporal Information Representation Temporal information appears in raw text through temporal expressions and event expressions. An event expression is used to represent the events, while a temporal expression is used to denote the temporal points, durations, and intervals. [3] suggested that there are three forms of temporal expression: an explicit reference, an indexical reference, and a vague reference. The form of explicit reference directly represents the value of temporal information (e.g., 'April 4', 'March 8, 1983'), while the form of indexical reference indirectly represents the value by relative expressions (e.g., 'three months later', 'yesterday'). The form of vague reference represents ambiguous temporal information (e.g., 'early 1990s', 'about three months'). Meanwhile, [4] suggested that there are three forms of temporal information representation: an explicit reference, an implicit reference, and a relative reference. The explicit reference is the same as the explicit reference of [3], while the indexical reference of [3] appears to encompass the implicit reference and relative reference of [4]. The vague reference is absent from the forms proposed in [4]. In this paper, five reference forms are defined: explicit reference, implicit reference, relative reference, vague reference, and non-consuming reference. The form of explicit reference directly represents the value of temporal information (e.g., 'March 8', '2000.08.12'). This form was first mentioned in the fifth Message Understanding Conference (MUC-5) [5]. The form of implicit reference represents a period or a time point that is known to the public without containing any explicit value of temporal information (e.g., 'the Japanese colonial period', 'Middle Ages'). This form can be divided into two subforms, a global implicit reference and a local implicit reference. The global implicit reference includes temporal expressions that are supposed to be known to the general public, such as 'Middle Ages', 'glacial epoch', and 'the Roman Era'. The local implicit reference includes temporal expressions that are supposed to be known to readers. For example, if a document has two sentences "Hoyeon was born in 1986." and "When she was born, the building was established.", then the readers know the value of the expression 'When she was born', which can be inferred from the first sentence.
The difference between the global implicit reference and the local implicit reference is that the normalized value of the global implicit reference is obtained from a common-sense or external KB, while the normalized value of the local implicit reference is obtained from the information within the corresponding document. The form of relative reference represents expressions that can be used to infer the value (e.g., 'two weeks ago'). This form was first mentioned in MUC-7 [5]. The form of vague reference represents ambiguous temporal information (e.g., 'early 1990s'). The form of non-consuming reference represents temporal information that is not observable in the text, but is assumed to be provided in other ways. For example, when a document is written on October 12 but this is not explicitly written in the document, then the Document Creation Time (e.g., 'October 12') can be given as meta-data. There are several kinds of such meta-data, including Document Creation Time, Document Modification Time, Document Access Time, and others. A temporal expression takes one of the above five forms, so it is necessary to convert it into a more structured template in order to use it for further applications. Given the sentence "Hoyeon and Younseo had breakfast at 9 o'clock", there is the temporal expression '9 o'clock', which should be converted into a structured form comprehensible by computers. The structured form must represent the extent, value, and any additional information in the temporal expression. The extent is used to describe the position of the temporal expression in the raw text. For example, the position of the temporal expression '9 o'clock' can be represented by indicating offset boundaries, where the offset boundary can be represented using a token index or a character index. The value of the temporal expression is used to represent the temporal points (e.g., '2010-03-08') or periods (e.g., '3 months', '2 days'). It is worth noting that the same value can be represented by various expressions in text. For example, the temporal expressions '9 o'clock' and '9:00 AM' denote the same value. This is the reason that temporal expressions must be converted into structured forms. The structured form should have a way of representing other information, such as temporal patterns (e.g., 'two times a week'), indication of the temporal information type (e.g., DATE, TIME, DURATION), and indication of whether the temporal information is vague or not. It is also necessary to convert the event expression into a structured template. For the sentence "Younseo eats a cookie.", there is the event expression 'eats', where its structured form must represent the extent, the class, and some additional information of the event expression. The extent is used to describe the position of the event expression in the raw text. The class is used to represent the type of event. For example, the event expression 'eats' is the behavioral event experienced by 'Younseo', so the type of the event expression can be denoted as OCCURRENCE. The structured form of the event expression should have a way to represent other information, such as the polarity of the event, the tense of the event expression, and so on. Based on the structured forms of the temporal expression and the event expression, a structured temporal relation between them can be generated. The relation must have two corresponding arguments and a relation type.
For the sentence "Younseo eats a cookie at 9 o'clock.", the two arguments of the relation are '9 o'clock' and 'eats', and its relation type can be denoted as INCLUDES, so it means that the event 'eats' occurs at '9 o'clock'. The structured forms of temporal information must convey the core information of the temporal expression, the event expression, and the relation. Because the structured forms will be used in further applications, it is important to design forms that are effective and efficient. A package of structured forms is called annotation language, because it is used to annotate the raw text. 2.3 Temporal Information Extraction As described earlier, for further application, temporal information must be converted into a structured form comprehensible by computers. The conversion process is a task of temporal information extraction. Because temporal information is useful for many applications, it is important to develop effective methods for extraction of temporal information. The task of temporal information extraction strongly depends on the annotation language, because it will not be possible to extract temporal information that is not defined by the annotation language. In other words, the task can be defined as extraction of all the temporal information defined by the annotation language, but the task cannot be extraction of temporal information not defined by the annotation language. Different applications may adopt different parts of the temporal information, and they may introduce additions to the annotation language in order to achieve their final goals. As the annotation languages may not consider some language-specific characteristics, it will be necessary to revise the annotation languages to apply them to a target language. The task of temporal information extraction is part of a larger application (e.g., QA systems, IR systems), so it is important to clarify the boundary of the task of temporal information extraction. Recall that the definition of temporal information is "information that can be used to order events or to measure the durations or intervals of events". The events can be related to other tasks, such as spatial information extraction, subject-predicate-object (SPO) extraction, or sentiment prediction. That implies that the same information might be extracted in more than one task, which may harm the overall efficiency of the application. Thus, it is important to set an appropriate boundary of the task of temporal information extraction. There are three main approaches to the task of temporal information extraction: rule-based, datadriven, and hybrid. The rule-based approach is to define a set of rules, while the data-driven approach is to design an algorithm and define a set of features. The hybrid approach combines the rule-based approach and the data-driven approach. It is important to determine which approach will be used for extracting which part of the temporal information. 3. Task Boundary Although there have been many studies related to the task of temporal information extraction, most of them did not clearly define their task boundaries. In this section, a structured way of describing the task boundary for temporal information extraction is proposed. The task boundary of temporal information extraction can be determined using three sub-boundaries: a boundary of temporal expressions, a boundary of event expressions, and a boundary of temporal relations. 
As defined earlier, there are five forms of temporal expressions: explicit reference, implicit reference, relative reference, vague reference, and non-consuming reference. The boundary of temporal expressions is used to indicate the forms of the temporal expressions that are supposed to be extracted. If a desired system of temporal information extraction has the boundary of explicit reference, then the desired system will give only the temporal information derived from the temporal expressions taking the form of explicit reference. The event expressions are typically verbs or nouns. The boundary of event expressions indicates which of these (e.g., verbs, nouns, or both) to extract. There are three kinds of boundary of temporal relations: a kind boundary, a text boundary, and a transitivity boundary. The kind boundary can contain at least one of three kinds: temporal links, subordinated links, and aspectual links. The temporal links are usually annotated by a tlink tag. The tlink tag can be a timex3–timex3 link (TT tlink), a timex3–makeinstance link (TM tlink), a makeinstance–makeinstance link (MM tlink), or a link between Document Creation Time and makeinstance (DM tlink). The subordinated links are usually annotated by a slink tag, while the aspectual links are typically annotated by an alink tag. The second kind of boundary of temporal relations, namely the text boundary, indicates how many sentences/paragraphs/documents to consider for temporal relation extraction. For example, the temporal relation can be extracted for each sentence independently, or it can be extracted by considering two or more adjacent sentences. The text boundary can be one of the following options: single sentence, multiple sentences, single paragraph, multiple paragraphs, single document, or multiple documents. The local implicit reference of temporal expressions requires consideration of the temporal information obtained from all sentences that appeared before the target sentence, so one may argue that the text boundary must always be the single document when the boundary of temporal expressions contains the local implicit reference. However, the text boundary indicates whether the inter-sentence temporal relations are allowed or not, while the local implicit reference is only about the normalized values of temporal expressions, not about the inter-sentence temporal relations. Thus, the text boundary is independent of the boundary of temporal expressions. The third kind of boundary of temporal relations, namely the transitivity boundary, indicates whether the transitivity in Allen's interval algebra is adopted or not, and indicates how many sentences will be processed with the transitivity. The transitivity boundary is set to one of the following options: none, single sentence, multiple sentences, single paragraph, multiple paragraphs, single document, or multiple documents. If the transitivity boundary is 'none', then there will be no temporal relations inferred using the transitivity in Allen's interval algebra. If the transitivity boundary is 'single sentence', then the transitivity will be applied to each sentence independently. For example, when the event $e_1$ occurred at time $t_1$, and the event $e_2$ occurred before $t_1$, then it can be inferred that $e_2$ occurred before $e_1$, although there is no expression representing the relation between $e_1$ and $e_2$.
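As a concrete illustration of the single-sentence transitivity inference just described, the following is a minimal sketch (not taken from any of the surveyed systems) of how extracted relation triples could be closed under a simplified composition table. The relation labels loosely follow TimeML (BEFORE, AFTER, INCLUDES, IS_INCLUDED), and the function and variable names are assumptions made for the example; the composition table is deliberately reduced and is not Allen's full interval algebra.

```python
from itertools import product

# Inverse relations, used so that chains can be followed in either direction.
INVERSE = {"BEFORE": "AFTER", "AFTER": "BEFORE",
           "INCLUDES": "IS_INCLUDED", "IS_INCLUDED": "INCLUDES"}

# Simplified composition rules for chains a -r1-> b -r2-> c.
COMPOSE = {
    ("BEFORE", "BEFORE"): "BEFORE",        # a before b, b before c   => a before c
    ("IS_INCLUDED", "BEFORE"): "BEFORE",   # a inside b, b before c   => a before c
    ("BEFORE", "INCLUDES"): "BEFORE",      # a before b, b includes c => a before c
}

def transitive_closure(relations):
    """relations: a set of (entity, relation, entity) triples from one sentence."""
    facts = set(relations)
    facts |= {(b, INVERSE[r], a) for a, r, b in relations}
    changed = True
    while changed:
        changed = False
        for (a, r1, b), (b2, r2, c) in product(tuple(facts), repeat=2):
            if b == b2 and a != c and (r1, r2) in COMPOSE:
                inferred = (a, COMPOSE[(r1, r2)], c)
                if inferred not in facts:
                    facts.add(inferred)
                    facts.add((c, INVERSE[inferred[1]], a))
                    changed = True
    return facts

# "The event e1 occurred at time t1, and the event e2 occurred before t1."
extracted = {("e1", "IS_INCLUDED", "t1"), ("e2", "BEFORE", "t1")}
print(("e2", "BEFORE", "e1") in transitive_closure(extracted))  # True: inferred, not stated
```

Allen's full algebra has a much richer composition table over all 13 relations; the point here is only how a 'single sentence' transitivity boundary yields relations that are never stated explicitly in the text.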
If the transitivity boundary is larger than 'single sentence' (e.g., multiple sentences, single document), then there will be more temporal relations inferred using transitivity. Thus, it is necessary to determine the transitivity boundary carefully. When the transitivity boundary is either 'none' or 'single sentence', and the text boundary is 'multiple sentences' or a larger boundary, then only explicit inter-sentence temporal relations will be extracted. That is, the inter-sentence temporal relations will be extracted only when there is at least one explicit temporal expression (e.g., '그 후 [Geu hoo]' (thereafter)). For example, if there are two sentences "He opened the door and came in." and "Thereafter, he slept.", then the inter-sentence temporal relation must be the linkage between the event 'came in' and the event 'slept'. If the transitivity boundary is two sentences (multiple sentences), then there will be one more inter-sentence temporal relation, between 'opened' and 'slept'. When the kind boundary contains temporal links (e.g., tlink tags), it is also necessary to consider the types of tlink tags (e.g., TT tlink, TM tlink, etc.). This is called a transitivity boundary for types of temporal links. Table 1. Summary of the task boundary of temporal information. To summarize, the task boundary of temporal information can be summarized as in Table 1. It is important to determine these sub-boundaries of temporal information before starting development of a system for extraction of temporal information. 4. History of Annotation Languages and Shared Tasks The history of annotation languages and shared tasks can be summarized as shown in Fig. 3, where the orange dots represent the annotation languages and the blue dots denote the shared tasks. One notable thing is the appearance of Time Mark-up Language (TimeML) in 2003, which became the basis for many studies on extraction of temporal information. From 2007 to 2013, TempEval, a series of shared tasks, triggered many studies because it provided a high-quality dataset constructed using TimeML. The standardized version of TimeML, namely ISO-TimeML, appeared in 2009, and was revised in 2012. Between 2009 and 2011, some variations of TimeML were proposed as adaptations to particular languages (e.g., Korean, Italian). Fig. 3. History of annotation languages and shared tasks. 4.1 Shared Tasks There were several shared tasks intended to develop systems for temporal information extraction from text. In MUC-5, which was held in 1993, there was a sub-task of assigning a calendrical time to a joint venture event [6]. At MUC-6, which was held in 1995, there was a sub-task of extracting absolute temporal values as part of the general task of Named Entity (NE) extraction [7]. The NE extraction task included the tag elements: enamex (for entity names, comprising organizations, persons, and locations), timex (for temporal expressions, namely direct mentions of dates and times), and numex (for number expressions, consisting only of direct mentions of currency values and percentages). As the proportion of timex tags in the test set was only 10%, temporal information extraction was not the main part of the NE extraction task. The next relevant conference, MUC-7, held in 1998, extended the boundary of the sub-task to the extraction of relative temporal values [8].
In the field of Topic Detection and Tracking (TDT), the task of temporal information extraction became important because topic tracking is strongly related to the task of finding temporal relations between events. Since the shared task TDT-2 in 1998, there have been studies about extracting temporal information and applying it to final goals [9-11]. Based on TimeML, a series of shared tasks appeared, namely TempEval. TempEval-1 was held as Task 15 of SemEval in 2007 [12], where it provided the TimeBank dataset constructed using TimeML. There are three sub-tasks in TempEval-1: (1) the extraction of events and relations between them, (2) the extraction of events and relations with Document Creation Time, and (3) the extraction of temporal relations between major events in different sentences. Also, in 2007, Automatic Content Extraction (ACE) opened recognition tasks related to temporal information processing: the extraction of temporal expressions and events. These tasks have structures quite different from those of TempEval-1. For example, ACE 2007 has eight event types: life, movement, transaction, business, conflict, contact, personnel, and justice, which are completely different from those of TimeML. TempEval-2 was held as Task 13 of SemEval in 2010 [13], where it provided datasets for six languages: Chinese, English, French, Italian, Spanish, and Korean. There are six sub-tasks in TempEval-2: (1) the extraction of timex3 tags and their attributes, (2) the extraction of event tags and the attributes of makeinstance tags, (3) the extraction of temporal relations between makeinstance and timex3 within the same sentence, (4) the extraction of temporal relations between makeinstance and Document Creation Time, (5) the extraction of temporal relations between major makeinstance tags of adjacent sentences, and (6) the extraction of temporal relations between two makeinstance tags. TempEval-3 was held as Task 1 of SemEval in 2013 [14], where it provided datasets for English and Spanish. There are five sub-tasks in TempEval-3: (1) the extraction of timex3 tags and their attributes, (2) the extraction of event tags and the attributes of makeinstance tags, (3) the extraction of all the tags from the texts, (4) the extraction of temporal relations given the correct timex3, event, and makeinstance tags, and (5) the extraction of temporal relation types given the correct argument pairs. In TempEval-3, for the task of extraction of timex3 tags, the best performance was 77.61% (F1-measure), achieved by HeidelTime-t [15], which is a rule-based system. For the task of extraction of event and makeinstance tags, the best performance was 81.05% (F1-measure), achieved by ATT-1 [16] utilizing Maximum Entropy. For the task of extraction of tlink tags given the correct other tags, the best performance was 36.26% (F1-measure), achieved by ClearTK-2 [17], using support vector machines (SVM) [18,19] and Logit. For the task of extraction of tlink tags without the correct other tags, the best performance was 30.98% (F1-measure), also achieved by ClearTK-2. As the state-of-the-art performance of temporal information extraction is not satisfactory, many researchers have kept trying to achieve better performance on this task. There are also shared tasks of temporal information extraction in the medical field. The Informatics for Integrating Biology and the Bedside (i2b2) offered a natural language processing (NLP) challenge in 2012 [20].
The goal of the i2b2 shared task was to develop a system for extracting temporal information from the discharge summaries of hospitals, where the temporal information is represented in a way similar to that of TimeML (e.g., timex3, event, tlink). The tlink tag of i2b2 has only three relation types: BEFORE, AFTER, and OVERLAP. Another shared task in the medical field is Clinical TempEval, which was held as Task 6 of SemEval in 2015 [21]; its goal was to develop a system for extracting temporal information from clinical texts. The temporal information was annotated with a new annotation language modified from TimeML, because the temporal information in the medical field has some characteristics different from the general temporal information defined by the traditional TimeML. 4.2 Annotation Languages As the task of temporal information extraction has become more important for many applications (e.g., QA systems, IR systems), it has also become important to design a language for annotating or representing temporal information. TIDES (Translingual Information Detection, Extraction, and Summarization), supported by DARPA, introduced a timex2 guideline [22] in 2000, in which temporal values are represented in ISO-8601 [23]. Since then, timex2 has evolved through several versions from 2001 to 2005. Similar to timex, timex2 is based on inline annotation. Based on the TIDES timex2 guideline, there was a task of extraction of temporal information in the ACE program in 2004, where the task included the extraction of temporal expressions and the prediction of temporal values. TimeML was introduced as a new, well-organized annotation language in 2003 [24]. It was mainly based on three previous works: the TIDES timex2 guideline, the Sheffield Temporal Annotation Guidelines (STAG) [5], and another emerging work [25]. TimeML was the first stable annotation language that incorporated temporal expressions, event expressions, and temporal relations. As more studies appeared based on TimeML, a standardized version of it, namely ISO-TimeML [26], was proposed in 2009, and revised in 2012. ISO-TimeML has many parts in common with TimeML, but also has some additional tags and attributes. Many studies were based on the traditional TimeML adopted by the TempEval series, so TimeML is the de facto standard while ISO-TimeML is the de jure standard. To achieve generalization, ISO-TimeML allows for modification of some parts of it, based on language-specific characteristics. In [27], Italian TimeML (It-TimeML), which is based on ISO-TimeML, was proposed, and the reliability of the It-TimeML guidelines and specifications was demonstrated based on inter-coder agreement. TimeML and ISO-TimeML might be stable and well organized, but language diversity was not well considered. For example, it was assumed that annotation is performed at a token level, which is not acceptable for some languages (e.g., Korean, Chinese). To overcome this limitation, Korean TimeML (KTimeML) was proposed as a new annotation language for Korean in 2009 [28]. It might be a solution to the limitation, but it has its own limitations. In [29], the limitations of KTimeML are described, and a new revised version of KTimeML is proposed to address the limitations. 5. Temporal Information Extraction Methods Because documents typically contain temporal information, many researchers have been attracted to developing systems for extracting such information from text.
In 1972, a formal model for temporal references was presented [30]. In this study, a specific time was represented as an ordered pair whose elements are time points, so the temporal reference could be seen as a temporal relation between two time points. A better-structured definition of temporal relations was proposed in 1983 [31]; the proposed 13 relation types are summarized in Table 2. These 13 relation types could represent every temporal relation between events, and they provided motivation for many additional studies related to temporal relation extraction. Table 2. Allen's 13 base temporal relations [31]. Since the appearance of Allen's 13 temporal relations, there have been studies that were mainly based on linguistic assumptions or manually defined constraints. [32] utilized the narrative convention, which is the assumption that the events of the current sentence must have occurred after the events of previous sentences. This assumption is simple, but can be effective for particular domains (e.g., stories). [33] proposed a manually defined set of rules to order the sequence of clause pairs. In [34], a new system for ordering events by analyzing the tense of the events was proposed. [35] used a linguistic model to incorporate temporal information for representing events. [36] extended the work of [33] by adding additional rules for ordering events in ambiguous cases. [37] analyzed how compositionality affects the interpretation of temporal information, especially in the case of subordinated events. [38] proposed a method to label events with time periods based on TOODOR [39]. This method represented time values using eight units (e.g., day, century), while the time periods were represented by three types (time point, time interval, and span of time). The proposed method employed syntactic/semantic parsers, and used a set of rules for labeling the events. All of these studies commonly aimed at developing systems for ordering events by extracting the temporal information, and the proposed methods mainly relied on assumptions or constraints that were manually defined based on linguistic knowledge or observations from the texts. 5.1 From TIMEX2 Scheme Since the appearance of the TIDES timex2 guideline in 2001, it became easier to generate and share datasets because the guideline helped keep the datasets consistent. This eventually led researchers to attempt to apply not only rules or linguistic constraints, but also machine-learning methods and mathematical models. In [40,41], systems for extracting relative temporal expressions and temporal relations between the events were proposed. They defined lexical rules by hand, and extended the rules automatically by a machine-learning method. In [3], it was assumed that there are three types of temporal expressions: explicit reference, indexical reference, and vague reference. The temporal expressions were extracted using a finite state transducer (FST), and the event expressions were extracted using rules. The temporal relations were recognized as one of seven relation types (e.g., BEFORE, AFTER, INCLUDE, AT). [9] proposed a method for extracting temporal information for TDT. The temporal expression candidates were extracted using finite state automata (FSA), and some of them were filtered out using a predefined dictionary. The absolute values of the recognized temporal expressions were extracted using a lexicon set and a set of rules.
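To give a flavor of the lexicon-plus-rule style that dominates this early work (and that, as discussed in Section 5.4, still performs well for timex extraction), the following is a minimal, illustrative sketch of a pattern-based extractor for explicit and relative temporal expressions. It is not taken from any of the cited systems; the patterns, function name, and the handful of normalization rules are simplifying assumptions.

```python
import re
from datetime import date, timedelta

# A small lexicon and pattern set for explicit and relative temporal expressions.
MONTHS = "january|february|march|april|may|june|july|august|september|october|november|december"
PATTERNS = [
    ("EXPLICIT", re.compile(rf"\b(?:{MONTHS})\s+\d{{1,2}}(?:,\s*\d{{4}})?\b", re.I)),
    ("EXPLICIT", re.compile(r"\b\d{4}-\d{2}-\d{2}\b")),
    ("RELATIVE", re.compile(r"\b(?:yesterday|today|tomorrow|(?:\d+|one|two|three)\s+(?:days?|weeks?|months?|years?)\s+(?:ago|later))\b", re.I)),
]

def extract_timex(text, dct):
    """Return (extent, surface form, reference form, normalized value) tuples.

    dct is the Document Creation Time, which is needed to normalize relative
    references; only a few toy normalization rules are implemented here.
    """
    results = []
    for form, pattern in PATTERNS:
        for m in pattern.finditer(text):
            surface = m.group(0)
            value = None
            if form == "RELATIVE":
                s = surface.lower()
                if s == "yesterday":
                    value = (dct - timedelta(days=1)).isoformat()
                elif s == "today":
                    value = dct.isoformat()
                elif s == "tomorrow":
                    value = (dct + timedelta(days=1)).isoformat()
            results.append(((m.start(), m.end()), surface, form, value))
    return results

print(extract_timex("He arrived on March 8, 1983 and left two weeks ago.", date(2019, 1, 3)))
```

Data-driven systems, discussed in the following subsections, replace such hand-written patterns with learned models, at the cost of requiring annotated data.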
5.2 From TimeML Scheme Since TimeML appeared in 2003, some studies proposed annotation tools based on it, and some studies reported several limitations of TimeML and insisted that TimeML should be changed. [42] explained how TimeML was designed, and described some challenging issues and directions for future study. Two annotation tools, TANGO [43] and Callisto [44], were proposed that followed TimeML. [45] suggested adding a new tag CLINK to TimeML. This study also insisted that there must be a function for denoting arguments in the event tag. In [46], a tool for annotating temporal information was proposed, namely T-BOX, where the annotation was based on TimeML. This tool presents events in temporal order; for example, when the event $e_1$ occurs before the event $e_2$, then $e_1$ is shown to the left of $e_2$. Meanwhile, as the size of the accumulated dataset grew larger, more studies attempted to use various machine-learning methods. Evita (Events In Text Analyzer) was proposed in [47], and was developed using the TARSQI framework [48]. Evita combines a statistical method and a set of rules to extract events and attributes (e.g., tense, aspect, modality, polarity, event class). In [49], a method for extracting temporal information from Chinese text was proposed. It defined a set of rules and utilized chart parsing based on constraints. [50] proposed a method for extracting temporal information from Swedish texts, and used the extracted temporal information to generate animated 3D scenes. It utilized a finite state machine (FSM) and rules for extracting temporal expressions and event expressions. The temporal relations between events were extracted using decision trees (DT). 5.3 From TempEval and TempEval-2 Shared Tasks Since TempEval, the well-known series of shared tasks, emerged in 2007, many studies have aimed mainly at one or more sub-tasks defined by TempEval. The publicly available dataset, namely TimeBank, was provided by TempEval. In [51], a method for extracting temporal relations between two events was proposed. It had two stages: (1) a machine-learning model for classifying event attributes (i.e., tense, aspect, modality, polarity, and event class), and (2) a machine-learning model for classifying the relation types between two events. It used TimeBank for experiments, and reported that Naive Bayes (NB) generally gives better performance than maximum entropy (ME). [52] proposed a method focused on the extraction of temporal expressions. It adopted the method of begin-inside-outside (BIO) tags, which are independent of the lengths of text segments. Poveda's method utilized the TnT tagger [53] and the YamCha toolkit [54], and compared the performance of SVM and FOIL (first-order inductive learner). It was reported that SVM generally gave better performance. In [55], a method for resolving conflicts between temporal relations was proposed. It used integer linear programming (ILP), and applied the transitivity assumption to generate additional relations or to remove inconsistent relations. After experiments with TimeBank, it was reported that the proposed method increased accuracy by 3.6%. [56] proposed a method for extracting temporal relations between events and/or Document Creation Time. This study utilized Markov logic and defined a set of rules for a Markov logic network (MLN). TempEval has been held triennially, and TempEval-2 was held in 2010. During the three years from 2010 to 2012, a large number of notable studies appeared.
Many studies attempted to use various machine-learning methods, and some studies were about visualization of temporal information. A few studies provided reviews or surveys about temporal information extraction, and several studies tried to utilize patterns between temporal information and other information (e.g., spatial information). In [57], a new method for extracting events and temporal expressions was proposed. To extract events, it converted the results of the TRIPS parser into a logical form, and used a set of rules defined using the logical forms. It also employed MLN to extract major events, and conditional random fields (CRF) for extracting temporal expressions. To predict the absolute temporal values, a set of manually defined rules was used. [58] proposed a system for ordering events and spatial information. It was based on the assumption that temporal information and spatial information appear within a particular distance (e.g., a sentence or paragraph). It first collected Wikipedia featured articles using the Unstructured Information Management Architecture (UIMA) [59], and attempted to extract spatial information using the MetaCarta GeoTagger [60], while the temporal information was extracted using a set of manually defined rules. In [61], a new corpus for the task of extraction of temporal expressions, namely WikiWars, was introduced. The source documents were collected from Wikipedia, and the annotation was performed using timex2. HeidelTime was proposed in [15], where it was found to be the best method for extracting temporal expressions in TempEval-2. It is a rule-based system that is portable, because it is based on UIMA. TimeTrails was introduced in [62], where its purpose was to help document analysis by visualizing the extracted temporal/spatial information. For this purpose, HeidelTime was employed to extract temporal information, and the MetaCarta GeoTagger was used to extract spatial information. [63] employed Evita [47] and GUTime to extract temporal expressions and event expressions, and then used MLN to extract temporal relations. It also defined temporal entropy (TE) for evaluating the tightness of the extracted information within each document. TIPSem (Temporal Information Processing based on Semantic information) was proposed in [64], where it was one of the best methods in TempEval-2. CRF was used to extract temporal expressions and event expressions. This showed that using the semantic information conveying relations between elements could help with extraction of temporal information. Timely YAGO (T-YAGO) was proposed in [65], and is an extension of YAGO achieved by incorporating temporal aspects. Using this approach, temporal facts were extracted from Wikipedia infoboxes, categories, and lists. The extracted facts were integrated into the KB of T-YAGO. In [66], a review of the current research trends was provided. This review included a number of applications that could benefit from temporal information, and discussed challenging issues. [67] used the expectation maximization (EM) algorithm to extract three types of temporal relations (BEFORE, AFTER, and OVERLAP). This was the first study that employed the EM algorithm for this task. In the E-step, the algorithm finds conflicts between the relations using a set of rules, and it replaces inconsistent relations using the probability values of the clusters, where each cluster is regarded as a relation type. In the M-step, the algorithm applies a smoothed relative-frequency estimate.
In [68], PRAVDA was proposed, by which temporal facts could be automatically harvested from web text. For this, a pattern-based approach was used to extract candidate temporal expressions, and a label-propagation approach was employed to compute confidence scores of the candidates. In [69], YAGO2 was proposed. The purpose of this study was to extend the previous YAGO system [70] by incorporating temporal/spatial information. For this purpose, a 5-tuple SPOTL (subject, predicate, object, time, and space), an extension of the 3-tuple SPO of YAGO, was used. This method extended the KB by extracting the temporal/spatial information from Wikipedia documents and WordNet [71]. For representation of the temporal information, it followed ISO-8601, while it used GeoNames to represent the spatial information. This showed that temporal/spatial information could be used to help extract facts from text more accurately. In [72], an extended version of PRAVDA [68] was proposed. This combined the label-propagation approach and an integer linear program, which eventually detects noisy events by incorporating temporal constraints among the events. SUTime, proposed in [73], was used for extracting temporal expressions and predicting temporal values. It is a part of the Stanford CoreNLP pipeline. In [74], a system for extracting temporal relations was proposed that took only six relation types of TimeML: SIMULTANEOUS, BEGINS, ENDS, BEFORE, IBEFORE, and INCLUDES. It used bootstrapped cross-document classification (BCDC), which takes additional relevant documents selected by the INDRI system [75] to re-train SVM models already trained using other training documents. The EM algorithm of [67] was adopted for extracting temporal relations. It was reported that the BCDC method worked well when the size of the dataset was small, and that the EM algorithm worked well when it was properly initialized. This implies that the proposed system works poorly with a biased dataset. [76] proposed a system for extracting temporal information from Wikipedia documents. It extended [65] by adding some named events to higher-order and first-order facts of T-YAGO. For this, it utilized a set of rules to extract temporal information from the infoboxes, categories, titles, and lists of Wikipedia documents. Its usefulness was demonstrated by experimental results showing that it extracted 2–3 times more temporal facts and 50 times more events than T-YAGO and YAGO2 [69]. In [77], a survey about temporal information processing was provided. The report first introduced previous studies on information extraction, and described classical work in temporal information extraction and temporal reasoning. This work also provided research issues concerning the task of temporal information extraction, and listed some real-world applications. 5.4 From TempEval-3 Shared Task As for TempEval-2, there have been many studies since TempEval-3 was held in 2013. Some studies attempted to find a way to effectively apply the temporal information to further applications (e.g., QA systems, KB systems), and a few studies tried various machine-learning models as feature-generation models or classifiers. In several studies motivated by the i2b2 challenges, systems of temporal information extraction were developed in another domain (e.g., the clinical domain). In [78], a system of temporal information extraction in the clinical domain was proposed. The goal of this study was to extract timex3, event, and tlink tags from clinical texts.
CRF was used to extract event tags, while timex3 tags were extracted using a set of rules. Based on the extracted timex3 and event tags, it extracted some tlink candidates using several rules. The candidates were then filtered using machine-learning methods (e.g., CRF, SVM). Another system in the clinical domain was proposed in [79], where its goal was to extract timex3 tags and event tags from clinical texts. It made use of a set of hand-crafted rules for timex3 extraction, and used an integer quadratic program (IQP) to infer attributes of event tags, based on the assumption that the relations between two events might guide the inference procedure to determine the attributes of the other events. [80] proposed a method to predict temporal values from texts, for which it utilized a context-free grammar (CFG) and rules. In [81], a system for temporal information extraction from clinical narratives was proposed. Its purpose was to extract timex3 and event tags, which was basically a part of the i2b2 challenges. For this purpose, it employed HeidelTime [15] to extract general timex3 tags, and used a CRF-based sequence labeling method to extract domain-specific timex3 tags and event tags. A survey of temporal IR and related applications was provided in [82]. Although it focused on studies of temporal retrieval, it also discussed the task of temporal information extraction from Web documents. [83] attempted to extract temporal expressions and to find temporal values employing a hand-engineered Combinatory Categorial Grammar (CCG). For this, context information (e.g., Document Creation Time, verb tense) was utilized to find the absolute values of temporal expressions. [84] proposed a system for populating a KB by incorporating newly extracted temporal information. [85] proposed a sieve-based temporal ordering method, where each sieve represents a classifier. The sieve-based method is a cascade architecture, such that each sieve passes its temporal relation decisions on to the next sieve. It was reported that the most precise sieves were collections of hand-crafted rules, and it was argued that this is because the intuition behind the rules is not easily captured by machine-learning models. [86] proposed a system for the Korean language, which included a combination of machine-learning models and hand-crafted rules. It incorporated a feature-generation model, namely the Language Independent Feature Extractor (LIFE) [87], to generate complementary features to improve the performance of the system. In SemEval-2017, 'Task 12: Clinical TempEval' was held as a shared task for capturing temporal information in the clinical domain [88]. While notes from colon cancer patients had been used for both training and testing in Clinical TempEval 2015 and 2016, in Clinical TempEval 2017 the training data came from colon cancer patients and the test data came from brain cancer patients. [89] proposed the GUIR model, which combines CRFs and decision tree ensembles constructed on lexical features (e.g., uppercase/lowercase, prefixes, suffixes, punctuation, stop words), rule-based features for complex patterns or specific words, and distributional features (e.g., word clusters and word embeddings). [90] proposed the Hitachi model, which combines CRFs, neural networks, and decision tree ensembles trained on lexical features (e.g., n-grams of nearby words, character n-grams, prefixes, suffixes, etc.)
and common features (e.g., POS tags, verb tenses, sentence lengths, event/time tokens, number of other events/times mentioned, etc.). [91] proposed the KULeuven-LIIR model, which combines SVMs for detecting event/time expressions and a structured perceptron for temporal relations. [92] proposed the LIMSI-COT model, which uses neural network-based methods to detect both intra- and inter-sentence relations. In the shared task, for time span extraction the GUIR model showed the best performance with an F1 score of 0.57, and for extracting the time span and class together the KULeuven-LIIR model showed the best result with an F1 score of 0.53. Although the LIMSI-COT model showed relatively lower F1 scores than the others, it had the highest recall, with 0.66 and 0.63 for the two cases, respectively. These results suggest that models considering various rules and grammatical elements are more suitable for finding time spans than other approaches such as neural network-based models. In SemEval-2018, 'Task 6: Parsing Time Normalizations' was held as a shared task related to time information extraction [93]. The purpose of this task was to develop new techniques that allow time normalization based on recognizing semantically compositional time operators. For this task, two tracks were presented—identifying the time operators, and providing time intervals on the timeline. Here, the compositional time operators underlie the method proposed in previous work [94]. Olex et al. [95] submitted the Chrono model, which applies a primarily rule-based approach, and the Chrono* model, which fixes some bugs in their previous model. Their models captured temporal tokens using temporal expressions or regular expressions for specific words, and connected consecutive tokens to find temporal phrases. To summarize, many studies focused on either a rule-based approach or a data-driven approach, or both. It seems that the most powerful approach for the task of timex3 tag extraction is the rule-based approach, while the most powerful approach for the task of event tag extraction is the data-driven approach. In terms of the task of tlink tag extraction, the data-driven approach seems the best. Many recent studies attempted to make use of machine-learning methods for the task of temporal information extraction. The features adopted by these machine-learning methods can be summarized as follows. To extract timex3 tags, given a particular window size W surrounding the target token, the features include n-grams of tokens, n-grams of POS tags, the top ontology class of WordNet, the frequencies of the target token, the suffix and prefix, whether n-grams are upper-case or not, whether n-grams are digits or not, whether the first character is upper-case or not, whether the previous token is a temporal expression or not, the head token of the target token, the semantic role label of the target token, and the semantic role labels of subordinated tokens. In particular, the attribute value of the timex3 tag was predicted primarily by a set of rules rather than by machine-learning methods. To extract event tags, the features were defined very similarly to those for timex3 tags. The attributes of makeinstance tags (e.g., polarity, modality, tense) are predicted mainly by a set of rules.
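The following is a minimal, illustrative sketch of how a few of the token-level features listed above could be computed before being fed to a sequence labeler such as a CRF; the window size, feature names, and the assumption of pre-tokenized text with POS tags are choices made for the example, not a description of any particular cited system.

```python
def token_features(tokens, pos_tags, i, window=2):
    """Features for the token at position i, in the spirit of the list above."""
    token = tokens[i]
    feats = {
        "token.lower": token.lower(),
        "token.is_upper": token.isupper(),
        "token.is_digit": token.isdigit(),
        "token.first_char_upper": token[:1].isupper(),
        "token.prefix3": token[:3],
        "token.suffix3": token[-3:],
        "pos": pos_tags[i],
    }
    # Token and POS n-gram context within the surrounding window.
    for offset in range(-window, window + 1):
        j = i + offset
        if 0 <= j < len(tokens) and offset != 0:
            feats[f"token[{offset}]"] = tokens[j].lower()
            feats[f"pos[{offset}]"] = pos_tags[j]
    return feats

tokens = ["He", "arrived", "at", "9", "o'clock", "."]
pos = ["PRP", "VBD", "IN", "CD", "NN", "."]
print(token_features(tokens, pos, i=3))  # features for the token "9"
```

A real system would compute such a feature dictionary for every token and pass the sequences to a CRF or SVM labeler, along the lines of the systems cited above.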
To extract tlink tags between timex3 and makeinstance, the features include such items as n-grams of tokens of the argument tags, n-grams of POS tags of the argument tags, whether the two arguments are in the same sentence, the head preposition, and whether there is an interval-type temporal expression near the timex3 tag. To extract tlink tags between two makeinstance tags, the features include such items as n-grams of tokens of the argument tags, n-grams of POS tags of the argument tags, the WordNet synset of the token of each argument, the verbs subordinated by the arguments, the adverbs attached to the verbs if the arguments are verbs, whether the arguments have the same tense, whether the arguments have the same aspect, the pair of tenses of the two arguments, the pair of aspects of the two arguments, and the pair of classes of the event tags related to the arguments. As shown above, the defined features are heavily tied to linguistic observations, so a major feature-engineering effort is required, with careful consideration of language-specific characteristics. Most of the previous studies focused on English, so it is necessary to investigate the best way to extract temporal information from Korean text.

6. Research Issues of Temporal Information Extraction

6.1 Perspective on Knowledge

The task of temporal information extraction still has several unresolved research issues. The first issue is the design of an annotation language for specific purposes. The purpose might be a particular application (e.g., a QA system) or a particular language having distinct characteristics that cannot be annotated with the existing annotation languages. It would be better to make the annotation language more general, so that it can be used to annotate any expression conveying temporal information. It should also incorporate language-specific characteristics of the target language. If an annotation language misses some language-specific characteristics, it might harm the performance of applications developed using the poor annotation language. Moreover, if there are some expressions that cannot be annotated with the annotation language, then such temporal expressions will never be extracted by a system developed using that annotation language. This will eventually leave downstream applications deficient because of the missing temporal information. Thus, the annotation language should be carefully designed. The second issue is the construction of a dataset for each language/purpose. Since TimeML appeared, there have been several other datasets that could be used for studies of temporal information processing. However, these are mostly English datasets. The datasets of other languages are relatively small and typically have many annotation errors, because the creators of such datasets did not take enough time to fully consider the characteristics of their target languages. This issue is strongly related to the first issue, because the dataset will be poor without a carefully designed annotation language for each target language. If the annotation language incorporates language-specific characteristics and is sufficiently expressive, then a high-quality dataset can be constructed using a part of the annotation language for its own purpose. For different purposes, different parts of the annotation language could be adopted to construct the dataset.
For example, if the purpose is to develop an application that simply recognizes temporal expressions, then it will be sufficient that the dataset contains only timex3 tags, without any other tags or attributes. It would be better, of course, if the dataset had all the tags and attributes defined by the annotation language. However, because manual annotation takes a great deal of time, it is necessary to determine which part of the annotation language to use for constructing the dataset, taking the purpose into consideration. The third issue is the temporal context. If the boundary of temporal expressions contains relative references, then it is necessary to design an algorithm for maintaining the temporal context. Given the two sentences "Tommy was born in 1990." and "After 10 years, he went to jail.", it will be difficult to get the value of 'After 10 years' without considering the current time in the previous sentence. The current time for each sentence is called its temporal context, and it is not trivial to design an algorithm to maintain the temporal context. The simplest algorithm is just to update the temporal context when there is a temporal expression of explicit reference within the corresponding sentence. The first sentence of the example above has the explicit reference '1990', so the temporal context can be updated to 1990. This algorithm may fail to track the temporal context in some cases. For example, if the two sentences above were followed by the sentence "After 2 years, he was released from the prison.", then the correct value of 'After 2 years' must be 2002. However, the simplest algorithm will give 1992, because there is no explicit reference in the previous sentence, and the value 1990 obtained from the first sentence is the latest temporal context. Although the algorithm has such problems, it works well in most cases because this problem does not happen very often (a small sketch of this bookkeeping is given at the end of this subsection). The fourth issue is the temporary knowledge-base (TKB). If the boundary of temporal expressions contains local implicit references, then it is necessary to design a method for maintaining the TKB. For the two sentences "Tommy was born in 1990." and "He went to jail when he was 10 years old.", it is impossible to infer the value of 'when he was 10 years old' without using the temporal information extracted from the first sentence. This is different from the temporal context, because this case requires a kind of semantic reasoning. For example, it is required to know that the value of 'when he was 10 years old' can be computed based on the knowledge of when the event 'born' happened in the first sentence. Thus, such knowledge extracted from the local implicit reference must be maintained. The collection of this knowledge is defined as the TKB, where the TKB can be maintained per paragraph, document, or even corpus. In most cases, the TKB will be maintained per document. Maintaining a TKB per document can be expensive, so it will be beneficial to develop an efficient TKB. The fifth issue is the external knowledge-base (EKB). If the boundary of temporal expressions contains global implicit references, then it is necessary to design a method for communicating with an EKB. For the sentence "During the Koryo Dynasty, the ancestors developed it.", it is impossible to obtain the value of 'the Koryo Dynasty' without using some external resources (e.g., a KB). Such a KB is defined as an EKB, as it essentially does not belong within the boundary of temporal information extraction.
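The bookkeeping described for the third issue can be sketched as follows. This is a minimal illustration rather than a complete normalizer: the regular expressions only cover explicit years and the 'After N years' pattern, and the function name is an assumption. Unlike the simplest algorithm above, this variant also updates the context with each resolved relative reference, so the third sentence of the example correctly yields 2002 instead of 1992.

```python
# Minimal sketch of temporal-context tracking across sentences.
# Covers only explicit years and the "After N years" pattern (assumptions).
import re

def resolve_years(sentences, document_creation_year=None):
    context_year = document_creation_year  # the current temporal context
    resolved = []
    for sent in sentences:
        explicit = re.search(r"\b(1\d{3}|20\d{2})\b", sent)
        relative = re.search(r"After (\d+) years", sent)
        if explicit:
            # an explicit reference updates the temporal context
            context_year = int(explicit.group(1))
        elif relative and context_year is not None:
            # resolve the relative reference and update the context with it
            context_year += int(relative.group(1))
        resolved.append((sent, context_year))
    return resolved

sentences = ["Tommy was born in 1990.",
             "After 10 years, he went to jail.",
             "After 2 years, he was released from the prison."]
for sent, year in resolve_years(sentences):
    print(year, "<-", sent)   # 1990, 2000, 2002
```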
6.2 Perspective on Development

The first issue is the development of annotation tools. This issue is related to the second issue above (dataset construction), because the annotation tools are supposed to be used to construct the datasets. It is also related to the first issue (the annotation language), because it must be determined which annotation language to use before development of the annotation tools is started. Given a particular annotation language, the annotation tools must satisfy three requirements. First, they should provide ways to annotate all the tags and attributes defined by the annotation language. Second, they should be easy for the annotators to use; this is about the interactions between humans and the tools. Third, they should be able to generate annotated files in at least one well-known format (e.g., XML format, JSON format). Well-developed annotation tools satisfying these requirements will make the construction of datasets easier and faster. The second issue is the system structure. As the task of temporal information extraction can be divided into several sub-tasks, it is necessary to determine how to design the structure of the system. For example, if the system has three sub-tasks—extraction of temporal expressions, extraction of event expressions, and extraction of temporal relations—then the system might have a cascade structure that conducts the three sub-tasks in order (a minimal sketch of such a cascade is given at the end of this subsection). Several factors must be considered in the design of the system structure. Some sub-tasks may be performed concurrently, and some sub-tasks may not be performed without the results from other particular sub-tasks. Some sub-tasks may benefit from the results of other sub-tasks. Furthermore, the system may require preprocessing (e.g., language analysis tools) or post-processing (e.g., result formatting). Thus, it is necessary to design the system structure with consideration of such factors. The third issue is the investigation of the usefulness of various feature generators. Given the text, raw features—e.g., part-of-speech (POS) tags, Named Entity (NE) tags, dependency structures—can be generated. Based on the raw features, higher-level features can be derived. For example, given a pair of morphemes with their POS tags, a high-level feature could be an indication of whether their POS tags are the same or not. Because manual feature engineering requires a great deal of time, several automatic feature generators have been proposed, such as tree-kernel functions, deep neural networks, and probabilistic topic models. The tree-kernel functions require dependency parsing as preprocessing and are known to convey syntactic patterns of the text, so they could be useful for relation extraction. The deep neural networks and topic models typically do not require linguistic knowledge, and they generate real-valued vectors or integer values as features. They are known to convey semantic features or semantic/syntactic features. Other feature generation methods could also be considered. The fourth issue is inter-sentence temporal relation extraction. Many studies of temporal information extraction have focused on extraction from each sentence independently. That is, the text boundary of these studies is a 'single sentence'. If the text boundary is larger than a 'single sentence' (e.g., multiple sentences), then it is necessary to find a way to extract inter-sentence temporal relations. In such cases, it should be determined whether implicit inter-sentence relations (e.g., the relations inferred by transitivity) are extracted or not.
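The cascade structure mentioned for the second issue can be sketched as follows; the three stage functions are placeholders (assumptions) standing in for real extractors, and only the overall control flow is of interest. The first two stages could also run concurrently, since only the relation stage depends on both of their outputs.

```python
# Minimal sketch of a cascade system structure for temporal information extraction.
# The stage functions are placeholders; a real system would plug in actual extractors.

def extract_timex3(text):
    """Stage 1: return (span, normalized value) pairs for temporal expressions."""
    return [("tomorrow morning", "FUTURE_REF")]           # placeholder output

def extract_events(text):
    """Stage 2: return (span, class) pairs for event expressions."""
    return [("go", "OCCURRENCE")]                          # placeholder output

def extract_tlinks(timex3_tags, event_tags):
    """Stage 3: relate each event to each temporal expression."""
    return [(e[0], "IS_INCLUDED", t[0]) for e in event_tags for t in timex3_tags]

def pipeline(text):
    timex3_tags = extract_timex3(text)    # stages 1 and 2 could run concurrently
    event_tags = extract_events(text)
    tlinks = extract_tlinks(timex3_tags, event_tags)       # needs both earlier stages
    return {"timex3": timex3_tags, "events": event_tags, "tlinks": tlinks}

print(pipeline("I will go there tomorrow morning."))
```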
6.3 Other Perspectives

The first issue is the investigation of ways to achieve high performance for each sub-task. The task of temporal information extraction can be divided into several sub-tasks, and the best methods for the different sub-tasks will be different. Thus, it is necessary to find the best method for each sub-task. According to state-of-the-art research trends, a rule-based approach is best for extraction of temporal expressions, while a machine-learning approach is best for extraction of event expressions. For each approach, it is also necessary to find the most effective algorithm or model. For example, even if the rule-based approach turns out to be the best for a particular sub-task, it is still required to find the best set of rules. Similarly, in terms of the machine-learning approach, it is still necessary to find the best specific machine-learning model. This issue also includes the parameter settings of the models. The second issue is harmony with other tasks. Temporal information will probably be combined with the results of some other tasks, such as spatial information extraction, co-reference resolution, or semantic role labeling. This issue might seem unrelated to the task of temporal information extraction, but unless it is considered there might be redundancy among the tasks. For example, if one uses the results of the task of semantic role labeling to help the task of temporal information extraction, then some of the predicted semantic roles might be the same as some of the extracted event expressions. Thus, it is necessary to find a way to avoid such redundancy, and to apply the semantic role labels effectively to the system of temporal information extraction. The third issue is contradiction resolution for temporal relations. There could be contradictions among the extracted temporal relations. For example, if event $e_1$ occurred before event $e_2$, and $e_2$ occurred before event $e_3$, then it is a contradiction when there is a temporal relation stating that event $e_3$ occurred before event $e_1$. This may happen often when the text boundary covers multiple sentences. The fourth issue is the definition of the task boundary structure. Although there have been many studies about the task of temporal information extraction, there was no clear definition of the structure of the task boundary. Most of the existing studies relied heavily on the task definitions provided by shared tasks (e.g., TempEval), but such task definitions miss some aspects of the temporal information to be extracted. For example, TempEval does not take transitivity into account as part of the task boundary, and transitivity can be a serious factor for temporal relation extraction. The fifth issue is the time zone. Given the question "When a plane took off from Incheon airport (South Korea) at 8:00 AM and landed in Shanghai (China) at 9:00 AM, what is the flight time?", a QA system will give the answer '1 hour' if it does not consider time zones. However, the answer is wrong, because there is a time lag of an hour between Incheon and Shanghai, so the actual flight time is two hours. To deal with this issue, it will be necessary to investigate a way to incorporate time zones into temporal information processing.
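The time-zone issue can be made concrete with a few lines of Python; the departure date and the IANA zone names below are assumptions chosen for the example (zoneinfo requires Python 3.9 or later).

```python
# Sketch of the flight-time example above with and without time-zone awareness.
from datetime import datetime
from zoneinfo import ZoneInfo

takeoff = datetime(2020, 1, 1, 8, 0, tzinfo=ZoneInfo("Asia/Seoul"))     # Incheon, UTC+9
landing = datetime(2020, 1, 1, 9, 0, tzinfo=ZoneInfo("Asia/Shanghai"))  # Shanghai, UTC+8

naive_answer = landing.replace(tzinfo=None) - takeoff.replace(tzinfo=None)
correct_answer = landing - takeoff

print(naive_answer)    # 1:00:00 -- the wrong '1 hour' answer
print(correct_answer)  # 2:00:00 -- the actual flight time
```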
7. Evaluation Metrics

When a dataset is annotated, it is necessary to evaluate the quality of the dataset. This is usually performed using Cohen's Kappa or Fleiss's Kappa [96]. Cohen's Kappa measures the agreement between two annotators, while Fleiss's Kappa measures the agreement among three or more annotators. Greater Kappa values indicate that the corresponding dataset is annotated in a more consistent way, which in turn implies that the dataset is more reliable. More details of the Kappa values can be found in [96]. When a system for temporal information extraction is developed, it is necessary to evaluate the system. For the evaluation, the dataset is typically divided into a training set, validation set, and test set, where the validation set is used to find the best parameter setting and the test set is completely unseen until the system is tested. The evaluation could also be performed using other methods such as k-fold cross validation, hold-out cross validation, and leave-one-out cross validation. When the dataset is prepared, it is necessary to determine which metric to use. There are a number of available metrics such as accuracy, the general $F_\beta$ score, ROC (receiver operating characteristic), and so on. Among the metrics, the most widely used ones are precision, recall, and the F1 score, which is computed by a combination of the precision and the recall. The F1 score is computed as in Eq. (1):

$$F_1 = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \qquad (1)$$

When the dataset is prepared and a particular metric is chosen, there are still several issues that must be considered during the evaluation. First, it must be determined how to evaluate the predicted extent of tags. The tag extent can be evaluated in a strict manner or a soft manner. For the strict manner, only perfectly predicted extents are regarded as correctly predicted tags, while for the soft manner, predicted extents with small errors are also regarded as correctly predicted tags. For the sentence "I will go there tomorrow morning.", there is one timex3 tag whose correct extent is 'tomorrow morning'. If the system predicts that the extent of the timex3 tag is 'tomorrow', then it will be regarded as an incorrect prediction under the strict manner. On the other hand, when the soft manner with a '1 token error' is employed, the tag extent 'tomorrow' is also regarded as a correct prediction. In most cases, the strict manner is used to measure the tag extents.
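A small sketch of these evaluation choices—strict versus soft extent matching combined with the precision/recall/F1 computation of Eq. (1)—is given below; the span representation and function names are assumptions for illustration.

```python
# Sketch of span-level evaluation with strict and soft extent matching.

def matches(gold_span, pred_span, mode="strict", tolerance=1):
    """Spans are (start_token, end_token) index pairs, inclusive."""
    if mode == "strict":
        return gold_span == pred_span
    # soft mode: allow a small boundary error on either side
    return (abs(gold_span[0] - pred_span[0]) <= tolerance and
            abs(gold_span[1] - pred_span[1]) <= tolerance)

def precision_recall_f1(gold, predicted, mode="strict"):
    tp = sum(1 for p in predicted if any(matches(g, p, mode) for g in gold))
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# "I will go there tomorrow morning." -> gold timex3 extent is tokens 4-5
gold = [(4, 5)]   # 'tomorrow morning'
pred = [(4, 4)]   # the system predicted only 'tomorrow'
print(precision_recall_f1(gold, pred, mode="strict"))  # (0.0, 0.0, 0.0)
print(precision_recall_f1(gold, pred, mode="soft"))    # (1.0, 1.0, 1.0)
```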
Second, it is necessary to determine whether the tag attributes will be evaluated independently or not. If the predicted timex3 tag has two attributes, type and value, then each attribute can be evaluated independently or evaluated in a sequential manner. The process for the sequential manner is that, given a particular order of attributes, the evaluation of each attribute is performed using only the tags with correctly predicted precedent attributes. For example, when the order of timex3 attributes is the sequence of extent, type, and value, then the type prediction is evaluated using only the tags with correctly predicted extents. Thus, the performance of extent prediction will influence the performance of the following attributes (e.g., type and value). Third, it must be determined whether the prediction of temporal relation tags (e.g., tlink tags) is performed using the other correct tags or not, where the other tags are timex3, event, and makeinstance tags. Because the temporal relation is a relation between two argument tags, there are two ways to evaluate the relation tags: (1) evaluation given the other correct tags and (2) evaluation given the other predicted tags. It would be best, of course, if both ways were performed.

Although the history of the field of temporal information extraction is short, there have been many studies related to this subject. In this paper, studies related to temporal information extraction are discussed and summarized in chronological order. The history of annotation languages and shared tasks is described, and some issues about temporal information (e.g., task boundary, research issues, evaluation metrics, and applications) are discussed. To summarize the trend of recent research on temporal information extraction, many studies have focused on applying various methods to the task, because the size of the datasets to be handled and machine-learning technologies are both developing rapidly. So far, the rule-based approach seems best for the task of timex3 extraction, while the data-driven approach (e.g., CRF, SVM) seems best for the tasks of event and tlink extraction. In the future, further effort in rule/feature engineering will be necessary to improve performance, and other machine-learning models should be designed or investigated. It is also necessary to find a way to wisely combine the rules and the machine-learning models, because the combination may result in synergistic effects.

This work was supported by an Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2013-2-00131, Development of Knowledge Evolutionary WiseQA Platform Technology for Human Knowledge Augmented Services), and a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2019021348).

Chae-Gyun Lim
He is currently a PhD candidate in the School of Computing at Korea Advanced Institute of Science and Technology (KAIST), Korea. In 2011, he received a B.S. in Medical Computer Science from Eulji University, Korea. Between 2011 and 2013, he worked as a research assistant in the Department of Computer Science at KAIST, Korea, and in 2015 he received an M.Sc. from the Department of Computer Engineering at Kyung Hee University, Korea. His research interests include temporal information extraction, topic modeling, big data analysis and bioinformatics.

Young-Seob Jeong
He received a B.S. in Computer Science from Hanyang University, Korea, in 2012, an M.Sc. in Computer Science from Korea Advanced Institute of Science and Technology (KAIST), Korea, and in 2016, a Ph.D. from the School of Computing at KAIST, Korea. He joined the faculty of the Department of Big Data Engineering at Soonchunhyang University, Asan, Korea, in 2017. His current research topics are text mining, information extraction, action recognition, and dialog systems, where his favorite techniques are topic modeling and deep learning.

Ho-Jin Choi
He is currently an associate professor in the School of Computing at Korea Advanced Institute of Science and Technology (KAIST). In 1982, he received a B.S. in Computer Engineering from Seoul National University, Rep. of Korea. In 1985, he obtained an M.Sc. in Computing Software and Systems Design from Newcastle University, UK, and in 1995, a Ph.D. in Artificial Intelligence from Imperial College, London, UK.
Currently, he serves as a member of the board of directors for the Software Engineering Society of Korea, the Computational Intelligence Society of Korea, and the Korean Society of Medical Informatics. His current research interests include artificial intelligence, data mining, software engineering, and biomedical informatics. 1 M. Baldassarre, "Think big: learning contexts, algorithms and data science," Research on Education and Media, vol. 8, no. 2, pp. 69-83, 2016.doi:[[[10.1515/rem-2016-0020]]] 2 Wikipedia, (Online). Available:, https://en.wikipedia.org/wiki/ 3 F. Schilder, C. Habel, "Temporal information extraction for temporal question answering," in New Directions in Question Answering: Papers from the 2003 AAAI Symposium. Menlo ParkCA: AAAI Press, pp. 35-44, 2003.custom:[[[-]]] 4 O. Alonso, M. Gertz, R. Baeza-Yates, "On the value of temporal information in information retrieval," ACM SIGIR Forum, vol. 41, no. 2, pp. 35-41, 2007.doi:[[[10.1145/1328964.1328968]]] 5 A. Setzer, R. J. Gaizauskas, "Annotating events and temporal information in newswire texts," in Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC), Athens, Greece, 2000;pp. 1287-1294. custom:[[[-]]] 6 US Advanced Research Projects Agency, Fifth Message Understanding Conference (MUC-5): Proceedings of a Conference Held in Baltimore, Maryland, August 25-27, 1993, CA: Morgan Kaufmann, San Francisco, 1993.custom:[[[-]]] 7 Online, 1995 (), Available: https://cs.nyu.edu/cs/faculty/ grishman/NEtask20.book_1.html, Available: https://cs.nyu.edu/cs/faculty/ grishman/NEtask20.book_1.html. custom:[[[-]]] 8 N. Chinchor, "Appendix D: MUC-7 Information extraction task definition (version 5.1)," in Proceedings of the 7th Message Understanding Conference (MUC-7), Fairfax, VA, 1998;custom:[[[-]]] 9 P. Kim, S. H. Myaeng, "Usefulness of temporal information automatically extracted from news articles for topic tracking," ACM Transactions on Asian Language Information Processing (TALIP), vol. 3, no. 4, pp. 227-242, 2004.doi:[[[10.1145/1039621.1039624]]] 10 J. Allan, J. G. Carbonell, G. Doddington, J. Yamron, Y. Yang, "Topic detection and tracking pilot study final report," in Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, Lansdowne, VA, 1998;pp. 194-218. custom:[[[-]]] 11 Y. Yang, J. G. Carbonell, R. D. Brown, T. Pierce, B. T. Archibald, X. Liu, "Learning approaches for detecting and tracking news events," IEEE Intelligent Systems and their Applications, vol. 14, no. 4, pp. 32-43, 1999.doi:[[[10.1109/5254.784083]]] 12 M. Verhagen, R. Gaizauskas, F. Schilder, M. Hepple, J. Moszkowicz, J. Pustejovsky, "The TempEval challenge: identifying temporal relations in text," Language Resources and Evaluation, vol. 43, no. 2, pp. 161-179, 2009.doi:[[[10.1007/s10579-009-9086-z]]] 13 M. Verhagen, R. Sauri, T. Caselli, J. Pustejovsky, "SemEval-2010 Task 13: TempEval-2," in Proceedings of the 5th International Workshop on Semantic Evaluation, Uppsala, Sweden, 2010;pp. 57-62. custom:[[[-]]] 14 N. UzZaman, H. Llorens, L. Derczynski, J. Allen, M. Verhagen, J. Pustejovsky, "Semeval-2013 task 1: Tempeval-3: evaluating time expressions, events, and temporal relations," in Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval), Atlanta, GA, 2013;pp. 1-9. custom:[[[-]]] 15 J. Strotgen, M. 
Gertz, "HeidelTime: High quality rule-based extraction and normalization of temporal expressions," in Proceedings of the 5th International Workshop on Semantic Evaluation, Uppsala, Sweden, 2010;pp. 321-324. custom:[[[-]]] 16 H. Jung, A. Stent, "ATT1: temporal annotation using big windows and rich syntactic and semantic features," in Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval), Atlanta, GA, 2013;pp. 20-24. custom:[[[-]]] 17 S. Bethard, "Cleartk-timeml: a minimalist approach to TempEval 2013," in Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval), Atlanta, GA, 2013;pp. 10-14. custom:[[[-]]] 18 B. E. Boser, I. M. Guyon, V. N. Vapnik, "A training algorithm for optimal margin classifiers," in Proceedings of the 5th Annual Workshop on Computational Learning Theory, Pittsburgh, PA, 1992;pp. 144-152. custom:[[[-]]] 19 C. Cortes, V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.doi:[[[10.1007/BF00994018]]] 20 The Informatics for Integrating Biology and the Bedside (i2b2), 2012 (Online). Available:, https://www.i2b2.org/NLP/TemporalRelations/ 21 S. Bethard, L. Derczynski, G. Savova, J. Pustejovsky, M. Verhagen, "Semeval-2015 task 6: clinical TempEval," in Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval), Denver, CO, 2015;pp. 806-814. custom:[[[-]]] 22 L. Ferro, I. Mani, B. Sundheim, G. Wilson, "TIDES Temporal Annotation Guidelines (version 1.0.2)," The MITRE CorporationMcLean, VA, 2001.custom:[[[-]]] 23 Data elements and interchange formats – Information interchange –Representation of dates and times, ISO 8601, 2004, ISO 8601, Data elements and interchange formats – Information interchange –Representation of dates and times, 2004.custom:[[[-]]] 24 J. Pustejovsky, J. M. Castano, R. Ingria, R. Sauri, R. J. Gaizauskas, A. Setzer, G. Katz, D. Radev, "TimeML: robust specification of event and temporal expressions in text," in Proceedings of AAAI Spring Symposium on New Directions Question Answering, Stanford, CA, 2003;pp. 28-34. custom:[[[-]]] 25 G. Katz, F. Arosio, "The annotation of temporal information in natural language sentences," in Proceedings of the Workshop on Temporal and Spatial Information Processing, Stroudsburg, PA, 2001;custom:[[[-]]] 26 Language resources management - Semantic annotation framework (SemAF) - Part1: Time and events, ISO 24617-1:2012, 2012, ISO 24617-1:, Language resources management - Semantic annotation framework (SemAF) - Part1: Time and events, 2012.custom:[[[-]]] 27 T. Caselli, V. B. Lenzi, R. Sprugnoli, E. Pianta, I. Prodanof, "Annotating events, temporal expressions and relations in Italian: the It-TimeML experience for the Ita-TimeBank," in Proceedings of the 5th Linguistic Annotation Workshop, Portland, OR, 2011;pp. 143-151. custom:[[[-]]] 28 S. Im, H. You, H. Jang, S. Nam, H. Shin, "KTimeML: specification of temporal and event expressions in Korean text," in Proceedings of the 7th Workshop on Asian Language Resources, Singapore, 2009;pp. 115-122. custom:[[[-]]] 29 Y. S. Jeong, W. T. Joo, H. W. Do, C. G. Lim, K. S. Choi, H. J. Choi, "Korean TimeML and Korean TimeBank," in Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC), Portoroz, Slovenia, 2016;pp. 356-359. custom:[[[-]]] 30 B. C. Bruce, "A model for temporal references and its application in a question answering program," Artificial Intelligence: An International Journal, vol. 3, pp. 1-26, 1972.doi:[[[10.1016/0004-3702(72)90040-9]]] 31 J. 
F. Allen, Communications of the ACM, vol, 26, no. 11, pp. 832-843, 1983.custom:[[[-]]] 32 D. R. Dowty, "The effects of aspectual class on the temporal structure of discourse: semantics or pragmatics?," Linguistics and Philosophy, vol. 9, no. 1, pp. 37-61, 1986.custom:[[[-]]] 33 B. L. Webber, "Tense as discourse anaphor," Computational Linguistics, vol. 14, no. 2, pp. 61-73, 1988.custom:[[[-]]] 34 R. J. Passonneau, "A computational model of the semantics of tense and aspect," Computational Linguistics, vol. 14, no. 2, pp. 44-60, 1988.custom:[[[-]]] 35 M. Moens, M. Steedman, "Temporal ontology and temporal reference," Computational Linguistics, vol. 14, no. 2, pp. 15-28, 1988.custom:[[[-]]] 36 F. Song, R. Cohen, "Tense interpretation in the context of narrative," in Proceedings 9th National Conference on Artificial Intelligence, Anaheim, CA, 1991;pp. 131-136. custom:[[[-]]] 37 C. H. Hwang, L. K. Schubert, "Tense trees as the "fine structure" of discourse," in Proceedings of the 30th Annual Meeting on Association for Computational Linguistics, Newark, DE, 1992;pp. 232-240. custom:[[[-]]] 38 D. Llido, R. Berlanga, M. J. Aramburu, "Extracting temporal references to assign document event-time periods," in Database and Expert Systems Applications. Heidelberg: Springer, pp. 62-71, 2001.custom:[[[-]]] 39 M. J. Aramburu-Cabo, R. Berlanga-Llavori, "Retrieval of information from temporal document databases," in Object-Oriented Technology: ECOOP 1999 Workshop Reader. Heidelberg: Springerp. 215, 1999.custom:[[[-]]] 40 I. Mani, B. Schiffman, J. Zhang, "Inferring temporal ordering of events in news," in Proceedings of Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), Edmonton, Canada, 2003;pp. 55-57. custom:[[[-]]] 41 I. Mani, G. Wilson, "Robust temporal processing of news," in In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, Hong Kong, China, 2000;pp. 69-76. custom:[[[-]]] 42 I. Mani, in Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP), Borovets, Bulgaria, 2003, pp, 45-60, 45-60. custom:[[[-]]] 43 Tango - annotation tool (Online). Available:, http://www.timeml.org/tango/tool.html 44 D. S. Day, C. McHenry, R. Kozierok, L. D. Riek, "Callisto: a configurable annotation workbench," in Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC), Lisbon, Portugal, 2004;custom:[[[-]]] 45 J. Pustejovsky, J. Littman, R. Sauri, "Argument structure in TimeML," in Dagstuhl Seminar Proceedings. Wadern, Germany: Schloss Dagstuhl, Leibniz-Zentrum für Informatik, 2006;custom:[[[-]]] 46 M. Verhagen, in Annotating, Extracting and Reasoning about Time and Events, Heidelberg: Springer, pp. 7-28, 2007.custom:[[[-]]] 47 R. Sauri, R. Knippen, M. Verhagen, J. Pustejovsky, "Evita: a robust event recognizer for QA systems," in Proceedings of the Conference on Human Language Technology and Empirical Methods Natural Language Processing, Vancouver, Canada, 2005;pp. 700-707. custom:[[[-]]] 48 M. Verhagen, I. Mani, R. Sauri, R. Knippen, J. B. Jang, J. Littman, A. Rumshisky, J. Phillips, J. Pustejovsky, "Automating temporal annotation with TARSQI," in Proceedings of the ACL Interactive Poster and Demonstration Sessions, Ann Arbor, MI, 2005;pp. 81-84. custom:[[[-]]] 49 W. Mingli, L. Wenjie, L. Qin, L. Baoli, in Natural Language Processing – IJCNLP 2005, Heidelberg: Springer, pp. 694-706, 2005.custom:[[[-]]] 50 A. Berglund, R. 
Johansson, P. Nugues, "A machine learning approach to extract temporal information from texts in Swedish and generate animated 3D scenes," in Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Trento, Italy, 2006;pp. 385-392. custom:[[[-]]] 51 N. Chambers, S. Wang, D. Jurafsky, "Classifying temporal relations between events," in Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, Prague, Czech Republic, 2007;pp. 173-176. custom:[[[-]]] 52 J. Poveda, M. Surdeanu, J. Turmo, "A comparison of statistical and rule-induction learners for automatic tagging of time expressions in English," in Proceedings of the 14th International Symposium on Temporal Representation and Reasoning (TIME'07), Alicante, Spain, 2007;pp. 141-149. custom:[[[-]]] 53 T. Brants, 1998 (Online). Available:, http://www.coli.uni-saarland.de/~thorsten/tnt/ 54 T. Kudo, 2013 (Online). Available:, http://chasen.org/~taku/software/yamcha/ 55 N. Chambers, D. Jurafsky, "Jointly combining implicit constraints improves temporal ordering," in Proceedings of the Conference on Empirical Methods Natural Language Processing, Honolulu, HI, 2008;pp. 698-706. custom:[[[-]]] 56 K. Yoshikawa, S. Riedel, M. Asahara, Y. Matsumoto, "Jointly identifying temporal relations with Markov logic," in Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, Singapore, 2009;pp. 405-413. custom:[[[-]]] 57 N. UzZaman, J. F. Allen, "Event and temporal expression extraction from raw text: first step towards a temporally aware system," International Journal of Semantic Computing, vol. 4, no. 4, pp. 487-508, 2010.doi:[[[10.1142/S1793351X10001097]]] 58 J. Strotgen, M. Gertz, P. Popov, "Extraction and exploration of spatio-temporal information in documents," in Proceedings of the 6th Workshop on Geographic Information Retrieval, Zurich, Switzerland, 2010;custom:[[[-]]] 59 Apache Software Foundation, 2013 (Online). Available:, http://uima.apache.org/ 60 Qbase, (Online). Available:, http://qbase.com/products/metacarta/ 61 P. Mazur, R. Dale, "WikiWars: a new corpus for research on temporal expressions," in Proceedings of the 2010 Conference on Empirical Methods Natural Language Processing, Cambridge, MA, 2010;pp. 913-922. custom:[[[-]]] 62 J. Strotgen, M. Gertz, "TimeTrails: a system for exploring spatio-temporal information in documents," in Proceedings of the VLDB Endowment, 2010;vol. 3, no. 1-2, pp. 1569-1572. custom:[[[-]]] 63 X. Ling, D. S. Weld, "Temporal information extraction," in Proceedings of the 24th AAAI Conference on Artificial Intelligence, Atlanta, GA, 2010;pp. 1385-1390. custom:[[[-]]] 64 H. Llorens, E. Saquete, B. Navarro, "TIPSem (English and Spanish): evaluating CRFs and semantic roles in tempeval-2," in Proceedings of the 5th International Workshop on Semantic Evaluation, Los Angeles, CA, 2010;pp. 284-291. custom:[[[-]]] 65 Y. Wang, M. Zhu, L. Qu, M. Spaniol, G. Weikum, "Timely YAGO: harvesting, querying, and visualizing temporal knowledge from Wikipedia," in Proceedings of the 13th International Conference on Extending Database Technology, 2010;pp. 697-700. custom:[[[-]]] 66 O. Alonso, J. Strotgen, R. A. Baeza-Yates, M. Gertz, "Temporal information retrieval: challenges and opportunities," in Proceedings of Workshop on Linked Data on the Web, Hyderabad, India, 2011;pp. 1-8. custom:[[[-]]] 67 S. A. Mirroshandel, G. 
Ghassem-Sani, "Temporal relation extraction using expectation maximization," in Proceedings of the International Conference Recent Advances Natural Language Processing, Hissar, Bulgaria, 2011;pp. 218-225. custom:[[[-]]] 68 Y. Wang, B. Yang, L. Qu, M. Spaniol, G. Weikum, "Harvesting facts from textual web sources by constrained label propagation," in Proceedings of the 20th ACM International Conference on Information and Knowledge Management, Glasgow, Scotland, 2011;pp. 837-846. custom:[[[-]]] 69 J. Hoffart, F. M. Suchanek, K. Berberich, E. Lewis-Kelham, G. De Melo, G. Weikum, "YAGO2: exploring and querying world knowledge in time, space, context, and many languages," in Proceedings of the 20th International Conference Companion on World Wide Web, Hyderabad, India, 2011;pp. 229-232. custom:[[[-]]] 70 M. S. Fabian, K. Gjergji, W. Gerhard, "Yago: a core of semantic knowledge unifying WordNet and Wikipedia," in Proceedings of the 16th International World Wide Web Conference, Banff, Canada, 2007;pp. 697-706. custom:[[[-]]] 71 WordNet (Online). Available:, https://wordnet.princeton.edu/ 72 Y. Wang, M. Dylla, M. Spaniol, G. Weikum, "Coupling label propagation and constraints for temporal fact extraction," in Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers, Jeju, Korea, 2012;pp. 233-237. custom:[[[-]]] 73 A. X. Chang, C. D. Manning, "SUTime: a library for recognizing and normalizing time expressions," in Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC), Istanbul, Turkey, 2012;pp. 3735-3740. custom:[[[-]]] 74 S. A. Mirroshandel, G. Ghassem-Sani, "Towards unsupervised learning of temporal relations between events," Journal of Artificial Intelligence Research, vol. 45, pp. 125-163, 2012.doi:[[[10.1613/jair.3693]]] 75 T. Strohman, D. Metzler, H. Turtle, W. B. Croft, "Indri: a language model-based search engine for complex queries," in Proceedings of the International Conference on Intelligent Analysis, McLean, VA, 2005;pp. 2-6. custom:[[[-]]] 76 E. Kuzey, G. Weikum, "Extraction of temporal facts and events from Wikipedia," in Proceedings of the 2nd Temporal Web Analytics Workshop, Lyon, France, 2012;pp. 25-32. custom:[[[-]]] 77 I. Berrazega, "Temporal information processing: a survey," International Journal on Naturel Language Computing, vol. 1, no. 2, pp. 1-14, 2012.doi:[[[10.5121/ijnlc.2012.1201]]] 78 B. Tang, Y. Wu, M. Jiang, Y. Chen, J. C. Denny, H. Xu, "A hybrid system for temporal information extraction from clinical text," Journal of the American Medical Informatics Association, vol. 20, no. 5, pp. 828-835, 2013.doi:[[[10.1136/amiajnl-2013-001635]]] 79 P. Jindal, D. Roth, "Extraction of events and temporal expressions from clinical narratives," Journal of Biomedical Informatics, vol. 46, pp. S13-S19, 2013.doi:[[[10.1016/j.jbi.2013.08.010]]] 80 S. Bethard, "A synchronous context free grammar for time normalization," in Proceedings of the Conference on Empirical Methods Natural Language Processing, Seattle, WA, 2013;pp. 821-826. custom:[[[-]]] 81 Y. K. Lin, H. Chen, R. A. Brown, "MedTime: a temporal information extraction system for clinical narratives," Journal of Biomedical Informatics, vol. 46, pp. S20-S28, 2013.doi:[[[10.1016/j.jbi.2013.07.012]]] 82 R. Campos, G. Dias, A. M. Jorge, A. Jatowt, "Survey of temporal information retrieval and related applications," ACM Computing Surveys (CSUR), vol. 47, no. 2, 2015.doi:[[[10.1145/2619088]]] 83 K. Lee, Y. Artzi, J. Dodge, L. 
Zettlemoyer, "Context-dependent semantic parsing for time expressions," in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, Baltimore, MD, 2014;pp. 1437-1447. custom:[[[-]]] 84 H. Ji, T. Cassidy, Q. Li, S. Tamang, "Tackling representation, annotation and classification challenges for temporal knowledge base population," Knowledge and Information Systems, vol. 41, no. 3, pp. 611-646, 2014.doi:[[[10.1007/s10115-013-0675-1]]] 85 T. Cassidy, "Temporal information extraction and knowledge base population," Ph.D. dissertationCity University of New York, NY, 2014.custom:[[[-]]] 86 Y. S. Jeong, Z. M. Kim, H. W. Do, C. G. Lim, H. J. Choi, "Temporal information extraction from Korean texts," in Proceedings of the 19th Conference on Computational Natural Language Learning, Beijing, China, 2015;pp. 279-288. custom:[[[-]]] 87 Y. S. Jeong, H. J. Choi, "Language independent feature extractor," in Proceedings of the 29th AAAI Conference on Artificial Intelligence, Austin, TX, 2015;pp. 4170-4171. custom:[[[-]]] 88 S. Bethard, G. Savova, W. T. Chen, L. Derczynski, J. Pustejovsky, M. Verhagen, "Semeval-2016 task 12: clinical TempEval," in Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval), San Diego, CA, 2016;pp. 1052-1062. custom:[[[-]]] 89 S. MacAvaney, A. Cohan, N. Goharian, "GUIR at SemEval-2017 Task 12: a framework for cross-domain clinical temporal information extraction," in Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval), Vancouver, Canada, 2017;pp. 1024-1029. custom:[[[-]]] 90 P. R. Sarath, R. Manikandan, Y. Niwa, "Hitachi at SemEval-2017 Task 12: system for temporal information extraction from clinical notes," in Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval), Vancouver, Canada, 2017;pp. 1005-1009. custom:[[[-]]] 91 A. Leeuwenberg, M. F. Moens, "KULeuven-LIIR at SemEval-2017 Task 12: cross-domain temporal information extraction from clinical records," in Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval), Vancouver, Canada, 2017;pp. 1030-1034. custom:[[[-]]] 92 J. Tourille, O. Ferret, X. Tannier, A. Neveol, "LIMSI-COT at SemEval-2017 Task 12: neural architecture for temporal information extraction from clinical narratives," in Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval), 2017;pp. 597-602. custom:[[[-]]] 93 E. Laparra, D. Xu, A. Elsayed, S. Bethard, M. Palmer, "SemEval 2018 Task 6: parsing time normalizations," in Proceedings of the 12th International Workshop on Semantic Evaluation, New Orleans, LA, 2018;pp. 88-96. custom:[[[-]]] 94 S. Bethard, J. Parker, "A semantically compositional annotation scheme for time normalization," in Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC), Portoroz, Slovenia, 2016;pp. 3779-3786. custom:[[[-]]] 95 A. Olex, L. Maffey, N. Morgan, B. McInnes, "Chrono at SemEval-2018 Task 6: a system for normalizing temporal expressions," in Proceedings of the 12th International Workshop on Semantic Evaluation, New Orleans, LA, 2018;pp. 97-101. custom:[[[-]]] 96 J. Pustejovsky, A. 
Stubbs, Natural Language Annotation for Machine Learning: A Guide to Corpus-Building for Applications, CA: O'Reilly Media, Sebastopol, 2012.

Task boundary (summary):
- Temporal expressions (multiple choice): explicit reference; implicit reference (i.e., global implicit reference, local implicit reference); relative reference; vague reference; non-consuming reference
- Event expressions: both verbs and nouns
- Temporal relations:
  - Kind boundary (multiple choice): temporal links; subordinated links; aspectual links
  - Text boundary: single sentence/multiple sentences; single paragraph/multiple paragraphs; single document/multiple documents
  - Transitivity boundary: transitivity boundary for types of temporal links (only available when the kind boundary contains temporal links): TT tlink, TM tlink, MM tlink, DM tlink

Temporal interval relations:
- X < Y: X takes place before Y
- Y > X: Y takes place after X
- X m Y: X meets Y (X ends with the beginning of Y); Y mi X: inverse notation
- X o Y: X overlaps with Y; Y oi X: inverse notation
- X s Y: X starts Y; Y si X: inverse notation
- X d Y: X during Y; Y di X: inverse notation
- X f Y: X finishes Y; Y fi X: inverse notation
- X = Y: X is equal to Y
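For illustration, the relation inventory above can be held in a small lookup table that maps each relation symbol to its inverse notation; the representation below is an assumption for the example, not part of any annotation standard.

```python
# Sketch: temporal interval relations and their inverse notations.
INVERSE = {
    "<": ">",  ">": "<",    # before / after
    "m": "mi", "mi": "m",   # meets
    "o": "oi", "oi": "o",   # overlaps
    "s": "si", "si": "s",   # starts
    "d": "di", "di": "d",   # during
    "f": "fi", "fi": "f",   # finishes
    "=": "=",               # equal is its own inverse
}

def invert(x, rel, y):
    """If 'x rel y' holds, return the equivalent statement about y and x."""
    return (y, INVERSE[rel], x)

print(invert("X", "<", "Y"))   # ('Y', '>', 'X'): Y takes place after X
```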
39 Elastic Materials

Reference: C. Kittel, Introduction to Solid State Physics, John Wiley and Sons, Inc., New York, 2nd ed., 1956.

39–1 The tensor of strain

In the last chapter we talked about the distortions of particular elastic objects. In this chapter we want to look at what can happen in general inside an elastic material. We would like to be able to describe the conditions of stress and strain inside some big glob of jello which is twisted and squashed in some complicated way. To do this, we need to be able to describe the local strain at every point in an elastic body; we can do it by giving a set of six numbers—which are the components of a symmetric tensor—for each point. Earlier, we spoke of the stress tensor (Chapter 31); now we need the tensor of strain.

Fig. 39–1. A speck of the material at the point $P$ in an unstrained block moves to $P'$ where the block is strained.

Imagine that we start with the material initially unstrained and watch the motion of a small speck of "dirt" embedded in the material when the strain is applied. A speck that was at the point $P$ located at $\FLPr=(x,y,z)$ moves to a new position $P'$ at $\FLPr'=(x',y',z')$ as shown in Fig. 39–1. We will call $\FLPu$ the vector displacement from $P$ to $P'$. Then \begin{equation} \label{Eq:II:39:1} \FLPu=\FLPr'-\FLPr. \end{equation} The displacement $\FLPu$ depends, of course, on which point $P$ we start with, so $\FLPu$ is a vector function of $\FLPr$—or, if you prefer, of $(x,y,z)$.

Fig. 39–2. A homogeneous stretch-type strain.

Let's look first at a simple situation in which the strain is constant over the material—so we have what is called a homogeneous strain. Suppose, for instance, that we have a block of material and we stretch it uniformly.
We just change its dimensions uniformly in one direction—say, in the $x$-direction, as shown in Fig. 39–2. The motion $u_x$ of a speck at $x$ is proportional to $x$. In fact, \begin{equation*} \frac{u_x}{x}=\frac{\Delta l}{l}. \end{equation*} We will write $u_x$ this way: \begin{equation*} u_x=e_{xx}x. \end{equation*} The proportionality constant $e_{xx}$ is, of course, the same thing as $\Delta l/l$. (You will see shortly why we use a double subscript.) If the strain is not uniform, the relation between $u_x$ and $x$ will vary from place to place in the material. For the general situation, we define the $e_{xx}$ by a kind of local $\Delta l/l$, namely by \begin{equation} \label{Eq:II:39:2} e_{xx}=\ddpl{u_x}{x}. \end{equation} This number—which is now a function of $x$, $y$, and $z$—describes the amount of stretching in the $x$-direction throughout the hunk of jello. There may, of course, also be stretching in the $y$- and $z$-directions. We describe them by the numbers \begin{equation} \label{Eq:II:39:3} e_{yy}=\ddp{u_y}{y},\quad e_{zz}=\ddp{u_z}{z}. \end{equation} Fig. 39–3.A homogenous shear strain. We need to be able to describe also the shear-type strains. Suppose we imagine a little cube marked out in the initially undisturbed jello. When the jello is pushed out of shape, this cube may get changed into a parallelogram, as sketched in Fig. 39–3.1 In this kind of a strain, the $x$-motion of each particle is proportional to its $y$-coordinate, \begin{equation} \label{Eq:II:39:4} u_x=\frac{\theta}{2}\,y. \end{equation} And there is also a $y$-motion proportional to $x$, \begin{equation} \label{Eq:II:39:5} u_y=\frac{\theta}{2}\,x. \end{equation} So we can describe such a shear-type strain by writing \begin{equation*} u_x=e_{xy}y,\quad u_y=e_{yx}x \end{equation*} with \begin{equation*} e_{xy}=e_{yx}=\frac{\theta}{2}. \end{equation*} Now you might think that when the strains are not homogeneous we could describe the generalized shear strains by defining the quantities $e_{xy}$ and $e_{yx}$ by \begin{equation} \label{Eq:II:39:6} e_{xy}=\ddp{u_x}{y},\quad e_{yx}=\ddp{u_y}{x}. \end{equation} But there is one difficulty. Suppose that the displacements $u_x$ and $u_y$ were given by \begin{equation*} u_x=\frac{\theta}{2}\,y,\quad u_y=-\frac{\theta}{2}\,x \end{equation*} They are like Eqs. (39.4) and (39.5) except that the sign of $u_y$ is reversed. With these displacements a little cube in the jello simply gets shifted by the angle $\theta/2$, as shown in Fig. 39–4. There is no strain at all—just a rotation in space. There is no distortion of the material; the relative positions of all the atoms are not changed at all. We must somehow make our definitions so that pure rotations are not included in our definitions of a shear strain. The key point is that if $\ddpl{u_y}{x}$ and $\ddpl{u_x}{y}$ are equal and opposite, there is no strain; so we can fix things up by defining \begin{equation*} e_{xy}=e_{yx}=\tfrac{1}{2}(\ddpl{u_y}{x}+\ddpl{u_x}{y}). \end{equation*} For a pure rotation they are both zero, but for a pure shear we get that $e_{xy}$ is equal to $e_{yx}$, as we would like. Fig. 39–4.A homogenous rotation—there is no strain. 
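As a quick check, apply this definition to the two displacement fields above. For the shear of Eqs. (39.4) and (39.5),
\begin{equation*}
e_{xy}=\tfrac{1}{2}\biggl(\ddp{u_y}{x}+\ddp{u_x}{y}\biggr)=
\tfrac{1}{2}\biggl(\frac{\theta}{2}+\frac{\theta}{2}\biggr)=\frac{\theta}{2},
\end{equation*}
while for the pure rotation, with $u_x=(\theta/2)\,y$ and $u_y=-(\theta/2)\,x$,
\begin{equation*}
e_{xy}=\tfrac{1}{2}\biggl(-\frac{\theta}{2}+\frac{\theta}{2}\biggr)=0,
\end{equation*}
so the symmetric combination keeps the shear and discards the rotation, as we wanted.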
In the most general distortion—which may include stretching or compression as well as shear—we define the state of strain by giving the nine numbers \begin{equation} \begin{aligned} e_{xx}&=\ddp{u_x}{x},\\[2pt] e_{yy}&=\ddp{u_y}{y},\\[-2pt] &\qquad\vdots\\ e_{xy}&=\tfrac{1}{2}(\ddpl{u_y}{x}+\ddpl{u_x}{y}),\\[-4pt] &\qquad\vdots \end{aligned} \label{Eq:II:39:7} \end{equation} These are the terms of a tensor of strain. Because it is a symmetric tensor—our definitions make $e_{xy}=e_{yx}$, always—there are really only six different numbers. You remember (see Chapter 31) that the general characteristic of a tensor is that the terms transform like the products of the components of two vectors. (If $\FLPA$ and $\FLPB$ are vectors, $C_{ij}=A_iB_j$ is a tensor.) Each term of $e_{ij}$ is a product (or the sum of such products) of the components of the vector $\FLPu=(u_x,u_y,u_z)$, and of the operator $\FLPnabla=(\ddpl{}{x},\ddpl{}{y},\ddpl{}{z})$, which we know transforms like a vector. Let's let $x_1$, $x_2$, and $x_3$ stand for $x$, $y$, and $z$ and $u_1$, $u_2$, and $u_3$ stand for $u_x$, $u_y$, and $u_z$; then we can write the general term $e_{ij}$ of the strain tensor as \begin{equation} \label{Eq:II:39:8} e_{ij}=\tfrac{1}{2}(\ddpl{u_j}{x_i}+\ddpl{u_i}{x_j}), \end{equation} where $i$ and $j$ can be $1$, $2$, or $3$. When we have a homogeneous strain—which may include both stretching and shear—all of the $e_{ij}$ are constants, and we can write \begin{equation} \label{Eq:II:39:9} u_x=e_{xx}x+e_{xy}y+e_{xz}z. \end{equation} (We choose our origin of $x$, $y$, $z$ at the point where $\FLPu$ is zero.) In this case, the strain tensor $e_{ij}$ gives the relationship between two vectors: the coordinate vector $\FLPr=(x,y,z)$ and the displacement vector $\FLPu=(u_x,u_y,u_z)$. When the strains are not homogeneous, any piece of the jello may also get somewhat twisted—there will be a local rotation. If the distortions are all small, we would have \begin{equation} \label{Eq:II:39:10} \Delta u_i=\sum_j(e_{ij}-\omega_{ij})\,\Delta x_j, \end{equation} where $\omega_{ij}$ is an antisymmetric tensor, \begin{equation} \label{Eq:II:39:11} \omega_{ij}=\tfrac{1}{2}(\ddpl{u_j}{x_i}-\ddpl{u_i}{x_j}), \end{equation} which describes the rotation. We will, however, not worry any more about rotations, but only about the strains described by the symmetric tensor $e_{ij}$. 39–2The tensor of elasticity Now that we have described the strains, we want to relate them to the internal forces—the stresses in the material. For each small piece of the material, we assume Hooke's law holds and write that the stresses are proportional to the strains. In Chapter 31 we defined the stress tensor $S_{ij}$ as the $i$th component of the force across a unit-area perpendicular to the $j$-axis. Hooke's law says that each component of $S_{ij}$ is linearly related to each of the components of strain. Since $S$ and $e$ each have nine components, there are $9\times9=81$ possible coefficients which describe the elastic properties of the material. They are constants if the material itself is homogeneous. We write these coefficients as $C_{ijkl}$ and define them by the equation \begin{equation} \label{Eq:II:39:12} S_{ij}=\sum_{k,l}C_{ijkl}e_{kl}, \end{equation} where $i$, $j$, $k$, $l$ all take on the values $1$, $2$, or $3$. Since the coefficients $C_{ijkl}$ relate one tensor to another, they also form a tensor—a tensor of the fourth rank. We can call it the tensor of elasticity. 
Suppose that all the $C$'s are known and that you put a complicated force on an object of some peculiar shape. There will be all kinds of distortion, and the thing will settle down with some twisted shape. What are the displacements? You can see that it is a complicated problem. If you knew the strains, you could find the stresses from Eq. (39.12)—or vice versa. But the stresses and strains you end up with at any point depend on what happens in all the rest of the material. The easiest way to get at the problem is by thinking of the energy. When there is a force $F$ proportional to a displacement $x$, say $F=kx$, the work required for any displacement $x$ is $kx^2/2$. In a similar way, the work $w$ that goes into each unit volume of a distorted material turns out to be \begin{equation} \label{Eq:II:39:13} w=\tfrac{1}{2}\sum_{ijkl}C_{ijkl}e_{ij}e_{kl}. \end{equation} The total work $W$ done in distorting the body is the integral of $w$ over its volume: \begin{equation} \label{Eq:II:39:14} W=\int\tfrac{1}{2}\sum_{ijkl}C_{ijkl}e_{ij}e_{kl}\,dV. \end{equation} This is then the potential energy stored in the internal stresses of the material. Now when a body is in equilibrium, this internal energy must be at a minimum. So the problem of finding the strains in a body can be solved by finding the set of displacements $\FLPu$ throughout the body which will make $W$ a minimum. In Chapter 19 we gave some of the general ideas of the calculus of variations that are used in tackling minimization problems like this. We cannot go into the problem in any more detail here. What we are mainly interested in now is what we can say about the general properties of the tensor of elasticity. First, it is clear that there are not really $81$ different terms in $C_{ijkl}$. Since both $S_{ij}$ and $e_{ij}$ are symmetric tensors, each with only six different terms, there can be at most $36$ different terms in $C_{ijkl}$. There are, however, usually many fewer than this. Let's look at the special case of a cubic crystal. In it, the energy density $w$ starts out like this: \begin{align} w=\tfrac{1}{2}\{&C_{xxxx}e_{xx}^2\!+C_{xxxy}e_{xx}e_{xy}\!+C_{xxxz}e_{xx}e_{xz}\notag\\[.5ex] +\;&C_{xxyx}e_{xx}e_{xy}\!+C_{xxyy}e_{xx}e_{yy}\ldots\text{etc}\ldots\notag\\[.5ex] \label{Eq:II:39:15} +\;&C_{yyyy}e_{yy}^2\!+\ldots\text{etc}\ldots\text{etc}\ldots\}, \end{align} with $81$ terms in all! Now a cubic crystal has certain symmetries. In particular, if the crystal is rotated $90^\circ$, it has the same physical properties. It has the same stiffness for stretching in the $y$-direction as for stretching in the $x$-direction. Therefore, if we change our definition of the coordinate directions $x$ and $y$ in Eq. (39.15), the energy wouldn't change. It must be that for a cubic crystal \begin{equation} \label{Eq:II:39:16} C_{xxxx}=C_{yyyy}=C_{zzzz}. \end{equation} Next we can show that the terms like $C_{xxxy}$ must be zero. A cubic crystal has the property that it is symmetric under a reflection about any plane perpendicular to one of the axes. If we replace $y$ by $-y$, nothing is different. But changing $y$ to $-y$ changes $e_{xy}$ to $-e_{xy}$—a displacement which was toward $+y$ is now toward $-y$. If the energy is not to change, $C_{xxxy}$ must go into $-C_{xxxy}$ when we make a reflection. But a reflected crystal is the same as before, so $C_{xxxy}$ must be the same as $-C_{xxxy}$. This can happen only if both are zero. You say, "But the same argument will make $C_{yyyy}=0$!" No, because there are four $y$'s. 
The sign changes once for each $y$, and four minuses make a plus. If there are two or four $y$'s, the term does not have to be zero. It is zero only when there is one, or three. So, for a cubic crystal, any nonzero term of $C$ will have only an even number of identical subscripts. (The arguments we have made for $y$ obviously hold also for $x$ and $z$.) We might then have terms like $C_{xxyy}$, $C_{xyxy}$, $C_{xyyx}$, and so on. We have already shown, however, that if we change all $x$'s to $y$'s and vice versa (or all $z$'s and $x$'s, and so on) we must get—for a cubic crystal—the same number. This means that there are only three different nonzero possibilities: \begin{equation} \begin{aligned} &C_{xxxx}\:(=C_{yyyy}=C_{zzzz}),\\[.5ex] &C_{xxyy}\:(=C_{yyxx}=C_{xxzz},\:\text{etc.}),\\[.5ex] &C_{xyxy}\:(=C_{yxyx}=C_{xzxz},\:\text{etc.}). \end{aligned} \label{Eq:II:39:17} \end{equation} For a cubic crystal, then, the energy density will look like this: \begin{equation} \begin{aligned} w&=\tfrac{1}{2}\{C_{xxxx}(e_{xx}^2+e_{yy}^2+e_{zz}^2)\\[.5ex] &\quad+\,2C_{xxyy}(e_{xx}e_{yy}+e_{yy}e_{zz}+e_{zz}e_{xx})\\[.5ex] &\quad+\,4C_{xyxy}(e_{xy}^2+e_{yz}^2+e_{zx}^2)\}. \end{aligned} \label{Eq:II:39:18} \end{equation} For an isotropic—that is, noncrystalline—material, the symmetry is still higher. The $C$'s must be the same for any choice of the coordinate system. Then it turns out that there is another relation among the $C$'s, namely, that \begin{equation} \label{Eq:II:39:19} C_{xxxx}=C_{xxyy}+2C_{xyxy}. \end{equation} We can see that this is so by the following general argument. The stress tensor $S_{ij}$ has to be related to $e_{ij}$ in a way that doesn't depend at all on the coordinate directions—it must be related only by scalar quantities. "That's easy," you say. "The only way to obtain $S_{ij}$ from $e_{ij}$ is by multiplication by a scalar constant. It's just Hooke's law. It must be that $S_{ij}=(\text{const})e_{ij}$." But that's not quite right; there could also be the unit tensor $\delta_{ij}$ multiplied by some scalar, linearly related to $e_{ij}$. The only invariant you can make that is linear in the $e$'s is $\sum e_{ii}$. (It transforms like $x^2+y^2+z^2$, which is a scalar.) So the most general form for the equation relating $S_{ij}$ to $e_{ij}$—for isotropic materials—is \begin{equation} \label{Eq:II:39:20} S_{ij}=2\mu e_{ij}+\lambda\Bigl(\sum_ke_{kk}\Bigr)\delta_{ij}. \end{equation} (The first constant is usually written as two times $\mu$; then the coefficient $\mu$ is equal to the shear modulus we defined in the last chapter.) The constants $\mu$ and $\lambda$ are called the Lamé elastic constants. Comparing Eq. (39.20) with Eq. (39.12), you see that \begin{equation} \begin{aligned} C_{xxyy}&=\lambda,\\[.5ex] C_{xyxy}&=\mu,\\[.5ex] C_{xxxx}&=2\mu+\lambda. \end{aligned} \label{Eq:II:39:21} \end{equation} So we have proved that Eq. (39.19) is indeed true. You also see that the elastic properties of an isotropic material are completely given by two constants, as we said in the last chapter. The $C$'s can be put in terms of any two of the elastic constants we have used earlier—for instance, in terms of Young's modulus $Y$ and Poisson's ratio $\sigma$. We will leave it for you to show that \begin{equation} \begin{aligned} C_{xxxx}&=\frac{Y}{1+\sigma} \biggl(1+\frac{\sigma}{1-2\sigma}\biggr),\\[.5ex] C_{xxyy}&=\frac{Y}{1+\sigma} \biggl(\frac{\sigma}{1-2\sigma}\biggr),\\[.5ex] C_{xyxy}&=\frac{Y}{2(1+\sigma)}. 
\end{aligned} \label{Eq:II:39:22} \end{equation} 39–3The motions in an elastic body Fig. 39–5.A small volume element $V$ bounded by the surface $A$. We have pointed out that for an elastic body in equilibrium the internal stresses adjust themselves to make the energy a minimum. Now we take a look at what happens when the internal forces are not in equilibrium. Let's say we have a small piece of the material inside some surface $A$. See Fig. 39–5. If the piece is in equilibrium, the total force $\FLPF$ acting on it must be zero. We can think of this force as being made up of two parts. There could be one part due to "external" forces like gravity, which act from a distance on the matter in the piece to produce a force per unit volume $\FLPf_{\text{ext}}$. The total external force $\FLPF_{\text{ext}}$ is the integral of $\FLPf_{\text{ext}}$ over the volume of the piece: \begin{equation} \label{Eq:II:39:23} \FLPF_{\text{ext}}=\int\FLPf_{\text{ext}}\,dV. \end{equation} In equilibrium, this force would be balanced by the total force $\FLPF_{\text{int}}$ from the neighboring material which acts across the surface $A$. When the piece is not in equilibrium—if it is moving—the sum of the internal and external forces is equal to the mass times the acceleration. We would have \begin{equation} \label{Eq:II:39:24} \FLPF_{\text{ext}}+\FLPF_{\text{int}}= \int\rho\ddot{\FLPr}\,dV, \end{equation} where $\rho$ is the density of the material, and $\ddot{\FLPr}$ is its acceleration. We can now combine Eqs. (39.23) and (39.24), writing \begin{equation} \label{Eq:II:39:25} \FLPF_{\text{int}}=\int_v(-\FLPf_{\text{ext}}+\rho\ddot{\FLPr})\,dV. \end{equation} We will simplify our writing by defining \begin{equation} \label{Eq:II:39:26} \FLPf=-\FLPf_{\text{ext}}+\rho\ddot{\FLPr}. \end{equation} Then Eq. (39.25) is written \begin{equation} \label{Eq:II:39:27} \FLPF_{\text{int}}=\int_v\FLPf\,dV. \end{equation} What we have called $\FLPF_{\text{int}}$ is related to the stresses in the material. The stress tensor $S_{ij}$ was defined (Chapter 31) so that the $x$-component of the force $dF$ across a surface element $da$, whose unit normal is $\FLPn$, is given by \begin{equation} \label{Eq:II:39:28} dF_x=(S_{xx}n_x+S_{xy}n_y+S_{xz}n_z)\,da. \end{equation} The $x$-component of $\FLPF_{\text{int}}$ on our little piece is then the integral of $dF_x$ over the surface. Substituting this into the $x$-component of Eq. (39.27), we get \begin{equation} \label{Eq:II:39:29} \int_A(S_{xx}n_x+S_{xy}n_y+S_{xz}n_z)\,da=\int_vf_x\,dV. \end{equation} We have a surface integral related to a volume integral—and that reminds us of something we learned in electricity. Note that if you ignore the first subscript $x$ on each of the $S$'s in the left-hand side of Eq. (39.29), it looks just like the integral of a quantity $\unicode{x201C}\FLPS\,\unicode{x201D}\cdot\FLPn$—that is, the normal component of a vector—over the surface. It would be the flux of $\unicode{x201C}\FLPS\,\unicode{x201D}$ out of the volume. And this could be written, using Gauss law, as the volume integral of the divergence of $\unicode{x201C}\FLPS\,\unicode{x201D}$. It is, in fact, true whether the $x$-subscript is there or not—it is just a mathematical theorem you get by integrating by parts. In other words, we can change Eq. (39.29) into \begin{equation} \label{Eq:II:39:30} \int_v\biggl( \ddp{S_{xx}}{x}+\ddp{S_{xy}}{y}+\ddp{S_{xz}}{z} \biggr)dV=\int_vf_x\,dV. 
\end{equation} Now we can leave off the volume integrals and write the differential equation for the general component of $\FLPf$ as \begin{equation} \label{Eq:II:39:31} f_i=\sum_j\ddp{S_{ij}}{x_j}. \end{equation} This tells us how the force per unit volume is related to the stress tensor $S_{ij}$. The theory of the motions inside a solid works this way. If we start out knowing the initial displacements—given by, say, $\FLPu$—we can work out the strains $e_{ij}$. From the strains we can get the stresses from Eq. (39.12). From the stresses we can get the force density $\FLPf$ in Eq. (39.31). Knowing $\FLPf$, we can get, from Eq. (39.26), the acceleration $\ddot{\FLPr}$ of the material, which tells us how the displacements will be changing. Putting everything together, we get the horrible equation of motion for an elastic solid. We will just write down the results that come out for an isotropic material. If you use (39.20) for $S_{ij}$, and write the $e_{ij}$ as $\tfrac{1}{2}(\ddpl{u_i}{x_j}+\ddpl{u_j}{x_i})$, you end up with the vector equation \begin{equation} \label{Eq:II:39:32} \FLPf=(\lambda+\mu)\,\FLPgrad{(\FLPdiv{\FLPu})}+\mu\,\nabla^2\FLPu. \end{equation} You can, in fact, see that the equation relating $\FLPf$ and $\FLPu$ must have this form. The force must depend on the second derivatives of the displacements $\FLPu$. What second derivatives of $\FLPu$ are there that are vectors? One is $\FLPgrad{(\FLPdiv{\FLPu})}$; that's a true vector. The only other one is $\nabla^2\FLPu$. So the most general form is \begin{equation*} \FLPf=a\,\FLPgrad{(\FLPdiv{\FLPu})}+b\,\nabla^2\FLPu, \end{equation*} which is just (39.32) with a different definition of the constants. You may be wondering why we don't have a third term using $\FLPcurl{\FLPcurl{\FLPu}}$, which is also a vector. But remember that $\FLPcurl{\FLPcurl{\FLPu}}$ is the same thing as $\FLPgrad{(\FLPdiv{\FLPu})}-\nabla^2\FLPu$, so it is a linear combination of the two terms we have. Adding it would add nothing new. We have proved once more that isotropic material has only two elastic constants. For the equation of motion of the material, we can set (39.32) equal to $\rho\,\partial^2\FLPu/\partial t^2$—neglecting for now any body forces like gravity—and get \begin{equation} \label{Eq:II:39:33} \rho\,\frac{\partial^2\FLPu}{\partial t^2}= (\lambda+\mu)\,\FLPgrad{(\FLPdiv{\FLPu})}+\mu\,\nabla^2\FLPu. \end{equation} It looks something like the wave equation we had in electromagnetism, except that there is an additional complicating term. For materials whose elastic properties are everywhere the same we can see what the general solutions look like in the following way. You will remember that any vector field can be written as the sum of two vectors: one whose divergence is zero, and the other whose curl is zero. In other words, we can put \begin{equation} \label{Eq:II:39:34} \FLPu=\FLPu_1+\FLPu_2, \end{equation} where \begin{equation} \label{Eq:II:39:35} \FLPdiv{\FLPu_1}=0,\quad \FLPcurl{\FLPu_2}=\FLPzero. \end{equation} Substituting $\FLPu_1+\FLPu_2$ for $\FLPu$ in (39.33), we get \begin{equation} \label{Eq:II:39:36} \rho\,\partial^2/\partial t^2[\FLPu_1+\FLPu_2]= (\lambda+\mu)\,\FLPgrad{(\FLPdiv{\FLPu_2})}+ \mu\,\nabla^2(\FLPu_1+\FLPu_2). \end{equation} We can eliminate $\FLPu_1$ by taking the divergence of this equation, \begin{equation*} \rho\,\partial^2/\partial t^2(\FLPdiv{\FLPu_2})= (\lambda+\mu)\,\nabla^2(\FLPdiv{\FLPu_2})+ \mu\,\FLPdiv{\nabla^2(\FLPu_2)}. \end{equation*} Since the operators ($\nabla^2$) and ($\FLPdiv{}$) can be interchanged, we can factor out the divergence to get \begin{equation} \label{Eq:II:39:37} \FLPdiv{\{\rho\,\partial^2\FLPu_2/\partial t^2- (\lambda+2\mu)\,\nabla^2\FLPu_2\}}=0. \end{equation} Since $\FLPcurl{\FLPu_2}$ is zero by definition, the curl of the bracket $\{\}$ is also zero; so the bracket itself is identically zero, and \begin{equation} \label{Eq:II:39:38} \rho\,\partial^2\FLPu_2/\partial t^2= (\lambda+2\mu)\,\nabla^2\FLPu_2. \end{equation} This is the vector wave equation for waves which move at the speed $C_2=\sqrt{(\lambda+2\mu)/\rho}$. Since the curl of $\FLPu_2$ is zero, there is no shearing associated with this wave; this wave is just the compressional—sound-type—wave we discussed in the last chapter, and the velocity is just what we found for $C_{\text{long}}$. In a similar way—by taking the curl of Eq. (39.36)—we can show that $\FLPu_1$ satisfies the equation \begin{equation} \label{Eq:II:39:39} \rho\,\partial^2\FLPu_1/\partial t^2=\mu\,\nabla^2\FLPu_1. \end{equation} This is again a vector wave equation for waves with the speed $C_1=\sqrt{\mu/\rho}$. Since $\FLPdiv{\FLPu_1}$ is zero, $\FLPu_1$ produces no changes in density; the vector $\FLPu_1$ corresponds to the transverse, or shear-type, wave we saw in the last chapter, and $C_1=C_{\text{shear}}$. If we wished to know the static stresses in an isotropic material, we could, in principle, find them by solving Eq. (39.32) with $\FLPf$ equal to zero—or equal to the static body forces from gravity such as $\rho\FLPg$—under certain conditions which are related to the forces acting on the surfaces of our large block of material. This is somewhat more difficult to do than the corresponding problems in electromagnetism. It is more difficult, first, because the equations are a little more difficult to handle, and second, because the shape of the elastic bodies we are likely to be interested in are usually much more complicated. In electromagnetism, we are often interested in solving Maxwell's equations around relatively simple geometric shapes such as cylinders, spheres, and so on, since these are convenient shapes for electrical devices. In elasticity, the objects we would like to analyze may have quite complicated shapes—like a crane hook, or an automobile crankshaft, or the rotor of a gas turbine. Such problems can sometimes be worked out approximately by numerical methods, using the minimum energy principle we mentioned earlier. Another way is to use a model of the object and measure the internal strains experimentally, using polarized light. Fig. 39–6.Measuring internal stresses with polarized light. It works this way: When a transparent isotropic material—for example, a clear plastic like lucite—is put under stress, it becomes birefringent. If you put polarized light through it, the plane of polarization will be rotated by an amount related to the stress: by measuring the rotation, you can measure the stress. Figure 39–6 shows how such a setup might look. Figure 39–7 is a photograph of a photoelastic model of a complicated shape under stress. Fig.
39–7.A stressed plastic model as seen between crossed polaroids. [From F. W. Sears, Optics, Addison-Wesley Publishing Co., Mass., 1949.] 39–4Nonelastic behavior In all that has been said so far, we have assumed that stress is proportional to strain; in general, that is not true. Figure 39–8 shows a typical stress-strain curve for a ductile material. For small strains, the stress is proportional to the strain. Eventually, however, after a certain point, the relationship between stress and strain begins to deviate from a straight line. For many materials—the ones we would call "brittle"—the object breaks for strains only a little above the point where the curve starts to bend over. In general, there are other complications in the stress-strain relationship. For example, if you strain an object, the stresses may be high at first, but decrease slowly with time. Also if you go to high stresses, but still not to the "breaking" point, when you lower the strain the stress will return along a different curve. There is a small hysteresis effect (like the one we saw between $B$ and $H$ in magnetic materials). Fig. 39–8.A typical stress-strain relation for large strains. The stress at which a material will break varies widely from one material to another. Some materials will break when the maximum tensile stress reaches a certain value. Other materials will fail when the maximum shear stress reaches a certain value. Chalk is an example of a material which is much weaker in tension than in shear. If you pull on the ends of a piece of blackboard chalk, the chalk will break perpendicular to the direction of the applied stress, as shown in Fig. 39–9(a). It breaks perpendicular to the applied force because it is only a bunch of particles packed together which are easily pulled apart. The material is, however, much harder to shear, because the particles get in each other's way. Now you will remember that when we had a rod in torsion there was a shear all around it. Also, we showed that a shear was equivalent to a combination of a tension and compression at $45^\circ$. For these reasons, if you twist a piece of blackboard chalk, it will break along a complicated surface which starts out at $45^\circ$ to the axis. A photograph of a piece of chalk broken in this way is shown in Fig. 39–9(b). The chalk breaks where the material is in maximum tension. Fig. 39–9.(a) A piece of chalk broken by pulling on the ends; (b) a piece broken by twisting. Other materials behave in strange and complicated ways. The more complicated the materials are, the more interesting their behavior. If we take a sheet of "Saran-Wrap" and crumple it up into a ball and throw it on the table, it slowly unfolds itself and returns toward its original flat form. At first sight, we might be tempted to think that it is inertia which prevents it from returning to its original form. However, a simple calculation shows that the inertia is several orders of magnitude too small to account for the effect. There appear to be two important competing effects: "something" inside the material "remembers" the shape it had initially and "tries" to get back there, but something else "prefers" the new shape and "resists" the return to the old shape. We will not attempt to describe the mechanism at play in the Saran plastic, but you can get an idea of how such an effect might come about from the following model. Suppose you imagine a material made of long, flexible, but strong, fibers mixed together with some hollow cells filled with a viscous liquid. 
Imagine also that there are narrow pathways from one cell to the next so the liquid can leak slowly from a cell to its neighbor. When we crumple a sheet of this stuff, we distort the long fibers, squeezing the liquid out of the cells in one place and forcing it into other cells which are being stretched. When we let go, the long fibers try to return to their original shape. But to do this, they have to force the liquid back to its original location—which will happen relatively slowly because of the viscosity. The forces we apply in crumpling the sheet are much larger than the forces exerted by the fibers. We can crumple the sheet quickly, but it will return more slowly. It is undoubtedly a combination of large stiff molecules and smaller, movable ones in the Saran-Wrap that is responsible for its behavior. This idea also fits with the fact that the material returns more quickly to its original shape when it's warmed up than when it's cold—the heat increases the mobility (decreases the viscosity) of the smaller molecules. Although we have been discussing how Hooke's law breaks down, the remarkable thing is perhaps not that Hooke's law breaks down for large strains but that it should be so generally true. We can get some idea of why this might be by looking at the strain energy in a material. To say that the stress is proportional to the strain is the same thing as saying that the strain energy varies as the square of the strain. Suppose we have a rod and we twist it through a small angle $\theta$. If Hooke's law holds, the strain energy should be proportional to the square of $\theta$. Suppose we were to assume that the energy were some arbitrary function of the angle; we could write it as a Taylor expansion about zero angle \begin{equation} \label{Eq:II:39:40} U(\theta)=U(0)+U'(0)\,\theta+\tfrac{1}{2}U''(0)\,\theta^2+ \tfrac{1}{6}U'''(0)\,\theta^3+\dotsb \end{equation} The torque $\tau$ is the derivative of $U$ with respect to angle; we would have \begin{equation} \label{Eq:II:39:41} \tau(\theta)=U'(0)+U''(0)\,\theta+\!\tfrac{1}{2}U'''(0)\,\theta^2\!+\dotsb \end{equation} Now if we measure our angles from the equilibrium position, the first term is zero. So the first remaining term is proportional to $\theta$; and for small enough angles, it will dominate the term in $\theta^2$. [Actually, materials are sufficiently symmetric internally so that $\tau(\theta)=-\tau(-\theta)$; the term in $\theta^2$ will be zero, and the departures from linearity would come only from the $\theta^3$ term. There is, however, no reason why this should be true for compressions and tensions.] The thing we have not explained is why materials usually break soon after the higher-order terms become significant. 39–5Calculating the elastic constants As our last topic on elasticity we would like to show how one could try to calculate the elastic constants of a material, starting with some knowledge of the properties of the atoms which make up the material. We will take only the simple case of an ionic cubic crystal like sodium chloride. When a crystal is strained, its volume or its shape is changed. Such changes result in an increase in the potential energy of the crystal. To calculate the change in strain energy, we have to know where each atom goes.
In complicated crystals, the atoms will rearrange themselves in the lattice in very complicated ways to make the total energy as small as possible. This makes the computation of the strain energy rather difficult. In the case of a simple cubic crystal, however, it is easy to see what will happen. The distortions inside the crystal will be geometrically similar to the distortions of the outside boundaries of the crystal. We can calculate the elastic constants for a cubic crystal in the following way. First, we assume some force law between each pair of atoms in the crystal. Then, we calculate the change in the internal energy of the crystal when it is distorted from its equilibrium shape. This gives us a relation between the energy and the strains which is quadratic in all the strains. Comparing the energy obtained this way with Eq. (39.13), we can identify the coefficient of each term with the elastic constants $C_{ijkl}$. For our example we will assume a simple force law: that the force between neighboring atoms is a central force, by which we mean that it acts along the line between the two atoms. We would expect the forces in ionic crystals to be like this, since they are just primarily Coulomb forces. (The forces of covalent bonds are usually more complicated, since they can exert a sideways push on a nearby atom; we will leave out this complication.) We are also going to include only the forces between each atom and its nearest and next-nearest neighbors. In other words, we will make an approximation which neglects all forces beyond the next-nearest neighbor. The forces we will include are shown for the $xy$-plane in Fig. 39–10(a). The corresponding forces in the $yz$- and $zx$-planes also have to be included. Fig. 39–10.(a) The interatomic forces we are taking into account; (b) a model in which the atoms are connected by springs. Since we are only interested in the elastic coefficients which apply to small strains, and therefore only want the terms in the energy which vary quadratically with the strains, we can imagine that the force between each atom pair varies linearly with the displacements. We can then imagine that each pair of atoms is joined by a linear spring, as drawn in Fig. 39–10(b). All of the springs between a sodium atom and a chlorine atom should have the same spring constant, say $k_1$. The springs between two sodiums and between two chlorines could have different constants, but we will make our discussion simpler by taking them equal; we call them $k_2$. (We could come back later and make them different after we have seen how the calculations go.) Fig. 39–11.The displacements of the nearest and next-nearest neighbors of atom $1$ (exaggerated). Now we assume that the crystal is distorted by a homogeneous strain described by the strain tensor $e_{ij}$. In general, it will have components involving $x$, $y$, and $z$; but we will consider now only a strain with the three components $e_{xx}$, $e_{xy}$, and $e_{yy}$ so that it will be easy to visualize. If we pick one atom as our origin, the displacement of every other atom is given by equations like Eq. (39.9): \begin{equation} \begin{aligned} u_x&=e_{xx}x+e_{xy}y,\\ u_y&=e_{xy}x+e_{yy}y. \end{aligned} \label{Eq:II:39:42} \end{equation} Suppose we call the atom at $x=y=0$ "atom $1$" and number its neighbors in the $xy$-plane as shown in Fig. 39–11. Calling the lattice constant $a$, we get the $x$ and $y$ displacements $u_x$ and $u_y$ listed in Table 39–1. 
Table 39–1

Atom   Location $x,y$   $u_x$                  $u_y$                  $k$
$1$    $0,0$            $0$                    $0$                    —
$2$    $a,0$            $e_{xx}a$              $e_{yx}a$              $k_1$
$3$    $a,a$            $(e_{xx}+e_{xy})a$     $(e_{yx}+e_{yy})a$     $k_2$
$4$    $0,a$            $e_{xy}a$              $e_{yy}a$              $k_1$
$5$    $-a,a$           $(-e_{xx}+e_{xy})a$    $(-e_{yx}+e_{yy})a$    $k_2$
$6$    $-a,0$           $-e_{xx}a$             $-e_{yx}a$             $k_1$
$7$    $-a,-a$          $-(e_{xx}+e_{xy})a$    $-(e_{yx}+e_{yy})a$    $k_2$
$8$    $0,-a$           $-e_{xy}a$             $-e_{yy}a$             $k_1$
$9$    $a,-a$           $(e_{xx}-e_{xy})a$     $(e_{yx}-e_{yy})a$     $k_2$

Now we can calculate the energy stored in the springs, which is $k/2$ times the square of the extension for each spring. For example, the energy in the horizontal spring between atom $1$ and atom $2$ is \begin{equation} \label{Eq:II:39:43} \frac{k_1(e_{xx}a)^2}{2}. \end{equation} Note that to first order, the $y$-displacement of atom $2$ does not change the length of the spring between atom $1$ and atom $2$. To get the strain energy in a diagonal spring, such as that to atom $3$, however, we need to calculate the change in length due to both the horizontal and vertical displacements. For small displacements from the original cube, we can write the change in the distance to atom $3$ as the sum of the components of $u_x$ and $u_y$ in the diagonal direction, namely as \begin{equation*} \frac{1}{\sqrt{2}}\,(u_x+u_y). \end{equation*} Using the values of $u_x$ and $u_y$ from the table, we get the energy \begin{equation} \label{Eq:II:39:44} \frac{k_2}{2}\!\biggl(\!\frac{u_x\!+u_y}{\sqrt{2}}\!\biggr)^2\!\!\!=\! \frac{k_2a^2}{4}(e_{xx}\!+e_{yx}\!+e_{xy}\!+e_{yy})^2. \end{equation} For the total energy for all the springs in the $xy$-plane, we need the sum of eight terms like (39.43) and (39.44). Calling this energy $U_0$, we get \begin{align} U_0=\,&\frac{a^2}{2}\bigg\{\!k_1e_{xx}^2\!+\! \frac{k_2}{2}(e_{xx}\!+e_{yx}\!+e_{xy}\!+e_{yy})^2\notag\\[-2pt] &+k_1e_{yy}^2\!+\! \frac{k_2}{2}(e_{xx}\!-e_{yx}\!-e_{xy}\!+e_{yy})^2\notag\\ &+k_1e_{xx}^2\!+\! \frac{k_2}{2}(e_{xx}\!+e_{yx}\!+e_{xy}\!+e_{yy})^2\notag\\ \label{Eq:II:39:45} &+k_1e_{yy}^2\!+\! \frac{k_2}{2}(e_{xx}\!-e_{yx}\!-e_{xy}\!+e_{yy})^2\!\biggr\}. \end{align} To get the total energy of all the springs connected to atom $1$, we must make one addition to the energy in Eq. (39.45). Even though we have only $x$- and $y$-components of the strain, there are still some energies associated with the next-nearest neighbors off the $xy$-plane. This additional energy is \begin{equation} \label{Eq:II:39:46} k_2(e_{xx}^2a^2+e_{yy}^2a^2). \end{equation} The elastic constants are related to the energy density $w$ by Eq. (39.13). The energy we have calculated is the energy associated with one atom, or rather, it is twice the energy per atom, since one-half of the energy of each spring should be assigned to each of the two atoms it joins. Since there are $1/a^3$ atoms per unit volume, $w$ and $U_0$ are related by \begin{equation*} w=\frac{U_0}{2a^3}. \end{equation*} To find the elastic constants $C_{ijkl}$, we need only to expand out the squares in Eq. (39.45)—adding the terms of (39.46)—and compare the coefficients of $e_{ij}e_{kl}$ with the corresponding coefficient in Eq. (39.13). For example, collecting the terms in $e_{xx}^2$ and in $e_{yy}^2$, we get the factor \begin{equation*} (k_1+2k_2)a^2, \end{equation*} so \begin{equation*} C_{xxxx}=C_{yyyy}=\frac{k_1+2k_2}{a}. \end{equation*} For the remaining terms, there is a slight complication.
Since we cannot distinguish the product of two terms like $e_{xx}e_{yy}$, from $e_{yy}e_{xx}$, the coefficient of such terms in our energy is equal to the sum of two terms in Eq. (39.13). The coefficient of $e_{xx}e_{yy}$ in Eq. (39.45) is $2k_2$, so we have that \begin{equation*} (C_{xxyy}+C_{yyxx})=\frac{2k_2}{a}. \end{equation*} But because of the symmetry in our crystal, $C_{xxyy}=C_{yyxx}$, so we have that \begin{equation*} C_{xxyy}=C_{yyxx}=\frac{k_2}{a}. \end{equation*} By a similar process, we can also get \begin{equation*} C_{xyxy}=C_{yxyx}=\frac{k_2}{a}. \end{equation*} Finally, you will notice that any term which involves either $x$ or $y$ only once is zero—as we concluded earlier from symmetry arguments. Summarizing our results: \begin{equation} \begin{aligned} C_{xxxx}&=C_{yyyy}=\frac{k_1+2k_2}{a},\\[-.5ex] C_{xyxy}&=C_{yxyx}=\frac{k_2}{a},\\[-.5ex] C_{xxyy}&=C_{yyxx}=C_{xyyx}=C_{yxxy}=\frac{k_2}{a},\\[.5ex] C_{xxxy}&=C_{xyyy}=\text{etc.}=0. \end{aligned} \label{Eq:II:39:47} \end{equation} We have been able to relate the bulk elastic constants to the atomic properties which appear in the constants $k_1$ and $k_2$. In our particular case, $C_{xyxy}=C_{xxyy}$. It turns out—as you can perhaps see from the way the calculations went—that these terms are always equal for a cubic crystal, no matter how many force terms are taken into account, provided only that the forces act along the line joining each pair of atoms—that is, so long as the forces between atoms are like springs and don't have a sideways part such as you might get from a cantilevered beam (and you do get in covalent bonds). We can check this conclusion with the experimental measurements of the elastic constants. In Table 39–2 we give the observed values of the three elastic coefficients for several cubic crystals.2 You will notice that $C_{xxyy}$ and $C_{xyxy}$ are, in general, not equal. The reason is that in metals like sodium and potassium the interatomic forces are not along the line joining the atoms, as we assumed in our model. Diamond does not obey the law either, because the forces in diamond are covalent forces and have some directional properties—the bonds would prefer to be at the tetrahedral angle. The ionic crystals like lithium fluoride, sodium chloride, and so on, do have nearly all the physical properties assumed in our model, and the table shows that the constants $C_{xxyy}$ and $C_{xyxy}$ are almost equal. It is not clear why silver chloride should not satisfy the condition that $C_{xxyy}=C_{xyxy}$.

Table 39–2. Elastic Moduli of Cubic Crystals in $10^{12}$ dynes/cm$^2$*

           $C_{xxxx}$   $C_{xxyy}$   $C_{xyxy}$
Na            0.055        0.042        0.049
K             0.046        0.037        0.026
Fe            2.37         1.41         1.16
Diamond      10.76         1.25         5.76
Al            1.08         0.62         0.28
LiF           1.19         0.54         0.53
NaCl          0.486        0.127        0.128
KCl           0.40         0.062        0.062
NaBr          0.33         0.13         0.13
KI            0.27         0.043        0.042
AgCl          0.60         0.36         0.062

*From: C. Kittel, Introduction to Solid State Physics, John Wiley and Sons, Inc., New York, 2nd ed., 1956, p. 93.

We choose for the moment to split the total shear angle $\theta$ into two equal parts and make the strain symmetric with respect to $x$ and $y$.
In the literature you will often find that a different notation is used. For instance, people usually write $C_{xxxx}=C_{11}$, $C_{xxyy}=C_{12}$, and $C_{xyxy}=C_{44}$.
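As a quick check of the formulas above, the following short sketch (Python with SymPy, used purely for illustration; it is not part of the lecture) verifies that the expressions of Eq. (39.22) satisfy the isotropy relation (39.19), and compares $C_{xxyy}$ with $C_{xyxy}$ for a few of the Table 39–2 entries to see how well the central-force prediction holds.

```python
# Check of Eqs. (39.19) and (39.22), and of the central-force prediction
# C_xxyy = C_xyxy against the Table 39-2 data quoted above (illustrative only).
import sympy as sp

Y, sigma = sp.symbols('Y sigma', positive=True)

# Eq. (39.22): the C's of an isotropic material in terms of Y and sigma.
C_xxxx = Y/(1 + sigma) * (1 + sigma/(1 - 2*sigma))
C_xxyy = Y/(1 + sigma) * (sigma/(1 - 2*sigma))
C_xyxy = Y/(2*(1 + sigma))

# Eq. (39.19): C_xxxx = C_xxyy + 2*C_xyxy should hold identically.
print(sp.simplify(C_xxxx - (C_xxyy + 2*C_xyxy)))   # -> 0

# Table 39-2 values (units of 10^12 dyne/cm^2): the central-force model
# predicts C_xxyy = C_xyxy; ionic crystals come close, metals and diamond do not.
data = {            # (C_xxxx, C_xxyy, C_xyxy)
    'NaCl':    (0.486, 0.127, 0.128),
    'KCl':     (0.40,  0.062, 0.062),
    'LiF':     (1.19,  0.54,  0.53),
    'Na':      (0.055, 0.042, 0.049),
    'Diamond': (10.76, 1.25,  5.76),
}
for name, (c11, c12, c44) in data.items():
    print(f"{name:8s} C_xxyy/C_xyxy = {c12/c44:5.2f}")
```

For NaCl, KCl, and LiF the ratio comes out close to one, as the table and the discussion above indicate, while for sodium and diamond it does not.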
A network-based model to explore the role of testing in the epidemiological control of the COVID-19 pandemic Yapeng Cui1,2,3, Shunjiang Ni1,2,3 & Shifei Shen1,2,3 Testing is one of the most effective means to manage the COVID-19 pandemic. However, there is an upper bound on daily testing volume because of limited healthcare staff and working hours, as well as different testing methods, such as random testing and contact-tracking testing. In this study, a network-based epidemic transmission model combined with a testing mechanism was proposed to study the role of testing in epidemic control. The aim of this study was to determine how testing affects the spread of epidemics and the daily testing volume needed to control infectious diseases. We simulated the epidemic spread process on complex networks and introduced testing preferences to describe different testing strategies. Different networks were generated to represent social contact between individuals. An extended susceptible-exposed-infected-recovered (SEIR) epidemic model was adopted to simulate the spread of epidemics in these networks. The model establishes a testing preference of between 0 and 1; the larger the testing preference, the higher the testing priority for people in close contact with confirmed cases. The numerical simulations revealed that the higher the priority for testing individuals in close contact with confirmed cases, the smaller the infection scale. In addition, the infection peak decreased with an increase in daily testing volume and increased as the testing start time was delayed. We also discovered that when testing and other measures were adopted, the daily testing volume required to keep the infection scale below 5% was reduced by more than 40% even if other measures only reduced individuals' infection probability by 10%. The proposed model was validated using COVID-19 testing data. Although testing could effectively inhibit the spread of infectious diseases and epidemics, our results indicated that it requires a huge daily testing volume. Thus, it is highly recommended that testing be adopted in combination with measures such as wearing masks and social distancing to better manage infectious diseases. Our research contributes to understanding the role of testing in epidemic control and provides useful suggestions for the government and individuals in responding to epidemics. According to statistics from the World Health Organization (WHO), as of August 28, 2020, there have been over 24 million confirmed cases of coronavirus disease (COVID-19) and over 820,000 related deaths worldwide [1]. The International Monetary Fund (IMF) predicted that the global economic growth would reach -4.9% in 2020 as a result of the COVID-19 pandemic [2]. In order to reduce losses caused by COVID-19, testing has been adopted by many countries as an effective response measure. The WHO has also called for more tests in response to COVID-19 [3]. Researchers have found that testing plays an important role in controlling the spread of infectious diseases [4–9]. Testing can identify individuals who are infected but remain undiagnosed, which makes it possible to protect others from infection by quarantining those who are infected [10–13]. Scholars have also found that testing data can provide accurate estimates of epidemic trends and help governments distinguish whether an outbreak is increasing or past its peak [14]. Testing is so important for controlling epidemics that it has increasingly attracted the attention of scholars. 
A subset of previous research on testing focused on trials and clinical statistics, mainly in the field of HIV. In the HIV Prevention Trials Network (HPTN) 071 community-randomized trial [15], participants were divided into three groups: a combination of prevention intervention with universal testing and antiretroviral therapy (ART), prevention intervention with ART provided according to local guidelines, or standard care. The HIV incidence of the three groups showed that universal testing and treatment could reduce the population-level incidence of HIV infection. However, the timing of testing was also found to be important for controlling HIV [16]. Grinsztejn et al. studied the effects of early versus delayed testing on HIV infection, and the clinical results showed that early testing could reduce HIV transmission [13]. Cohen et al. also showed that early testing and implementation of ART treatment could reduce HIV infections [12]. That said, research also showed that the effectiveness of testing could be greatly reduced when high-frequency transmitters were not tested or linkage to care was inadequate [17, 18]. In addition, some scholars demonstrated concern about the effectiveness of testing strategies. For example, Lightfoot et al. reported that using a social network strategy to distribute HIV self-test kits could reduce undiagnosed infections [19]. This suggested that factors such as age, residence, and education level should also be taken into consideration to develop more targeted testing promotion strategies [20, 21]. Another subset of previous research explored the impact of testing on epidemic transmission by mathematical models. A series of established mathematical models showed that universal testing could control the epidemic [22–26]. Ng constructed an agent-based model to explore the effect of testing on the COVID-19 epidemic in the United States and found that broadening testing would accelerate the return to normal life and that random testing was too inefficient unless a majority of the population was infected [27]. Berger et al. found that testing at a higher rate in conjunction with targeted quarantine policies could dampen the economic impact of the coronavirus and reduce the infection peak [28]. Granich et al. proposed a mathematical model to simulate the spread of HIV and found that universal voluntary testing and treatment could drive HIV transmission to an elimination phase within 5 years [22]. Similarly, a compartmental model was proposed by Aronna et al. to study the impact of testing, and an explicit expression for the basic reproduction number R0 in terms of the testing rate was obtained. From the expression of R0, the conclusion was drawn that testing among asymptomatic cases is fundamental to the control of epidemics [29]. Moreover, Kolumbus and Nisan established a susceptible-exposed-infected-recovered (SEIR) model to study the effect of tracking and testing on suppressing epidemic outbreaks. They found that testing could reduce both economic losses and mortality, but required a large testing capacity [30]. According to a report by the Imperial College London, testing healthcare workers (HCWs) and other at-risk groups weekly could reduce their contribution to transmission by 25-33% [3]. Similarly, Priyanka and Verma adopted the susceptible-infected-recovered (SIR) model to compare the effectiveness of testing and lockdown measures and found that testing outperformed lockdowns [31]. Omori et al.
reported that the limited testing capacity had a significant influence on the estimation of the epidemic growth rate [32]. The effect of the specificity and sensitivity of testing has also been studied [33, 34]. A limitation of previous studies is that they primarily examined infectious diseases with a slow transmission process, such as HIV. In other words, the number of infections remains relatively small over a short period. As a result, the upper bound of the testing volume does not need to be considered. However, when epidemics such as SARS and COVID-19 occur, infections multiply rapidly in a short time, and a much larger number of individuals need to be diagnosed through testing. In this case, the upper bound of the daily testing volume cannot be ignored, and the impact of testing on suppressing epidemic transmission requires in-depth research. In mathematical models, it is often simply assumed that individuals are tested and quarantined with a certain probability. However, in real life, the daily testing volume gradually increases as the understanding of the epidemic deepens, and an individual is typically not tested again for a certain period (such as two incubation periods) after being tested negative, considering the limited testing resources. In order to bridge this gap, an epidemic transmission model combined with a testing mechanism was proposed to study the role of testing in epidemic control. The paper is organized as follows. In the "Methods" section, we state the epidemic transmission model and testing mechanism in detail. In the "Results" section, a series of numerical simulations are detailed, and the results are described. The discussion is presented in the "Discussion" section, and conclusions are stated in the "Conclusions" section. We proposed a model to study the impact of testing on epidemic transmission. The model consists of two parts: an epidemic transmission model and a testing mechanism. The former simulates the epidemic transmission process in the population, and the latter models the testing process implemented by the government. We also stated the strategy used to validate the proposed model. Epidemic transmission model Complex networks have been a good framework for describing the population structure in the real world. A network is composed of nodes and edges. Nodes represent individuals and edges represent social contacts between individuals. The number of edges connected with a node is called the degree of the node. Studies have shown that the degree distribution of social networks obeys a power-law distribution [35–37], which indicates that the vast majority of individuals have small degrees, but there exist some individuals who are in contact with many others (also called super spreaders in the context of epidemics). Networks whose degree distribution obeys a power-law distribution are called scale-free networks, and the Barabasi-Albert (BA) network [38] is one kind of scale-free network. When generating a BA network, we start with a small nucleus of m0 connected nodes. Then, at every step, a new node is added and connected to the existing nodes; the probability that the new node connects to node i is proportional to the degree of node i. After enough steps, a network with a power-law degree distribution is generated. Then, we simulate the epidemic transmission process on the generated networks. In this study, an extended SEIR model [39, 40] was introduced to describe the epidemic transmission process.
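As a brief aside before describing the transmission states, the network-construction step above can be sketched in a few lines (Python with networkx, assumed here purely for illustration; the paper does not state which software was used, and the value of m is an assumed parameter):

```python
# Illustrative sketch of the contact-network construction described above.
# networkx is an assumption; the paper does not name the software it used.
import networkx as nx

N = 10000   # number of individuals; the paper reports runs with 5000-10000 nodes
m = 3       # edges added per new node (an assumed value, not from the paper)

G = nx.barabasi_albert_graph(N, m, seed=1)

# The resulting degree distribution follows a power law: most nodes have few
# contacts, while a few "super spreaders" have very many.
degrees = [d for _, d in G.degree()]
print("mean degree:", sum(degrees) / N, "max degree:", max(degrees))
```

The SEIR dynamics described next are then simulated on a graph of this kind.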
In our model, an individual can be classified into one of six states: susceptible (S), latent (L), asymptomatic infectious (Ia), symptomatic infectious (Is), recovered (R), and dead (D). Specifically, the infection process is as follows. Initially, an individual is randomly chosen as the infection source (i.e., set to state Is) and all others are susceptible (S). At each time step, a susceptible (S) individual i randomly contacts one of their neighbors. Individual i in contact with symptomatic or asymptomatic infectious individuals will be infected with probability λ or γλ, respectively, where λ represents the infection rate in contact with symptomatic infectious individuals, and γ measures the relative infectiousness of asymptomatic infections compared with symptomatic infections. Once individual i is infected, they will enter the latent (L) state, and at the end of the latency period 1/ε, they will become asymptomatic or symptomatic infectious, with probabilities pa and 1−pa, respectively. At the same time, infectious individuals (asymptomatic and symptomatic) will recover with probability μ and die at rate β. The whole process will continue to evolve until there are no infected individuals (latent, asymptomatic, or symptomatic) on the networks. Figure 1a describes the epidemic transmission process. A diagram illustrating the proposed model: (a) shows the epidemic transmission process and (b) shows the testing mechanism. The descriptions of the parameters in the figure are given in Table 1. Table 1 Model parameters, variables and respective descriptions. Testing mechanism In real life, we are not typically aware of infectious diseases from the time they occur, and thus there is a delay between the time when an infectious disease begins and the start of testing. Therefore, in our model, the testing mechanism is introduced into the epidemic transmission model only when the current time step is greater than T. In addition, because of limited healthcare workers and medical resources, an upper bound exists on the daily testing volume. At each time step, the largest number of people who can be tested is V, which represents the daily testing volume. In this model, asymptomatic and symptomatic infectious individuals will test positive and will be quarantined, and thus they cannot cause secondary infections by contact with others. Given the limited testing resources, individuals who test negative will not be tested again within two incubation periods, which has been adopted by many countries as a testing strategy in response to COVID-19 [41, 42]. As the understanding of the epidemic deepens, the daily testing volume will gradually increase. Considering the limited medical staff and their working hours, there is also an upper bound on the daily testing volume. The change in daily testing volume is $$ V = \min\bigl(V_{inc}\times(t-T),\,V_{limit}\bigr), $$ where Vinc and Vlimit indicate the increase speed and the upper bound of the daily testing volume, respectively; t is the current time step; and T is the time when testing starts. In addition, different testing strategies may be used when implementing testing, such as random testing (RT), contact-tracking testing (CT), or a combination of the two. In this study, testing preference, α, which measures the priority of testing individuals who are in close contact with confirmed cases, was introduced to represent different testing methods. If α=1, individuals in close contact with confirmed cases will be tested first (CT).
Moreover, α=0 means random testing (RT), and when 0<α<1, a combination of RT and CT is adopted. The testing process is performed as follows. M represents the number of individuals who are in close contact with confirmed cases and have not been tested. At each time step, if αM≤V, αM individuals in close contact will be tested first, and then V−αM individuals will be tested randomly in the population. Otherwise, if αM>V, only V individuals in close contact will be tested randomly. Figure 1b illustrates the testing process. Table 1 presents a summary of the parameters and variables, their descriptions, and the values used in our model. Model validation We compared the simulation data with real data to validate our model. In response to the COVID-19 pandemic, many countries have adopted testing measures. As a result of their different testing capabilities, the number of people tested every day varies across countries. In this study, the testing-positive rate was used as an indicator to compare the simulation results with real data. The number of confirmed cases was not used because it only counts the infected individuals identified by testing, whereas many infected individuals in the population have not been tested; it is therefore not appropriate to use the number of confirmed cases to estimate the actual infection scale in the population. Considering that the testing process can be regarded as a sampling of the population, the testing-positive rate can represent the actual infection scale in the population to some extent. Therefore, when verifying the proposed model, we used the peak of the testing-positive rate curve to represent the peak of the infection scale in the population. Specifically, the real data came from the daily report of each country and were collected by Our World in Data [44]. The real data included the number of people who had been tested and the number of people who had tested positive (confirmed cases) every day. Based on these data, the testing-positive rate was calculated. For country i, its testing-positive rate curve reaches the peak Pi on date Ti. We let \(V_{i}^{t}\) be the testing volume of country i on date t. We calculated the average of \(V_{i}^{t}\) for t<Ti and obtained the average testing volume of country i, denoted as Vi. The testing volumes after date Ti were not considered because they do not contribute to the peak of the positive rate curve. After the calculation process described above, we obtained a pair of values (Vi, Pi) for each country. At the same time, based on our proposed model, we obtained the peak of the testing-positive rate curve under different testing volumes. In the context of COVID-19, we set the basic reproduction number R0=2.6 [43]. The contact-tracking testing preference α was set to 1, indicating that individuals in close contact with confirmed cases will be tested with high priority; this has been adopted by most countries as their testing strategy. The calculation process for the simulation data was the same as that for the real data above, and we obtained a pair of values (vi, pi) for each simulation. If the (vi, pi) curve is consistent with the (Vi, Pi) curve, the proposed model is validated. In this study, Barabasi-Albert (BA) scale-free networks were generated and used to describe the contact structure of the population in real life [38]. A series of epidemic spread simulations were conducted on these networks. All the results were averaged over 1000 simulations.
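Before turning to the results, the testing step described above can be summarized in a short sketch (Python, purely for illustration; the function and variable names are assumptions and do not come from the authors' implementation):

```python
# Illustrative sketch of the testing step described in the Methods section.
# Function and variable names are assumptions, not the actual implementation.
import random

def daily_testing_volume(t, T, V_inc, V_limit):
    """V = min(V_inc * (t - T), V_limit); no tests before the start time T."""
    if t < T:
        return 0
    return min(V_inc * (t - T), V_limit)

def select_tested(t, T, V_inc, V_limit, alpha, close_contacts, eligible):
    """Choose who is tested at time step t.

    close_contacts : list of untested individuals in close contact with confirmed cases
    eligible       : list of individuals not tested negative within the last two
                     incubation periods (close contacts included)
    alpha          : testing preference; 1 = contact-tracking testing, 0 = random testing
    """
    V = int(daily_testing_volume(t, T, V_inc, V_limit))
    M = len(close_contacts)
    if alpha * M <= V:
        # Test a fraction alpha of the close contacts first, then fill the
        # remaining capacity with random tests in the eligible population.
        tested = random.sample(close_contacts, int(alpha * M))
        rest = [n for n in eligible if n not in set(tested)]
        tested += random.sample(rest, min(V - len(tested), len(rest)))
    else:
        # Capacity is exhausted by the close contacts alone.
        tested = random.sample(close_contacts, V)
    return tested

# Infectious individuals (Ia or Is) among `tested` would test positive and be
# quarantined; individuals testing negative are exempt from retesting for two
# incubation periods (that bookkeeping is omitted from this sketch).
```

With α=1 the close contacts absorb the testing capacity first (contact-tracking testing); with α=0 every test is drawn at random from the eligible population (random testing).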
We first investigated the impact of the daily testing volume and the testing start time on the epidemic transmission. Two indicators were considered: the peak value of infections, vp, and the time when the peak arrives, tp, because these two indicators are of the most concern to governments in their response to epidemics. As Fig. 2a shows, the greater the daily testing volume and the earlier the testing started, the lower the infection peak. To make vp less than 0.5%, the daily testing volume had to be at least 0.02 and testing had to start within 70 time steps (region I in Fig. 2a). As Fig. 2b shows, tp first increased and then decreased as the daily testing volume grew. This can be explained as follows. Increasing the daily testing volume can suppress the spread of infectious diseases and delay the outbreak. However, if the testing volume continues to increase, the infectious disease can be controlled to a great extent and will end early because almost all infections are identified and quarantined, leading to a smaller tp. Moreover, tp reached the maximum when the daily testing volume was between 0.01 and 0.04, and testing started within 25 time steps (region I in Fig. 2b). Further, the larger the tp, the more time there was expected to prepare for the outbreak, which can be very meaningful in controlling epidemics. The findings indicated that the greatest impact of testing on the spread of infectious diseases lies in flattening the infection curve, delaying the arrival of the outbreak, or ending epidemics early. It is recommended that the government start a wide range of tests as soon as possible to suppress epidemic transmission. The impact of testing volume V and testing start time T on epidemic transmission. a shows the impact on the infection peak. The red and blue color refer to high and low peak values, respectively. b shows the impact on the arrival time of infection peak. The blue color means that the epidemic breaks out very early, while the red color means the opposite. In region I of (a), the peak values were smaller than 0.005. In region I of (b), the peak times were larger than 130 time steps. Starting testing early and increasing daily testing volume could suppress the epidemic transmission In real life, the daily testing volume will gradually increase as understanding of the epidemic deepens. Therefore, we studied the impact of changes in the daily testing volume on epidemic transmission. The impact of Vinc and Vlimit is shown in Fig. 3. As Vlimit increased, the infection scale decreased significantly. However, the infection scale was hardly changed with the increase of Vinc, indicating that in terms of controlling infectious diseases, it is more important to break through the limitation of daily testing volume. The solid line in Fig. 3 represents the contour line where the infection scale is 5%, which required the upper bound of daily testing volume to reach at least 5%. The results showed that increasing the upper limit of daily testing volume was essential to control epidemics, which requires the government to invest enough support resources. The impact of changes in daily testing volume on infection scale. The red color means the large infection scale, while the blue color means the opposite. 
Breaking through the limitations of daily testing volume could greatly suppress the epidemic transmission, but raising the increase speed of the daily testing volume hardly changes the infection scale. We then investigated the impact of testing preference, α, on the epidemic transmission, which is shown in Fig. 4. When the testing start time T and the daily testing volume V were fixed, the larger the α, the smaller the final infection scale, which indicated that the higher the testing priority for individuals in contact with confirmed cases, the greater the control we can have over infectious diseases. The five curves in Fig. 4 could be divided into two groups according to the values of T and V: Group A included the solid square, solid circle, and solid triangle curves, and Group B included the hollow, semi-solid, and solid triangle curves. From Groups A and B, we can see that the earlier testing started and the larger the daily testing volume, the smaller the infection scale. However, comparing Groups A and B, it was found that the testing volume V had a greater impact on the curve, indicating that the testing volume plays a greater role in controlling the spread of infectious diseases than the testing start time. The findings suggested that governments adopt a contact-tracking testing strategy because contact-tracking testing could effectively suppress the spread of infectious diseases. The impact of testing preference on epidemic transmission. Square, circle and triangle curves were obtained under T=30 (Group A) and solid, semi-solid and hollow triangle curves were obtained under V=0.06 (Group B). Priority testing for individuals in contact with confirmed cases can suppress the epidemic transmission. In order to control infectious diseases merely through testing (S0), a huge daily testing volume was required (see Fig. 2). Assuming that a city has a population of 10 million, a daily testing volume of 5% means that 500,000 individuals need to be tested every day, which is difficult to implement. In order to reduce the testing volume while still achieving the goal of controlling infectious diseases, we introduced other control measures such as wearing masks and social distancing. According to references [45–47], we assumed that social distancing could reduce individuals' infection probability by 30% (S30). Some scholars revealed that wearing masks had limited effects on epidemic transmission because masks cannot filter submicron-sized airborne particles [45, 48]. However, some studies also showed that wearing masks could prevent the transmission of coronaviruses [49–52]. Considering the debate on the effectiveness of wearing masks, we assumed that wearing masks could reduce individuals' infection probability by 10% (S10). As Fig. 5 shows, even if the infection probability was reduced by only 10%, the infection scale was greatly reduced. When the infection probability was reduced by 30%, the infection scale was less than 2%. In the inset of Fig. 5, the three different scenarios are compared in detail. To control the infection scale below 5%, if no other measures were taken, the daily testing volume had to reach 5.1%. However, if other measures were taken to reduce the infection probability by 10%, the daily testing volume was reduced by more than 40% and only had to reach 3%. Once other measures were taken to reduce the infection probability by 30%, the infection scale was about 1% even if the daily testing volume was 1%. The results indicated that comprehensive measures performed better than a single measure.
Other measures can greatly reduce the testing volume required to control infectious diseases, relieving the medical resource pressure during epidemic outbreaks. The effect of testing on epidemic transmission under different scenarios. S0 means that no other measures were taken except testing. S10 and S 30 indicate the scenarios where other measures were taken to reduce individuals' infection probability by 10% and 30%, respectively. Combined with other measures such as wearing masks and social distancing, the daily testing volume could be significantly reduced while the epidemic will still be controlled We further explored how testing affects epidemic transmission when the infectiousness of the epidemic changes. With a different basic reproductive number, R0, and daily testing volume, V, a series of simulations were conducted. The results under scenario S0 and S10 are shown in Fig. 6a and b respectively. S10 means that other measures were adopted to reduce individuals' infection rate by 10%, and S0 indicates that only testing was adopted. We found that regardless of scenarios S0 and S10, the infection scale always increased with R0 and decreased with the daily testing volume. The solid line in Fig. 6 is the contour line where the infection scale is 5%, which means the change of minimum daily testing volume required to keep the infection scale below 5%. It can be seen that regardless of whether other measures were taken, the required daily testing volume almost increased linearly as the basic reproductive number grew. However, in scenario S0, when R0 was relatively large (R0>3.6), and the required daily testing volume increased sharply (see Fig. 6a), indicating that when the infectiousness of the epidemic is strong, the daily testing volume required to control the epidemic will be extremely large if only the testing measure is taken. Comparing Fig. 6a and b, we also concluded that the required daily testing volume will be greatly reduced if other measures are taken at the same time. The effect of basic reproductive number R0 and testing on infection scale under different scenarios. The results of scenario S0 where only testing measure was adopted are shown in (a), and (b) describes the results of scenario S10 where other measures were implemented to reduce individuals' infection rate by 10%. The solid line is the contour line where the infection scale is 5%. The daily testing volume required to control epidemics increased almost linearly as R0, but when other measures were adopted, the required testing volume decreased Aiming to study whether the network scale has an impact on the results, we conducted a series of simulations on different networks. As Fig. 7 shows, although the number of nodes in the network is different, the trend of the infection scale with the daily testing volume was almost the same, which indicated that our results are useful for understanding the epidemic transmission process on a larger scale even though they were obtained in a small network. The impact of network scale. The square, circle and triangle curves represent the simulation results on networks with 5000, 8000, and 10000 nodes, respectively. Even if the network scale was different, the trend of the infection scale with the daily testing volume was almost the same Finally, we compared the simulation data with real data to validate our model, as shown in Fig. 8. We fitted the simulation data as shown in the red line. In Fig. 
8, ISO country codes were used to mark the points of some countries where a large number of confirmed cases have been reported, such as the United States, France, Italy, and South Africa. The real data were consistent with the simulation data. Most of the points representing different countries fall near the simulation data. In region I, the errors between the real data and the simulation data were large. The number of countries falling in region I was 28, which accounts for less than 30% of all countries (95) in the figure. The countries in region I include Bangladesh, Pakistan, the Philippines, Nigeria, Kenya, Myanmar, Thailand, and Morocco. The x-axis values of the points in region I are relatively small (less than 0.5), indicating that only a small number of people can be tested every day in these countries. Therefore, because of the small sample size, estimates of the actual infection scale in the population obtained from the testing data will be biased, leading to errors between the simulation data and real data in region I. Another explanation is that the true infection rates in these countries are low, which explains the low testing-positive rates. In this case, the basic reproduction number R0 is less than 1 and epidemic outbreaks do not occur in these countries. However, in order to ensure that the infectious disease can spread on the networks, we set R0 to 2.6 in the simulations. To apply our model to these countries, we would need to adjust the model parameters. In general, the data we obtained through the proposed model were consistent with the real data, which indicates that the proposed model is reliable, especially for countries reaching pandemic levels of infection. The simulation data versus real data. A hollow square point (real data) indicates one country, representing the average testing volume (x-axis) and the peak of the testing-positive rate curve (y-axis). The red circle points show the peak of the positive rate curve under different testing volumes in the simulations. We set R0=2.6 and α=1. In response to epidemics, different testing strategies may be adopted by governments, such as random testing, contact-tracking testing, or a combination of the two methods. Moreover, as the understanding of epidemics deepens, the daily testing volume will gradually increase. However, considering the limited medical staff and their working hours, there is an upper bound to the daily testing volume. Therefore, in this study, an epidemic transmission model combined with a testing mechanism was proposed to study the role of testing in epidemic control. The combined model incorporates different testing methods as well as the increase speed and upper bound of the daily testing volume. Through a series of simulations, we found that testing could inhibit the spread of infectious diseases. In addition, priority testing for individuals in close contact with confirmed cases could enhance the effect of testing on infectious diseases. However, in order to control the epidemic (i.e., to keep the infection scale below 5%), the daily testing volume had to reach 5.1%. When the urban population is relatively large, 5.1% means a huge amount of testing every day. Our results were consistent with previous studies that concluded that only large-scale testing can effectively control epidemics [3, 30]. Fortunately, effective algorithms such as group testing have been proposed by other scholars [53–55], and these make it possible to greatly increase the daily testing volume.
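As a rough illustration of why such pooling algorithms help, consider the classical Dorfman two-stage scheme (used here only as an example of the group-testing idea cited above, not as the algorithm of refs. [53–55], and under the simplifying assumptions of independent infections with prevalence p and perfect tests): samples are tested in pools of size k, and only members of positive pools are retested individually, so the expected number of tests per person is 1/k + 1 − (1 − p)^k.

```python
# Expected tests per person under Dorfman two-stage pooled testing, assuming
# independent infections with prevalence p and perfect tests (a sketch only).
def dorfman_tests_per_person(p, k):
    """Pool k samples; retest each member of a positive pool individually."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

for p in (0.001, 0.01, 0.05):
    best_k = min(range(2, 101), key=lambda k: dorfman_tests_per_person(p, k))
    print(f"prevalence {p:.3f}: best pool size {best_k}, "
          f"{dorfman_tests_per_person(p, best_k):.3f} tests per person")
```

At low prevalence this requires several times fewer tests than individual testing, which is one way the effective upper bound on the daily testing volume can be raised for a fixed laboratory capacity.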
We also found that when other measures such as wearing masks and social distancing were adopted, the daily testing volume required was greatly reduced. Assuming that other measures could only reduce individuals' infection probability by 10%, the daily testing volume required was reduced by more than 40%, which further emphasizes the importance of taking comprehensive measures in response to epidemics. We conducted simulations on networks with different scales and obtained the same results, which indicates that our results are also meaningful for epidemic control on a large scale. In this study, we focused on the impact of testing on the spread of infectious diseases. Therefore, the impact of testing specificity was not considered. How an infected individual who has tested negative affects the spread of infectious diseases is worthy of further study. In this study, an epidemic transmission model combined with testing mechanisms was proposed to study the impact of testing volume, testing start time and testing preferences on the spread of infectious diseases. Through extensive numerical simulations, we made the following observations: The infection peak decreased with an increase in the daily testing volume. Early testing could also reduce the infection peak. Increasing the upper bound of the daily testing volume could greatly reduce the infection scale, but the growth speed of the daily testing volume hardly affected the infection scale. The higher the priority given to testing individuals in close contact with confirmed cases, the smaller the infection scale; however, when the daily testing volume was large, testing preferences had little impact on the infection scale. When testing was combined with other measures in response to epidemics, the daily testing volume required was reduced by more than 40% even if the other measures could only reduce the infection probability by 10%. In addition, the daily testing volume required increased almost linearly with the basic reproduction number R0. The scale of the network had little effect on the results: although the number of nodes in the networks differed, the trend of the infection scale with the daily testing volume was basically the same. The above findings indicate that testing can reduce the infection peak and delay the outbreak of epidemics. This is very important for governments dealing with epidemics because it means that there is more time to prepare medical resources. Testing has become one of the most effective measures to deal with infectious diseases. We also provided some suggestions for dealing with epidemics. It is important to increase the daily testing volume because a larger testing volume means that more infected people can be identified and then treated, thereby reducing the infection scale and saving more lives. However, in response to the COVID-19 pandemic, some countries are not able to implement large-scale testing. In this case, international cooperation is important in increasing the testing volume and controlling the epidemic, especially in underdeveloped countries. Starting testing as early as possible is another way to suppress epidemic transmission. In addition, comprehensive measures can greatly reduce the daily testing volume required, and it is therefore recommended that testing be combined with measures such as wearing masks and social distancing. Our proposed model was also validated by COVID-19 testing data.
In summary, our research contributes to understanding the role of testing in controlling epidemics and provides useful suggestions for governments and individuals in response to infectious diseases. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. SEIR model: Susceptible-exposed-infected-recovered model RT: Contact-tracking testing World Health Organization. WHO coronavirus disease (COVID-19) dashboard. 2020. https://covid19.who.int/. Accessed 28 Aug 2020. International Monetary Fund. World Economic Outlook Update: A crisis like no other, an uncertain recovery. 2020. https://www.imf.org/en/publications/weo. Accessed 28 Aug 2020. Grassly N, Pons Salort M, Parker E, et al.Report 16: Role of testing in covid-19 control.Imperial College London; 2020. https://doi.org/10.25561/78439. Reid S, Reid C, Vermund S. Antiretroviral therapy in sub-saharan africa: adherence lessons from tuberculosis and leprosy. Int J STD AIDS. 2004; 15(11):713–6. Mendis K, Rietveld A, Warsame M, Bosman A, Greenwood B, Wernsdorfer WH. From malaria control to eradication: The who perspective. Trop Med Int Health. 2009; 14(7):802–9. Vermund SH, Hayes RJ. Combination prevention: new hope for stopping the epidemic. Curr HIV/AIDS Rep. 2013; 10(2):169–86. Sridhar S, To KK, Chan JF, Lau SK, Woo PC, Yuen K-Y. A systematic approach to novel virus discovery in emerging infectious disease outbreaks. J Mol Diagn. 2015; 17(3):230–41. Chan JF, Sridhar S, Yip CC, Lau SK, Woo PC. The role of laboratory diagnostics in emerging viral infections: the example of the middle east respiratory syndrome epidemic. J Microbiol. 2017; 55(3):172–82. Perkins MD, Dye C, Balasegaram M, Bréchot C, Mombouli J-V, Røttingen J-A, Tanner M, Boehme CC. Diagnostic preparedness for infectious disease outbreaks. Lancet. 2017; 390(10108):2211–4. Salathé M, Althaus CL, Neher R, Stringhini S, Hodcroft E, Fellay J, Zwahlen M, Senti G, Battegay M, Wilder-Smith A, et al. Covid-19 epidemic in switzerland: on the importance of testing, contact tracing and isolation. Swiss Med Wkly. 2020; 150(11-12):20225. Vermund SH. Control of hiv epidemic: improve access to testing and art. Lancet HIV. 2017; 4(12):533–4. Cohen MS, Chen YQ, McCauley M, Gamble T, Hosseinipour MC, Kumarasamy N, Hakim JG, Kumwenda J, Grinsztejn B, Pilotto JH, et al. Antiretroviral therapy for the prevention of hiv-1 transmission. N Engl J Med. 2016; 375(9):830–9. Grinsztejn B, Hosseinipour MC, Ribaudo HJ, Swindells S, Eron J, Chen YQ, Wang L, Ou S-S, Anderson M, McCauley M, et al. Effects of early versus delayed initiation of antiretroviral treatment on clinical outcomes of hiv-1 infection: results from the phase 3 hptn 052 randomised controlled trial. Lancet Infect Dis. 2014; 14(4):281–90. Rydevik G, Innocent GT, Marion G, Davidson RS, White PC, Billinis C, Barrow P, Mertens PP, Gavier-Widén D, Hutchings MR. Using combined diagnostic test results to hindcast trends of infection from cross-sectional data. PLoS Comput Biol. 2016; 12(7):1004901. Hayes RJ, Donnell D, Floyd S, Mandla N, Bwalya J, Sabapathy K, Yang B, Phiri M, Schaap A, Eshleman SH, et al. Effect of universal testing and treatment on hiv incidence—hptn 071 (popart). N Engl J Med. 2019; 381(3):207–18. Abdool Karim SS. Hiv-1 epidemic control — insights from test-and-treat trials. N Engl J Med. 2019; 381(3):286–8. Iwuji CC, Orne-Gliemann J, Larmarange J, Balestre E, Thiebaut R, Tanser F, Okesola N, Makowa T, Dreyer J, Herbst K, et al. 
Universal test and treat and the hiv epidemic in rural south africa: a phase 4, open-label, community cluster randomised trial. Lancet HIV. 2018; 5(3):116–25. Ortblad KF, Baeten JM, Cherutich P, Wamicwe JN, Wasserheit JN. The arc of hiv epidemics in sub-saharan africa: new challenges with concentrating epidemics in the era of 90–90–90. Curr Opin HIV AIDS. 2019; 14(5):354–65. Lightfoot MA, Campbell CK, Moss N, Treves-Kagan S, Agnew E, Dufour M-SK, Scott H, Sa'id AM, Lippman SA. Using a social network strategy to distribute hiv self-test kits to african american and latino msm. J Acquir Immune Defic Syndr. 2018; 79(1):38–45. Mirandola M, Gios L, Joanna Davis R, Furegato M, Breveglieri M, Folch C, Staneková D, Nita I, Stehlíková D. Socio-demographic factors predicting hiv test seeking behaviour among msm in 6 eu cities. Eur J Public Health. 2017; 27(2):313–8. Mugabe D, Bhatt N, Carlucci JG, Gudo ES, Gong W, Sidat M, Moon TD. Self-reported non-receipt of hiv test results: A silent barrier to hiv epidemic control in mozambique. PLoS ONE. 2019; 14(10):0224102. Granich RM, Gilks CF, Dye C, De Cock KM, Williams BG. Universal voluntary hiv testing with immediate antiretroviral therapy as a strategy for elimination of hiv transmission: a mathematical model. Lancet. 2009; 373(9657):48–57. Di Giamberardino P, Compagnucci L, De Giorgi C, Iacoviello D. Modeling the effects of prevention and early diagnosis on hiv/aids infection diffusion. IEEE Trans Syst Man Cybern Syst. 2017. https://doi.org/10.1109/tsmc.2017.2749138. Ayoub HH, Awad SF, Abu-Raddad LJ. Use of routine hiv testing data for early detection of emerging hiv epidemics in high-risk subpopulations: a concept demonstration study. Infect Dis Model. 2018; 3:373–84. Lorch L, Trouleau W, Tsirtsis S, Szanto A, Schölkopf B, Gomez-Rodriguez M. A spatiotemporal epidemic model to quantify the effects of contact tracing, testing, and containment. 2020. arXiv preprint arXiv:2004.07641. Scarselli D, Budanur NB, Hof B. Catastrophic failure of outbreak containment: Limited testing causes discontinuity in epidemic transition. 2020. arXiv preprint arXiv:2006.08005. Ng WL. To lockdown? when to peak? will there be an end? a macroeconomic analysis on covid-19 epidemic in the united states. J Macroecon. 2020; 65:103230. Berger DW, Herkenhoff KF, Mongey S. An seir infectious disease model with testing and conditional quarantine. Technical report, National Bureau of Economic Research. 2020. Aronna MS, Guglielmi R, Moschen LM. A model for covid-19 with isolation, quarantine and testing as control measures. 2020. arXiv preprint arXiv:2005.07661. Kolumbus Y, Nisan N. On the effectiveness of tracking and testing in seir models. 2020. arXiv preprint arXiv:2007.06291. Verma V, et al. Study of lockdown/testing mitigation strategies on stochastic sir model and its comparison with south korea, germany and new york data. 2020. arXiv preprint arXiv:2006.14373. Omori R, Mizumoto K, Chowell G. Changes in testing rates could mask the novel coronavirus disease (covid-19) growth rate. Int J Infect Dis. 2020. https://doi.org/10.1016/j.ijid.2020.04.021. Villela DA. Imperfect testing of individuals for infectious diseases: Mathematical model and analysis. Commun Nonlinear Sci Numer Simul. 2017; 46:153–60. Burstyn I, Goldstein ND, Gustafson P. It can be dangerous to take epidemic curves of covid-19 at face value. Can J Public Health. 2020; 111(3):397–400. Pastor-Satorras R, Castellano C, Van Mieghem P, Vespignani A. Epidemic processes in complex networks. Rev Mod Phys. 2015; 87(3):925. 
Catanzaro M, Boguná M, Pastor-Satorras R. Generation of uncorrelated random scale-free networks. Phys Rev E. 2005; 71(2):027103. Holme P, Kim BJ. Growing scale-free networks with tunable clustering. Phys Rev E. 2002; 65(2):026107. Barabási A-L, Albert R. Emergence of scaling in random networks. Science. 1999; 286(5439):509–12. Colizza V, Barrat A, Barthelemy M, Valleron A-J, Vespignani A. Modeling the worldwide spread of pandemic influenza: baseline case and containment interventions. PLoS Med. 2007; 4(1):13. Balcan D, Gonçalves B, Hu H, Ramasco JJ, Colizza V, Vespignani A. Modeling the spatial spread of infectious diseases: The global epidemic and mobility computational model. J Comput Sci. 2010; 1(3):132–45. ECDC. Guidelines for COVID-19 testing and quarantine of air travellers – Addendum to the Aviation Health Safety Protocol. 2020. https://www.ecdc.europa.eu/en/publications-data/guidelines-covid-19-testing-and-quarantine-air-travellers. Accessed 10 Dec 2020. CDC. Testing Strategy for Coronavirus (COVID-19) in High-Density Critical Infrastructure Workplaces after a COVID-19 Case Is Identified. 2020. https://www.cdc.gov/coronavirus/2019-ncov/community/worker-safety-support/hd-testing.html. Accessed 10 Dec 2020. Wu JT, Leung G, Leung K. Nowcasting and forecasting the potential domestic and international spread of the 2019-ncov outbreak originating in Wuhan, China: a modelling study. Lancet. 2020; 395(10225):689–97. Max Roser EO-O, Hannah Ritchie, Hasell J. Coronavirus pandemic (covid-19). Our World Data. 2020. https://ourworldindata.org/coronavirus. Accessed 10 Dec 2020. Pratomo H. From social distancing to physical distancing: A challenge forevaluating public health intervention against covid-19. Kesmas. 2020; 15(2):60–3. Ahmed F, Zviedrite N, Uzicanin A. Effectiveness of workplace social distancing measures in reducing influenza transmission: a systematic review. BMC Public Health. 2018; 18(1):518. Weng W, Ni S. Evaluation of containment and mitigation strategies for an influenza a pandemic in china. Simulation. 2015; 91(5):407–16. Migliori GB, Nardell E, Yedilbayev A, D'Ambrosio L, Centis R, Tadolini M, Van Den Boom M, Ehsani S, Sotgiu G, Dara M. Reducing tuberculosis transmission: a consensus document from the world health organization regional office for Europe. Eur Respir J. 2019; 53(6):1900391. Xiao J, Shiu EY, Gao H, Wong JY, Fong MW, Ryu S, Cowling BJ. Nonpharmaceutical measures for pandemic influenza in nonhealthcare settings—personal protective and environmental measures. Emerg Infect Dis. 2020; 26(5):967. Cheng KK, Lam TH, Leung CC. Wearing face masks in the community during the covid-19 pandemic: altruism and solidarity. Lancet. 2020. https://doi.org/10.1016/s0140-6736(20)30918-1. Leung NH, Chu DK, Shiu EY, Chan K-H, McDevitt JJ, Hau BJ, Yen H-L, Li Y, Ip DK, Peiris JM, et al. Respiratory virus shedding in exhaled breath and efficacy of face masks. Nat Med. 2020; 26(5):676–80. Esposito S, Principi N, Leung CC, Migliori GB. Universal use of face masks for success against covid-19: evidence and implications for prevention policies. Eur Respir J. 2020. https://doi.org/10.1183/13993003.01260-2020. Brault V, Mallein B, Rupprecht J-F. Group testing as a strategy for the epidemiologic monitoring of covid-19. 2020. arXiv preprint arXiv:2005.06776. Gebhard O, Hahn-Klimroth M, Parczyk O, Penschuck M, Rolvien M. Optimal group testing under real world restrictions. 2020. arXiv preprint arXiv:2004.11860. Kadri U. 
Variation of positiveness to enhance testing of specimens during an epidemic. 2020. arXiv preprint arXiv:2004.11753. The authors deeply appreciate support for this paper by the National Key R&D Program of China (Grant No. 2018YFF0301000) and the National Natural Science Foundation of China (Grant No. 71673161, 71790613). Institute of Public Safety Research, Tsinghua University, Beijing, China Yapeng Cui, Shunjiang Ni & Shifei Shen Department of Engineering Physics, Tsinghua University, Beijing, China Beijing Key Laboratory of City Integrated Emergency Response Science, Beijing, China Yapeng Cui Shunjiang Ni Shifei Shen YC, SN and SS designed this study and wrote the manuscript. YC and SN conducted data collection and analysis, and interpretation of results. All authors agree to publish the article. The authors read and approved the final manuscript. Correspondence to Shunjiang Ni. Cui, Y., Ni, S. & Shen, S. A network-based model to explore the role of testing in the epidemiological control of the COVID-19 pandemic. BMC Infect Dis 21, 58 (2021). https://doi.org/10.1186/s12879-020-05750-9 Infectious disease control Complex networks Healthcare-associated infection control
Fourier transform of a signal (transformée de Fourier d'un signal). Figure 1.3: Approximation of a periodic triangular signal with an increasing number of terms of its Fourier series.
http://www.thesaurus.com/browse/transform. In fact, Fourier wrote [12, p. 454]: …et la valeur de u satisfera nécessairement à l'équation [...and the value of u will necessarily satisfy the equation]. Signal processing: the Fourier transform, the discrete Fourier transform and the discrete cosine transform (Marc Chaumont, 20 January 2008). Introduction. Fourier analysis of a sound signal will allow us to illustrate a number of useful properties, for example the relation between temporal width and spectral width, which will be studied in more depth in the tutorial sessions. Hence, an algorithmic scheme for solving a PDE defined on a given domain by means of an integral transform would be transform-solve-invert [1]. Specify the dim argument to use fft along the rows of X, that is, for each signal. An example is a book by the Bavarian mathematician Martin Ohm (1792–1872), published four years later in Nürenberg [28, p. 358]. The origin and history of the former have been described in a series of articles by Deakin [4]–[7]. as in the vector case. The Society for the Diffusion of Knowledge; By a correspondent, "Note on a passage in Fourier's heat"; A. de Morgan, "On divergent series and various points connected with them"; P. S. Laplace, "Sur les intégrales définies des équations à différences partielles"; S. Annaratone, "Les premières démonstrations de la formule intégrale de Fourier"; N. Wiener, "Hermitian polynomials and Fourier analysis"; L. R. Soares, H. M. Oliveira, R. J. S. Cintra, and R. M. Campello de Souza, "Fourier eigenfunctions, uncertainty Gabor principle and isoresolution wavelets"; P. P. Vaidyanathan, "Eigenfunctions of the Fourier transform"; T. P. Horikis and M. S. McCallum, "Self-Fourier functions and the self-Fourier operators"; M. S. McCallum and T. P. Horikis, "Selftransform operators". Compare cosine waves in the time domain and the frequency domain.
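Several of the fragments above come from documentation of MATLAB's fft function ("Specify the dim argument to use fft along the rows of X…", "Compare cosine waves in the time domain and the frequency domain"), whose standard example computes a single-sided amplitude spectrum of a two-tone signal. A roughly equivalent sketch in Python/NumPy is given below; the sampling rate, tone frequencies and variable names are illustrative assumptions rather than part of the quoted documentation.

```python
import numpy as np

fs = 1000.0                          # sampling frequency in Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)    # one second of samples
x = 0.7 * np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 120 * t)

n = t.size
X = np.fft.fft(x)                    # double-sided complex spectrum
p2 = np.abs(X) / n                   # double-sided amplitude spectrum
p1 = p2[: n // 2 + 1].copy()         # single-sided amplitude spectrum
p1[1:-1] *= 2.0                      # fold negative frequencies onto positive ones
f = fs * np.arange(n // 2 + 1) / n   # frequency axis in Hz

# the two tones show up as peaks at 50 Hz and 120 Hz
print(sorted(f[np.argsort(p1)[-2:]]))
```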
A slightly different symmetry is observed for the sampled real signals below and for the spectrum of the blue signal (something very similar would be obtained for the red signal). Influence of the sampling of a pulse on its Fourier transform. This is why another way of representing a signal is to provide the histogram of its Fourier coefficients: one obtains what is called the spectral representation, or Fourier spectrum, of \(f\). Exercise 2: effect of the observation window of a signal. Let the function \(f_2\) be defined as follows: \(f_2(x) = a\) for \(-b/2 \le x \le b/2\) and \(f_2(x) = 0\) elsewhere. 2.1. As an introductory note to this section, recall that the word eigenvalue comes from the German eigenwert, which means proper or characteristic value, while eigenfunction is from eigenfunktion, meaning proper or characteristic function. Fourier was elected to the Académie des Sciences in 1817. simulation software uses the library that MATLAB uses for FFT algorithms. Such representation of the function f, called the Fourier theorem, was first derived by the French mathematician Augustin-Louis Cauchy (1789–1857) sometime between 1822 and 1823 and published in 1827 [14, p. 302]. Jean Baptiste returned to France in 1801 and resumed his post as professor of analysis at the École Polytechnique, but at Napoleon's request, he had to go to Grenoble to take an administrative position. Table 1: Three Forward and Inverse FT Definitions (1997, Jan.). However, this approach is not always mathematically satisfactory. Introducing the Fourier Transform: what about the Fourier transform of the constant time function 1, which must be understood as the function equal to 1 for t ranging from minus infinity to plus infinity? Luckily, the difficult situation did not last long, and Fourier was released, perhaps because of his teachers' influence. Finally, a curious and incorrect observation concerning (12) was made by the Italian mathematician and historian Umberto Bottazzini (1947) [47, pp. For more information, see Ne10 Conditions for MATLAB Functions to Support ARM Cortex-A Fourier is buried at Père Lachaise Cemetery in Paris; the tomb shows an Egyptian motif to reflect his position as secretary of the Cairo Institute. default, the code generator produces code for FFT algorithms instead of For large prime-length vector FFTs, out-of-memory errors The graph of the modulus of the Fourier transform of a real signal is thus even. 2. padded with trailing zeros to length n. If X is a vector and the length transform. In 1780 he went to the École Royale Militaire of Auxerre (150 km southeast of Paris, today over the highway A6). The definitive and modern meaning of the term came from two mathematicians, the American Norbert Wiener (1894–1964) and the English Raymond Edward Alan Christopher Paley (1907–1933), and can be dated as late as 1933 or early as 1934 [23, p. 2]. In fact, depending on the application and the authors, three definitions are used [15, p. 7]. There are many transforms, among which the Laplace and Fourier are perhaps the most traditional and common in the physical sciences [2].
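Among the works cited above are papers on "Fourier eigenfunctions" and "self-Fourier functions", that is, functions that the Fourier transform maps onto a multiple of themselves; the Gaussian \(e^{-t^{2}/2}\) is the textbook example. The short numerical check below is a sketch using NumPy: it approximates the unitary, angular-frequency Fourier transform on a discrete grid and compares the result with the same Gaussian. Grid extent and step size are arbitrary choices made only for this illustration.

```python
import numpy as np

dt = 0.01
t = np.arange(-40.0, 40.0, dt)              # wide, symmetric time grid
f = np.exp(-t**2 / 2.0)                     # Gaussian test function

# discrete approximation of F(w) = (1/sqrt(2*pi)) * integral f(t) exp(-i w t) dt
F = dt / np.sqrt(2.0 * np.pi) * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f)))
w = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, dt))

# the transform reproduces the Gaussian itself: an eigenfunction with eigenvalue 1
print(np.max(np.abs(F.real - np.exp(-w**2 / 2.0))))  # close to machine precision
print(np.max(np.abs(F.imag)))                         # numerically zero
```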
It is important to underline that in order for λ to be an eigenvalue, it is essential to find nonzero solutions to the equation. Considérons un échantillonnage de la fonction u sur l'intervalle [0,T], comportant N points et défini par : If X is an empty 0-by-0 matrix, then fft(X) returns Hence, the citation by Bochner is not true. processors. [accordion title="Table 1. The foundations of the FT theory appeared for first time in Fourier's work submitted to the Institute de France. to the size of X. This is also reflected in Schlömilch's book [27]. merci d'avance Dernière modification par narakphysics ; 13/05/2012 à 16h11. However, it is important to mention that most of the results associated with the FT are extensions or analogies of those corresponding to the Fourier series. Pour simplifier l'étude des effets du « fenêtrage » de la fonction f sur sa transformée de Fourier, on … while the size of all other dimensions remains as in X. Le graphe du module de la transformée de Fourier d'un signal réel est ainsi pair. gonométrique correspondante est la transformation de Fourier. rows of X and returns the Fourier transform of Transformees de Fourier des signaux temps´ continu : Cours C 3.1 Signaux periodiques/signaux´ a dur` ´ee limit ´ee Un signal `a dur ´ee limit ´ee est nul en dehors d'un certain intervalle : t62[t 0;t 0 + T] )s(t) = 0 On appelle dur´ee d'un signal la longueur de l'intervalle en dehors duquel ce signal … Transform length, specified as [] or a nonnegative Re : Fondamentale d'un signal Transformée de Fourier Merci pour la correction Dans la télécommunication par exemple ,même l'amplitude des harmonique est grande , on a l'intérêt à les éliminer si non il y aura le phénomène de dispersion!! Surprisingly, this method for derivation of the FT has not changed since it was first used by the French mathematician Jean Baptiste Joseph Fourier (1768–1830) in a manuscript submitted to the Institute of France in 1807 [10] and in a memoir deposited in the institute in 1811 [11]. Specify the parameters of a signal with a sampling frequency of 1kHz and a signal duration of 1 second. Bien entendu l'introduction d'un fenêtrage lors du calcul de la transformée de Fourier d'une fonction n'est pas sans conséquence sur l'expression de cette transformée de Fourier. In the third decade of the 20th century, the FT theory became a topic of research for many mathematicians and applied scientists and led to four of the most celebrated books: Bochner in 1932 [9], Wiener in 1933 [33] (which includes the results of [31], [32]), Paley and Wiener in 1934 [23], and Titchmarsh in 1937 [34]. If X is a matrix, then fft(X) treats Despite the increasing number of applications of the FT, there is not yet an agreement about the definition of the forward and inverse FT. During Fourier's eight remaining years in Paris, he resumed his mathematical researches, publishing a number of important articles. In the German literature, (12) appeared in a workbook about pure mathematics published in 1833 by Grunert [25]. In a more general sense, comparing the formulae in (3), these definitions may also be derived from the following expressions, which in part are defined by [16, p. 182]: Cette intégrale, qui contient une fonction arbitraire, n'était point connue lorsque nous avons entrepris nos recherches sur la théorie de la chaleur, qui ont été remises à l'Institut de France dans le mois de décembre 1807: Elle a été donnée par M. 
Laplace, dans un ouvrage qui fait partie du tome VI des Mémoires de l'école polytechnique; nous ne faisons que l'appliquer à la détermination du mouvement linéaire de la chaleur. corresponding eigenvector. Do you want to open this version instead? For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox). Calculate the double-sided spectrum and single-sided spectrum of each signal. Fourier went further and said that if the same rule is followed relative to the choice of sign, then. Chaque étudiant en physique et chaque élève ingénieur va passer des heures à les étudier. Au sens des fonctions, elle n'admet pas de transformée de Fourier parce que son intégrale de Fourier ne converge pas. Moreover, using a specific definition does not change the essence of the FT formulae, as in any case, properties are the same for any definition and the particular results are essentially equivalent. [The integrals we have obtained are not only general expressions that satisfy the DEs; they represent in a different way the natural effect, which is the object of the problem. Transformée de Fourier d'un signal échantillonné Description Informations; Intégrer/Partager; Description. In §57–60 (pp. The variable x is only affected by the symbol cosine.]. The way Fourier derived (1) and (2), or equivalently (3), has been similar since then. [accordion title="Table 2. These formulae imply that f admits a representation of the form Transformée de Fourier -2- Définition et Exemple 1 (Fonction Porte) - Duration: 9:00. In the same sense, later in 1923, Titchmarsh authored another article where the term appeared as the main title [22]. This integral which contains one arbitrary function was not known when we had undertaken our researches on the theory of heat, which were transmitted to the Institute of France in the month of December 1807: it has been given by M. Laplace, in a work which forms part of volume VIII of the Mémoires de l'École Polytechnique; we apply it simply to the determination of the linear movement of heat.]. Formulaires. From a linguistic point of view, transform and transformation have similar meanings, with synonyms, either as verb or noun, like "complete change"; "metamorphosis"; "alteration"; "transfiguration"; "change in form, in appearance, or in structure." Etymologically, the words derive from the 1300–1350 Middle English transformem and, in turn, from earlier Latin transformare [1]. In the frequency domain, plot the single-sided amplitude spectrum for each row in a single figure. 78–79]: In the Théorie analytique, Fourier went on to consider the problem of propagation of the heat in solid homogenous bodies, such as rings, spheres, cylinders, cubes, etc. In the case of the index k, it must necessarily be a positive or negative real number. transform of each column. Specific values of the three constant are given in Table 2. (As an addendum, it can be said that no birth or death dates are reported for Eagle, although it is known that he was a professor of mathematics at Manchester University.) In particular, in solidstate physics and crystallography the different choices of constants required that the definitions of the reciprocal lattices differ by a factor 2π, which caused disputes [18, p. 62]. Das Fouriersche Integral," in "Trigonometrische reihen und integrale (bis etwa 1850)," in, J. Définition Discussion suivante Discussion précédente. Use Epoxy To Coat Existing Countertops To Make Them Look Like Real Stone Step By Step Explained - Duration: 59:13. 
This article by Caola and those by Soares et al. However, when n has large prime factors, there is little or no speed difference. La transformée de Fourier d'un signal temporel peut s'exprimer en fonction de la pulsation ω= 2 π T =2πf T.F. Soit $`x(t)=\delta(t)`$ le signal défini par ```math \delta(t)=\left\{\begin{array}{cl}+\infty &\text{si }t=0\\ 0 & \text{ailleurs}\end{array}\right. This result was discovered exclusively by Fourier when the number n is even (a multiple of two or four), and it is described in his book [12, p. 543]. Restaurant Lounge 95, Déclaration Naissance Consulat France Maroc, Site Collège Latresne Camille Claudel, Autisme Adulte Agressivité, Samuel Le Bihan Et Sa Fille Angia, Tatoueur Ouverture Covid, Alugar Casa Férias Norte Portugal, Logement à Louer Au Mois, Disney Store Soldes, Dédouanement Moto Belgique, transformée de fourier d'un signal 2020
On the potential and challenges of laser-induced thermal acoustics for experimental investigation of macroscopic fluid phenomena Part of a collection: Applications of Laser and Imaging Technique to Fluid Mechanics. 20th International Symposium in Lisbon 2020 Christoph Steinhausen (ORCID: orcid.org/0000-0001-9213-6914)1, Valerie Gerber1, Andreas Preusche2, Bernhard Weigand1, Andreas Dreizler2 & Grazia Lamanna1 Experiments in Fluids volume 62, Article number: 2 (2021) Cite this article Mixing and evaporation processes play an important role in fluid injection and disintegration. Laser-induced thermal acoustics (LITA), also known as laser-induced grating spectroscopy (LIGS), is a promising four-wave mixing technique capable of acquiring speed of sound and transport properties of fluids. Since the signal intensity scales with pressure, LITA is effective in high-pressure environments. By analysing the frequency of LITA signals using a direct Fourier analysis, speed of sound data can be determined directly using only geometrical parameters of the optical arrangement; no equation of state or additional modelling is needed at this point. Furthermore, transport properties, like the acoustic damping rate and the thermal diffusivity, are acquired using an analytical expression for LITA signals with finite beam sizes. By combining both evaluations in one LITA signal, we can estimate mixing parameters, such as the mixture temperature and composition, using suitable models for the speed of sound and the acquired transport properties. Finally, direct measurements of the acoustic damping rate can provide important insights into the physics of supercritical fluid behaviour. Graphic Abstract Fluid injection, disintegration, and subsequent evaporation are of high importance for stable and efficient combustion. Especially for high pressures exceeding the critical value of the injected fluids, mixing and evaporation processes as well as fundamental changes in fluid behaviour are not yet fully understood. The latter have received increased attention in the past decade, as the recently published literature shows (Falgout et al. 2016; Müller et al. 2016; Baab et al. 2016, 2018; Crua et al. 2017). Since the main objectives are evaporation and disintegration processes of liquid fluids at pressures and temperatures either close to or exceeding their critical points, quantitative data for the validation of numerical simulations have recently become a research concern of increasing interest (Bork et al. 2017; Lamanna et al. 2018; Steinhausen et al. 2019; Stierle et al. 2020; Nomura et al. 2020; Lamanna et al. 2020; Qiao et al. 2020). Microscopic investigations by Santoro and Gorelli (2008), Simeoni et al. (2010), as well as Bencivenga et al. (2009) made it possible to distinguish various regions above the critical pressure, as depicted in Fig. 1. At supercritical pressures, the region between the critical isotherm and the Widom line, which is characterized by the maximum in specific isobaric heat capacity, is identified as liquid-like. Indeed, it preserves large densities and sound dispersion (Simeoni et al. 2010; Bencivenga et al. 2009), while exhibiting the molecular structure of a gas (Santoro and Gorelli 2008). In contrast, regions with supercritical temperatures right of the Widom line are gas-like, as propagation of sound waves at the adiabatic speed of sound is recovered (Simeoni et al. 2010).
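Figure 1 distinguishes these regions using the Widom line, defined in the text as the line of maxima in the specific isobaric heat capacity. Purely as an illustration of how such a line can be traced from reference property data, the sketch below scans supercritical isobars with the open-source CoolProp library, used here as a convenient stand-in for the NIST database of Lemmon et al. (2018); the choice of fluid, pressure range and grid resolution is arbitrary.

```python
import numpy as np
from CoolProp.CoolProp import PropsSI

fluid = "Nitrogen"                        # example fluid, not prescribed by the text
p_crit = PropsSI("Pcrit", fluid)          # critical pressure in Pa
t_crit = PropsSI("Tcrit", fluid)          # critical temperature in K

# Widom line: temperature of the cp maximum along each supercritical isobar
for p_r in np.linspace(1.05, 2.0, 5):     # reduced pressures above the critical point
    p = p_r * p_crit
    temps = np.linspace(1.001 * t_crit, 1.5 * t_crit, 400)
    cp = np.array([PropsSI("C", "T", float(T), "P", p, fluid) for T in temps])
    t_widom = temps[np.argmax(cp)]
    print(f"p_r = {p_r:.2f}:  T_Widom = {t_widom:.1f} K  (T_r = {t_widom / t_crit:.3f})")
```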
In this context, the area between the critical isotherm and the Widom line can be denoted as the supercritical region, because it exhibits dynamical and physical properties intermediate between gas and liquid states. The relevance of these microscopic findings for the dynamic behaviour of supercritical fluids at the macroscopic scale remains poorly understood to this day. In Sect. 2, it is shown how the acoustic damping rate makes it possible to disclose the interrelated nature of sound dispersion at microscopic and macroscopic scales. At this stage, it is important to point out that the current macroscopic description of supercritical states is mainly focused on the selection of accurate equations of state. The latter are capable of describing the continuous fluid transformation in terms of density changes and the singularities in terms of some physical properties (heat capacity, isothermal compressibility, etcetera) across the Widom line. This approach, however, may not be sufficient for a correct description of the dynamical behaviour of supercritical fluids, as currently suggested by the microscopic investigations. The previous consideration provides the motivation for the present work, where emphasis is placed on the measurement of speed of sound (\(c_{s}\)), thermal (\(D_{T}\)), and viscous (\(\varGamma\)) relaxation constants, identified as key parameters to enable macroscopic investigations of these different fluid regions. In addition, these macroscopic fluid properties, such as speed of sound, acoustic damping, and thermal diffusivity, enable a quantitative comparison of injection studies with analytical and numerical data, as has been shown for speed of sound measurements in high-pressure jets by Baab et al. (2018). The speed of sound data of the mixture was acquired using homodyne laser-induced thermal acoustics (LITA). A detailed description of the experimental setup used can be found in the work of Förster (2016). A comprehensive review on gas-phase diagnostics using laser-induced transient grating spectroscopy (LIGS) is provided by Stampanoni-Panariello et al. (2005b). Thermodynamic fluid states based on microscopic investigations; \(p_{r} = p/p_{c}\): reduced pressure; \(s_{r} = s /s_{c}\): reduced entropy; reduced properties are scaled with the properties related to the critical point; Widom line: line of maxima in specific isobaric heat capacity; thermodynamic data are taken from Lemmon et al. (2018). LIGS, LITA or similar techniques are mostly used to determine transport properties in quiescent environments, where high spatial resolution is not the focus of the investigation. However, LITA is indeed sensitive to small-scale processes, as the investigation of speed of sound data in multi-component jet mixing at high pressures by Baab et al. (2018) has shown. Kimura et al. (1995) measured transport properties of high-pressure fluids, namely carbon dioxide and trifluoromethane, using LIGS. The study focused mainly on the determination of thermal diffusion, mass diffusion, and sound propagation in the vicinity of the critical point. In the same research group, thermal diffusion and sound propagation of binary mixtures of carbon dioxide and a hexafluorophosphate were investigated by Demizu et al. (2008). Latzel and Dreier (2000) investigated heat conduction, speed of sound data, as well as virial coefficients of gaseous mixtures at pressures up to \(50 \ \mathrm {MPa}\) by analysing the acoustic oscillations and the long-term decay of a near-infrared LIGS signal.
Vibrational energy relaxation of azulene was studied in super-critical states by Kimura et al. (2005a) and for liquid solvents by Kimura et al. (2005b). An investigation of acoustic damping rates in pure gases was presented by Li et al. (2002) analysing the temporal behaviour of transient grating spectroscopy. The investigations included different gases at pressures up to \(25 \ \mathrm {atm}\) at room temperature. Li et al. (2002) compared these findings with classical acoustic theory and derived a linear pressure dependency for the measured acoustic damping rate. Li et al. (2005) later proposed a binary mixture model to determine the acoustic damping rate for binary atomic species. Note that all the previously reviewed studies measure transport properties using an optical arrangement with unfocused beams for grid excitation. This leads to a measurement volume with an order of magnitude \(\mathcal {O}\left( 10^{1} \ \mathrm {mm}\right)\) in diameter and \(\mathcal {O}\left( 10^{2} \ \mathrm {mm}\right)\) in length and hence a poor spatial resolution. To utilize LITA as a reliable tool for experimental investigation in jet disintegration or droplet evaporation studies, a high spatial resolution is imperative. Studies by Baab et al. (2016), Baab et al. (2018), and Förster et al. (2018) already showed the capability of acquiring quantitative speed of sound data in jet disintegration. Especially to be emphasised are the investigations by Baab et al. (2018), which demonstrated the potential of acquiring speed of sound data for multi-component jet mixing at high pressures in the near nozzle region. The purpose of this study is to present the calibration and validation processes needed for the extraction of speed of sound data, acoustic damping rates, as well as thermal diffusivities using LITA with a spatial resolution with an order of magnitude \(\mathcal {O}\left( 10^{-1} \ \mathrm {mm}\right)\) in diameter and \(\mathcal {O}\left( 10^{0} \ \mathrm {mm}\right)\) in length in a high-pressure and high-temperature environment for resonant and non-resonant fluids. Theoretical consideration on the relevance of laser-induced thermal acoustic in supercritical mixture studies The LITA (or LIGS) technique provides an excellent opportunity to measure independently and simultaneously speed of sound data and acoustic damping rates. The implications of these measurements are twofold. First, measuring acoustic damping rates allows to assess whether sound dispersion in supercritical fluids is significant and provides the possibility to indirectly measure bulk viscosities, which are mainly responsible for sound dispersion. Additionally, bulk viscosities will enable the improvement of models for the stress tensor and the kinetic energy dissipation in supercritical fluid flow simulations. Second, if both speed of sound and acoustic damping rate are measured, a set of independent equations can be derived to extract local mixing parameters, like temperature and composition of a binary mixture. The possibility to estimate mixing parameters using laser induced thermal acoustics was shown by Li et al. (2005) for binary mixtures of monoatomic species. Using transient grating spectroscopic Li et al. (2005) were able to derive the mole fraction of a Helium–Argon mixture. As we will later show in the post-processing section of this work (Sect. 
3.2.2), analysing the temporal evolution of a detected LITA signal enables the determination of three transport properties, namely the speed of sound \(c_{s}\), the acoustic damping rate \(\varGamma\), and, in case of a resonant fluid behaviour, the thermal diffusivity \(D_{T}\). Whereas thermal diffusivity and speed of sound are well-known transport properties and accessible using the NIST database by Lemmon et al. (2018), acoustic damping rates have to be modelled in more detail. Dissipation of a sound wave's energy is mainly caused by internal friction and heat conduction. The damping rate of an acoustic wave is, therefore, dependent on the viscosity and thermal conductivity (Li et al. 2002). Using the theoretical description by Hubschmid et al. (1995), the acoustic damping rate \(\varGamma\) depends on both shear viscosity \(\mu _{s}\) and bulk viscosity \(\mu _{v}\), and can be modelled as: $$\begin{aligned} \varGamma = \frac{1}{2 \varrho } \left[ \frac{4}{3} \ \mu _{s} + \mu _{v} + \left( \gamma -1\right) \ \frac{\kappa }{c_{p}} \right] , \end{aligned}$$ where \(\varrho\) is the fluid density, \(\gamma\) the specific heat ratio, \(\kappa\) the thermal conductivity, and \(c_{p}\) the specific isobaric heat capacity. At atmospheric conditions, bulk viscosities are negligible compared to shear viscosities; using this assumption, we can calculate the classical acoustic damping rate \(\varGamma _{c}\) as used by Li et al. (2002): $$\begin{aligned} \varGamma _{c} = \frac{1}{2 \varrho } \left[ \frac{4}{3} \ \mu _{s} + \left( \gamma -1\right) \ \frac{\kappa }{c_{p}} \right] . \end{aligned}$$ \(\varGamma _{c}\) predicts accurate damping rates for monoatomic substances at low pressures. For pressures up to \(2.5 \ \mathrm {MPa}\) at room temperature, Li et al. (2002) estimated the pressure dependence of the measured acoustic damping rate with respect to the classical solution. A linear dependence with a fluid-dependent slope was found. For nitrogen and argon, the measured acoustic damping rate is expressed in (3). Note that the unit of pressure used by Li et al. (2002) is atmospheres: $$\begin{aligned} \begin{aligned} \varGamma _{m,Ar} =&\varGamma _{c} \left( 1/30 \ p_{atm} + 1 \right) \\ \varGamma _{m,N2} =&\varGamma _{c} \left( 1/6 \ p_{atm} + 1 \right) . \end{aligned} \end{aligned}$$ As mentioned in the Introduction, for mixing processes, e.g., high-pressure turbulent jets, transport properties can be used to derive the local mixing state of macroscopic fluid phenomena. These mixing states are defined by the local temperature of the mixture \(T_{mix}\), the local mole fraction of the fluid \(x_{Fl,mix}\), and the overall pressure in the measurement chamber \(p_{ch}\). The local transport properties speed of sound \(c_{s,mix}\), acoustic damping rate \(\varGamma _{mix}\), and thermal diffusivity \(D_{T,mix}\) of the mixture can be expressed in the following way: $$\begin{aligned} \begin{aligned} c_{s,mix} \left( p_{ch},T_{mix},x_{Fl,mix}\right)&= c_{s,\mathrm {LITA}} \\ \varGamma _{mix} \left( p_{ch},T_{mix},x_{Fl,mix}\right)&= \varGamma _{\mathrm {LITA}} \\ D_{T,mix} \left( p_{ch},T_{mix},x_{Fl,mix}\right)&= D_{T,\mathrm {LITA}}. \end{aligned} \end{aligned}$$ Using a controlled environment where the pressure in the measurement chamber \(p_{ch}\) is known, the local mixing temperature \(T_{mix}\) and the local mole fraction of the fluid \(x_{Fl,mix}\) are the only unknown fluid properties of the investigated mixture.
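Since Eqs. (2) and (3) involve only standard transport properties, the classical damping rate and its empirical pressure correction can be evaluated directly from tabulated fluid data. The sketch below implements them in Python; the nitrogen property values at roughly ambient conditions are rounded textbook numbers inserted purely for illustration (in practice they would be taken from a property database such as the NIST data cited above).

```python
def classical_damping_rate(rho, mu_s, kappa, cp, gamma):
    """Classical acoustic damping rate, Eq. (2): bulk viscosity neglected."""
    return 0.5 / rho * (4.0 / 3.0 * mu_s + (gamma - 1.0) * kappa / cp)

def measured_damping_rate_n2(gamma_c, p_atm):
    """Empirical pressure correction for nitrogen after Li et al. (2002), Eq. (3)."""
    return gamma_c * (p_atm / 6.0 + 1.0)

# rough property values for nitrogen at about 300 K and 1 atm (illustration only)
rho = 1.15       # density, kg/m^3
mu_s = 1.78e-5   # shear viscosity, Pa s
kappa = 0.026    # thermal conductivity, W/(m K)
cp = 1041.0      # isobaric heat capacity, J/(kg K)
gamma = 1.4      # specific heat ratio

g_c = classical_damping_rate(rho, mu_s, kappa, cp, gamma)
print(f"classical damping rate:             {g_c:.3e} m^2/s")
print(f"with Li et al. correction at 1 atm: {measured_damping_rate_n2(g_c, 1.0):.3e} m^2/s")
```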
After careful validation in well-known binary gas–fluid mixtures, this should enable us to determine the desired mixing parameters when coupling them to non-ideal mixture data, e.g., Lemmon et al. (2018). By measuring LITA signals using an optical arrangement with focused beams, it would, therefore, be possible to determine the local mixing temperature and mole fraction simultaneously. This would enable us to study mixing and evaporation processes in high-pressure jet mixing or droplet evaporation in the vicinity of the critical point. To evaluate whether the phenomenon of sound dispersion in supercritical fluids is significant, we first have to understand the behaviour of the relaxation of an electrostriction grating. The latter can be assimilated to a damped harmonic oscillator, which admits a general solution of the following type: $$\begin{aligned} I\left( t\right) = B \exp \left\{ -\beta t - i \upsilon t \right\} + C \exp \left\{ -\beta t + i \upsilon t \right\} , \end{aligned}$$ where I denotes the signal intensity, t denotes the time, B and C are dimensionless amplitudes, \(\beta ^{-1}\) is the characteristic decay time of the oscillation's amplitude, and \(\upsilon\) is the angular frequency. Note that the decay rate \(\beta\) is directly proportional to the damping constant, like the acoustic damping rate for acoustic waves. With reference to laser-induced gratings, it was found by Stampanoni-Panariello et al. (2005a) that \(\beta = q^{2} \ \varGamma\). Here, q denotes the magnitude of the grating vector. The frequency of the counter-propagating acoustic waves can be, therefore, expressed as (see Hubschmid et al. (1996)): $$\begin{aligned} \upsilon = \sqrt{\upsilon _{0}^{2} - q^{4} \varGamma ^{2}}, \end{aligned}$$ where \(\upsilon _0\) is the natural frequency, which is associated with the adiabatic speed of sound \(c_{s}\) in the following way: $$\begin{aligned} \upsilon _{0} = q c_{s}. \end{aligned}$$ For a system with a small damping constant (\(\upsilon _{0} \gg \beta\)), it follows that the frequency of oscillation is close to the undamped natural frequency \(\upsilon = \upsilon _{0} = \mathrm {const.}\). The implications of Eq. 6 are twofold. First, it shows that the local sound speed depends indeed upon the acoustic damping rate. Only if the latter is negligible, we recover the well-known condition that sound waves propagate at the adiabatic speed of sound for low-pressure gases. Second, it follows that the local speed of sound is a function of the excitation grating vector. This effect is known as sound dispersion and is commonly observed in liquids, as has been shown by Mysik (2015). For supercritical fluids, Simeoni et al. (2010) demonstrated that a significant sound dispersion could be observed in the region comprised between the critical isotherm and the Widom line. However, the probing length scale was much smaller (X-ray scattering), thus resulting in larger q values and, therefore, larger frequency dispersions. Following the procedure adopted by Mysik (2015) for liquids, the bulk viscosity can be measured as deviation between the measured acoustic damping \(\varGamma\) in Eq. 1 and the classical model \(\varGamma _{c}\) in Eq. 2. In liquids, high values of the bulk viscosity are mainly responsible for the observed sound dispersion. LITA measurements, therefore, will enable to verify whether this behaviour is valid also for supercritical fluids. 
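These relations can also be applied in the opposite direction: given a measured damping rate and the grating vector, Eq. (6) quantifies how far the oscillation frequency is shifted from the undamped value of Eq. (7), and the excess of the measured damping rate over the classical value of Eq. (2) yields an estimate of the bulk viscosity via Eq. (1). The numbers in the following sketch are invented for illustration and are not measurement results from this work; the relation q = 2π/Λ between grating vector and grating spacing is the usual one and is assumed here.

```python
import numpy as np

# illustrative inputs (not measured values from this study)
grating_spacing = 30e-6      # m, grid spacing Lambda of the interference pattern
c_s = 350.0                  # m/s, adiabatic speed of sound
gamma_measured = 5.0e-5      # m^2/s, measured acoustic damping rate
gamma_classical = 1.5e-5     # m^2/s, classical value from Eq. (2)
rho = 10.0                   # kg/m^3, fluid density

q = 2.0 * np.pi / grating_spacing                  # magnitude of the grating vector
nu0 = q * c_s                                      # undamped angular frequency, Eq. (7)
nu = np.sqrt(nu0**2 - q**4 * gamma_measured**2)    # damped frequency, Eq. (6)
print(f"relative frequency shift: {(nu0 - nu) / nu0:.2e}")

# bulk viscosity from the excess damping, rearranging Eq. (1) minus Eq. (2)
mu_v = 2.0 * rho * (gamma_measured - gamma_classical)
print(f"estimated bulk viscosity: {mu_v:.2e} Pa s")
```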
Experimental facility and measurement technique Investigations for this study are performed in well-controlled quiescent conditions. Three different atmospheres, namely nitrogen with a purity of \(99.999 \ \%\), argon with a purity of \(99.998 \ \%\), and carbon dioxide with a purity of \(99.995 \ \%\), are studied. The optical setup is adapted from the one described in Baab et al. (2016) and Förster et al. (2015). Pressure chamber Experimental investigations are performed using a heatable high-pressure, high-temperature chamber. The latter is designed for phenomenological as well as statistical investigations of free-falling droplets in a near-critical environment. For the presented study, the droplet generator on top of the chamber is replaced with a closed lid. The operating condition for nitrogen range between \(p_{ch} = 2\) and \(8\,\mathrm {MPa}\) for temperatures up to \(T_{ch} = 700\, \mathrm {K}\). For argon and carbon dioxide, the operating pressures vary between 0.5 and \(8\,\mathrm {MPa}\) for temperatures up to \(600\,\mathrm {K}\). Before each set of experiments, the chamber is carefully evacuated to ensure no contamination from previous investigations. The experimental setup is operated as a continuous-flow reactor. The mass flow into the chamber is hereby controlled using a heat-capacity-based mass flow controller (Bronkhorst) for nitrogen and argon as well as a Coriolis-based mass flow controller for carbon dioxide. Note that carbon dioxide is pressurized beforehand using a pneumatic-driven piston compressor. Pressurized fluids are supplied from two sides on top of the chamber through an annular orifice. The pressure inside the chamber is controlled using a pneumatic valve at the system exhaust (Badger Meter). Since the derivation of the present equations relies on the assumption of negligible flow velocities, it is important to emphasise that the used mass flow does not exceed \(2.5\,\mathrm {kg/h}\) for carbon dioxide and \(1.25\,\mathrm {kg/h}\) for argon and nitrogen, which leads to flow velocities below \(0.04\, \mathrm {m/s}\). The chamber is constructed of heat-resistant stainless steel (EN-1.4913). Eight UV-transparent quartz windows at two different heights are placed at an angle of \(90^\circ\) to each other ensuring optical accessibility. Eight heating cartridges are vertically inserted in the chamber body. Additionally, a heating plate with four cartridges is placed below the chamber. All heaters are controlled using type-K thermocouples in the chamber body as well as the heater cartridges. The chamber encloses a cylindrical core with a diameter of \(40\,\mathrm {mm}\) and a height of \(240\, \mathrm {mm}\). For thermal insulation, a mineral-based silicate (SILCA 250 KM) is used. The bottom of the heating plate is insulated using a vermiculite plate. Vertical and horizontal sectional drawings of the chamber are depicted in Fig. 2. For pressure measurement inside the chamber, a temperature–compensated pressure transducer (Keller 35 X HTC) with an uncertainty rated at \(\pm 0.1\,\mathrm {MPa}\) is chosen. The pressure transducer is located at the chamber exhaust. Temperature measurements inside the chamber take place at three different heights with miniaturized resistance thermometers penetrating the metal core. Since the uncertainty of these resistance thermometers is temperature-dependent, the measurement uncertainties are calculated for each condition separately. Both temperature and pressure are logged continuously. 
Horizontal and vertical sections of the high-pressure chamber. Left: vertical section through pressure chamber. Centre (section B—B): horizontal section through pressure chamber at centre of first window. Right (section C—C): horizontal section through pressure chamber at fluid inlet and annular orifice. A: annular orifice; F: fluid inlet; G: graphite gaskets; H: heating cartridges; I: thermal insulation; O: Willis O-rings; T: resistance thermometer; V: vermiculite plate; W: quartz windows Laser-induced thermal acoustics LITA, also more generally referred to as LIGS, is discussed in detail in literature. A theoretical approach describing the generation of the laser-induced grating as well as the inherent phonon–photon and thermon–photon interaction can be found in Cummings et al. (1995) and Stampanoni-Panariello et al. (2005a). Note that the analytical expression presented by Stampanoni-Panariello et al. (2005a) is only valid for infinite beam sizes, whereas Cummings et al. (1995) takes finite beam sizes into account. In the limit of infinite beam sizes, both theories merge. Schlamp et al. (1999) extended the theory presented by Cummings et al. (1995) to account for beam misalignment and flow velocities. LITA occurs due to the non-linear interaction of matter with an optical interference pattern. The latter is introduced by two short-pulsed excitation laser beams, which are crossed using the same direction of linear polarization to produce a spatially periodic modulated polarization/light intensity distribution. The resulting changes in the optical properties of the investigated fluids are interrogated using a third input wave. The third wave originates from a second laser source and is scattered by the spatially periodic perturbations within the measurement volume. Depending on the absorption cross-section of the investigated fluid, changes in optical properties result from different processes. For non-resonant substances, pure electrostriction is observed, whereas in resonant substances, simultaneously, an additional thermal grating is produced. Eichler et al. (1986) distinguishes three dominant forms of light scattering important for LITA. Light scattering from a non-resonant grating can be referred to as stimulated Brillouin scattering (SBS), whereas scattering from a resonant thermal grating depends on the thermalization time. In case of fast energy exchange, stimulated thermal Brillouin scattering is observed (STBS). For slow energy exchange, stationary density modulations emerge, which are referred to as stimulated (thermal) Rayleigh scattering (STRS). Optical setup The optical arrangement used for the presented investigations is depicted in Fig. 3. For excitation, a pulsed Nd:YAG laser (Spectra Physics QuantaRay: \(\lambda _{exc} = 1064\, \mathrm {nm}\), \(\tau _{pulse} = 10\,\mathrm {ns}\), \(30\, \mathrm {GHz}\) line width) is used. To ensure stable and reproducible conditions, the excitation laser is set to a pulse energy of \(150\,\mathrm {mJ}\), which is continuously measured by a pyroelectric sensor (D3). The energy of the excitation pulse is subsequently controlled using a \(\lambda /2\)-wave plate (WP) together with a Glan–Laser polarizer (GLP) and continuously observed by a pyroelectric sensor (D4; Thorlabs). The pulse energy used for investigation is adjusted to values between 18 and \(50\, \mathrm {mJ}\). The GLP additionally ensures polarization of the excitation beam, which is split by a beam splitter (T1) into two excitation beams. Optical setup of the LITA system. 
BE: beam expander; BS: beam sampler; BT: beam trap; C: coupler; D: detector (D1: avalanche detector; D2: photo diode; D3, D4: pyroelectric sensors; D5: thermal sensor); F: fibre; F1: neutral density filter wheel with orifice; GLP: Glan–Laser polarizer; L: lens; M: mirror; PBS: polarized beam splitter; T: beam splitter; WP: wave plate

The interrogation laser source is a continuous-wave DPSS laser (Coherent Verdi V8, \(\lambda _{int} = 532\, \mathrm {nm}\), \(5\,\mathrm {MHz}\) line width). The power of the interrogation laser is adjusted to ensure a good signal-to-noise ratio and varies from 0.1 to \(8.5\,\mathrm {W}\). Note that, to ensure stable power output at low power settings, the beam power is reduced using a polarizing beam splitter (PBS) together with a \(\lambda / 2\)-WP. A forward folded BOXCARS configuration is used to arrange all beams and achieve phase matching. An AR-coated lens (\(f = 1000\, \mathrm {mm}\) at \(532\,\mathrm {nm}\)) is utilized to focus all beams into the measurement volume. With an excitation beam distance of \(\varDelta y_{exc} \approx 36\,\mathrm {mm}\), the crossing angle yields \(\varTheta \approx 1^\circ\). Based on the laser specifications, the Gaussian half-width of the excitation beams in the focal point is estimated to be \(\omega _{th} = 312\,\mu \mathrm {m}\). Due to the Gaussian beam profile and the beam arrangement, the optical measurement volume is an ellipsoid elongated in x-direction. Using the modelling proposed by Schlamp et al. (1999), the size of the interference pattern is estimated to be approximately \(8.6\, \mathrm {mm}\) in length and \(312\,\mu \mathrm {m}\) in diameter. This optical interference pattern has a Gaussian intensity profile with a grid spacing \(\varLambda\) modulated in y-direction, see Siegman (1977). In this context, it is crucial to mention that the direction of propagation of the acoustic waves is normal to the beam direction. Hence, the extension of the effective measurement volume in x-direction is smaller than the length of the elliptical interference pattern. The spatial resolution in x-direction is, therefore, higher than the optical interference pattern suggests. Evaluating the speed of sound radial profile data provided by Baab et al. (2018) together with the provided shadowgram, we estimate the spatial resolution in beam direction to be less than the jet diameter at the measurement location. This leads to a spatial resolution in the present report of approximately \(312\, \mu \mathrm {m}\) in diameter and less than \(2\,\mathrm {mm}\) in length in the x-direction. An avalanche detector (D1; Thorlabs APD110) serves for detection of the scattered signal beam. The latter is spatially and spectrally filtered beforehand using a coupler and single-mode/multi-mode fibres. The detector's voltage signal is logged at \(20\,\mathrm {GS/s}\) by a \(1\,\mathrm {GHz}\) bandwidth digital oscilloscope (LeCroy Waverunner 610Zi).

The simplest and most common approach to extract the speed of sound from a LITA signal is a direct Fourier transformation (DFT). It is important to note that the speed of sound data is directly obtained from the frequency domain of the temporal LITA signal, involving only the geometrical parameters of the optical arrangement. No equation of state or modelling assumptions are necessary at this point.
Using the theoretical considerations by Hemmerling and Kozlov (1999), the speed of sound \(c_{s}\) of the probed fluid can be estimated as follows: $$\begin{aligned} c_{s} = \frac{\nu \varLambda }{j}. \end{aligned}$$ The dominating frequency of the LITA signal is hereby denoted by \(\nu\). The constant j indicates whether the fluid shows resonant behaviour at the wavelength of the excitation beam: in the case of non-resonant fluid behaviour \(j=2\), whereas in the case of resonant fluid behaviour \(j=1\). The grid spacing \(\varLambda\) of the optical interference pattern is a calibration parameter of the optical setup. Without a mixing model, thermometry can only be performed at known gas composition and pressure. The temperature is then only indirectly determined by applying a suitable speed of sound model for the fluid under consideration, which can be challenging in supercritical or high-pressure states.

Using the analytical approach for finite beam sizes proposed by Schlamp et al. (1999) for the evaluation of LITA signals, it should be possible to extract the speed of sound, the acoustic damping rate, as well as the thermal diffusivity from the shape of the LITA signal. In the following, we summarize the essential parts of the mathematical derivation necessary for this study, as proposed by Schlamp et al. (1999) and Cummings et al. (1995). The assumptions used are categorized and listed in the appendix of this work. As discussed in more detail by Stampanoni-Panariello et al. (2005a), the temporal shape of the excitation laser pulse is approximated by a \(\delta\)-function at \(t_{0}\). The model suggested by Schlamp et al. (1999) can be simplified using two key assumptions proposed by Cummings et al. (1995), namely the limit of fast thermalization and negligible damping over a wave period. Correspondingly, the amplitudes of the acoustic waves \(A_{P1,P2}\) and the amplitude of the thermal grating \(A_{T}\) in the modelling equation (13) of the LITA signal below simplify to the expressions in equation (9). The real part of \(A_{P1,P2}\) indicates the influence of thermalization or STBS on the damped oscillation of the LITA signal, while the imaginary part expresses the electrostrictive contribution or SBS. Consequently, \(A_{T}\) represents the weight of thermalization in the signal damping: $$\begin{aligned} \begin{aligned} A_{P1,P2}&= 1/2 \ U_{\varTheta } \pm i/2 \ U_{eP} \\ A_{T}&= -1 \ U_{\varTheta } \mathrm {.} \end{aligned} \end{aligned}$$ The quantities \(U_{\varTheta }\) and \(U_{eP}\) denote the approximate modulation depths of the thermalization and electrostriction gratings, respectively (Cummings et al. 1995), which are used as fitting constants. Note that we further assume instantaneous release of absorbed laser radiation into heat, as proposed by Stampanoni-Panariello et al. (2005a). In the case of resonant fluid behaviour, both thermalization and electrostriction gratings must be considered. On the other hand, when non-resonant fluid behaviour is expected, the grid generation process is purely electrostrictive. Therefore, the thermal modulation depth \(U_{\varTheta }\) is negligible, leading to a further simplified model. Considering small beam crossing angles and negligible bulk flow velocities, the parameters \(\varSigma _{P1,P2}\) related to the damping of the oscillations and the damping parameter \(\varSigma _{T}\) can be expressed as given in Eq. (10), following Schlamp et al. (1999).
Note that, due to the negligible bulk flow velocities in the chamber, only a beam misalignment in the horizontal y-direction \(\bar{\eta }\) has an effect on the time history of the LITA signal (Schlamp et al. 1999). Hence, after careful beam alignment through the quartz windows before each measurement, resulting in a maximized signal, all other possible misalignments are neglected: $$\begin{aligned} \begin{aligned} \varSigma _{P1,P2} =&\exp \left\{ -\varGamma q^2 \left( t-t_{0}\right) - \frac{2\left[ \bar{\eta } \pm c_{s}\left( t-t_{0}\right) \right] ^2}{\omega ^2+2\sigma ^2}\right\} \\&\exp \left\{ \pm iqc_{s}\left( t-t_{0}\right) \right\} \\ \varSigma _{T} =&\exp \left\{ - D_{T} q^2 \left( t-t_{0}\right) - \frac{2 \bar{\eta }^2}{\omega ^2+2\sigma ^2}\right\} . \end{aligned} \end{aligned}$$ Here, \(D_{T}\) denotes the thermal diffusivity, \(\varGamma\) the acoustic damping rate, t the time and \(t_{0}\) the time of the laser pulse, while \(\omega\) and \(\sigma\) are the Gaussian half-widths of the excitation and interrogation beams in the focal point, respectively. The magnitude of the grating wave vector q depends on the grid spacing \(\varLambda\), which is a function of the crossing angle \(\varTheta\) of the excitation beams and the wavelength of the excitation pulse \(\lambda _{exc}\), see Eqs. (11) and (12), following Stampanoni-Panariello et al. (2005a): $$\begin{aligned} q = \frac{2 \pi }{\varLambda } \end{aligned}$$ $$\begin{aligned} \varLambda = \frac{\lambda _{exc}}{2\sin \left( \varTheta /2\right) }. \end{aligned}$$ Using the simplifications explained above and summarized in the appendix, the time-dependent diffraction efficiency \(\varPsi (t)\) of a detected LITA signal can be expressed as: $$\begin{aligned} \begin{aligned} \varPsi (t) \propto&\exp \left\{ -\frac{8\sigma ^2}{\omega ^2\left( \omega ^2+2\sigma ^2\right) }\left( \frac{c_{s}\left( t-t_{0}\right) }{2}\right) ^2\right\} \\&\left\{ \left( P_{1}+P_{2}\right) T^{*}+\left( P^{*}_{1}+P^{*}_{2}\right) T\right\} \\&+ \exp \left\{ -\frac{8\sigma ^2}{\omega ^2\left( \omega ^2+2\sigma ^2\right) }\left( c_{s}\left( t-t_{0}\right) \right) ^2\right\} \\&\left( P_{1}P^{*}_{2}+P^{*}_{1}P_{2}\right) + \left( P_{1}P^{*}_{1}+P_{2}P^{*}_{2}+TT^{*}\right) , \end{aligned} \end{aligned}$$ where the parameters \(P_{1}\), T, etc. are calculated as \(P_{1} = A_{P1}\varSigma _{P1}\), \(T = A_{T}\varSigma _{T}\), etc., and \(^*\) denotes the complex conjugate. Based on a curve fit of a LITA signal using the modelling Eq. (13), an estimation of the thermodynamic variables \(c_{s}\), \(D_{T}\) and \(\varGamma\) is possible. Based on these transport properties, experimental investigations using LITA in the vicinity of the critical point and of the Widom line of a pure fluid open up the possibility of studying transitions between the supercritical fluid states depicted in Fig. 1 on a macroscopic level. Additionally, given a suitable thermodynamic model for these parameters, the knowledge of local transport properties enables us, as shown in Eq. (4), to extract local mixture quantities with a spatial resolution of \(\mathcal {O}\left( 10^{-1} \ \mathrm {mm}\right)\) in diameter and \(\mathcal {O}\left( 10^{0} \ \mathrm {mm}\right)\) in length. Extraction of transport properties from LITA signals using the approach explained above requires a thorough calibration of the optical setup as well as a validation of the acquired transport properties. Both calibration and validation are presented in the following section.
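To make the structure of Eqs. (9)–(13) easier to follow, a minimal numerical sketch of the simplified signal model is given below. This is an illustrative Python transcription of the equations as quoted above, not the authors' MATLAB fitting code; all parameter names are ours and the normalisation of \(\varPsi(t)\) is arbitrary, since the model is only defined up to a proportionality constant.

```python
import numpy as np

def lita_signal(t, t0, c_s, Gamma, D_T, U_theta, U_eP, eta, Lambda, omega, sigma):
    """Simplified diffraction efficiency Psi(t) following Eqs. (9)-(13).
    All inputs in SI units; the returned signal shape is un-normalised."""
    q = 2.0 * np.pi / Lambda                 # Eq. (11), grating wave vector magnitude
    dt = t - t0
    w2 = omega**2 + 2.0 * sigma**2

    # Eq. (9): complex amplitudes of the acoustic waves and of the thermal grating
    A_P1 = 0.5 * U_theta + 0.5j * U_eP
    A_P2 = 0.5 * U_theta - 0.5j * U_eP
    A_T = -U_theta

    # Eq. (10): damping and oscillation of the two counter-propagating acoustic waves
    S_P1 = np.exp(-Gamma * q**2 * dt - 2.0 * (eta + c_s * dt)**2 / w2) \
        * np.exp(+1j * q * c_s * dt)
    S_P2 = np.exp(-Gamma * q**2 * dt - 2.0 * (eta - c_s * dt)**2 / w2) \
        * np.exp(-1j * q * c_s * dt)
    S_T = np.exp(-D_T * q**2 * dt - 2.0 * eta**2 / w2)

    P1, P2, T = A_P1 * S_P1, A_P2 * S_P2, A_T * S_T

    # Eq. (13): time-dependent diffraction efficiency
    g = 8.0 * sigma**2 / (omega**2 * w2)
    psi = (np.exp(-g * (0.5 * c_s * dt)**2)
           * ((P1 + P2) * np.conj(T) + (np.conj(P1) + np.conj(P2)) * T)
           + np.exp(-g * (c_s * dt)**2) * (P1 * np.conj(P2) + np.conj(P1) * P2)
           + (P1 * np.conj(P1) + P2 * np.conj(P2) + T * np.conj(T)))
    return psi.real
```

For a non-resonant fluid, U_theta is set to zero and only the purely electrostrictive (SBS) contribution remains; in a least-squares or least-absolute-residual fit, \(c_{s}\), \(\varGamma\), \(D_{T}\), the modulation depths, \(\bar{\eta}\) and \(t_{0}\) would be the free parameters, as described in the calibration section below.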
The uncertainty analysis of the operating conditions as well as of the Fourier analysis of the LITA signal and the calibration and validation of the grid spacing \(\varLambda\) are performed according to the Guide to the expression of uncertainty in measurement by the Joint Committee for Guides in Metrology (2008). For values taken from a database, uncertainties are obtained using sequential perturbation as presented by Moffat (1988). Since the purpose of this study is to demonstrate the feasibility of extracting transport properties from LITA signals, the presented uncertainties of the acoustic damping rates obtained by the curve fitting algorithm are based on the confidence interval computed by the algorithm. These confidence intervals are estimated using the inverse R factor from the QR decomposition of the Jacobian, the degrees of freedom, as well as the root-mean-squared error. Hence, the uncertainties of the acoustic damping rates are only a representation of the statistical error margin of the curve fitting at this point and do not take the uncertainties of the fitted data and the input parameters into account. The uncertainties of the speed of sound data extracted using curve fitting are estimated based on the results of the DFT analysis. All uncertainties are presented within a confidence interval of \(95 \ \%\).

Calibration of optical setup

Modelling of LITA signals requires a deep understanding of non-linear optical processes and of the phonon–photon as well as thermon–photon interactions inherent to the LITA measurement technique (Cummings et al. 1995). Additionally, Eq. (13) is highly dependent on the beam waist of the excitation beam \(\omega\) as well as the magnitude of the grating vector q, which depends on the spacing of the optical grid \(\varLambda\). Both values are highly vulnerable to distortions due to turbulence and convective transport processes if these occur on a similar time scale. However, averaging the signal over a high number of laser pulses smears the signal and minimizes the effect of shot-to-shot variations in turbulence and convective processes as well as of laser noise, jitter, and drift. An independent study to quantify these effects, however, is highly complex. Using Eqs. (8) and/or (13) to measure the speed of sound, acoustic damping rate or thermal diffusivity therefore requires a careful and thorough calibration of the optical measurement volume, specifically of the spacing of the optical grid \(\varLambda\) and the Gaussian half-width \(\omega\) of the excitation beam.

Table 1 Overview of the operating conditions, input parameters, and results of the calibration of the Gaussian half-width of the excitation beam \(\omega\). Operating conditions: \(T_{ch}\): fluid temperature; \(p_{ch}\): fluid pressure; \(n_{\mathrm {LITA}}\): number of laser shots used for averaging; \(E_{exc}\): pulse energy of the excitation beams; \(P_{int}\): power of the interrogation beam; SM: single-mode fibre with diameter of \(4\,\mu \mathrm {m}\); MM: multi-mode fibre with diameter of \(25\,\mu \mathrm {m}\). Curve fitting input parameters: calibrated grid spacing \(\varLambda _{cal} = 29.33\,\mu \mathrm {m}\); measured Gaussian half-width of the interrogation beam \(\sigma _{BP} = 192\,\mu \mathrm {m}\); \(D_{T,\mathrm {NIST}}\): thermal diffusivity; \(\varGamma _{m,\mathrm {NIST}}\): pressure-corrected acoustic damping rates. Curve fitting results: \(c_{s,\mathrm {LITA}}\): speed of sound; \(\omega _{\mathrm {LITA}}\): Gaussian half-width of the excitation beam

The calibration is done in well-known quiescent conditions.
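As a side note on the uncertainty treatment mentioned at the beginning of this section, the sequential perturbation approach of Moffat (1988) is simple to implement. The Python sketch below is a generic illustration under our own assumptions: the example result function is the textbook classical (thermoviscous) damping-rate expression with the bulk viscosity neglected, which is not necessarily identical to the paper's Eq. (3), and the numerical inputs are hypothetical.

```python
import numpy as np

def sequential_perturbation(func, values, uncertainties):
    """Moffat (1988): perturb each input by its uncertainty, recompute the result,
    and combine the individual contributions in quadrature."""
    nominal = func(**values)
    contributions = [func(**{**values, name: values[name] + du}) - nominal
                     for name, du in uncertainties.items()]
    return nominal, float(np.sqrt(np.sum(np.square(contributions))))

# Illustrative result function (assumed form): classical acoustic damping rate with the
# bulk viscosity neglected, Gamma_c = (4/3 mu_s + (gamma - 1) kappa / c_p) / rho.
def gamma_classical(mu_s, kappa, c_p, gamma, rho):
    return (4.0 / 3.0 * mu_s + (gamma - 1.0) * kappa / c_p) / rho

# Hypothetical argon-like inputs with assumed database uncertainties
vals = dict(mu_s=22.7e-6, kappa=17.8e-3, c_p=521.0, gamma=1.67, rho=32.0)
uncs = dict(mu_s=0.5e-6, kappa=0.4e-3, rho=0.3)
print(sequential_perturbation(gamma_classical, vals, uncs))
```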
When assuming non-resonant behaviour, the grid spacing of the optical measurement volume can be determined from the measured frequency and a known speed of sound: $$\begin{aligned} \varLambda (\varDelta y_{exc}, f_{exc}, \lambda _{exc}) = 2 \frac{c_{s}}{\nu }. \end{aligned}$$ A beam profiling camera (DataRay) is used for beam alignment and to measure the beam arrangement in the foci of the excitation and the interrogation beams at atmospheric conditions. Using the collected geometrical data, the grid spacing is estimated to be \(\varLambda _{BP} = 29.6 \pm 7\,\mu \mathrm {m}\) with a beam distance of \(\varDelta y_{exc,BP} = 36.9 \pm 3.8\,\mathrm {mm}\). Despite the agreement of this geometrical calibration with the known parameters of the optical setup, its measurement uncertainties are unacceptably large. Therefore, a new calibration procedure has been developed for the optical setup. The grid spacing is estimated for conditions up to \(700\, \mathrm {K}\) and \(8\ \mathrm {MPa}\) in non-resonant fluids, namely nitrogen and argon. Using the dependencies of the grid spacing \(\varLambda\) in Eq. (14) together with the speed of sound \(c_{s}\) extracted from Lemmon et al. (2018), we are able to calculate a mean grid spacing \(\varLambda\). It is important to emphasise that the grid spacing is, as shown by Li et al. (2002), solely dependent on the geometrical and optical parameters of the setup (f, \(\varDelta y_{exc}\), \(\lambda _{exc}\)). Hence, it is independent of the refractive index of the probed environment. The dominating frequency of the LITA signal \(\nu\) is estimated using a DFT together with a von Hann window and a band-pass filter. This ensures the correct extraction of the frequencies even for noisy signals in gas, gas-like, and compressed liquid states. Note that, for each condition, the acquired frequencies are averaged over at least 5000 samples. The calibration procedure yields a grid spacing of the measurement volume of \(\varLambda _{cal} = 29.33 \pm 0.14\,\mu \mathrm {m}\). Note that the measurement uncertainties using the new calibration procedure are more than one order of magnitude below those of the calibration using the beam profiling camera.

Due to the high spatial resolution required for investigations in the wake of free-falling evaporating droplets and of jet disintegration, the optical arrangement uses focused beams with a Gaussian beam profile. Hence, calibration of the beam waist is crucial for the correct modelling of the LITA signal and a robust extraction of the acoustic damping rate as well as the thermal diffusivity. Calibration is performed using LITA signals at different operating conditions, as listed in Table 1. All signals are averaged over \(n_{\mathrm {LITA}}\) laser shots. To acquire the Gaussian half-width \(\omega\), the experimentally detected and averaged LITA signals presented in Table 1 are curve fitted using the simplified model by Schlamp et al. (1999) expressed by Eq. (13), which is presented in Sect. 3.2.2. A robust non-linear least-absolute-residual fit based on the Levenberg–Marquardt algorithm is performed using the non-linear fitting routine in MATLAB (MathWorks) with the robust option Least-Absolute Residuals (LAR). This method optimizes the fit by minimizing the absolute differences of the residuals rather than the squared differences.
We have chosen this option instead of an approach using bisquare weights since the fitted signals are averaged over more than 5000 laser shots, which leads to few outliers. The Gaussian beam width \(\omega\), the modulation depths \(U_{eP}\) and \(U_{\varTheta }\), the beam misalignment \(\bar{\eta }\), the speed of sound \(c_{s}\), and the temporal offset \(t_{0}\) are hereby output parameters, whereas the remaining parameters are input parameters, which were kept constant during the curve fitting. The grid spacing is set to the calibrated value and the Gaussian half-width of the interrogation beam to the value \(\sigma _{BP}\) measured by the beam profiling camera. Transport properties are estimated using Lemmon et al. (2018) for the thermal diffusivity and shear viscosity. Acoustic damping rates are assessed by the model proposed by Li et al. (2002), see Eq. (3). The curve fitting of the last case listed in Table 1 is shown as an example in Fig. 4.

Fig. 4 Measured LITA signal with non-resonant fluid behaviour in pure argon with the curve fitting result used for calibration. The signal is averaged over 10602 laser shots. Operating conditions: \(p_{ch} = 2 \pm 0.1\,\mathrm {MPa}\); \(T_{ch} = 295.2 \pm 0.6\, \mathrm {K}\); \(E_{exc} = 32.4\,\mathrm {mJ}\); \(P_{int} = 1.5\, \mathrm {W}\). Curve fitting input parameters: \(\varLambda = \varLambda _{cal} = 29.33\,\mu \mathrm {m}\); \(\sigma = \sigma _{BP} = 192 \,\mu \mathrm {m}\); \(\varGamma = \varGamma _{m,\mathrm {NIST}} = 1.38\, \mathrm {mm}^2\,\mathrm {s}^{-1}\); \(D_{T} = D_{T,\mathrm {NIST}} = 1.02\, \mathrm {mm}^2\,\mathrm {s}^{-1}\). Curve fitting results: \(\omega _{\mathrm {LITA}} = 216 \pm 1\,\mu \mathrm {m}\); \(c_{s,\mathrm {LITA}} = 316 \pm 2\,\mathrm {m}\,\mathrm{s}^{-1}\)

Assuming that the Gaussian half-width \(\omega\) of the excitation beam depends exponentially on the pulse energy, \(\omega\) can be expressed as: $$\begin{aligned} \omega _{cal} = \omega _{th} - \left( \omega _{th} - \omega _{BP}\right) \exp \left\{ - \alpha _{exc} E_{exc}\right\} , \end{aligned}$$ where \(\alpha _{exc}\) is the calibration constant, \(\omega _{BP} = 190 \,\mu \mathrm {m}\) is the Gaussian half-width measured with the beam profiling camera, and \(\omega _{th} = 312\,\mu \mathrm {m}\) is the Gaussian half-width estimated from the laser specifications. Note that for high pulse energies \(\omega\) approaches \(\omega _{th}\), whereas for low pulse energies \(\omega\) can be approximated by the value obtained with the beam profiling camera. A robust non-linear least-squares fit using the Levenberg–Marquardt algorithm with bisquare weights is used to acquire \(\alpha _{exc}\).

Validation and analysis of measurement uncertainties

Due to the availability of speed of sound data over a wide range of pressures and temperatures, the validation of the optical grid spacing \(\varLambda\) using Eq. (8) is possible in the whole operating range. However, since acoustic damping rates depend on both bulk and shear viscosities, available data for high-temperature and high-pressure environments are rare. Hence, validation is performed in the following section using only acoustic damping rates of argon at room temperature.

Experimental speed of sound data

Figure 5 depicts the validation of the calibration process using the grid spacing to characterize the measurement volume. The speed of sound was calculated by Eq. (8), where the non-resonant frequency was estimated using a DFT together with a von Hann window and a band-pass filter.
A relative distribution is shown in Fig. 6. For clarity, we omit the distinction between the different non-resonant fluids argon and nitrogen in the distribution. Both distributions are approximately Gaussian. For the non-resonant cases, the skewness of the distribution is \(-0.3\) with a kurtosis of 3.3. In the case of resonant fluid behaviour, the skewness is \(-1\) with a kurtosis of 3.7.

Fig. 5 Absolute comparison of speed of sound data for \(p_{ch} = 2\) to \(8\,\mathrm {MPa}\) and temperatures up to \(T_{ch} = 700\, \mathrm {K}\) (nitrogen) and \(p_{ch} = 0.5\) to \(8\,\mathrm {MPa}\) and temperatures up to \(T_{ch} = 600\,\mathrm {K}\) (argon and carbon dioxide) using a grid spacing of \(\varLambda = \varLambda _{cal} = 29.33 \pm 0.14\,\mu \mathrm {m}\). The speed of sound is calculated using a DFT with Eq. (8). Thermodynamic data for validation are taken from Lemmon et al. (2018)

Fig. 6 Relative distribution of the comparison of speed of sound data for \(p_{ch} = 2\) to \(8\,\mathrm {MPa}\) and temperatures up to \(T_{ch} = 700\,\mathrm {K}\) (nitrogen) and \(p_{ch} = 0.5\) to \(8\, \mathrm {MPa}\) and temperatures up to \(T_{ch} = 600\,\mathrm {K}\) (argon and carbon dioxide) using a grid spacing of \(\varLambda = \varLambda _{cal} = 29.33 \pm 0.14 \ \mu \mathrm {m}\). The speed of sound is calculated using a DFT with Eq. (8). Thermodynamic data for validation are taken from Lemmon et al. (2018). For carbon dioxide, the skewness of the distribution is \(-1\) with a kurtosis of 3.7; for nitrogen and argon, the skewness is \(-0.3\) with a kurtosis of 3.3

Validation of the calibration process shows good agreement between the non-resonant measurements of nitrogen and argon and the theoretical values extracted from Lemmon et al. (2018) (NIST database). The relative measurement uncertainty of the acquired speed of sound for all investigated fluids is below \(2 \ \%\). For argon and carbon dioxide, the uncertainties of the measurement and of the NIST database are of the same order of magnitude, which indicates the good precision of the LITA setup. However, the distribution in Fig. 6 shows a width of approximately \(3 \ \%\) for the non-resonant fluids argon and nitrogen. These differences result from unavoidable misalignments of the excitation beams as well as beam steering effects due to the high-temperature and high-pressure environment. Measurements with resonant fluid behaviour, observed for carbon dioxide, however, show a consistent deviation, resulting in an offset between the measurements and the theoretical values taken from Lemmon et al. (2018).

In addition to this offset in the validation for carbon dioxide at temperatures up to \(600\,\mathrm {K}\) and pressures up to \(8\,\mathrm {MPa}\), resonant fluid behaviour is observed. However, based on the absorption cross-section of carbon dioxide, resonant fluid behaviour can be excluded a priori. The most probable explanation for this resonant behaviour is residual moisture in the experimental setup. Given the absorption cross-section of water at the excitation wavelength, residual moisture would cause resonant fluid behaviour, as has already been observed by Cummings (1995). The constant offset even for low mole fractions of water also demonstrates the sensitivity of the LITA signal to the concentration of a mixture. This sensitivity is essential to extract the mixing temperature as well as the mole fraction using Eq. (13) together with the relations shown in (4).
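For reference, the non-resonant post-processing described above can be sketched compactly. The Python fragment below is illustrative only (the function and variable names are ours, the sample values are placeholders, and the band limits are assumptions): it windows the trace, picks the dominant frequency within a band, converts it to the speed of sound via Eq. (8), and computes the skewness and kurtosis of the relative deviations against a reference database.

```python
import numpy as np
from scipy.signal import get_window
from scipy.stats import skew, kurtosis

def speed_of_sound_dft(trace, fs, grid_spacing, band=(5e6, 50e6), resonant=False):
    """Estimate c_s from the dominant frequency of a temporal LITA trace, Eq. (8).
    trace: detector voltage samples; fs: sampling rate in Hz; grid_spacing in m."""
    x = (trace - np.mean(trace)) * get_window("hann", len(trace))  # von Hann window
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])              # crude band-pass
    nu = freqs[in_band][np.argmax(spectrum[in_band])]
    j = 1 if resonant else 2                                       # resonant vs. non-resonant
    return nu * grid_spacing / j

# Relative deviation statistics against reference (e.g. NIST/REFPROP) values
c_lita = np.array([316.0, 319.0, 356.0])     # placeholder measured values, m/s
c_ref = np.array([318.0, 324.0, 360.0])      # placeholder reference values, m/s
rel_dev = (c_lita - c_ref) / c_ref
print(skew(rel_dev), kurtosis(rel_dev, fisher=False))  # Pearson kurtosis (normal ~ 3)
```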
To ensure the robustness of the optical setup, additional investigations in carbon dioxide are conducted. The objective of these experiments is twofold. First, experimental investigation in an open-loop setup with carbon dioxide, after drying the experimental setup using elevated temperature and vacuum, shows no influence of residual moisture on the LITA signal and the calibration procedure. Hence, the origin of the residual moisture is most likely the purity of the carbon dioxide used. Second, the authors propose an intensity study, in which the energy of the excitation laser is systematically varied. This could shed some light on two other possible explanations for the observed resonant fluid behaviour, namely spontaneous Raman scattering or an optical breakdown of carbon dioxide. The authors hypothesize that, due to the high pulse energy of up to \(32\,\mathrm {mJ}\) used for LITA measurements in carbon dioxide, an optical breakdown of the fluid might occur, which would cause changes in the fluid properties. This would affect the formation of the density grating and, therefore, the measured LITA signal. A detailed description can be found in the work of Stampanoni-Panariello et al. (2005a). Additionally, the dependencies of the intensity of the detectable LITA signal on the excitation beam pulse energy are different for non-resonant LITA and spontaneous Raman scattering. Note that non-resonant, purely electrostrictive LITA signals are caused by stimulated Brillouin scattering (SBS), whereas resonant LITA signals are caused by the combination of SBS, stimulated thermal Brillouin scattering (STBS), and/or stimulated thermal Rayleigh scattering (STRS). With reference to the theoretical work by Stampanoni-Panariello et al. (2005a), the signal intensity of the detectable LITA signal for both resonant and non-resonant fluid behaviour shows a quadratic dependence on the pulse energies of the excitation beams. Spontaneous Raman scattering, however, shows a linear dependence on the incident intensity, see Powers (2013). Starting from low-energy pulses, a systematic increase of the pulse energy should, therefore, indicate when an optical breakdown of carbon dioxide occurs and whether spontaneous Raman scattering is the cause of the resonant fluid behaviour, by favouring one of the mentioned effects over the other.

Comparison of experimental and theoretical LITA signals

Acoustic damping rate, speed of sound, and thermal diffusivity are obtained by curve fitting experimentally detected, averaged LITA signals with the simplified model by Schlamp et al. (1999) expressed by Eq. (13) presented in Sect. 3.2.2.

Fig. 7 Measured LITA signal with non-resonant fluid behaviour in pure argon with the curve fitting result used to acquire transport properties. The signal is averaged over 10991 laser shots. Operating conditions: \(p_{ch} = 4 \pm 0.1\,\mathrm {MPa}\); \(T_{ch} = 295.0 \pm 0.5\,\mathrm {K}\); \(E_{exc} = 22.5 \ \mathrm {mJ}\); \(P_{int} = 2.5\,\mathrm {W}\). Curve fitting input parameters: \(\varLambda = \varLambda _{cal} = 29.33\,\mu \mathrm {m}\); \(\sigma = \sigma _{BP} = 192\,\mu \mathrm {m}\); \(\omega = \omega _{cal} = 238\, \mu \mathrm {m}\). Curve fitting results: \(c_{s,\mathrm {LITA}} = 319 \pm 2 \,\mathrm {m s}^{-1}\); \(\varGamma _{\mathrm {LITA}} = 1.2 \pm 0.1\, \mathrm {mm}^2\,\mathrm{s}^{-1}\); theoretical estimations using Eq. (3) together with the NIST database by Lemmon et al.
(2018) give: \(c_{s,\mathrm {NIST}} = 324 \,\mathrm {m}\, \mathrm{s}^{-1}\) and \(\varGamma _{m,\mathrm {NIST}} = 1.0\,\mathrm {mm}^2\, \mathrm{s}^{-1}\)

Similar to the curve fit in the calibration procedure, a robust non-linear least-absolute-residual curve fit is used, which utilizes a Levenberg–Marquardt algorithm. The desired speed of sound \(c_{s}\), acoustic damping rate \(\varGamma\) and thermal diffusivity \(D_{T}\), together with the modulation depths \(U_{eP}\) and \(U_{\varTheta }\), the beam misalignment \(\bar{\eta }\), and the temporal offset \(t_{0}\), are free fitting parameters; all remaining parameters are input parameters, which were held constant during the curve fitting. The grid spacing \(\varLambda\) and the Gaussian half-width of the excitation beam \(\omega\) are set to the calibrated values. The Gaussian half-width of the interrogation beam \(\sigma _{BP}\) is held constant at the value measured by the beam profiling camera. A curve fit for a LITA signal in pure argon at a pressure of \(4\, \mathrm {MPa}\) and a temperature of \(295\,\mathrm {K}\) is shown in Fig. 7. The signal is averaged over 10991 shots. Curve fitting yields \(c_{s,\mathrm {LITA}} = 319 \pm 2\, \mathrm {m }\,\mathrm{s}^{-1}\) and \(\varGamma _{\mathrm {LITA}} = 1.2 \pm 0.1\, \mathrm {mm}^2 \,\mathrm{s}^{-1}\), which show good agreement with theoretical estimations using Eq. (3) together with the NIST database by Lemmon et al. (2018) (\(c_{s,\mathrm {NIST}} = 324 \,\mathrm {m} \ \mathrm{s}^{-1}\) and \(\varGamma _{m,\mathrm {NIST}} = 1.0\,\mathrm {mm}^2 \ \mathrm{s}^{-1}\)). The feasibility of extracting acoustic damping rates in a high-pressure, high-temperature environment for a resonant fluid is demonstrated in Fig. 8, which shows a carbon dioxide atmosphere with residual moisture at a pressure of \(8\,\mathrm {MPa}\) and a temperature of \(502.5\,\mathrm {K}\). Note that, due to the low modulation depth of the thermal grating compared to the electrostrictive grating, \(U_{\varTheta ,\mathrm {LITA}}/U_{eP,\mathrm {LITA}} < 0.08\), the fitted thermal diffusivity does not yield physically realistic results.

Fig. 8 Measured LITA signal with resonant fluid behaviour in carbon dioxide with the curve fitting result used to acquire transport properties. The signal is averaged over 11030 laser shots. Operating conditions: \(p_{ch} = 8 \pm 0.1\,\mathrm {MPa}\); \(T_{ch} = 502.5 \pm 4.9\,\mathrm {K}\); \(E_{exc} = 18\,\mathrm {mJ}\); \(P_{int} = 1 \,\mathrm {W}\). Curve fitting input parameters: \(\varLambda = \varLambda _{cal} = 29.33\,\mu \mathrm {m}\); \(\sigma = \sigma _{BP} = 192\,\mu \mathrm {m}\); \(\omega = \omega _{cal} = 230\,\mu \mathrm {m}\). Curve fitting results: \(c_{s,\mathrm {LITA}} = 356 \pm 2\,\text {m s}^{-1}\); \(\varGamma _{\mathrm {LITA}} = 41 \pm 1\,\mathrm {mm}^2\, \mathrm{s}^{-1}\)

Figure 9 depicts acoustic damping rate ratios \(\varGamma _{\mathrm {LITA}}/\varGamma _{c,\mathrm {NIST}}\) for pure argon at room temperature for various pressures up to \(8\, \mathrm {MPa}\). The values are compared to the experimental and theoretical investigations by Li et al. (2002). For pressures up to \(4 \,\mathrm {MPa}\), our experimental investigation shows good agreement with the data by Li et al. (2002). Points at the same temperatures and pressures indicate experiments with similar operating conditions performed on different days. The deviation between those points is most likely caused by beam steering effects of the excitation beams due to staining of the quartz windows resulting from a slightly excessive pulse energy.
These stains could change the Gaussian beam width of the excitation beams without changing the beam crossing angle. Acoustic damping rate ratios at higher pressures do not show a linear dependence on pressure. Considering the critical pressure of argon, \(p_{c} = 4.9\,\mathrm {MPa}\), deviations are expected, since shear and bulk viscosity show non-linear behaviour in the vicinity of the critical point, as indicated by Meier et al. (2004) and Meier et al. (2005).

Fig. 9 Acoustic damping rate ratio \(\varGamma _{\mathrm {LITA}}/\varGamma _{c,\mathrm {NIST}}\) over chamber pressure \(p_{ch}\) for pure argon at room temperature. Classical acoustic damping rates are estimated using the NIST database by Lemmon et al. (2018). Experimental and theoretical data are taken from Li et al. (2002)

The authors hypothesize that the high values of the acoustic damping rates in Fig. 9 result from thermodynamic anomalies in the vicinity of the critical point. These anomalies are an exponential increase in shear viscosity above the critical pressure, as shown by Meier et al. (2004), and peak values of the bulk viscosity in the vicinity of the critical pressure, as simulated for Lennard–Jones fluids by Meier et al. (2005). We further emphasise that bulk viscosities are neglected in the theoretical fit by Li et al. (2002), and that the fit is only validated for pressures up to \(2.53 \ \mathrm {MPa}\). The comparison in Fig. 9 indicates the capability of spatially highly resolved LITA measurements to extract speed of sound and acoustic damping rates in fluids. However, for a more precise calibration, theoretical or experimental data of acoustic damping rates at high pressures and temperatures are necessary. Hence, further investigations of theoretical approximations and experimental data of bulk viscosities are essential to validate the presented post-processing curve fitting algorithm for high pressures.

In this study, the challenges as well as the potential of laser-induced thermal acoustics for small-scale macroscopic fluid phenomena occurring in jet disintegration or droplet evaporation are presented. By applying LITA with an optical arrangement using focused beams, we can successfully acquire transport properties using an elliptical measurement volume with a spatial resolution of \(312\, \mu \mathrm {m}\) in diameter and less than \(2\,\mathrm {mm}\) in length. The speed of sound is measured in a high-pressure and high-temperature environment for various fluids using LITA together with a direct Fourier analysis. To validate these measurements and assess their measurement uncertainties, a comparison with the NIST database by Lemmon et al. (2018) is implemented. Using a confidence interval of \(95 \ \%\), the relative uncertainties for the speed of sound data are within \(3 \ \%\) of the acquired values. Furthermore, acoustic damping rates are acquired by curve fitting experimental LITA signals to a simplified analytical expression based on the model by Schlamp et al. (1999). Validation using pure argon at elevated pressures shows promising results and, hence, confirms the capability of LITA to simultaneously measure transport properties in small-scale fluid phenomena. The importance of these transport properties measured using LITA is twofold. Investigations in pure fluids in the vicinity of their critical point and across their Widom line enable us to study transitions between supercritical fluid states on a macroscopic level.
Moreover, by applying suitable models for speed of sound, acoustic damping rates, as well as thermal diffusivities in fluid mixtures, it is possible to determine mixing parameters in macroscopic fluid phenomena on a small scale.

The experimental data are not yet publicly available.

\(A_{P1,P2}\) : Complex amplitudes of the acoustic waves \(\left( -\right)\)
\(A_{T}\) : Complex amplitudes of the thermal grating \(\left( -\right)\)
B : Amplitude of harmonic oscillation \(\left( -\right)\)
C :
\(D_{T}\) : Thermal diffusivity \(\left( \mathrm {m}^{2} \ \mathrm{s}^{-1}\right)\)
\(E_{exc}\) : Pulse energy of the excitation beams \(\left( \mathrm {kg} \ \mathrm{m}^{2} \ \mathrm{s}^{-2}\right)\)
I : Signal intensity \(\left( -\right)\)
\(P_{1,2}\) : Complex parameter to compute \(\varPsi (t)\); \(P_{1,2} = A_{P1,P2}\varSigma _{P1,P2}\) \(\left( -\right)\)
\(P_{int}\) : Power of the interrogation beam \(\left( \mathrm {kg} \ \mathrm {m}^{2} \ \mathrm {s}^{-3}\right)\)
T : Complex parameter to compute \(\varPsi (t)\); \(T = A_{T}\varSigma _{T}\) \(\left( -\right)\)
\(T_{ch}\) : Fluid temperature in measurement chamber \(\left( \mathrm {K}\right)\)
\(T_{mix}\) : Local mixing temperature \(\left( \mathrm {K}\right)\)
\(U_{\varTheta }\) : Dimensionless modulation depth of thermalization grating \(\left( -\right)\)
\(U_{eP}\) : Dimensionless modulation depth of electrostriction grating \(\left( -\right)\)
\(c_{p}\) : Specific isobaric heat capacity \(\left( \mathrm {m}^{2} \ \mathrm {s}^{-2} \ \mathrm {K}^{-1}\right)\)
\(c_{s}\) : Speed of sound \(\left( \mathrm {m} \ \mathrm {s}^{-1}\right)\)
f : Focal length \(\left( \mathrm {m}\right)\)
j : Indicator related to fluid behaviour; \(j=1\): resonant; \(j=2\): non-resonant \(\left( -\right)\)
p : Pressure \(\left( \mathrm {kg} \ \mathrm {m}^{-1} \ \mathrm {s}^{-2}\right)\)
q : Magnitude of the grating vector; \(q = 2 \pi / \varLambda\) \(\left( \mathrm {m}^{-1}\right)\)
s : Specific entropy \(\left( \mathrm {m}^{2} \ \mathrm {s}^{-2} \ \mathrm {K}^{-1}\right)\)
t : Time \(\left( \mathrm {s}\right)\)
\(t_{0}\) : Time of laser pulse \(\left( \mathrm {s}\right)\)
v : Fluid velocity component in y-direction \(\left( \mathrm {m} \ \mathrm{s}^{-1}\right)\)
w : Fluid velocity component in z-direction \(\left( \mathrm {m} \ \mathrm{s}^{-1}\right)\)
\(x_{Fl}\) : Local mole fraction of a fluid in a mixture \(\left( -\right)\)
x : Cartesian coordinate \(\left( \mathrm {m}\right)\)
\(\varDelta y\) : Beam distance in front of lens \(\left( \mathrm {m}\right)\)
\(\varDelta z_{L}\) : Distance between the interrogation and excitation beams in front of lens \(\left( \mathrm {m}\right)\)
\(\varGamma\) : Acoustic damping rate \(\left( \mathrm {m}^2 \ \mathrm {s}^{-1}\right)\)
\(\varGamma _{c}\) : Classical acoustic damping rate; bulk viscosities are neglected \(\left( \mathrm {m}^2 \ \mathrm {s}^{-1}\right)\)
\(\varGamma _{m}\) : Pressure corrected classical acoustic damping rate according to the theoretical and empirical considerations of Li et al.
(2002) \(\left( \mathrm {m}^2 \ \mathrm {s}^{-1}\right)\)
\(\varLambda\) : Grid spacing of the optical interference pattern \(\left( \mathrm {m}\right)\)
\(\varPhi\) : Crossing angle of interrogation beam \(\left( \mathrm {rad}\right)\)
\(\varPsi (t)\) : Time-dependent dimensionless diffraction efficiency of a LITA signal \(\left( -\right)\)
\(\varSigma _{P1,P2}\) : Complex parameter related to the damping of oscillations \(\left( -\right)\)
\(\varSigma _{T}\) : Complex parameter related to the damping of the signal \(\left( -\right)\)
\(\varTheta\) : Crossing angle of excitation beam \(\left( \mathrm {rad}\right)\)
\(\alpha\) : Calibration constant \(\left( \mathrm {s}^{2} \ \mathrm {kg}^{-1} \ \mathrm {m}^{-2}\right)\)
\(\beta\) : Decay rate \(\left( \mathrm {s}^{-1}\right)\)
\(\bar{\eta }\) : Misalignment length scale in y-direction \(\left( \mathrm {m}\right)\)
\(\gamma\) : Specific heat ratio \(\left( -\right)\)
\(\gamma _{n \varTheta }\) : Rate of excited-state energy decay not caused by thermalization \(\left( \mathrm {s}^{-1}\right)\)
\(\gamma _{\varTheta }\) : Rate of excited-state energy decay caused by thermalization \(\left( \mathrm {s}^{-1}\right)\)
\(\kappa\) : Thermal conductivity \(\left( \mathrm {kg} \ \mathrm{m} \ \mathrm{s}^{-3} \ \mathrm{K}^{-1}\right)\)
\(\lambda\) : Wavelength \(\left( \mathrm {m}\right)\)
\(\mu _{v}\) : Dynamic bulk viscosity \(\left( \mathrm {kg} \ \mathrm{m}^{-1} \ \mathrm{s}^{-1}\right)\)
\(\mu _{s}\) : Dynamic shear viscosity \(\left( \mathrm {kg} \ \mathrm{m}^{-1} \ \mathrm{s}^{-1}\right)\)
\(\nu\) : Dominating frequency of the LITA signal \(\left( \mathrm {s}^{-1}\right)\)
\(\omega\) : Gaussian half-width of excitation beams \(\left( \mathrm {m}\right)\)
\(\varrho\) : Mass density \(\left( \mathrm {kg} \ \mathrm{m}^{-3}\right)\)
\(\sigma\) : Gaussian half-width of interrogation beam \(\left( \mathrm {m}\right)\)
\(\tau\) : Laser pulse length \(\left( \mathrm {s}\right)\)
\(\upsilon\) : Angular frequency \(\left( \mathrm {s}^{-1}\right)\)
\(\upsilon _{0}\) : Natural angular frequency associated with speed of sound \(\left( \mathrm {s}^{-1}\right)\)
\(\bar{\zeta }\) : Misalignment length scale in z-direction \(\left( \mathrm {m}\right)\)
Ar : Related to fluid: argon
BP : Related to beam profiler measurement
\(\mathrm {DFT}\) : Related to calculations using a direct Fourier transformation
\(\mathrm {LITA}\) : Related to measurement using LITA
N2 : Related to fluid: nitrogen
\(\mathrm {NIST}\) : Related to theoretical calculations using NIST database by Lemmon et al.
(2018)
atm : Unit of pressure used: atmospheres
c : Related to properties at the critical point
cal : Related to calibration
ch : Related to condition in measurement chamber
exc : Related to excitation beam
int : Related to interrogation beam
mix : Related to local condition of mixture
r : Reduced properties scaled with the properties related to the critical point
th : Related to theoretical calculations using data sheet specifications
\(\mathcal {O}\left( \cdot \right)\) : Order of magnitude
\(\left( \cdot \right) ^*\) : Complex conjugate
DFT: Direct Fourier transformation
GLP: Glan–Laser polarizer
LAR: Least-Absolute Residuals
LIGS: Laser-induced (transient) grating spectroscopy
LITA: Laser-induced thermal acoustics
MM: Multi-mode fibre with diameter of \(25 \ \mu \mathrm {m}\)
NIST: National Institute of Standards and Technology
PBS: Polarizing beam splitter
SBS: Stimulated Brillouin scattering
SM: Single-mode fibre with diameter of \(4 \ \mu \mathrm {m}\)
STBS: Stimulated thermal Brillouin scattering
STRS: Stimulated (thermal) Rayleigh scattering
WP: \(\lambda /2\)-wave plate

Baab S, Förster FJ, Lamanna G, Weigand B (2016) Speed of sound measurements and mixing characterization of underexpanded fuel jets with supercritical reservoir condition using laser-induced thermal acoustics. Exp Fluids 57(11):3068. https://doi.org/10.1007/s00348-016-2252-3 Baab S, Steinhausen C, Lamanna G, Weigand B, Förster FJ (2018) A quantitative speed of sound database for multi-component jet mixing at high pressure. Fuel 233:918–925. https://doi.org/10.1016/j.fuel.2017.12.080 Bencivenga F, Cunsolo A, Krisch M, Monaco G, Ruocco G, Sette F (2009) High frequency dynamics in liquids and supercritical fluids: a comparative inelastic x-ray scattering study. J Chem Phys 130(6):064501. https://doi.org/10.1063/1.3073039 Bork B, Preusche A, Weckenmann F, Lamanna G, Dreizler A (2017) Measurement of species concentration and estimation of temperature in the wake of evaporating n-heptane droplets at trans-critical conditions. Proc Combustion Inst 36(2):2433–2440. https://doi.org/10.1016/j.proci.2016.07.037 Crua C, Manin J, Pickett LM (2017) On the transcritical mixing of fuels at diesel engine conditions. Fuel 208:535–548. https://doi.org/10.1016/j.fuel.2017.06.091 Cummings EB (1995) Laser-Induced Thermal Acoustics. PhD-Thesis, California Institute of Technology, Pasadena Cummings EB, Leyva IA, Hornung HG (1995) Laser-induced thermal acoustics (LITA) signals from finite beams. Appl Opt 34(18):3290–3302. https://doi.org/10.1364/AO.34.003290 Demizu M, Terazima M, Kimura Y (2008) Transport properties of binary mixtures of carbon dioxide and 1-butyl-3-methylimidazolium hexafluorophosphate studied by transient grating spectroscopy. Anal Sci 24:1329–1334 Eichler HJ, Günter P, Pohl DW (1986) Laser Induced Dynamic Gratings. Springer series in optical sciences; 50, Springer, Berlin, Heidelberg, New York, Tokyo Falgout Z, Rahm M, Sedarsky D, Linne M (2016) Gas/fuel jet interfaces under high pressures and temperatures. Fuel 168:14–21. https://doi.org/10.1016/j.fuel.2015.11.061 Förster FJ (2016) Laser-induced thermal acoustics: simultaneous velocimetry and thermometry for the study of compressible flows. PhD-Thesis, University of Stuttgart, Stuttgart Förster FJ, Baab S, Lamanna G, Weigand B (2015) Temperature and velocity determination of shock-heated flows with non-resonant heterodyne laser-induced thermal acoustics. Applied Physics B 121(3):235–248.
https://doi.org/10.1007/s00340-015-6217-7 Förster FJ, Baab S, Steinhausen C, Lamanna G, Ewart P, Weigand B (2018) Mixing characterization of highly underexpanded fluid jets with real gas expansion. Exp Fluids 59(3):6247. https://doi.org/10.1007/s00348-018-2488-1 Hemmerling B, Kozlov DN (1999) Generation and temporally resolved detection of laser-induced gratings by a single, pulsed Nd: YAG laser. Appl Opt 38(6):1001. https://doi.org/10.1364/AO.38.001001 Hubschmid W, Hemmerling B, Stampanoni-Panariello A (1995) Rayleigh and Brillouin modes in electrostrictive gratings. J Opt Soc Am B 12(10):1850. https://doi.org/10.1364/JOSAB.12.001850 Joint Committee for Guides in Metrology (2008) Evaluation of measurement data - Guide to the expression of uncertainty in measurement: GUM 1995 with minor corrections, 1st edn. JCGM 100:2008 Kimura Y, Kanda D, Terazima M, Hirota N (1995) Application of the transient grating method to the measurement of transport properties for high pressure fluids. Phys Chem Chem Phys 99(2):196–203. https://doi.org/10.1002/bbpc.19950990214 Kimura Y, Yamamoto Y, Fujiwara H, Terazima M (2005a) Vibrational energy relaxation of azulene studied by the transient grating method. I. Supercritical fluids. J Chem Phys 123(5):054512. https://doi.org/10.1063/1.1994847 Kimura Y, Yamamoto Y, Terazima M (2005b) Vibrational energy relaxation of azulene studied by the transient grating method. II. Liquid solvents. J Chem Phys 123(5):054513. https://doi.org/10.1063/1.1994848 Lamanna G, Steinhausen C, Weigand B, Preusche A, Bork B, Dreizler A, Stierle R, Gross J (2018) On the importance of non-equilibrium models for describing the coupling of heat and mass transfer at high pressure. Int Commun Heat Mass Transf 98:49–58. https://doi.org/10.1016/j.icheatmasstransfer.2018.07.012 Lamanna G, Steinhausen C, Weckenmann F, Weigand B, Bork B, Preusche A, Dreizler A, Stierle R, Gross J (2020) Laboratory Experiments of High-Pressure Fluid Drops: Chapter 2. American Institute of Aeronautics and Astronautics (Hg) – High-Pressure Flows for Propulsion Applications pp 49–109, https://doi.org/10.2514/5.9781624105814.0049.0110 Latzel H, Dreier T (2000) Sound velocity, heat conduction and virial coefficients of gaseous mixtures at high pressure from NIR laser-induced grating experiments. Phys Chem Chem Phys 2(17):3819–3824. https://doi.org/10.1039/b003271i Lemmon EW, Bell IH, Huber ML, McLinden MO (2018) NIST Standard Reference Database 23: Reference Fluid Thermodynamic and Transport Properties-REFPROP, Version 10.0, National Institute of Standards and Technology. https://doi.org/10.18434/T4JS3C, https://www.nist.gov/srd/refprop Li Y, Roberts WL, Brown MS (2002) Investigation of gaseous acoustic damping rates by transient grating spectroscopy. AIAA J 40(6):1071–1077. https://doi.org/10.2514/2.1790 Li Y, Roberts WL, Brown MS, Gord JR (2005) Acoustic damping rate measurements in binary mixtures of atomic species via transient-grating spectroscopy. Exp Fluids 39(4):687–693. https://doi.org/10.1007/s00348-005-1012-6 Meier K, Laesecke A, Kabelac S (2004) Transport coefficients of the Lennard–Jones model fluid. I. Viscosity. J Chem Phys 121(8):3671–3687. https://doi.org/10.1063/1.1770695 Meier K, Laesecke A, Kabelac S (2005) Transport coefficients of the Lennard-Jones model fluid. III. Bulk viscosity. J Chem Phys 122(1):14513. https://doi.org/10.1063/1.1828040 Moffat RJ (1988) Describing the uncertainties in experimental results. Exp Thermal Fluid Sci 1(1):3–17. 
https://doi.org/10.1016/0894-1777(88)90043-X Müller H, Niedermeier CA, Matheis J, Pfitzner M, Hickel S (2016) Large-eddy simulation of nitrogen injection at trans- and supercritical conditions. Phys Fluids 28(1):015102. https://doi.org/10.1063/1.4937948 Mysik SV (2015) Analyzing the acoustic spectra of sound velocity and absorption in amphiphilic liquids. St Petersburg Polytech Univ J 1(3):325–331. https://doi.org/10.1016/j.spjpm.2015.12.003 Nomura H, Nakaya S, Tsue M (2020) Microgravity Research on Quasi-Steady and Unsteady Combustion of Fuel Droplet at High Pressures: Chapter 1. American Institute of Aeronautics and Astronautics (Hg) – High-Pressure Flows for Propulsion Applications pp 1–47, https://doi.org/10.2514/5.9781624105814.0001.0048 Powers PE (2013) Field guide to nonlinear optics, SPIE field guide series, vol FG29. SPIE Press, Bellingham Washington USA Qiao L, Jain S, Mo G (2020) Molecular Simulations to Research Supercritical Fuel Properties: Chapter 10. American Institute of Aeronautics and Astronautics (Hg) – High-Pressure Flows for Propulsion Applications pp 409–460, https://doi.org/10.2514/5.9781624105814.0409.0460 Santoro M, Gorelli FA (2008) Structural changes in supercritical fluids at high pressures. Physical Review B 77(21), https://doi.org/10.1103/PhysRevB.77.212103 Schlamp S, Cummings EB, Hornung HG (1999) Beam misalignments and fluid velocities in laser-induced thermal acoustics. Appl Opt 38(27):5724. https://doi.org/10.1364/AO.38.005724 Siegman AE (1977) Bragg diffraction of a Gaussian beam by a crossed-Gaussian volume grating. J Opt Soc Am 67(4):545–550 Simeoni GG, Bryk T, Gorelli FA, Krisch M, Ruocco G, Santoro M, Scopigno T (2010) The Widom line as the crossover between liquid-like and gas-like behaviour in supercritical fluids. Nat Phys 6(7):503–507. https://doi.org/10.1038/nphys1683 Stampanoni-Panariello A, Kozlov DN, Radi PP, Hemmerling B (2005a) Gas phase diagnostics by laser-induced gratings I. Theory. Applied Physics B 81(1):101–111, https://doi.org/10.1007/s00340-005-1852-z Stampanoni-Panariello A, Kozlov DN, Radi PP, Hemmerling B (2005b) Gas-phase diagnostics by laser-induced gratings II. Experiments. Appl Phys B 81(1):113–129. https://doi.org/10.1007/s00340-005-1853-y Steinhausen C, Reutzsch J, Lamanna G, Weigand B, Stierle R, Gross J, Preusche A, Dreizler A (2019) Droplet evaporation under high pressure and temperature conditions: a comparison of experimental estimations and direct numerical simulations. Proceedings ILASS-Europe 2019, 29th Conference on Liquid Atomization and Spray Systems, 2–4 September 2019, Paris, France Stierle R, Waibel C, Gross J, Steinhausen C, Weigand B, Lamanna G (2020) On the selection of boundary conditions for droplet evaporation and condensation at high pressure and temperature conditions from interfacial transport resistivities. Int J Heat Mass Transf 151:119450. https://doi.org/10.1016/j.ijheatmasstransfer.2020.119450

The authors gratefully acknowledge the financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project SFB–TRR 75, Project number 84292822. Open Access funding enabled and organized by Projekt DEAL. This study was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project SFB–TRR 75, Project number 84292822.
Institute of Aerospace Thermodynamics (ITLR), University of Stuttgart, Pfaffenwaldring 31, 70569 Stuttgart, Germany: Christoph Steinhausen, Valerie Gerber, Bernhard Weigand & Grazia Lamanna. Institute Reactive Flows and Diagnostics (RSM), Technical University of Darmstadt, Otto-Berndt-Str. 3, 64287 Darmstadt, Germany: Andreas Preusche & Andreas Dreizler. Correspondence to Christoph Steinhausen.

Calibration, validation, and curve fitting codes are implemented using MATLAB (MathWorks). The code is not publicly available.

The temporal shape of the excitation laser pulse can be estimated using a \(\delta\)-function at \(t_{0}\). As discussed by Stampanoni-Panariello et al. (2005a), this assumption is valid since the laser pulse length \(\tau\) is small compared to the reciprocal of the acoustic frequency \(\left( \tau \ll 1/ \left( c_{s} q\right) \right)\) and small compared to the reciprocals of the acoustic decay rates \(\left( \tau \ll 1/ \left( q^{2} \varGamma \right) \right)\) and \(\left( \tau \ll 1/ \left( q^{2} D_{T} \right) \right)\).

The following assumptions are used to derive Eq. (9) in this work from Eqs. (6d), (6e) and (15b) in the work of Schlamp et al. (1999). The assumptions are based on the theoretical considerations by Cummings et al. (1995) and Stampanoni-Panariello et al. (2005a): Cummings et al. (1995) proposed the limit of fast thermalization, \(D_{T} \ll 1\), where \(D_{T}\) denotes the thermal diffusivity. Cummings et al. (1995) further proposed negligible damping over a wave period, \(\varGamma \ll 1\), where \(\varGamma\) denotes the acoustic damping rate. Stampanoni-Panariello et al. (2005a) proposed instantaneous release of absorbed laser radiation into heat, \(c_{s} \ q \ll \gamma _{\varTheta } + \gamma _{n\varTheta }\), where \(c_{s}\) denotes the speed of sound, q denotes the magnitude of the grating wave vector, \(\gamma _{\varTheta }\) denotes the rate of excited-state energy decay caused by thermalization (in the work of Stampanoni-Panariello et al. (2005a), this parameter is denoted as \(\lambda\)), and \(\gamma _{n\varTheta }\) denotes the rate of excited-state energy decay not caused by thermalization (in the work of Stampanoni-Panariello et al. (2005a), this parameter is denoted as \(\psi\)). Note that, using the above assumptions, Eq. (15b) in the work of Schlamp et al. (1999) yields \(A_{D} = 0\), which is not shown in Eq. (9) in this work.

Equation (10) in this work is derived from Eqs. (3b), (13b), (14d) in the work of Schlamp et al. (1999) using the following assumptions and simplifications based on the specific conditions of the presented experimental investigations: Considering the small beam crossing angles in the optical setup, we assume \(\cos \left( \varTheta \right) \approx 1\) and \(\cos \left( \varPhi \right) \approx 1\). Due to the low mass flow rates during the investigations and the vanishing flow velocities, bulk flow velocities can be neglected: \(v = 0\) and \(w = 0\). Due to the negligible bulk flow velocities in the chamber, only a beam misalignment in the horizontal y-direction \(\bar{\eta }\) has an effect on the time history of the LITA signal (Schlamp et al. 1999). Hence, after careful beam alignment through the quartz windows before each measurement, resulting in a maximized signal, the beam misalignment in z-direction is neglected, \(\bar{\zeta } = 0\). Note that, due to the simplified version of Eq.
(9) in this work, \(A_{D} = 0\); therefore, \(D = A_{D} \varSigma _{D} = 0\) and it is not necessary to calculate \(\varSigma _{D}\) in Eq. (14d) from the work of Schlamp et al. (1999). Following the simplifications and assumptions summarized above and considering only the temporal evolution of the LITA signal, Eq. (17) in the work of Schlamp et al. (1999) can be further simplified to Eq. (13) proposed in this work.

Steinhausen, C., Gerber, V., Preusche, A. et al. On the potential and challenges of laser-induced thermal acoustics for experimental investigation of macroscopic fluid phenomena. Exp Fluids 62, 2 (2021). https://doi.org/10.1007/s00348-020-03088-1
Studying the impoverishing effects of procuring medicines: a national study

Mohammadreza Amiresmaili & Zahra Emrani (ORCID: orcid.org/0000-0002-8698-3372)

Medicine prescription is one of the main treatment procedures. Considering the rising burden of drug costs, we conducted this study to estimate the impoverishing effects of medicine purchases on Iranian households. We carried out calculations based on the Iranian National Household Survey for the year 2013. Amoxicillin, atorvastatin and metformin were the drugs selected. Three different poverty lines were applied. Impoverishment was estimated for various scenarios. Additionally, the associations of some demographic factors were tested. Excel 2013 and SPSS v.19 were used. Many households fell under the poverty line after purchasing drugs. Procuring original brand (OB) drugs caused more poverty than lowest-priced generic (LPG) equivalents. Logistic regression showed that the age, gender and literacy of the head of household and the size of the household were associated with impoverishment. This study showed that purchasing medicines increases the impoverishment risk of households. This risk is an index used to assess financial protection against health costs, which is in turn an indicator of health equity. The results will be of practical use for policymakers when addressing different scenarios of setting medicine prices as well as when considering alternatives for cost shifting for cross subsidies in pharmaceutical procurement.

One of the goals of the healthcare system is to make healthcare costs fair for people. In recent decades, increased costs of healthcare, caused by technological advances on the one hand and people's increased expectations and knowledge on the other, have created some problems for financing healthcare [1]. In response to such issues, the United Nations provided a set of policy actions to address the challenge of financing and to achieve the sustainable development goals, which led 193 United Nations member states to publish a post-2015 framework (Addis Ababa Action Agenda) to be implemented by members. Providing essential public services for all, improving national health systems and trying to end poverty, hunger and malnutrition were emphasized [2, 3]. To this end, governments around the world have to take measures to identify the causes of poverty and to protect their people against poverty, impoverishment and hunger [3, 4]. Paying for health services, that is, a family's health care cost, is one of the sources that might push families under the poverty line [5]. A family's healthcare cost is defined as the family's entire economic contribution to the healthcare system, which is divided into two categories: out-of-pocket (OOP) payment and pre-payment. The main difference between the two concerns the pooling of risk across the entire population [6]. Studies have shown that, in countries with pre-payment systems, financial protection is better and healthcare costs are lower [7, 8]. Providing financial justice for all in the health system is very important, albeit difficult, and leads to improved health outcomes, equity and financial protection, and consumer satisfaction. The number of households faced with catastrophic healthcare costs is a measure of fairness in health financing [9]. However, this index is somewhat hidden from society, while falling below the poverty line is very prominent in society.
Thus, an alternative is to look at "impoverishment", which indicates the number of people who have fallen below the poverty line because of healthcare costs [10]. In Iran, a few studies have been conducted to estimate the healthcare costs that cause people to fall under the poverty line. Yazdi-Feizabadi and Akbari conducted a study on the effects of health expenditures on households' impoverishment in Iran and concluded that the mean proportion of households falling under the poverty line was 7.5% over the years 2008–2014 [11]. Rezapour et al. (2013) also studied poverty in Tehran, based on 2013 data. Their study indicated that the poverty rate among households was 4.38% when healthcare costs were included and 3.6% when they were excluded [12].

Medication costs represent the highest category of healthcare costs for households [13]. The use of medication is very high among Iranian households, and many factors have been identified to explain this excessive use. Almost 53% of Iranians self-medicate, which is several times the world rate [14]. Additionally, changes in health status indicate a shift in disease patterns from contagious to non-contagious and chronic diseases [15], and an increasingly aging population in the future [16]. Chronic diseases cause pain and disability, reduce quality of life and increase the need for medication [14]. The need for repeated doctors' visits, the high costs of visits, easy access to medication and the widespread prevalence of illness are among the factors that cause people to buy medications without a prescription [14, 17,18,19]. In such circumstances, all of the drug costs fall on the people. The continuation of this situation can cause financial problems and even treatment avoidance. Despite the importance of drug costs in Iran, so far no study has been conducted to specifically investigate the impact of these costs on households' financial indices. Therefore, the present study aimed to investigate the effect of medication costs on households and their impoverishment.

Health system in Iran
In Iran, the Ministry of Health and Medical Education (MOHME) is not only responsible for education and training but also provides healthcare services [20]. This organization, with the help of the universities of medical sciences in the different provinces, oversees and monitors the operation of the health system [21]. The Iranian health system is an insurance-based system [22]. There are both governmental and largely employer-based independent insurance organizations in the healthcare financing system [23, 24]. The governmental insurance schemes include: A) Iran's Health Insurance Organization (IHIO), which consists of five funds (the civil servants fund, the Iranian medical insurance fund, the rural and nomads fund, the universal health insurance fund, and the other social strata fund) [25]; B) the Social Security Organization (SSO), which covers people subject to labor law; C) the Armed Forces Medical Service Organization (AFMSO), for members of the military and their dependents; and D) the Imam Khomeini Relief Foundation (IKRF), which is designed to cover the uninsured poor. Finally, there are some independent organizations that insure their own employees, including the Iranian national oil company, the Islamic Republic of Iran Broadcasting (IRIB), and banks [23, 24]. Insurers receive some governmental resources (mostly from the revenue of natural resource sales, especially oil) in addition to the premiums, and sometimes receive donations [21].
There are also private health insurances, which are funded through premiums [25]. The existence of different insurances (with different benefit packages) has led to different coverage conditions, even though a fair financing system is part of the stated mission of the Iranian health system [26]. The implementation of health reform plans, the redefinition of procedures and changes in the structure of the financing system are intended to reach this mission.

This cross-sectional study was conducted using the data sets of the Iranian National Household Survey on family expenses and incomes for 2013, conducted by the Statistical Center of Iran. The study population consisted of all urban and rural households in Iran. Sampling was done by the Statistical Center of Iran through stratified multistage sampling in three steps: first, regions are categorized and head counted; second, urban blocks and rural hamlets are selected from the regions; third, households are selected. The sample size is optimized for the aim of the survey, namely to estimate the average income and expenditure of a household. To achieve better estimates, the recruitment of sample households was distributed over all months of the year [27]. The total number of sample households was 38,244, of which 18,854 households were living in cities and 19,390 in rural areas.

Studies that measure the ability to pay for a service or product require three types of information: (a) the price of the product or service, (b) the household income or household consumption expenditure and (c) a threshold [28]. In this study, to examine the ability of the sample households to pay for medication, three types of medication, Amoxicillin capsules, Metformin tablets, and Atorvastatin tablets, were selected. These medications have been identified as popular and widely used by the Food and Drug Administration [29], and their retail price was set as the baseline. Selecting "income" would imply that the household does not have any other resources (for example savings or borrowing) to pay for health; in this study, we therefore preferred to use the "household consumption expenditure" data from the 2013 data set. Three types of poverty line were taken into account as the threshold.

To identify the households that had fallen below the poverty line due to the costs of medication, the direct OOP payments of each household to purchase medication (in addition to their health costs) were subtracted from the total expenses of the household. The resulting figure was then compared with the household's survival expenditure (as the threshold); if it was less than the survival expenditure, the household had fallen under the poverty line. (The survival expenditure of a household is the minimum amount that the household must spend to survive in the community. The poverty line in this study was considered to be the household's minimum survival expenditure.) Particular attention must be paid in the calculation to make sure that the household was not already below the poverty line before spending money on medication [18]. In other words, in the prospective method of impoverishment, the household's impoverishment due to its health costs (OOP) is calculated first. Second, the household's impoverishment after paying for medication (in addition to its previous health costs) is calculated. These two figures are then subtracted to obtain the real impoverishment after purchasing the medication. There are different ways to determine poverty; the poverty lines used here are described in the next section, and a minimal sketch of the two-step impoverishment calculation follows.
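The sketch below is an illustrative Python rendering of the prospective calculation, not the authors' code; all column names, amounts and the per-day drug cost are hypothetical placeholders, and the actual analysis was carried out in Excel 2013 and SPSS v.19 on the national survey data.

```python
import pandas as pd

def impoverished(total_exp, health_oop, extra_drug_cost, poverty_line):
    """True if health spending plus the extra drug cost pushes the household below the line."""
    return (total_exp - health_oop - extra_drug_cost) < poverty_line

# Hypothetical monthly household records (all names and figures are placeholders).
households = pd.DataFrame({
    "total_exp":    [3_000_000, 1_200_000, 900_000],  # total consumption expenditure (Rials)
    "health_oop":   [  200_000,    50_000,  30_000],  # existing out-of-pocket health payments
    "survival_exp": [  800_000,   800_000, 700_000],  # household survival expenditure (threshold)
})
drug_cost = 30 * 10_000  # e.g. a 30-day course at an assumed 10,000 Rials per day

# Step 1: households already impoverished by their existing health OOP payments.
already_poor = impoverished(households.total_exp, households.health_oop, 0.0,
                            households.survival_exp)
# Step 2: households impoverished after additionally paying for the medicine.
poor_after_drug = impoverished(households.total_exp, households.health_oop, drug_cost,
                               households.survival_exp)

# Prospective (real) impoverishment: newly pushed below the line by the medicine itself.
newly_poor = poor_after_drug & ~already_poor
print(f"Impoverished by the medicine: {int(newly_poor.sum())} of {len(households)} households")
```

The key point is the final step: only households pushed below the line by the medicine itself, and not already impoverished by their other health spending, are counted.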
Three types of poverty line were taken into account to obtain the results:

1. Since the poverty line was not announced by officials in 2013, the poverty line stated by economic experts was used ($140 per month for urban households and $84.5 per month for rural households), labelled the "informal poverty line" in this study [30].

2. Urban and rural poverty lines for 2013 were calculated as follows. Households were sorted in ascending order of "food exp" (defined as the food expenditure share of total household expenditure) and classified into 100 percentiles on that basis. The 45th and 55th percentiles were identified, and the weighted average of the equivalent food costs (food expenditure divided by equivalent household size) of the households situated between the 45th and 55th percentiles was calculated. The result is the per capita survival expenditure, or poverty line. The following formulas were used [31], where the first expression selects the households in the 45th–55th percentile band:
$$ food_{45} < food{\exp}_{h} < food_{55} $$
$$ PL=\frac{\sum_{h} W_{h}\, eqfood_{h}}{\sum_{h} W_{h}} $$
"Food exp" denotes the food expenditure share of total household expenditure:
$$ food{\exp}_{h}={food}_{h}/{\exp}_{h} $$
"Eqfood" refers to the food expenditure per equivalent household member:
$$ {Eqfood}_{h}={food}_{h}/{Eqsize}_{h} $$
The equivalent household size (Eqsize) is:
$$ Eqsize={hsize}^{0.56} $$

3. The World Bank has determined a poverty line that provides a basis for comparison between different countries, so the international poverty line was also considered in this study. The international poverty line is US $1.90 and $3.10 per day in 2011 purchasing power parity (PPP), which is convertible to each country's domestic currency based on its PPP conversion factor. To convert international dollars into the local currency (Iranian Rial), the US dollar figure was first multiplied by the PPP conversion factor (which is 5001/363). In the second step, the resulting figure was multiplied by the ratio of the Consumer Price Index (CPI) of 2013 to the CPI of 2011. In this way, inflation between 2011 and the year in which the household surveys were conducted (2013) was adjusted for [32].

In this study, all variables were gathered on a monthly basis. Calculations were based on the Iranian Rial, and results are reported in US dollars. Excel 2013 and SPSS v.19 were used for the calculations. Ten scenarios of taking medicine were created; in each scenario, we calculated the percentage of households that fell below the poverty line due to medication costs. Since this study aims to obtain the results with the prospective method of impoverishment calculation, the probability of disease in the household was taken into account in every scenario. Amoxicillin (500 mg, 3 times a day, $0.11 per day), Atorvastatin (10 mg, once a day, $0.02 per day), and Metformin (500 mg, 3 times a day, $0.03 per day), used to treat sinusitis (53% prevalence) [33], hypercholesterolemia (41.6% prevalence) [34], and type 2 diabetes (8.5% prevalence) [35] respectively, were selected to create the scenarios. Lipitor ($0.39 daily) and Glucophage ($0.15 daily), original brand (OB) drugs used to treat hypercholesterolemia and diabetes respectively, were included in three of the scenarios.

In this section, the results of the analysis of households' medication expenditure under the different scenarios are presented. Table 1 is provided in three parts. In the first part, the different poverty lines appear in the first column.
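Before turning to the remaining columns of Table 1, the two calculated poverty lines described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the study's code: the column names (food, exp, hsize, weight) are hypothetical, and the PPP factor, CPI values and days-per-month figure are placeholders to be replaced with the official numbers.

```python
import pandas as pd

def food_poverty_line(df: pd.DataFrame) -> float:
    """Per-capita survival expenditure from the 45th-55th percentile band of the food share.

    Expects columns: food (food expenditure), exp (total expenditure),
    hsize (household size) and weight (survey sampling weight W_h).
    """
    share = df["food"] / df["exp"]                      # food_exp_h = food_h / exp_h
    lo, hi = share.quantile(0.45), share.quantile(0.55)
    band = df[(share > lo) & (share < hi)]              # households in the 45th-55th band
    eqfood = band["food"] / band["hsize"] ** 0.56       # food spending per equivalent member
    return float((band["weight"] * eqfood).sum() / band["weight"].sum())

def international_poverty_line(usd_per_day, ppp_factor, cpi_survey_year, cpi_2011,
                               days_per_month=30):
    """A 2011-PPP dollar line converted to local currency per month for the survey year."""
    return usd_per_day * ppp_factor * (cpi_survey_year / cpi_2011) * days_per_month

# pl_national = food_poverty_line(survey_households)   # `survey_households` = full survey frame
# The PPP factor (Rials per international dollar) and CPI values below are placeholders and
# must be replaced by the official 2011 PPP conversion factor and the 2011/2013 CPIs.
pl_310 = international_poverty_line(3.10, ppp_factor=5000.0,
                                    cpi_survey_year=150.0, cpi_2011=100.0)
print(f"$3.10 PPP line: {pl_310:,.0f} Rials per month")
```

Either line can then be passed as poverty_line to the impoverishment sketch shown earlier.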
The percentage of households under the poverty line before purchasing drugs can be found in the second column, and the other ten columns show the percentage of households below the poverty line after procurement of the drugs. The second part of the table shows the percentage of households drawn under the poverty line because of purchasing drugs. The third part shows the number of households that have been impoverished by expenditures on drugs.

Table 1. Impoverishment and unaffordability estimates relating to the purchase of medicine under different scenarios.

Table 1 shows that the number of households that fell below the poverty line due to drug costs was greater among rural than urban households. Various poverty lines have been used in the table. According to the poverty line we calculated for this study, 41 households would not have been able to pay for Amoxicillin. According to the international poverty line of $3.10 PPP, paying for Amoxicillin would have caused 20 households to fall below the poverty line, while based on the poverty line of $1.90 PPP, this drug would have been affordable for almost all households. Obviously, a higher poverty line makes drug costs unaffordable for a greater number of households. Furthermore, the proportion of families falling below the poverty line due to paying for Atorvastatin is significantly different from that due to purchasing Lipitor; the same is true for Glucophage compared to Metformin. It can therefore be concluded that people's ability to pay for OB drugs is much lower.

In this study, the risk of disease occurring in society as a whole was assumed to be equal to the risk of disease occurring in the sample households [36]. In the scenarios where more than one drug was used in a household, the probability of more than one disease occurring in a household was calculated by multiplying together the prevalences of the diseases, regardless of the number of family members. This results in a smaller number than the probability of occurrence of only one disease; however, these probabilities do not seem logical. Therefore, the absolute impoverishment numbers in scenarios with more than one drug were not calculated, and some of the cells in Table 1 are thus empty.

Table 2 shows the association between impoverishment (using the poverty line we calculated, of $74.7 per month) and some household characteristics, based on the scenario in which Metformin and Atorvastatin were purchased.

Table 2. Regression results for determinants of impoverishment.

The relationships between demographic factors and households' impoverishment were found to be significant in one-variable logistic regression tests. Female-headed households had 3.4 times greater odds of falling below the poverty line than male-headed households. Illiterate-headed households had four times the odds of falling below the poverty line of literate-headed households. The probability of falling into poverty was lower in households with more than five members. The likelihood of falling into poverty was also lower in households whose heads were between 40 and 60 years old than in those whose heads were either younger or older. In the multi-variable logistic regression test, all factors, except for having an elderly head (over 80 years old), were significant. Female-headed households, illiterate-headed households and having fewer than five household members were all among the risk factors for households' poverty.
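The one-variable and multi-variable logistic regressions behind Table 2 can be reproduced in outline as follows. This is a sketch only, assuming the impoverishment flag and household covariates have already been derived from the survey; the file name and all column names are illustrative, not those of the original data set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis frame: one row per household, with a 0/1 impoverishment flag
# and illustrative covariate names; the file name is a placeholder.
df = pd.read_csv("household_analysis_2013.csv")

# One-variable model: impoverishment versus the gender of the head of household.
single = smf.logit("impoverished ~ head_female", data=df).fit()
print(np.exp(single.params))     # odds ratios, e.g. roughly 3.4 for female-headed households

# Multi-variable model with the demographic factors considered in the study.
multi = smf.logit(
    "impoverished ~ head_female + head_illiterate + C(head_age_group) + small_household",
    data=df,
).fit()
print(multi.summary())           # coefficients and significance
print(np.exp(multi.conf_int()))  # confidence intervals for the odds ratios
```

Exponentiating the fitted coefficients gives the odds ratios quoted above, such as the higher odds of impoverishment for female-headed and illiterate-headed households.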
In Iran, due to the social and economic changes over the past 30 to 40 years, improvements in technology, and increased access to health services, some changes have taken place in the age structure of the population, the incidence of disease, and the causes of mortality. The aging population shows an increased risk of non-contagious diseases, in addition to the dispersion and potential risk of contagious diseases [15]. In this situation, healthcare should be able to cover the needs of society more accurately and quickly [37]. Medication is an essential and effective part of healthcare that accounts for a major share of households' expenditure [38]. The supply of, and access to, the widest variety of effective, safe, high-quality and reasonably priced medication is one of the goals of Iran's 20-year vision plan [39].

This study has shown that 0.2% of Iranian households cannot pay for Amoxicillin, a widely used medication. In addition, 0.2 and 0.3% of households are not able to pay for Atorvastatin and Metformin, respectively (based on our calculation of the poverty line). Although most people would prefer to use OB drugs, these drugs are not affordable for a large proportion of the population: for instance, 12.3% of the population were found to be unable to afford Lipitor and 10.6% Glucophage. Interruptions of treatment due to high drug costs ultimately make disease treatment very costly [39]. A noticeable point made by this study is that withdrawal from treatment, or a reduction of use, is likely among poor households, because of their high sensitivity to price changes [40]. Poverty leads to poor nutrition and unsanitary conditions, and these factors in turn create favorable conditions for disease outbreaks. In fact, these conditions increase the severity of health problems, such that the two variables intensify each other [41]. This increases the likelihood of diseases occurring in several family members or in an individual member of a household. On the other hand, longer and more frequent periods of illness increase the probability of exposure to catastrophic health expenditure [42].

The present study examined the effect of the age, education and gender of the head of household on the probability of becoming poor due to drug costs. The results showed that female-headed households were almost three times more likely to become impoverished than male-headed households. Typically, the incomes of working women are lower than those of working men [43], which could explain this relationship. With increasing age of the head of household, the probability of becoming poor due to healthcare costs also increased. Furthermore, literate-headed households were less likely to fall into poverty than households with illiterate heads. These two factors have also been evaluated by Hanjani et al., whose results showed that the likelihood of exposure to high healthcare costs fell from 7.8 to 7.3% in literate households, and that increasing age of the head of household raises the risk of impoverishment: if the head of household is over 66 years old, the risk increases to 12.2% [31]. In addition, this study found that smaller households were more likely to become impoverished than households with five or more members. A greater number of household members usually means a greater number of employed people and therefore increased income. The protective effect of larger households against impoverishment due to health costs has also been confirmed in a study by Li et al. [44].
However, Ghiasvand's study showed a direct positive relationship between household size and healthcare costs [43]. Poverty caused by the cost of medication, under different scenarios, was found to be greater in rural areas than urban. This could be due to lower incomes in rural areas. Yazdi-Feizabadi and Akbari also showed that households situated in rural areas were at a greater risk of poverty due to healthcare costs than urban households [11]. This study showed that some households could not afford the medical costs of drugs and fell below the poverty line due to those costs. Various characteristics of the households had an impact on their impoverishment risk. However, the important point to make is that household characteristics should not affect whether households fall into poverty. The financial protection system against the costs of illness must act in such a way as to enable households to pay for the medications they need regardless of their ability to pay or the type of medication they use. The results of this analysis will be helpful for identifying vulnerable households, to which more attention should be paid. The data sets used in this study are freely available from Statistical center of Iran. Data of family costs and income is accessible on the following address: https://www.amar.org.ir/%D8%AF%D8%A7%D8%AF%D9%87%D9%87%D8%A7-%D9%88-%D8%A7%D8%B7%D9%84%D8%A7%D8%B9%D8%A7%D8%AA-%D8%A2%D9%85%D8%A7%D8%B1%DB%8C. LPG: Lowest-priced generic OB: PPP: Zare H, Trujillo A, Leidman E, Buttorff C. Income elasticity of health expenditures in Iran. Health Policy Plan. 2013;28(6):665–79. Branca F, Piwoz E, Schultink W, Sullivan LM. Nutrition and health in women, children, and adolescent girls. Bmj. 2015;351:h4173. United Nations. The Addis Ababa action agenda of the third international conference on financing for development. 2015. http://www.un.org/esa/ffd/wp-content/uploads/2015/08/AAAA_Outcome.pdf Montes MF. Five points on the Addis Ababa action agenda. In: South Center. Policy brief 24; 2016. https://www.southcentre.int/wp-content/uploads/2016/03/PB24_Five-points-on-Addis-Ababa-Action-Agenda_EN.pdf. Aregbeshola BS, Khan SM. Determinants of impoverishment due to out of pocket payments in Nigeria. J Ayub Med Coll Abbottabad. 2017;29(2):194–9. Xu K, Evans DB, Carrin G, Aguilar-Rivera AM, Musgrove P, Evans T. Protecting households from catastrophic health spending. Health Aff. 2007;26(4):972–83. Van Lerberghe W, Evans T, Rasanathan K, et al. World health report 2008 — Primary health care: now more than ever. Geneva: World Health Organization; 2008. Yardim M, Cilingiroglu N, Yardim N. Financial protection in health in Turkey: the effects of the health transformation programme. Health Policy Plan. 2014;29(2):177–92. Yardim M, Cilingiroglu N, Yardim N. Catastrophic health expenditure and impoverishment in Turkey. Health Policy. 2010;94(1):26–33. Wagstaff W, Doorslaer E. Catastrophe and impoverishment in paying for health care: with applications to Vietnam 1993–1998. Health Econ. 2003;12(11):921–33. Yazdi-feize-abadi V, Akbari M. Households' catastrophic health expenditures in provinces of Iran. Health Management Research Center: Kerman University of Medical Sciences; 2015. Rezapour A, Ghaderi H, Ebadifard-azar F, Larijani B, Gohari M. Effects of out-of-pocket payment on households in Iran; catastrophic and impoverishment: population based study in Tehran (2012). Life Sci J. 2013;10(3):1457–69. Moghadam MN, Banshi M, Javar MA, Amiresmaili M, Ganjavi S. 
Iranian household financial protection against catastrophic health care expenditures. Iran J Public Health. 2012;41(9):62–70. Azami-Aghdash S, Mohseni M, Etemadi M, Royani S, Moosavi A, Nakhaee M. Prevalence and cause of self-medication in Iran: a systematic review and meta-analysis article. Iran J Public Health. 2015;44(12):1580–93. Naghavi M. Health transition in Iran. Iran J Epidemiol. 2006;2(2):45–57. Viyanchi A, Rasekh HR, Safi Khani HR, Rajabzadeh Ghatari A. Drug insurance coverage in Iran and some selected countries: a comparative study. J Health Adm. 2015;18(60):7–23. Jalilian F, Hazavehei SMM, Vahidinia A, Moghimbeigy A, Motlagh FZ, Mirzaei M. Study of causes of self-medication among Hamadan rovince pharmacies visitors. Sci J Hamedan Univ. 2013;20(2):160–6. Amery H, Vafaee H, Alizadeh H, Ghiasi A, Shamaeianrazavi N, Khalafi A. Estimates of catastrophic health expenditures on families supported by Torbat Heydarieh University of medical sciences in 2012. J Torbat Heydarieh University of Med Sci. 2013;1(2):46–54. Amani F, Mohammadi S, Shaker A, Shahbazzadegan S. Study of arbitrary drug use among students in universities of Ardabil city in 2010. J Ardabil University of Med Sci. 2011;11(3):201–7. Mehrdad R. Health system in Iran. JMAJ. 2009;52(1):69–73. Hajizadeh M, Nghiem HS. Hospital care in Iran: an examination of national health system performance. Int J Healthc Manag. 2013;6(3):201–10. Davari M, Haycox A, Walley T. The Iranian health insurance system; past experiences, present challenges and future strategies. Iran J Public Health. 2012;41(9):1. Hajizadeh M, Nghiem HS. Out-of-pocket expenditures for hospital care in Iran: who is at risk of incurring catastrophic payments? Int J Health Care Finance Econ. 2011;11(4):267–85. Hajizadeh M, Connelly LB. Equity of health care financing in Iran: the effect of extending health insurance to the uninsured. Oxf Dev Stud. 2010;38(4):461–76. Naghdi S, Moradi T, Tavangar F, Bahrami G, Shahboulaghi M, Ghiasvand H. The barriers to achieve financial protection in Iranian health system: a qualitative study in a developing country. Ethiop J Health Sci. 2017;27(5):491–500. Rostamigooran N, Esmailzadeh H, Rajabi F, Majdzadeh R, Larijani B, Dastgerdi MV. Health system vision of Iran in 2025. Iran J Public Health. 2013;42(Supple1):18. An abstraction of the results of Iranian rural and urban households' expenses and incomes survey for 2013. The Statistical Center of Iran. Retrieved September 4, 2017. https://www.amar.org.ir/Portals/0/Files/abstract/1392/ch_hd_shr_92.pdf. Accessed 29 July 2019. Niëns LM, Brouwer WBF. Measuring the affordability of medicines: importance and challenges. Health Policy. 2013;112(1):45–52. ISNA. Medicines which are most used. 2013. Retrieved January 4, 2016. http://isna.ir/fa/news/92112315939. Urban & rural poverty line calculation. Donyae-eghtesad. Retrieved November 14, 2016. http://www.donya-e-eqtesad.com/news/865675/. Accessed 29 July 2019. Hanjani H, Fazaeli A. Estimation of fair financial contribution in health system of Iran. Soc Welf. 2006;5(19):279–300. Ferreira FH, Chen S, Dabalen A, Dikhanov Y, Hamadeh N, Jolliffe D, Narayan A, Prydz EB, Revenga A, Sangraula P, Serajuddin U. A global count of the extreme poor in 2012: data issues, methodology and initial results. J Econ Inequal. 2016;14(2):141–72. Andy SAA, Sarookhani D, Tavirany MR. Prevalence of sinusitis in Iran: a systematic review and meta-analysis study. Pharm Lett. 2016;8(5):31–9. Tabatabaei-Malazy O, Qorbani M, Samavat T, Sharifi F, Larijani B, Fakhrzadeh H. 
Prevalence of dyslipidemia in Iran: a systematic review and meta-analysis study. Int J Prev Med. 2014;5(4):373–93. Iran Diabetes Leadership Forum. The diabetes challenge in the Islamic Republic of Iran 2015. Tehran: Iran Diabetes Leadership Forum; 2015. [cited 17 November 2015]. https://docplayer.net/53441474-The-diabetes-challenge-in-the-islamic-republic-of-iran.html. Niëns LM, Cameron A, Van de Poel E, Ewen M, Brouwer WBF, Laing R. Quantifying the impoverishing effects of purchasing medicines: a cross-country comparison of the affordability of medicines in the developing world. PLoS Med. 2010;7(8):163–80. Gholamian M, Moemeni M, Sakaki S. An integral based approach in pharmaceutical procurement chain. Ind Manag. 2013;8(24):73–88. Delgoshaei B, Tourani S, Khalesi N, Dindust P. Pricing and reimbursement of pharmaceuticals in Iran and selected countries: a comparative study. J Health Adm. 2006;8(22):55–66. Pouragha B, Pourreza A, Hasanzadeh A, Sadrollahi MM, Kh A, Khabiri R. Pharmaceutical costs in social security organization and components influencing its utilization. Health Inf Manage. 2013;10(2):1–11. Barati A, Ghaderi H, Haj-Hasani D. Health consumption pattern in Kermanian households' consumption basket during 1996-2002. Payesh. 2006;5(2):105–11. Raghfar H, Khezri M, Vaez Mahdavi Z, Sangesari Mohazab K. Impact of health insurance inefficiency on poverty among Iranian households. Hakim Res J. 2013;16(1):9–19. Su TT, Kouyaté B, Flessa S. Catastrophic household expenditure for health care in a low-income society: a study from Nouna District, Burkina Faso. Bull World Health Organ. 2006;84(1):21–7. Ghiasvand H, Hadian M, Maleki MR, Shabaninejad H. Determinants of catastrophic medical payments in hospitals affiliated to Iran University of Medical Sciences 2009. Hakim Res J. 2010;13(3):145–54. Li Y, Wu Q, Xu L, Legge D, Hao Y, Gao L, Ning N, Wan G. Factors affecting catastrophic health expenditure and impoverishment from medical expenses in China: policy implications of universal health insurance. Bull World Health Organ. 2012;90(9):664–71. We appreciate the employees of the Statistical Center of Iran for supplying data. We also thank Mr. Akbari Javar for his consultation. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Department of Health Management, Economics and Policy Making, School of Management and Medical Informatics, Kerman University of Medical Sciences, Kerman, Iran Mohammadreza Amiresmaili Department of Health Management and Economics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran Zahra Emrani Health Management Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran Search for Mohammadreza Amiresmaili in: Search for Zahra Emrani in: MA: Conceptualization; design of methodology; Analysis and interpretation of data; Revising the manuscript (critical review, commentary). ZE: Acquisition of data; Application of computations to analyze or synthesize study data; Involved in drafting the manuscript. All authors read and approved the final manuscript. Correspondence to Zahra Emrani. Amiresmaili, M., Emrani, Z. Studying the impoverishing effects of procuring medicines: a national study. BMC Int Health Hum Rights 19, 23 (2019) doi:10.1186/s12914-019-0210-x DOI: https://doi.org/10.1186/s12914-019-0210-x Drug expenditure Healthcare availability, practices and development
The European Physical Journal C November 2014 , 74:3130 | Cite as Measurement of the muon reconstruction performance of the ATLAS detector using 2011 and 2012 LHC proton–proton collision data ATLAS Collaboration R. Abreu Y. Abulaiti T. Agatonovic-Jovin F. Ahmadov H. Akerstedt T. P. A. Åkesson G. L. Alberghi M. J. Alconada Verzini L. Alio L. J. Allison A. Alonso C. Alpigiani A. Altheimer Y. Amaral Coutinho D. Amidei S. P. Amor Dos Santos S. Amoroso G. Amundsen S. Angelidakis I. Angelozzi A. V. Anisenkov J. P. Araque J.-F. Arguin S. Argyropoulos H. Arnold M. Arratia O. Arslan N. Asbah A. Ashkenazi B. Åsman R. Astalos N. B. Atlay B. Auerbach A. Baas J. Backus Mayes P. Bagiacchi P. Balek F. Balli Sw. Banerjee A. A. E. Bannoura Z. Barnovska J. Barreiro Guimarães da Costa P. Bartos A. Bassalat M. Battaglia M. D. Beattie K. Becker C. Becot T. A. Beermann K. Behr A. Bellerive C. Bernard F. U. Bernlochner P. Berta G. Bertoli C. Bertsche D. Bertsche M. Bessner O. Bessidskaia C. Betancourt L. Bianchini J. Bilbao De Mendizabal C. W. Black J. E. Black D. Blackburn J.-B. Blanchard I. Bloch V. S. Bobrovnikov S. S. Bocchetta C. Bock T. T. Boek A. G. Bogdanchikov A. S. Boldyrev J. Bortfeldt J. Bouffard S. Boutouil A. J. Brennan K. Bristow T. M. Bristow J. Brosamer E. Brost J. Brown L. Bryngemark F. Buehrer M. K. Bugge B. Burghgrave I. Burmeister D. Büscher V. Büscher A. I. Butt P. Butti A. Buzatu S. Cabrera Urbán A. Calandri S. Camarda A. Campoverde M. Cano Bret T. Cao G. D. Carrillo-Montoya M. Casolino A. Castelli B. Cerio K. Cerny M. Cerv A. Cervelli D. Charfeddine C. C. Chau A. Chegwidden K. Chen L. Chen H. C. Cheng Y. Cheng V. Chiarella B. K. B. Chow J. J. Chwastowski L. Chytka Z. H. Citron L. Coffey B. Cole G. Compostella P. Conde Muiño S. H. Connell I. A. Connelly N. J. Cooper-Smith A. Corso-Radu D. Côté G. Cottin G. Cree S. Crépé-Renaudin W. A. Cribbs M. Crispin Ortuzar V. Croft C.-M. Cuciuc J. Cummings S. D'Auria M. D'Onofrio O. Dale F. Dallaire A. C. Daniells M. Dano Hoffmann S. Darmora A. Dattagupta C. David P. Davison I. Deigaard F. Deliot C. M. Delitzsch A. Dell'Acqua L. Dell'Asta M. Dell'Orso A. Demilly C. Deterre A. Di Domenico C. Di Donato D. Di Valentino F. A. Dias A. Dimitrievska C. Doglioni S. Donati P. Dondero E. Dubreuil O. A. Ducu M. Dührssen M. Düren A. Durglishvili M. Dyndal Y. Enari O. C. Endner M. Endo G. Ernis A. Ezhilov G. Facini R. J. Falla J. Faltova S. Feigl H. Feng S. Fernandez Perez A. Filipčič M. Filipuzzi K. D. Finelli A. Fischer J. Fischer W. C. Fisher E. A. Fitzgerald G. T. Fletcher G. Fletcher A. Floderus A. C. Florez Bustos S. Fracchia L. Franconi M. Franklin A. Gabrielli S. Gadatsch B. Galhardo G. Galster J. Gao F. M. Garay Walls C. García J. E. García Navarro Ch. Geich-Gimbel D. Gerbaudo P. Giannetti M. Gilchriese T. P. S. Gillam G. Gilles C. Giuliani M. Giulini S. Gkaitatzis I. Gkialas P. C. F. Glaysher M. Goblirsch-Kolb D. Golubkov R. Gonçalo S. González de la Hoz A. Gorišek C. Gössling H. M. X. Grabas L. Graber P. Grafström K.-J. Grahn J. Gramling E. Gramstad K. Grimm Ph. Gris J.-F. Grivaz J. P. Grohs A. Grohsjean G. C. Grossi Z. J. Grout L. Guan F. Guescini O. Gueta E. Guido T. Guillemin C. Gumpert S. Gupta N. G. Gutierrez Ortiz C. Gutschow N. Haddad S. Hageböeck M. Haleem G. Halladjian K. Hamano G. N. Hamity P. G. Hamnett R. Hann A. S. Hard F. Hariri P. F. Harrison A. Hasib A. D. Hawkins T. Heck V. Hedberg T. Heim L. Heinrich J. Hejbal J. Henderson Y. Heng C. Hengler G. H. Herbert Y. Hernández Jiménez R. Herrberg-Schubert R. Hickling E. Higón-Rodriguez E. 
Hill F. Hoenig J. I. Hofmann T. R. Holmes Y. Horii J.-Y. Hostachy M. Hrabovsky T. Hryn'ova C. Hsu S.-C. Hsu Y. Huang T. A. Hülsing E. Ideal T. Iizawa K. Ikematsu Y. Ilchenko Y. Inamaru J. M. Iturbe Ponce R. Iuppa J. Ivarsson M. Jackson D. O. Jamin J. Janssen G. Jarlskog N. Javadov T. Javůrek J. Jejelava G.-Y. Jeng J. Jentzsch C. Jeske S. Jézéquel A. Jinaru J. Jongmanns C. W. Kalderon A. Kamenshchikov N. Karastathis S. N. Karpov Z. M. Karpova K. Karthik A. Katre V. F. Kazanin J. J. Kempster H. Keoshkerian B. P. Kerševan H. Y. Kim S. B. King F. Kiss E.-E. Kluge D. Kobayashi N. Kondrashova K. Köneke A. C. König S. König R. Kopeliansky L. Köpke A. K. Kopp A. A. Korol D. Krasnopevtsev A. Kravchenko M. Kretz K. Kreutzfeldt H. Krüger A. Kruse M. C. Kruse M. Kruskal A. Kuhl R. Kurumida J. Lacey H. Laier S. Lammers E. Lançon A. T. Law C. A. Lee G. Lefebvre A. Lehan W. A. Leight A. Leisos A. G. Leister R. Leone S. Leone C. Leonidopoulos M. Levchenko J. Levêque M. Levy H. L. Li L. Li Y. Li J. Liebal T. H. Lin B. E. Lindquist M. Lisovyi B. Liu B. A. Long J. D. Long B. Lopez Paredes I. Lopez Paz N. Lu R. Lysak E. Lytken D. Macina D. Madaffari R. Madar A. Madsen M. Maeno J. Mahlstedt A. A. Maier Pa. Malecki V. M. Malyshev B. Mandelli I. Mandić R. Mantifel M. Marjanovic C. N. Marques S. P. Marsden H. Martinez L. Massa P. Mättig J. Mattmann D. A. Maximov G. Mc Goldrick S. Meehan C. Meineck B. Meirose S. Mergelmeyer N. Meric J.-P. Meyer L. Mijović M. Mikuž A. Milic T. Mitani A. Miucci J. U. Mjörnmark K. Mochizuki S. Molander K. Mönig C. Monini M. Moreno Llácer S. Moritz K. Motohashi S. Muanza R. D. Mudd J. A. Murillo Quijada H. Musheghyan O. Nackenhorst Y. Nagai H. Namasivayam R. Nayyar P. Yu. Nechaeva P. D. Nef D. H. Nguyen O. Novgorodova S. Nowak K. Ntekas F. Nuti B. J. O'Brien F. O'grady D. C. O'Neil V. O'Shea T. Obermann M. I. Ochoa H. Ohman W. Okamura H. Otono K. P. Oussoren K. Pachal M. Pagáčová M. Palka J. G. Panduro Vazquez L. Paolozzi Th. D. Papadopoulou K. Papageorgiou Fr. Pastore G. Pásztor N. D. Patel J. Pearce L. E. Pedersen M. Pedersen R. Pedro D. V. Perepelitsa M. T. Pérez García-Estañ R. Peschke R. F. Y. Peters N. E. Pettersson E. Pianori A. Pingel S. Pires M. Pitt L. Plazak M.-A. Pleier V. Pleskot P. Plucinski R. Poettgen R. Polifka C. S. Pollard K. Pommès K. Potamianos M. Proissl E. Protopapadaki D. Puddu D. Puldon G. Qin Y. Qin M. Queitsch-Maitland D. Quilty A. Qureshi V. Radeka S. K. Radhakrishnan P. Rados C. Rangel-Smith K. Rao T. Ravenscroft N. P. Readioff L. Rehnisch H. Reisin M. Relich H. Ren O. L. Rezanova P. Rieck J. Rieger E. Ritsch L. Rodrigues O. Røhne M. Ronzani P. Rose R. Rosten M. S. Rudolph F. Rühr A. Ruschke N. Ruthmann S. Sacerdoti A. Saddique H. F.-W. Sadrozinski Y. Sakurai P. H. Sales De Bruin J. Sánchez R. L. Sandbach I. Santoyo Castillo K. Sapp A. Sapronov B. Sarrazin C. Sawyer T. Scanlon V. Scarfone R. Schaefer U. Schäfer C. Schillo Y. J. Schnellbach L. Schoeffel B. D. Schoenrock S. Schramm M. Schreyer N. Schuh H.-C. Schultz-Coulon Ph. Schune Ph. Schwegler Ph. Schwemling F. G. Sciacca E. Scifo F. Scuri F. Scutti T. Serre T. Sfiligoj F. Sforza R. Shang C. Y. Shehu L. Shi C. O. Shimmin S. Shushkevich O. Sidiropoulou D. Sidorov Dj. Sijacki Lj. Simic S. Yu. Sivoklokov J. Sjölin K. Yu. Skovpen L. Smestad S. Yu. Smirnov O. Smirnova G. Snidero F. Socher E. Yu. Soldatov A. Soloshenko P. Sommer H. Y. Song A. Sood A. Sopczak V. Sorin P. Soueid A. M. Soukharev D. South F. Spanò W. R. Spearman F. Spettel L. A. Spiller S. Staerz S. Stamm M. M. Stanitzki M. Stoebe P. Stolte M. E. 
Stramaglia R. Ströhmer A. Struebig S. A. Stucci J. Su R. Subramaniam S. Sun M. Swiatlowski C. Taccini J. Taenzer A. A. Talyshev J. Y. C. Tam K. G. Tan B. B. Tannenwald T. Tashiro A. Tavares Delgado J. J. Teoh S. Terzo J. Thomas-Wilsker R. J. Thompson Yu. A. Tikhonov E. Tiouchichine S. Tokár L. Tomlinson E. Torró Pastor H. L. Tran B. Trocmé M. Trovatelli P. True J. C.-L. Tseng N. Tsirintanis A. N. Tuna S. A. Tupputi S. Turchikhin M. Ughetto F. C. Ungaro C. Unverdorben P. Urquijo A. Usanova N. Valencic L. Valery W. Van Den Wollenberg J. Van Nieuwkoop M. C. van Woerden R. Vanguri G. Vardanyan E. W. Varnes J. Veatch A. Venturini O. Viazlo R. Vigne M. Vogel K. Vorobev Z. Vykydal H. Wahlberg K. Wang X. Wang C. Wanotayaroj D. R. Wardrope S. Webb S. W. Weber J. S. Webster B. Weinert H. Weits S. Williams C. Willis B. T. Winter T. Wittig J. Wittkowski M. Wu T. R. Wyatt L. Xu R. Yakabe Y. Yamaguchi K. Yamauchi W.-M. Yao E. Yatsenko K. H. Yau Wong I. Yeletskikh A. L. Yen E. Yildirim K. Yoshihara C. J. S. Young D. R. Yu J. M. Yu I. Yusuff A. Zaman S. Zambito K. Zengel T. Ženiš L. Zhang L. Zhou K. Zhukov A. Zibell N. I. Zimine C. Zimmermann G. Zurzolo Regular Article - Experimental Physics This paper presents the performance of the ATLAS muon reconstruction during the LHC run with \(pp\) collisions at \(\sqrt{s}=7\)–8 TeV in 2011–2012, focusing mainly on data collected in 2012. Measurements of the reconstruction efficiency and of the momentum scale and resolution, based on large reference samples of \({J/\psi \rightarrow \mu \mu }\), \(Z \rightarrow \mu \mu \) and \({\Upsilon \rightarrow \mu \mu }\) decays, are presented and compared to Monte Carlo simulations. Corrections to the simulation, to be used in physics analysis, are provided. Over most of the covered phase space (muon \(|\eta |<2.7\) and \(5 \lesssim p_{\mathrm{T}}\lesssim 100\) GeV) the efficiency is above \(99\,\%\) and is measured with per-mille precision. The momentum resolution ranges from \(1.7\,\%\) at central rapidity and for transverse momentum \(p_{\mathrm{T}}\simeq 10\) GeV, to \(4\,\%\) at large rapidity and \(p_{\mathrm{T}}\simeq 100\) GeV. The momentum scale is known with an uncertainty of \(0.05\,\%\) to \(0.2\,\%\) depending on rapidity. A method for the recovery of final state radiation from the muons is also presented. The efficient identification of muons and the accurate measurement of their momenta are two of the main features of the ATLAS detector [1] at the LHC. These characteristics are often crucial in physics analysis, as for example in precise measurements of Standard Model processes [2, 3, 4], in the discovery of the Higgs boson, in the determination of its mass [5, 6], and in searches for physics beyond the Standard Model [7, 8]. This publication presents the performance of the ATLAS muon reconstruction during the LHC run at \(\sqrt{s}=7\)–8 TeV, focusing mainly on data collected in 2012. The performance of the ATLAS muon reconstruction has already been presented in a recent publication [9] based on 2010 data. The results presented here are based on an integrated luminosity \(\approx 500\) times larger, which allows a large reduction of the uncertainties. The measurements of the efficiency, of the momentum scale and resolution are discussed with a particular emphasis on the comparison between data and Monte Carlo (MC) simulation, on the corrections used in the physics analyses and on the associated systematic uncertainties. 
Muons with very large transverse momentum,1 \(p_{\mathrm{T}}> 120\) GeV, are not treated here as they will be the subject of a forthcoming publication on the alignment of the ATLAS muon spectrometer and its high-\(p_{\mathrm{T}}\) performance. This publication is structured as follows: Sect. 2 gives a short description of muon detection in ATLAS and Sect. 3 describes the real and simulated data samples used in the performance analysis. The measurement of the reconstruction efficiency is described in Sect. 4 while Sect. 5 reports the momentum scale and resolution. A method for including photons from final-state radiation in the reconstruction of the muon kinematics, is described in Sect. 6. Conclusions are given in Sect. 7. 2 Muon identification and reconstruction A detailed description of the ATLAS detector can be found elsewhere [1]. The ATLAS experiment uses the information from the muon spectrometer (MS) and from the inner detector (ID) and, to a lesser extent, from the calorimeter, to identify and precisely reconstruct muons produced in the \(pp\) collisions. The MS is the outermost of the ATLAS sub-detectors: it is designed to detect charged particles in the pseudorapidity region up to \(|\eta | = 2.7\), and to provide momentum measurement with a relative resolution better than 3 % over a wide \(p_{\mathrm{T}}\) range and up to 10 % at \(p_{\mathrm{T}}\approx 1\) TeV. The MS consists of one barrel part (for \(|\eta | < 1.05\)) and two end-cap sections. A system of three large superconducting air-core toroid magnets provides a magnetic field with a bending integral of about \(2.5\) Tm in the barrel and up to \(6\) Tm in the end-caps. Triggering and \(\eta \), \(\phi \) position measurements, with typical spatial resolution of \(5\)–10 mm, are provided by the Resistive Plate Chambers (RPC, three doublet layers for \(|\eta | < 1.05\)) and by the Thin Gap Chambers (TGC, three triplet and doublet layers for \(1.0<|\eta | < 2.4\)). Precise muon momentum measurement is possible up to \(|\eta |=2.7\) and it is provided by three layers of Monitored Drift Tube Chambers (MDT), each chamber providing six to eight \(\eta \) measurements along the muon track. For \(|\eta |>2\) the inner layer is instrumented with a quadruplet of Cathode Strip Chambers (CSC) instead of MDTs. The single hit resolution in the bending plane for the MDT and the CSC is about \(80\) \(\upmu \)m and \(60\) \(\upmu \)m, respectively. Tracks in the MS are reconstructed in two steps: first local track segments are sought within each layer of chambers and then local track segments from different layers are combined into full MS tracks. The ID provides an independent measurement of the muon track close to the interaction point. It consists of three sub-detectors: the Silicon Pixels and the Semi-Conductor Tracker (SCT) detectors for \(|\eta | < 2.5\) and the Transition Radiation Tracker (TRT) covering \(|\eta | < 2.0\). They provide high-resolution coordinate measurements for track reconstruction inside an axial magnetic field of \(2\) T. A track in the barrel region has typically 3 Pixel hits, 8 SCT hits, and approximately 30 TRT hits. The material between the interaction point and the MS ranges approximately from 100 to 190 radiation lengths, depending on \(\eta \), and consists mostly of calorimeters. 
The sampling liquid-argon (LAr) electromagnetic calorimeter covers \(|\eta |<3.2\) and is surrounded by hadronic calorimeters based on iron and scintillator tiles for \(|\eta |\lesssim 1.5\) and on LAr for larger values of \(|\eta |\). Muon identification is performed according to several reconstruction criteria (leading to different muon "types"), according to the available information from the ID, the MS, and the calorimeter sub-detector systems. The different types are: Stand-Alone (SA) muons: the muon trajectory is reconstructed only in the MS. The parameters of the muon track at the interaction point are determined by extrapolating the track back to the point of closest approach to the beam line, taking into account the estimated energy loss of the muon in the calorimeters. In general the muon has to traverse at least two layers of MS chambers to provide a track measurement. SA muons are mainly used to extend the acceptance to the range \(2.5<|\eta |<2.7\) which is not covered by the ID; Combined (CB) muon: track reconstruction is performed independently in the ID and MS, and a combined track is formed from the successful combination of a MS track with an ID track. This is the main type of reconstructed muons; Segment-tagged (ST) muons: a track in the ID is classified as a muon if, once extrapolated to the MS, it is associated with at least one local track segment in the MDT or CSC chambers. ST muons can be used to increase the acceptance in cases in which the muon crossed only one layer of MS chambers, either because of its low \(p_{\mathrm{T}}\) or because it falls in regions with reduced MS acceptance; Calorimeter-tagged (CaloTag) muons: a track in the ID is identified as a muon if it could be associated to an energy deposit in the calorimeter compatible with a minimum ionizing particle. This type has the lowest purity of all the muon types but it recovers acceptance in the uninstrumented regions of the MS. The identification criteria of this muon type are optimized for a region of \(|\eta |<0.1\) and a momentum range of \(25\lesssim p_{\mathrm{T}}\lesssim 100\) GeV. CB candidates have the highest muon purity. The reconstruction of tracks in the spectrometer, and as a consequence the SA and CB muons, is affected by acceptance losses mainly in two regions: at \(\eta \approx 0\), where the MS is only partially equipped with muon chambers in order to provide space for the services for the ID and the calorimeters, and in the region (\(1.1 < \eta < 1.3\)) between the barrel and the positive \(\eta \) end-cap, where there are regions in \(\phi \) with only one layer of chambers traversed by muons in the MS, due to the fact that some of the chambers of that region were not yet installed.2 The reconstruction of the SA, CB and ST muons (all using the MS information) has been performed using two independent reconstruction software packages, implementing different strategies [10] (named "Chains") both for the reconstruction of muons in the MS and for the ID-MS combination. For the ID-MS combination, the first chain ("Chain 1") performs a statistical combination of the track parameters of the SA and ID muon tracks using the corresponding covariance matrices. The second ("Chain 2") performs a global refit of the muon track using the hits from both the ID and MS sub-detectors. The use of two independent codes provided redundancy and robustness in the ATLAS commissioning phase. 
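As a rough illustration of the ID–MS combination used by Chain 1, the generic sketch below combines two independent measurements of the same track-parameter vector using their covariance matrices (a standard weighted mean). It is an assumption-laden toy, not the ATLAS implementation, and the numerical values are invented.

```python
import numpy as np

def combine_tracks(p_id, cov_id, p_ms, cov_ms):
    """Covariance-weighted combination of two track-parameter measurements.

    p_comb = (C_id^-1 + C_ms^-1)^-1 (C_id^-1 p_id + C_ms^-1 p_ms)
    """
    w_id, w_ms = np.linalg.inv(cov_id), np.linalg.inv(cov_ms)
    cov_comb = np.linalg.inv(w_id + w_ms)
    p_comb = cov_comb @ (w_id @ p_id + w_ms @ p_ms)
    return p_comb, cov_comb

# Toy one-parameter example: q/p measured independently by the ID and the MS (1/GeV).
p_id, cov_id = np.array([0.0101]), np.array([[2.0e-8]])
p_ms, cov_ms = np.array([0.0099]), np.array([[3.0e-8]])
p, cov = combine_tracks(p_id, cov_id, p_ms, cov_ms)
print(f"combined q/p = {p[0]:.5f} +- {np.sqrt(cov[0, 0]):.5f} GeV^-1")
```

Chain 2, by contrast, performs a global refit of the ID and MS hits rather than combining the fitted parameters.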
A unified reconstruction programme ("Chain 3") has been developed to incorporate the best features of the two chains and has been used, in parallel to the other two, for the reconstruction of 2012 data. It is planned to use only Chain 3 for future data taking. So far, the first two chains were used in all ATLAS publications. As the three chains have similar performance, only results for "Chain 1" are shown in the present publication. A summary of the results for the other two chains is reported in Appendix A. The following quality requirements are applied to the ID tracks used for CB, ST or CaloTag muons: at least 1 Pixel hit; at least 5 SCT hits; at most 2 active Pixel or SCT sensors traversed by the track but without hits; in the region of full TRT acceptance, \(0.1<|\eta |<1.9\), at least 9 TRT hits. The number of hits required in the first two points is reduced by one if the track traverses a sensor known to be inefficient according to a time-dependent database. The above requirements are dropped in the region \(|\eta |>2.5\), where short ID track segments can be matched to SA muons to form a CB muon. 3 Data and Monte Carlo samples 3.1 Data samples The results presented in this article are mostly obtained from the analysis of \(\sqrt{s}=8\) TeV \(pp\) collision events corresponding to an integrated luminosity of \(20.3\) fb\(^{-1}\) collected by the ATLAS detector in 2012. Results from \(pp\) collisions at \(\sqrt{s}=7\) TeV, collected in 2011, are presented in Appendix B. Events are accepted only if the ID, the MS and the calorimeter detectors were operational and both solenoid and toroid magnet systems were on. The online event selection was performed by a three-level trigger system described in Ref. [11]. The performance of the ATLAS muon trigger during the 2012 data taking period is reported in Ref. [12]. The \(Z\rightarrow \mu \mu \) candidates have been selected online by requiring at least one muon candidate with \(p_{\mathrm{T}}>24\) GeV, isolated from other activity in the ID. The \({J/\psi \rightarrow \mu \mu }\) and the \({\Upsilon \rightarrow \mu \mu }\) samples used for momentum scale and resolution studies have been selected online with two dedicated dimuon triggers that require two opposite-charge muons compatible with the same vertex, with transverse momentum \(p_{\mathrm{T}}>6\) GeV, and the dimuon invariant mass in the range 2.5–4.5 GeV for the \(J/\psi \) and 8–11 GeV for the \(\Upsilon \) trigger. The \({J/\psi \rightarrow \mu \mu }\) sample used for the efficiency measurement was instead selected using a mix of single-muon triggers and a dedicated trigger requiring a muon with \(p_{\mathrm{T}}>6\) GeV and an ID track with \(p_{\mathrm{T}}> 3.5\) GeV, such that the invariant mass of the muon+track pair, under a muon mass hypothesis, is in the window \(2.7\)–3.5 GeV. This dedicated trigger operated during the whole data taking period with a prescaled rate of \(\approx 1\) Hz. 3.2 Monte Carlo samples Monte Carlo samples for the process \(pp \rightarrow (Z/\gamma ^*) X \rightarrow \mu ^+\mu ^- X\), called \(Z \rightarrow \mu \mu \) in the following, were generated using POWHEG [13] interfaced to PYTHIA8 [14]. The CT10 [15] parton density functions (PDFs) have been used. The PHOTOS [16] package has been used to simulate final state photon radiation (FSR), using the exponentiated mode that leads to multi-photon emission taking into account \(\gamma ^*\) interference in \(Z\) decays. 
To improve the description of the dimuon invariant mass distribution, the generated lineshape was reweighted using an improved Born approximation with a running-width definition of the \(Z\) lineshape parameters. The ALPGEN [17] generator, interfaced with PYTHIA6 [18], was also used to generate alternative \(Z \rightarrow \mu \mu \) samples. Samples of prompt \({J/\psi \rightarrow \mu \mu }\) and of \({\Upsilon \rightarrow \mu \mu }\) were generated using PYTHIA8, complemented with PHOTOS to simulate the effects of final state radiation. The samples were generated requiring each muon to have \(p_{\mathrm{T}}>6.5\)(\(6\)) GeV for \(J/\psi \) (\(\Upsilon \)). The \(J/\psi \) distribution in rapidity and transverse momentum has been reweighted in the simulated samples to match the distribution observed in the data. The samples used for the simulation of the backgrounds to \(Z \rightarrow \mu \mu \) are described in detail in [19], they include \(Z\rightarrow \tau \tau \), \(W\rightarrow \mu \nu \) and \(W\rightarrow \tau \nu \), generated with POWHEG, \(WW\), \(ZZ\) and \(WZ\) generated with SHERPA [20], \(t\bar{t}\) samples generated with MC@NLO [21] and \(b\bar{b}\) as well as \(c\bar{c}\) samples generated with PYTHIA6. All the generated samples were passed through the simulation of the ATLAS detector based on GEANT4 [22, 23] and were reconstructed with the same programs used for the data. The ID and the MS were simulated with an ideal geometry without any misalignment. To emulate the effect of the misalignments of the MS chambers in real data, the reconstruction of the muon tracks in the simulated samples was performed using a random set of MS alignment constants. The amount of random smearing applied to these alignment constants was derived from an early assessment of the precision of the alignment, performed with special runs in which the toroidal magnetic field was off. The knowledge of the alignment constants improved with time. In particular the alignment constants used for the reconstruction of the data were more precise than those used to define the random smearing applied in the simulation, resulting in some cases in a worse MS resolution in MC than in data. 4 Efficiency The availability of two independent detectors to reconstruct the muons (the ID and the MS) enables a precise determination of the muon reconstruction efficiency in the region \(|\eta |<2.5\). This is obtained with the so called tag-and-probe method described in the next section. A different methodology, described in Sect. 4.2, is used in the region \(2.5<|\eta |<2.7\) in which only one detector (the MS) is available. 4.1 Muon reconstruction efficiency in the region \(|\eta |<2.5\) The tag-and-probe method is employed to measure the reconstruction efficiencies of all muon types within the acceptance of the ID (\(|\eta |<2.5\)). The conditional probability that a muon reconstructed by the ID is also reconstructed using the MS as a particular muon type, \(P (\mathrm Type | \mathrm ID )\), with \(\mathrm Type = (\mathrm CB, ST )\), can be measured using ID probes. Conversely, the conditional probability that a muon reconstructed by the MS is also reconstructed in the ID, \(P (\mathrm ID | \mathrm MS )\), is measured using MS tracks as probes. 
For each muon type, the total reconstruction efficiency is given by: $$\begin{aligned} \varepsilon (\mathrm Type ) = \varepsilon (\mathrm Type | \mathrm ID ) \cdot \varepsilon (\mathrm ID ), \end{aligned}$$ where \(\varepsilon (\mathrm ID )\) is the probability that a muon is reconstructed as an ID track. The quantity \(\varepsilon (\mathrm ID )\) cannot be measured directly and is replaced by \(\varepsilon (\mathrm ID | \mathrm MS )\) to give the tag-and-probe approximation: $$\begin{aligned} \varepsilon (\mathrm Type ) \simeq \varepsilon (\mathrm Type | \mathrm ID ) \cdot \varepsilon (\mathrm ID |\mathrm MS ). \end{aligned}$$ The level of agreement of the measured efficiency, \(\varepsilon ^{\mathrm{Data }}(\mathrm Type )\!,\) with the efficiency measured with the same method in MC, \(\varepsilon ^{\mathrm{MC }}(\mathrm Type )\), is expressed as the ratio between these two numbers, called "efficiency scale factor" or SF: $$\begin{aligned} SF=\frac{\varepsilon ^\mathrm{Data }(\mathrm Type )}{\varepsilon ^\mathrm{MC }(\mathrm Type )}. \end{aligned}$$ Possible biases introduced by the tag-and-probe approximation and other systematic effects on the efficiency measurement, which appear both in data and in MC, cancel in the SF. The SF is therefore used to correct the simulation in physics analysis. 4.1.1 The tag-and-probe method with \(Z \rightarrow \mu \mu \) events For \(Z\rightarrow \mu \mu \) decays, events are selected by requiring two oppositely charged isolated muons3 with transverse momenta of at least \(p_{\mathrm{T}}> 25 \) and \(10\) GeV respectively and a dimuon invariant mass within \(10\) GeV of the \(Z\)-boson mass. The muons are required to be back to back in the transverse plane (\(\Delta \phi > 2\)). One of the muons is required to be a CB muon, and to have triggered the readout of the event. This muon is called the "tag". The other muon, the so-called "probe", is required to be a MS track (i.e. a SA or a CB muon) when \(\varepsilon (\mathrm ID | \mathrm MS )\) is to be measured. The probe is required to be a CaloTag muon for the measurement of \(\varepsilon (\mathrm Type | \mathrm ID )\). The use of CaloTag muons as the ID probes reduces the background in the \(Z \rightarrow \mu \mu \) sample by an order of magnitude without biasing the efficiency measurement. The MS probes are also used to measure the efficiency of CaloTag muons. After selecting all tag-probe pairs, an attempt is made to match the probe to a reconstructed muon: a match is successful when the muon and the probe are close in the \(\eta -\phi \) plane (\(\Delta R<0.01\) for CaloTag probes to be matched with CB or ST muons and \(\Delta R<0.05\) for MS probes to be matched to ID or CaloTag muons). 4.1.2 Background treatment in \(Z \rightarrow \mu \mu \) events Apart from \(Z\rightarrow \mu \mu \) events, a small fraction of the selected tag-probe pairs may come from other sources. For a precise efficiency measurement, these backgrounds have to be estimated and subtracted. Contributions from \(Z\rightarrow \tau \tau \) and \(t\bar{t}\) decays are estimated using MC simulation. Additionally, QCD multijet events and \(W\rightarrow \mu \nu \) decays in association with jet activity (\(W+\)jets) can yield tag-probe pairs through secondary muons from heavy- or light-hadron decays. As these backgrounds are approximately charge-symmetric, they are estimated from the data using same-charge (SC) tag-probe pairs. 
This leads to the following estimate of the opposite-charge (OC) background for each region of the kinematic phase-space:
$$\begin{aligned} N(\text {Bkg}) = N_{\text {OC}}^{Z,t\bar{t}\text { MC}} + T \cdot ( N_{\text {SC}}^{\text {Data}} - N_{\text {SC}}^{Z,t\bar{t}\text { MC}} ) \end{aligned}$$
where \(N_{\text {OC}}^{Z,t\bar{t}\text { MC}}\) is the contribution from \(Z\rightarrow \tau \tau \) and \(t\bar{t}\) decays, \(N_{\text {SC}}^{\text {Data}}\) is the number of SC pairs measured in data and \(N_{\text {SC}}^{Z,t\bar{t}\text { MC}}\) is the estimated contribution of the \(Z\rightarrow \mu \mu \), \(Z\rightarrow \tau \tau \) and \(t\bar{t}\) processes to the SC sample. \(T\) is a global transfer factor that takes into account the residual charge asymmetry of the QCD multijet and W+jets samples, estimated using the simulation:
$$\begin{aligned} T = 1 + \theta ;\qquad \theta = \frac{N_{\text {OC}}^{\text {QCD+W MC}} - N_{\text {SC}}^{\text {QCD+W MC}}}{N_{\text {SC}}^{\text {Data}}}. \end{aligned}$$
For the kinematic region covered by the measurement, the transfer factor is \(T=1.15\) for CaloTag probes. For the MS probes the misidentification rate is low and the residual QCD multijet background has a large contribution from oppositely charged muon pairs in \(b\bar{b}\) decays, leading to \(T=2.6\). The efficiency for finding a muon of type \(\text {A}\) given a probe of type \(\text {B}\), corrected for the effect of background, can then be computed as:
$$\begin{aligned} \varepsilon (\text {A}|\text {B}) = \frac{N^\mathrm{Match}_\mathrm{Probes}(\text {Data}) - N^\mathrm{Match}_\mathrm{Probes}(\text {Bkg})}{N^\mathrm{All}_\mathrm{Probes}(\text {Data}) - N^\mathrm{All}_\mathrm{Probes}(\text {Bkg})}, \end{aligned}$$
where \(N^\mathrm{All}_\mathrm{Probes}\) stands for the total number of probes considered and \(N^\mathrm{Match}_\mathrm{Probes}\) is the number of probes successfully matched to a reconstructed muon of type \(\text {A}\). According to the background estimate reported above, the sample of selected CaloTag probes is more than \(99.5\,\%\) pure in \(Z \rightarrow \mu \mu \) decays, as shown in Fig. 1. The \(Z \rightarrow \mu \mu \) purity is maximal for muon \(p_{\mathrm{T}}\simeq 40\) GeV and decreases to \(98.5\,\%\) (\(97\,\%\)) for \(p_{\mathrm{T}}=10\) (\(100\)) GeV. The \(Z \rightarrow \mu \mu \) purity has a weak dependence on the average number of inelastic \(pp\) interactions per bunch crossing, \(\langle \mu \rangle \), decreasing from \(99.8\,\%\) at \(\langle \mu \rangle = 10\) to \(99.5\,\%\) at \(\langle \mu \rangle =34\). A purity above \(99.8\,\%\) is obtained in the selection of MS probes, with weaker dependence on \(p_{\mathrm{T}}\) and \(\langle \mu \rangle \).

Fig. 1 Pseudorapidity distribution of the CaloTag (top) or MS (bottom) probes used in the tag-and-probe analysis. The bottom panel shows the ratio between observed and expected counts. The sum of the MC samples is normalized to the number of events in the data. The green band represents the statistical uncertainty.

4.1.3 Low \(p_{\mathrm{T}}\) efficiencies from \({J/\psi \rightarrow \mu \mu }\) decays

The efficiencies extracted from \(Z\rightarrow \mu \mu \) decays are complemented at low \(p_{\mathrm{T}}\) with results derived from a sample of \(J/\psi \rightarrow \mu \mu \) events.
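Before turning to the J/ψ-based measurement, the background estimate and background-corrected efficiency defined above, together with the data/MC scale factor, can be summarized in a short sketch. This is an illustrative Python sketch, not ATLAS software; every yield is an invented placeholder for a single (pT, η) bin, and in the real measurement each quantity is evaluated bin by bin.

```python
def oc_background(n_oc_mc, n_sc_data, n_sc_mc, transfer_factor):
    """Opposite-charge background estimate: N(Bkg) = N_OC^MC + T * (N_SC^Data - N_SC^MC)."""
    return n_oc_mc + transfer_factor * (n_sc_data - n_sc_mc)

def efficiency(n_match_data, n_match_bkg, n_all_data, n_all_bkg):
    """Background-subtracted tag-and-probe efficiency."""
    return (n_match_data - n_match_bkg) / (n_all_data - n_all_bkg)

T_CALOTAG = 1.15  # transfer factor quoted for CaloTag probes

# Invented yields for one (pT, eta) bin.
bkg_all   = oc_background(n_oc_mc=120.0, n_sc_data=260.0, n_sc_mc=30.0,
                          transfer_factor=T_CALOTAG)
bkg_match = oc_background(n_oc_mc=100.0, n_sc_data=200.0, n_sc_mc=25.0,
                          transfer_factor=T_CALOTAG)

eff_data = efficiency(n_match_data=98_500.0, n_match_bkg=bkg_match,
                      n_all_data=100_000.0, n_all_bkg=bkg_all)
eff_mc = 0.992  # same procedure applied to simulation (placeholder value)

scale_factor = eff_data / eff_mc
print(f"efficiency = {eff_data:.4f}, scale factor = {scale_factor:.4f}")
```

The scale factor obtained in this way is what is used to correct the simulated efficiency in physics analyses.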
In 2012 ATLAS collected approximately 2M \(J/\psi \rightarrow \mu \mu \) decays which were not biased by dimuon triggers requirements, using a combination of single muon triggers (isolated and non-isolated) and the dedicated "muon + track" trigger described in Sect. 3.1. The analysis proceeds in a similar manner to the \(Z \rightarrow \mu \mu \) with some modifications due to the different kinematics of the \(J/\psi \). Tags are required to be CB muons with \(p_{\mathrm{T}}>4\) GeV and \(|\eta |<2.5\). As with the \(Z\), the tag must have triggered the read-out of the event. Probes are sought from amongst the ID tracks and must have \(p_{\mathrm{T}}> 2.5\) GeV and \(|\eta | < 2.5\), opposite charge to the tag muon, and must form with the tag an invariant mass in the window \(2.7\)–3.5 GeV. Finally the tag-probe pairs must fit to a common vertex with a very loose quality cut of \(\chi ^2 < 200\) for one degree of freedom, which removes tracks from different vertices, without any significant efficiency loss. Muon reconstruction efficiencies are then derived by binning in small cells of \(p_{\mathrm{T}}\) and \(\eta \) of the probe tracks. Invariant mass distributions are built in each cell for two samples: (a) all tag-probe pairs and (b) tag-probe pairs in which the probe failed to be reconstructed in the MS. The invariant mass distributions are fitted with a signal plus background model to obtain the number of \(J/\psi \) signal events in the two samples, called \(N_a(p_{\mathrm{T}},\eta )\) and \(N_b(p_{\mathrm{T}},\eta )\), respectively. The fit model is a Gaussian plus a second order polynomial for the background. The two samples are fitted simultaneously using the same mean and width to describe the signal. The MS reconstruction efficiency in a given \((p_{\mathrm{T}},\eta )\) cell is then defined as: $$\begin{aligned} \varepsilon _{p_{\mathrm{T}}, \eta }(\mathrm {Type}|\mathrm {ID}) = 1 - \frac{N_b(p_{\mathrm{T}},\eta )}{N_a(p_{\mathrm{T}},\eta )}. \end{aligned}$$ The largest contribution to the systematic uncertainty originates from the model used in the fit. This uncertainty was estimated by changing the background model to a first or a third order polynomial and by relaxing the constraint that the mass and the width of the \(J/\psi \) signal are the same between the two samples. The resulting variations in the efficiency are added in quadrature to the statistical uncertainty to give the total uncertainty on the efficiency. The efficiency integrated over the full \(\eta \) region is obtained as an average of the efficiencies of the different \(\eta \) cells. This method ensures a reduced dependency on local variations of background and resolution, and on the kinematic distribution of the probes. 4.1.4 Systematic uncertainties The main contributions to the systematic uncertainty on the measurement of the efficiency SFs are shown in Fig. 2, as a function of \(\eta \) and \(p_{\mathrm{T}}\), and are discussed below (the labels in parenthesis refer to the legend of Fig. 2): (Bkg) the uncertainty on the data-driven background estimate is evaluated by varying the charge-asymmetry parameter \(\theta \) of Eq. (5) by \(\pm 100\,\%\). This results in an uncertainty of the efficiency measurement below \(0.1\,\%\) in a large momentum range, reaching up to \(0.2\,\%\) for low muon momenta where the contribution of the background is most significant. 
(dR) the choice of the cone size used for matching reconstructed muons to probe objects has been optimized to minimize the amount of matches with wrong tracks while keeping the maximum match efficiency for correct tracks. A systematic uncertainty is evaluated by varying the cone size by \(\pm 50\,\%\). This yields an uncertainty of \(\approx 0.1\,\%\). (TP approximation) possible biases in the tag-and-probe method, for example due to different distributions between MS probes and "true" muons or due to correlation between ID and MS efficiencies, are investigated. The simulation is used to compare the efficiency measured with the tag-and-probe method with the "true" MC efficiency calculated as the fraction of generator-level muons that are successfully reconstructed. Agreement within less than \(0.1\,\%\) is observed, with the exception of the region \(|\eta |<0.1\). In the extraction of the data/MC scale factors, the difference between the measured and the "true" efficiency cancels to first order. To take into account possible imperfection of the simulation, half the observed difference is used as an additional systematic uncertainty on the SF. (Probes) the scale factor maps may be sensitive to disagreements between data and simulation in the kinematic distributions of the probes. The corresponding systematic uncertainty is estimated by reweighting the distribution of the probes in the simulation to bring it into agreement with the data. The resulting effect on the efficiency is below \(0.1\,\%\) over most of the phase space. (Low \(p_{\mathrm{T}}\)) for \(4<p_{\mathrm{T}}<10\) GeV the systematic uncertainties are obtained from the analysis performed with the \({J/\psi \rightarrow \mu \mu }\) sample, as discussed in Sect. 4.1.3 (not shown in Fig. 2). The resulting uncertainty on the low-\(p_{\mathrm{T}}\) SFs ranges between 0.5 % and 2 %, depending on \(p_{\mathrm{T}}\) and \(\eta \) and is dominated by the uncertainty on the background model. (High \(p_{\mathrm{T}}\)) no significant dependence of the measured SFs with \(p_{\mathrm{T}}\) was observed in the momentum range considered. An upper limit on the SF variation for large muon momenta has been extracted by using a MC simulation with built-in imperfections, including a realistic residual misalignment of the detector components or a 10 % variation of the muon energy loss. On the basis of this, a systematic uncertainty of \(\pm 0.42~\% \times (p_{\mathrm{T}}/ 1\) TeV\()\) is obtained. Systematic uncertainty on the efficiency scale factor for CB+ST muons, obtained from \(Z \rightarrow \mu \mu \) data, as a function of \(\eta \) (top) and \(p_{\mathrm{T}}\) (bottom) for muons with \(p_{\mathrm{T}}>10\) GeV. The background systematic uncertainty in the last two bins of the bottom plot is affected by a large statistical uncertainty. The combined systematic uncertainty is the sum in quadrature of the individual contributions 4.1.5 Results Figure 3 shows the muon reconstruction efficiency \(\varepsilon (\mathrm Type )\) as a function of \(\eta \) as measured from \(Z \rightarrow \mu \mu \) events. The combination of all the muon reconstruction types (for CB, ST, and CaloTag muons) gives a uniform muon reconstruction efficiency of about \(99\,\%\) over most the detector regions. The use of ST muons allows the recovery of efficiency especially in the region \(1.1<\eta < 1.3\) (from \(85\,\%\) to \(99\,\%\)) in which part of the MS chambers were not installed, as discussed in Sect. 2. 
The remaining inefficiency of the combination of CB or ST muons (CB+ST) at \(|\eta |<0.1\) (\(66\,\%\)) is almost fully recovered by the use of CaloTag muons (\(97\,\%\)). Muon reconstruction efficiency as a function of \(\eta \) measured in \(Z \rightarrow \mu \mu \) events for muons with \(p_{\mathrm{T}}>10\) GeV and different muon reconstruction types. CaloTag muons are only shown in the region \(|\eta |<0.1\), where they are used in physics analyses. The error bars on the efficiencies indicate the statistical uncertainty. The panel at the bottom shows the ratio between the measured and predicted efficiencies. The error bars on the ratios are the combination of statistical and systematic uncertainties The efficiencies measured in experimental and simulated data are in good agreement, in general well within 1 %. The largest differences are observed in the CB muons. To reconstruct an MS track, the Chain 1 reconstruction requires track segments in at least two layers of precision chambers (MDT or CSC) and at least one measurement of the \(\phi \) coordinate from trigger chambers (RPC or TGC). These requirements introduce some dependency on detector conditions and on the details of the simulation in the regions in which only two layers of precision chambers or only one layer of trigger chambers are crossed by the muons. This results in a reduction of efficiency in data with respect to MC of approximately 1 % in the region of \(\eta \sim 0.5\) due the RPC detector conditions and to local deviations up to about 2 % at \(0.9<|\eta |<1.3\) related to imperfections in the simulation of the barrel-endcap transition region. For the CB+ST muons the agreement between data and MC is very good, with the only exception of a low-efficiency region in data at \(\eta = 0.3\)–0.4 related to an inactive portion of an MDT chamber (not included in MC) in a region with reduced coverage due to the supporting structure of the ATLAS detector.4 The ID muon reconstruction efficiency, \(\varepsilon (\mathrm ID |\mathrm MS )\), for \(p_{\mathrm{T}}>10\) GeV as a function of \(\eta \) and \(p_{T}\) is shown in Fig. 4. The efficiency is greater than \(0.99\) and there is very good agreement between data and MC. The small efficiency reduction in the region \(1.5<\eta <2\) is related to temporary hardware problems in the silicon detectors. The larger uncertainty at \(|\eta |<0.1\) is related to the limited MS coverage in that region. ID muon reconstruction efficiency as a function of \(\eta \) (top) and \(p_{\mathrm{T}}\) (bottom) measured in \(Z \rightarrow \mu \mu \) events for muons with \(p_{\mathrm{T}}>10\) GeV. The error bars on the efficiencies indicate the statistical uncertainty. The panel at the bottom shows the ratio between the measured and predicted efficiencies. The green areas depict the pure statistical uncertainty, while the orange areas also include systematic uncertainties Reconstruction efficiency for CB (top), CB+ST (middle) and CaloTag (bottom) muons as a function of the \(p_{\mathrm{T}}\) of the muon, for muons with \(0.1 <|\eta |< 2.5\) for CB and CB+ST muons and for \(|\eta | < 0.1\) for CaloTag muons. The upper two plots also show the result obtained with \(Z \rightarrow \mu \mu \) and \({J/\psi \rightarrow \mu \mu }\) events. The insets on the upper plots show the detail of the efficiency as a function of \(p_{\mathrm{T}}\) in the low \(p_{\mathrm{T}}\) region. The CaloTag muon efficiency (bottom) is only measured with \(Z \rightarrow \mu \mu \) events. 
The error bars on the efficiencies indicate the statistical uncertainty for \(Z \rightarrow \mu \mu \) and include also the fit model uncertainty for \({J/\psi \rightarrow \mu \mu }\). The panel at the bottom shows the ratio between the measured and predicted efficiencies. The green areas show the pure statistical uncertainty, while the orange areas also include systematic uncertainties.

Figure 5 shows the reconstruction efficiencies for CB and for CB+ST muons as a function of the transverse momentum, including results from \(Z \rightarrow \mu \mu \) and \({J/\psi \rightarrow \mu \mu }\). A steep increase of the efficiency is observed at low \(p_{\mathrm{T}}\), in particular for the CB reconstruction, since a minimum momentum of approximately 3 GeV is required for a muon to traverse the calorimeter material and cross at least two layers of MS stations before being bent back by the magnetic field. Above \(p_{\mathrm{T}}\approx 20\) GeV, the reconstruction efficiency for both CB and CB+ST muons is expected to be independent of the transverse momentum. This is confirmed within \(0.5\,\%\) by the \(Z \rightarrow \mu \mu \) data. The drop in efficiency observed in the \(J/\psi \) data at \(p_{\mathrm{T}}>15\) GeV is due to the inefficiency of the MS reconstruction for muon pairs with small angular separation, as in the case of highly boosted \(J/\psi \). This effect is well reproduced by MC, and the SFs of the \({J/\psi \rightarrow \mu \mu }\) analysis are in good agreement with those from \(Z \rightarrow \mu \mu \) in the overlap region. The CaloTag muon efficiency reaches a plateau of approximately \(0.97\) for \(p_{\mathrm{T}}\gtrsim 30\) GeV, where it is well predicted by the MC. Figure 6 shows the reconstruction efficiency for CB+ST muons as a function of \(\langle \mu \rangle \), showing a high value (on average above \(0.99\)) and remarkable stability. A small efficiency drop of about \(1\,\%\) is only observed for \(\langle \mu \rangle \gtrsim 35\). This is mainly caused by limitations of the MDT readout electronics in the high-rate regions close to the beam lines. These limitations are being addressed in view of the next LHC run.

Figure 6: Measured CB+ST muon reconstruction efficiency for muons with \(p_{\mathrm{T}}>10\) GeV as a function of the average number of inelastic \(pp\) collisions per bunch crossing \(\langle \mu \rangle \). The error bars on the efficiencies indicate the statistical uncertainty. The panel at the bottom shows the ratio between the measured and predicted efficiencies. The green areas depict the pure statistical uncertainty, while the orange areas also include systematic uncertainties.

4.2 Muon reconstruction efficiency for \(|\eta |> 2.5\)

As described in the previous sections, the CB muon reconstruction is limited by the ID acceptance, which covers the pseudorapidity region \(|\eta |<2.5\). Above \(|\eta |=2.5\), SA muons are the only muon type that provides large efficiency. A measurement of the efficiency SF for muons in the range \(2.5<|\eta |<2.7\), hereafter called high-\(\eta \), is needed for the physics analyses that exploit the full MS acceptance. A comparison with the Standard Model calculations for \(Z \rightarrow \mu \mu \) events is used to measure the reconstruction efficiency SF in the high-\(\eta \) region.
To reduce the theoretical and experimental uncertainties, the efficiency SF is calculated from the double ratio $$\begin{aligned} \mathrm{SF} = \frac{ \frac{N^\mathrm{Data}(2.5<|\eta _\mathrm{fwd}|<2.7)}{N^\mathrm{MC}(2.5<|\eta _\mathrm{fwd}|<2.7)}}{ \frac{N^\mathrm{Data}(2.2<|\eta _\mathrm{fwd}|<2.5)}{N^\mathrm{MC}(2.2<|\eta _\mathrm{fwd}|<2.5)}}, \end{aligned}$$ where the numerator is the ratio of the number of \(Z \rightarrow \mu \mu \) candidates in data and in MC for which one of the muons, called the forward muon, is required to be in the high-\(\eta \) region \(2.5<|\eta _\mathrm{fwd}|<2.7\) while the other muon from the \(Z\) decay, called the central muon, is required to have \(|\eta |<2.5\). The denominator is the ratio of \(Z \rightarrow \mu \mu \) candidates in data over MC with the forward muon lying in the control region \(2.2<|\eta _\mathrm{fwd}|<2.5\) and the central muon in the region \(|\eta | < 2.2\). In both the numerator and denominator the central muon is required to be a CB muon while the forward muon can either be a CB or SA muon. The simulation of muons with \(|\eta |<2.5\) is corrected using the standard SF described in the previous section. The selection of the central muon is similar to that of the tag muon in the tag-and-probe method. It is required to have triggered the event readout, to be isolated and to have transverse momentum \(p_{\mathrm{T}}>25\) GeV. The requirements for the forward muon include calorimeter-based isolation, requiring the transverse energy \(E_T\) measured in the calorimeter in a cone of \(\Delta R = 0.2\) (excluding the energy lost by the muon itself) around the muon track, to be less than \(10\,\%\) of the muon \(p_{\mathrm{T}}\). The central and forward muons are required to have opposite charge, a dimuon invariant mass within 10 GeV of the \(Z\) mass, and a separation in \((\eta ,\phi )\) space of \(\Delta R>0.2\). Different sources of systematic uncertainties have been considered: a first group is obtained by varying the \(p_{\mathrm{T}}\) and isolation cuts on the central muons and the dimuon mass window. These variations produce effects of less than 0.3 % in the efficiency SF for the \(p_{\mathrm{T}}\) range 20–60 GeV. The effect of the calorimetric isolation on the efficiency SF yields an uncertainty of less than 1 %, which is estimated by comparing the nominal SF values with the ones extracted when no calorimetric isolation is applied on the forward muons and by studying the dependence of this cut on the number of \(pp\) interactions. The contribution from the background processes, mainly dimuons from \(b\) and \(\bar{b}\) decays, has been studied using MC background samples and found to be negligible. Reconstruction efficiency for muons within \(2.5<|\eta |<2.7\) from \(Z \rightarrow \mu \mu \) events. The upper plot shows the efficiency obtained as the product of scale factor (Eq. 8) and the MC efficiency. The lower plot shows the scale factor. The error bars correspond to the statistical uncertainty while the green shaded band corresponds to the statistical and systematic uncertainty added in quadrature The theoretical uncertainty from higher-order corrections is estimated by varying the renormalization and factorization scales in the POWHEG NLO calculation at the generator level and is found to produce a negligible effect on the ratio of Eq. (8). 
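A minimal sketch of the double ratio of Eq. (8) is given below; the event counts and function names are invented for illustration and are not the numbers entering the measurement.

```python
# Double-ratio scale factor for the high-eta region, following Eq. (8).
# Event counts are invented placeholders, not ATLAS data.

def double_ratio_sf(n_data_fwd, n_mc_fwd, n_data_ctrl, n_mc_ctrl):
    """SF = (N_data/N_MC with forward muon in 2.5<|eta|<2.7)
          / (N_data/N_MC with forward muon in 2.2<|eta|<2.5).

    Normalisation effects common to both regions (luminosity, cross section,
    central-muon efficiency corrections) largely cancel in the ratio of ratios.
    """
    return (n_data_fwd / n_mc_fwd) / (n_data_ctrl / n_mc_ctrl)

sf_high_eta = double_ratio_sf(n_data_fwd=12_300, n_mc_fwd=12_800,
                              n_data_ctrl=45_100, n_mc_ctrl=46_000)
print(f"high-eta efficiency SF = {sf_high_eta:.3f}")
```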
The uncertainty from the knowledge of the parton densities is estimated by reweighting the PDFs used in the MC samples from CT10 to MSTW2008NLO [24] and by studying, at the generator level, the effect of the uncertainty associated with the MSTW2008 PDF set on the double ratio of Eq. (8), obtaining an overall theoretical uncertainty of less than \(0.55\,\%\). The efficiency in this region is obtained as the product of the SF and the "true" MC efficiency, calculated as the fraction of generator-level muons that are successfully reconstructed. The reconstruction efficiency and the SF for muons in the high-\(\eta \) region are shown in Fig. 7 as a function of the muon \(p_{\mathrm{T}}\).

4.3 Scale factor maps

The standard approach used in ATLAS for physics analysis is to correct the muon reconstruction efficiency in the simulation using efficiency scale factors (SFs). The SFs are obtained with the tag-and-probe method using \(Z \rightarrow \mu \mu \) events, as described above, and are provided to the analyses in the form of \(\eta \)–\(\phi \) maps. Since no significant \(p_{\mathrm{T}}\) dependence of the SF has been observed, no \(p_{\mathrm{T}}\) binning is used in the SF maps. Different maps are produced for different data-taking sub-periods with homogeneous detector conditions. The whole 2012 dataset is divided into 10 sub-periods. For each analysis, the final map is obtained as an average of the maps for all sub-periods, weighted by the periods' contribution to the integrated luminosity under study. Figures 8 and 9 show the maps of the efficiencies measured using the data in the \(\eta \)–\(\phi \) plane and the corresponding scale factors. The large data sample allows for a precise resolution of localized efficiency losses, for example in the muon spectrometer for \(|\eta | \sim 0\) due to limited coverage. The SF maps show local differences between data and MC related to detector conditions as discussed in Sect. 4.1.5.

Figure 8: Reconstruction efficiency measured in the experimental data (top), and the data/MC efficiency scale factor (bottom) for CB muons as a function of \(\eta \) and \(\phi \) for muons with \(p_{\mathrm{T}}>10\) GeV.

Figure 9: Reconstruction efficiency measured in the experimental data (top) and the data/MC efficiency scale factor (bottom) for CB+ST muons as a function of \(\eta \) and \(\phi \) for muons with \(p_{\mathrm{T}}>10\) GeV.

5 Momentum scale and resolution

The large samples of \({J/\psi \rightarrow \mu \mu }\), \({\Upsilon \rightarrow \mu \mu }\) and \(Z \rightarrow \mu \mu \) decays collected by ATLAS are used to study in detail the muon momentum scale and resolution. The ATLAS simulation includes the best knowledge of the detector geometry, material distribution, and physics model of the muon interaction at the time the MC events were generated. Additional corrections are needed to reproduce the muon momentum resolution and scale of experimental data at the level of precision that can be obtained using high-statistics samples of dimuon resonances. Section 5.1 describes the methodology used to extract the corrections to be applied to the MC simulation. In Sect. 5.2, the muon momentum scale and resolution are studied in the data and in MC samples with and without corrections.

5.1 Corrections to the muon momentum in MC

Similarly to Ref.
[9], the simulated muon transverse momenta reconstructed in the ID and in the MS sub-detectors, \(p_{\mathrm{T}}^\mathrm{MC, Det}\), where \(\mathrm{Det} = \mathrm{ID, MS}\), are corrected using the following equation: $$\begin{aligned} p_{\mathrm{T}}^\mathrm{Cor,Det}&= \frac{p_{\mathrm{T}}^\mathrm{MC,Det} + \sum \limits ^{1}_{n=0} s^\mathrm{Det}_{n}(\eta ,\phi )\,(p_{\mathrm{T}}^\mathrm{MC,Det})^{n}}{1+\sum \limits ^{2}_{m=0} \Delta r^\mathrm{Det}_{m}(\eta ,\phi )\,(p_{\mathrm{T}}^\mathrm{MC,Det})^{m-1}\,g_{m}} \nonumber \\&\quad \text {(with } s^\mathrm{ID}_{0}=0 \text { and } \Delta r^\mathrm{ID}_{0}=0\text {)}, \end{aligned}$$ where \(g_{m}\) are normally distributed random variables with mean 0 and width 1 and the terms \(\Delta r_{m}^\mathrm{Det}(\eta ,\phi )\) and \(s_n^\mathrm{Det}(\eta , \phi )\) describe, respectively, the momentum resolution smearing and the scale corrections applied in a specific \(\eta \), \(\phi \) detector region. The motivations for Eq. (9) are the following: corrections are defined in \(\eta - \phi \) detector regions such that in each region the variation of momentum resolution and scale, and therefore of their possible corrections, is expected to be small. In particular the nominal muon identification acceptance region (up to \(|\eta |=2.7\)) is divided into 18 \(\eta \) sectors of size \(\Delta \eta \) between 0.2 and 0.4, for both the MS and the ID. In addition, the MS is divided into two types of \(\phi \) sectors of approximate size \(\pi /8\), exploiting the octagonal symmetry of the magnetic system: the sectors that include the magnet coils (called "small sectors") and the sectors between two coils (called "large sectors"). The \(\Delta r_{m}^\mathrm{Det}(\eta ,\phi )\) correction terms introduce a \(p_{\mathrm{T}}\)-dependent momentum smearing that effectively increases the relative momentum resolution, \(\frac{ \sigma (p_{\mathrm{T}})}{p_{\mathrm{T}}}\), when it is underestimated by the simulation. The \(\Delta r_{m}^\mathrm{Det}(\eta ,\phi )\) terms can be related to different sources of experimental resolution by comparing the coefficients of the \(p_{\mathrm{T}}\) powers in the denominator of Eq. (9) to the following empirical parametrization of the muon momentum resolution (see for example [25]): $$\begin{aligned} \frac{ \sigma (p_{\mathrm{T}})}{p_{\mathrm{T}}} = r_0/p_{\mathrm{T}}\oplus r_1 \oplus r_2\cdot p_{\mathrm{T}}, \end{aligned}$$ where \(\oplus \) denotes a sum in quadrature. The first term (proportional to \(1/p_{\mathrm{T}}\)) accounts for fluctuations of the energy loss in the traversed material. Multiple scattering, local magnetic field inhomogeneities and local radial displacements are responsible for the second term (constant in \(p_{\mathrm{T}}\)). The third term (proportional to \(p_{\mathrm{T}}\)) describes intrinsic resolution effects caused by the spatial resolution of the hit measurements and by residual misalignment. Energy loss fluctuations are relevant for muons traversing the calorimeter in front of the MS but they are negligible in the ID measurement. For this reason \( \Delta r^\mathrm{ID}_{0}\) is set to zero in Eq. (9). Imperfect knowledge of the magnetic field integral and of the radial dimension of the detector is reflected in the multiplicative momentum scale difference \(s_1^\mathrm{Det} \) between data and simulation.
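To make the structure of Eq. (9) concrete, here is a schematic Python implementation of the correction for one detector type in a single \(\eta\)–\(\phi\) region; the parameter values below are arbitrary placeholders, not the fitted corrections.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def correct_pt(pt_mc, s, dr):
    """Schematic version of Eq. (9) for one (eta, phi) region of one detector.

    pt_mc : simulated transverse momenta [GeV]
    s     : (s0, s1) scale parameters; s0 in GeV, s1 dimensionless
    dr    : (dr0, dr1, dr2) smearing parameters; dr0 in GeV, dr2 in GeV^-1
    """
    pt_mc = np.asarray(pt_mc, dtype=float)
    g = rng.standard_normal((3, pt_mc.size))  # g_m ~ N(0, 1), one per smearing term
    s0, s1 = s
    dr0, dr1, dr2 = dr
    numerator = pt_mc + s0 + s1 * pt_mc                       # sum_n s_n * pT^n
    denominator = 1.0 + (dr0 / pt_mc) * g[0] + dr1 * g[1] + dr2 * pt_mc * g[2]
    return numerator / denominator

# ID-like placeholder parameters (s0 and dr0 fixed to zero, as stated in the text)
print(correct_pt(pt_mc=[20.0, 45.0, 80.0], s=(0.0, -0.9e-3), dr=(0.0, 0.007, 0.15e-3)))
```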
In addition, the \(s^\mathrm{MS}_0(\eta , \phi )\) term is necessary to model the \(p_{\mathrm{T}}\) scale dependence observed in the MS momentum reconstruction due to differences between data and MC in the energy loss of muons passing through the calorimeter and other materials between the interaction point and the MS. As the energy loss between the interaction point and the ID is negligible, \(s^\mathrm{ID}_0(\eta )\) is set to zero. The separate correction of ID and MS momentum reconstruction allows a direct understanding of the sources of the corrections. In a second step the corrections are propagated to the CB momentum reconstruction, \(p_{\mathrm{T}}^\mathrm{Cor,CB}\), using a weighted average: $$\begin{aligned} p_{\mathrm{T}}^\mathrm{Cor,CB} = f\cdot p_{\mathrm{T}}^\mathrm{Cor,ID} + (1 - f)\cdot p_{\mathrm{T}}^\mathrm{Cor, MS}, \end{aligned}$$ with the weight \(f\) derived for each muon by expressing the CB transverse momentum before corrections, \(p_{\mathrm{T}}^\mathrm{MC,CB}\), as a linear combination of \(p_{\mathrm{T}}^\mathrm{MC,ID}\) and \(p_{\mathrm{T}}^\mathrm{MC,MS}\): $$\begin{aligned} p_{\mathrm{T}}^\mathrm{MC,CB} = f\cdot p_{\mathrm{T}}^\mathrm{MC,ID} + (1 - f)\cdot p_{\mathrm{T}}^\mathrm{MC, MS} \end{aligned}$$ and solving the corresponding linear equation. 5.1.1 Correction extraction using a template fit to \({J/\psi \rightarrow \mu \mu }\) and \(Z \rightarrow \mu \mu \) events The MS and ID correction parameters contained in Eq. (9) need to be extracted from data. For this purpose, a MC template maximum likelihood fit is used to compare the simulation to the data for \({J/\psi \rightarrow \mu \mu }\) and \(Z \rightarrow \mu \mu \) candidate events: this gives sensitivity to reconstructed muon momenta in the \(p_{\mathrm{T}}\) range from a few GeV to \(\approx 100\) GeV. The dataset used for the correction extraction consists of \(6\)M \({J/\psi \rightarrow \mu \mu }\) and \(9\)M \(Z \rightarrow \mu \mu \) candidates passing the final selection. The \({J/\psi \rightarrow \mu \mu }\) and \(Z \rightarrow \mu \mu \) candidates have been selected online according to the requirements described in Sect. 3.1 and, offline, by requiring two CB muons. For the correction extraction in a specific \(\eta - \phi \) Region Of Fit (ROF), the ID and MS reconstructed momenta are considered individually. All the events with at least one of the two muons in the ROF contribute to the correction extraction fit. The angles from the CB reconstruction are used to define the ROF and to calculate the invariant mass distributions. The ID corrections are extracted using the distribution of the ID dimuon invariant mass, \(m_{\mu \mu }^\mathrm{ID }\). Events with \(m_{\mu \mu }^\mathrm{ID }\) in the window \(2.76\)–3.6 GeV and \(p_{\mathrm{T}}^\mathrm{ID }\) in the range \(8\)–17 GeV are selected as \({J/\psi \rightarrow \mu \mu }\) candidate decays; events with \(m_{\mu \mu }^\mathrm{ID }\) between \(76\) and \(96\) GeV and the leading (sub-leading) muons with \(26< p_{\mathrm{T}}^\mathrm{ID }< 300\) GeV (\(15< p_{\mathrm{T}}^\mathrm{ID }< 300\) GeV) are selected as \(Z \rightarrow \mu \mu \) candidate decays. 
To enhance the sensitivity to the \(p_{\mathrm{T}}\)-dependent correction effects, the \(m_{\mu \mu }^\mathrm{ID }\) distribution is classified according to the \(p_{\mathrm{T}}\) of the muons: for \({J/\psi \rightarrow \mu \mu }\) candidates the \(p_{\mathrm{T}}^\mathrm{ID }\) of the sub-leading muon defines three bins with lower thresholds at \(p_{\mathrm{T}}^\mathrm{ID }={8, 9, 11}\) GeV, while for \(Z \rightarrow \mu \mu \) candidates the \(p_{\mathrm{T}}^\mathrm{ID }\) of the leading muon defines three bins with lower thresholds at \(p_{\mathrm{T}}^\mathrm{ID }={26, 47, 70}\) GeV. Similarly, the MS corrections are extracted using the distribution of the MS reconstructed dimuon invariant mass, \(m_{\mu \mu }^\mathrm{MS }\), in the same way as for the ID. However, as in the MS part of Eq. (9) more correction parameters and more ROFs are present, an additional variable sensitive to the momentum scale and resolution is added to the MS fit. The variable, used only in \(Z \rightarrow \mu \mu \) candidate events, is defined by the following equation: $$\begin{aligned} \rho = \frac{p_{\mathrm{T}}^\mathrm{MS } - p_{\mathrm{T}}^\mathrm{ID }}{p_{\mathrm{T}}^\mathrm{ID }}, \end{aligned}$$ representing a measurement of the \(p_{\mathrm{T}}\) imbalance between the measurement in the ID and in the MS. The \(\rho \) variable is binned according to \(p_{\mathrm{T}}^\mathrm{MS }\) of the muon in the ROF: the lower thresholds are \(p_{\mathrm{T}}^\mathrm{MS }={20, 30, 35, 40, 45, 55, 70}\) GeV. In order to compare the simulation to the data distributions, the corresponding templates of \(m_{\mu \mu }^\mathrm{ID }\), \(m_{\mu \mu }^\mathrm{MS }\), and \(\rho \) are built using the MC samples of the \({J/\psi \rightarrow \mu \mu }\) and \(Z \rightarrow \mu \mu \) signals. The background in the \(Z \rightarrow \mu \mu \) mass region is added to the templates using the simulation and corresponds to approximately 0.1 % of the \(Z \rightarrow \mu \mu \) candidates. The non-resonant background to \({J/\psi \rightarrow \mu \mu }\), coming from decays of light and heavy hadrons and from Drell–Yan production, accounts for about 15 % of the selected \({J/\psi \rightarrow \mu \mu }\) candidates. As it is not possible to simulate it accurately, a data-driven approach is used to evaluate it: an analytic model of the background plus the \(J/\psi \) signal is fitted to the dimuon mass spectrum of the \({J/\psi \rightarrow \mu \mu }\) candidates in a mass range \(2.7\)–4.0 GeV, then the background model and its normalization are used in the template fit from which the momentum corrections are extracted. The analytic fit is performed independently on the ID and MS event candidates. The non-resonant dimuon background is parametrized with an exponential function, while the \(J/\psi \) and \(\psi (2S)\) resonances are parametrized by a Crystal-Ball function [26] in the ID fits, or by a Gaussian distribution convoluted with a Landau in the MS fits, where energy loss effects due to the calorimeter material are larger. The template fit machinery involves several steps: first a binned likelihood function \(\mathcal {L}\) is built to compare the data to the MC templates of signal plus background. Then modified templates are generated by varying the correction parameters in Eq. (9) and applying them to the muon momentum of the simulated signal events. The \(-2\ln \mathcal {L}\) between data and the modified template is then minimized using MINUIT [27].
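The quantity minimised at each step can be illustrated with a short sketch; the Poisson-likelihood form below is a standard choice for binned template fits, and the histograms are invented, so this is only meant to show the kind of comparison that is handed to MINUIT.

```python
import numpy as np

def neg2_log_likelihood(data, template):
    """Binned Poisson -2 ln L between observed counts and a signal+background template."""
    data = np.asarray(data, dtype=float)
    template = np.asarray(template, dtype=float)
    # Likelihood-ratio form; bins with zero observed counts contribute 2*template
    terms = np.where(data > 0,
                     template - data + data * np.log(data / template),
                     template)
    return 2.0 * np.sum(terms)

# Invented invariant-mass histograms (same binning for data and template)
data_hist     = np.array([ 98, 210, 560, 940, 620, 240, 105])
template_hist = np.array([100, 205, 555, 950, 610, 250, 100])

print(f"-2 ln L = {neg2_log_likelihood(data_hist, template_hist):.2f}")
# In the actual procedure the template is rebuilt for every trial set of
# correction parameters of Eq. (9) and this quantity is minimised with MINUIT.
```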
The procedure is iterated across all the ROFs: the first fit is performed using only events with both muons in the ROF, the following fits allow also one of the muons in a previously analysed ROF and one in the ROF under investigation. After all the detector ROFs have been analysed, the fit procedure is iterated twice in order to improve the stability of the results. The correction extraction is performed first for the ID and then for the MS, such that the ID transverse momentum present in Eq. (13) can be kept constant during the MS correction extraction. Although the use of \(p_{\mathrm{T}}\) bins for the construction of the templates gives a good sensitivity to the \(p_{\mathrm{T}}\) dependence of the scale corrections, the fit is not very sensitive to the resolution correction terms \(\Delta r^\mathrm{MS }_0(\eta ,\phi )\) and \(\Delta r^\mathrm{MS }_2(\eta ,\phi )\) of Eq. (9). The reasons for this are, at low \(p_{\mathrm{T}}\), the \(p_{\mathrm{T}}>8\) GeV selection cut applied to the \(J/\psi \) data sample, which limits the sensitivity to \(\Delta r^\mathrm{MS }_0(\eta ,\phi )\), and, at high \(p_{\mathrm{T}}\), the limited statistics of the \(Z \rightarrow \mu \mu \) data sample with \(p_{\mathrm{T}}^\mathrm{MS }>100\) GeV, which limits the sensitivity to \(\Delta r^\mathrm{MS }_2(\eta ,\phi )\). As the energy loss fluctuations do not show significant disagreement between data and MC for \(|\eta |>0.8\), the parameter \(\Delta r^\mathrm{MS }_0(\eta ,\phi )\) has been fixed to zero in this region. The effect of the misalignment of MS chambers in real data, which is expected to be the largest contribution to \(\Delta r^\mathrm{MS }_2(\eta ,\phi )\), is already taken into account in the simulation as described in Sect. 3.2. Therefore the \(\Delta r^\mathrm{MS }_2(\eta ,\phi )\) term is also fixed to zero in the MS correction extraction. Two of the systematic uncertainties described in Sect. 5.1.2 are used to cover possible deviations from zero of these two terms. Systematic uncertainties cover imperfections in the model used for the muon momentum correction and in the fit procedure used for the extraction of the correction terms. In particular the correction extraction procedure has been repeated using the following different configurations: variation of \(\pm 5\) GeV in the dimuon mass window used for the \(Z \rightarrow \mu \mu \) event selection. This is intended to cover resolution differences between data and MC that are beyond a simple Gaussian smearing. This results in one of the largest systematic uncertainties on the resolution corrections, with an average effect of \(\approx \) 10 % on the \(\Delta r_{1}^\mathrm{ID }\), \(\Delta r_{2}^\mathrm{ID }\), and \(\Delta r_{1}^\mathrm{MS }\) parameters. Two variations of the \(J/\psi \) templates used in the fit. The first concerns the \(J/\psi \) background parametrization: new \(m_{\mu \mu }^\mathrm{MS }\) and \(m_{\mu \mu }^\mathrm{ID }\) background templates are generated using a linear model, for the MS fits, and a linear-times-exponential model, for the ID fits. The second variation concerns the \(J/\psi \) event selection: the minimum muon \(p_{\mathrm{T}}^{MS,ID}\) cut is raised from 8 to 10 GeV, thus reducing the weight of low-\(p_{\mathrm{T}}\) muons on the corrections. The resulting variations on the resolution correction parameters are \(\approx 10\) % of \(\Delta r_{1}^\mathrm{ID }\) and \(\Delta r_{1}^\mathrm{MS }\). 
The effect is also relevant for the MS scale corrections, with a variation of \(\approx 0.01\) GeV on \(s_0^\mathrm{MS }\) and of \(\approx 4\times 10^{-4}\) on \(s_1^\mathrm{MS }\). The ID correction extraction is repeated using \({J/\psi \rightarrow \mu \mu }\) events only or \(Z \rightarrow \mu \mu \) events only. Since such configurations have a reduced statistical power, only the \(s_1^\mathrm{ID }\) correction parameter is left free in the fit, while the resolution correction terms are fixed to nominal values. The resulting uncertainty on \(s_1^\mathrm{ID }\), ranging from 0.01 % to 0.05 % from the central to the forward region of the ID, accounts for non-linear effects on the ID scale. The parameter \(\Delta r^\mathrm{MS }_0\) of Eq. (9) is left free in all the regions, instead of fixing it to zero for \(|\eta |>0.8\). The largest variation of 0.08 GeV is applied as an additional systematic uncertainty on the parameter. The MS correction is extracted using a special \(Z \rightarrow \mu \mu \) MC sample with ideal geometry, i.e. where no simulation of the misalignment of the MS chambers is applied. This is needed because the standard simulation has a too pessimistic resolution in the \(|\eta |<1.25\) region, forcing the \(\Delta r^\mathrm{MS }_1\) parameter to values compatible with zero. The template fit performed with the ideal-geometry \(Z \rightarrow \mu \mu \) MC sample gives \(\Delta r^\mathrm{MS }_1> 0\) in the region \(0.4<|\eta |<1.25\). The largest variation of \(\Delta r^\mathrm{MS }_1\), corresponding to \(0.012\), is applied as an additional systematic uncertainty for this region. Variation of the normalization of the MC samples used in the \(Z \rightarrow \mu \mu \) background estimate by factors of two and one half. The resulting systematic uncertainty is small except for the detector regions with \(|\eta |>2.0\), where the effect is comparable to the other uncertainties.

Independently of the fit procedure, the following studies are used to derive additional systematic uncertainties: The simulation of the ID includes an excess of material for \(|\eta |>2.3\), resulting in a muon momentum resolution that is too pessimistic. This imperfection is covered by adding a systematic uncertainty of \(2\times 10^{-3}\) on the \(s^\mathrm{ID}_{1}\) parameter, and of 0.01 on the \(\Delta r^\mathrm{ID}_{1}\) parameter, both for \(|\eta |>2.3\). These are the largest systematic uncertainties on the ID correction parameters. The position of the mass peak in the \(Z \rightarrow \mu \mu \) sample is studied in finer \(\eta \) bins than those used to extract the corrections, using the fit that will be discussed in Sect. 5.2 as an alternative to the template fitting method. An additional uncertainty of \(2\times 10^{-4}\) on the \(s_1^\mathrm{ID }(\eta )\) parameter is found to cover all the observed deviations between data and corrected MC. The effect of the measurement of the muon track angles has been checked by using the \(J/\psi \) MC and conservatively increasing the track angular resolution by \(\approx 40\,\%\). The maximum effect is an increase of the resolution correction \(\Delta r^\mathrm{ID }_1\) of \(0.001\), which is added to the systematic uncertainties. Special runs with the toroidal magnetic field off have been used to evaluate the quality of the MS chamber alignment. These results are compared to the chamber misalignments in the simulation to define the systematic uncertainty on the \(\Delta r^\mathrm{MS }_2(\eta ,\phi )\) resolution correction parameter.
The final uncertainty on each of the eight muon momentum correction parameters is derived from the sum in quadrature of all the listed uncertainty sources. This is simplified for use in standard physics analyses, for which only four systematic variations are provided: global upper and lower scale variations and independent resolution variations for the ID and the MS. The upper and lower scale variations are obtained by a simultaneous variation of all the ID and MS scale correction parameters by \(1\sigma \). The resolution variation for ID (MS) is obtained by the simultaneous variation of all the ID (MS) correction parameters. The MC-smearing approach of Eq. (9) cannot be used to correct the MC when the resolution in real data is better than in the simulation. To deal with these cases, the amount of resolution that should be subtracted in quadrature from the simulation to reproduce the data is included in the positive ID and MS resolution variations. Then the prescription for physics analysis is to symmetrize the effect of the positive variation of resolution parameters around the nominal value of the physical observables under study.

Table 1: Summary of ID muon momentum resolution and scale corrections used in Eq. (9), averaged over three main detector regions. The corrections are derived in 18 \(\eta \) detector regions, as described in Sect. 5.1.1, and averaged according to the \(\eta \) width of each region. The uncertainties are the result of the sum in quadrature of the statistical and systematic uncertainties. Only upper uncertainties are reported for the \(\Delta r\) parameters; lower uncertainties are evaluated by symmetrization, as described in Sect. 5.1.2.

Region | \(\Delta r_1^\mathrm{ID }\) | \(\Delta r_2^\mathrm{ID }\) [TeV\(^{-1}\)] | \(s_1^\mathrm{ID }\)
\(|\eta |<1.05\) | \(0.0068 ^{+0.0010}\) | \(0.146 ^{+0.039}\) | \(-0.92^{+0.26}_{-0.22}\times 10^{-3}\)
\(1.05\le |\eta |<2.0\) | | |
\(|\eta |\ge 2.0\) | | |

Table 2: Summary of MS momentum resolution and scale corrections for small and large MS sectors, averaged over three main detector regions. The corrections for large and small MS sectors are derived in 18 \(\eta \) detector regions, as described in Sect. 5.1.1, and averaged according to the \(\eta \) width of each region. The parameters \(\Delta r_0^\mathrm{MS }\), for \(|\eta |>1.05\), and \(\Delta r_2^\mathrm{MS }\), for the full \(\eta \) range, are fixed to zero. The uncertainties are the result of the sum in quadrature of the statistical and systematic uncertainties. Only upper uncertainties are reported for the \(\Delta r\) parameters; lower uncertainties are evaluated by symmetrization, as described in Sect. 5.1.2.

Region | \(\Delta r_0^\mathrm{MS }\) [GeV] | \(\Delta r_1^\mathrm{MS }\) | \(\Delta r_2^\mathrm{MS }\) [TeV\(^{-1}\)] | \(s_0^\mathrm{MS }\) [GeV] | \(s_1^\mathrm{MS }\)
\(|\eta |<1.05\) (small) | \(0.115 ^{+ 0.083}\) | \(0.0030 ^{+ 0.0079}\) | \(0 ^{+ 0.21}\) | \(-0.035^{+0.017}_{-0.011}\) | \(+3.57^{+0.38}_{-0.60}\times 10^{-3}\)
\(|\eta |<1.05\) (large) | | | | |
\(1.05\le |\eta |<2.0\) (small) | \(0 ^{+ 0.080}\) | | | |
\(1.05\le |\eta |<2.0\) (large) | | | | |
\(|\eta |\ge 2.0\) (small) | | | | |
\(|\eta |\ge 2.0\) (large) | | | | |

5.1.3 Result of the muon momentum scale and resolution corrections

The ID and MS correction parameters used in Eq. (9) are shown in Tables 1 and 2, averaged over three \(\eta \) regions. The scale correction to the simulated ID track reconstruction is always below 0.1 % with an uncertainty ranging from \(\approx 0.02\) %, for \(|\eta |<1.0\), to 0.2 %, for \(|\eta |>2.3\).
The correction to the MS scale is \(\lesssim 0.1\) % except for the large MS sectors in the barrel region of the detector, where a correction of \(\approx \)0.3 % is needed, and for specific MS regions with \(1.25<|\eta |<1.5\) where a correction of about \(-0.4\) % is needed. An energy loss correction of approximately \(30\) MeV is visible for low values of \(p_{\mathrm{T}}\) in the MS reconstruction. This correction corresponds to about \(1\,\%\) of the total energy loss in the calorimeter and in the dead material in front of the spectrometer and is compatible with the accuracy of the material budget used in the simulation. Depending on the considered \(p_{\mathrm{T}}\) range, total resolution smearing corrections below 10 % and below 15 % are needed for the simulated ID and MS track reconstructions. 5.2 Measurement of the dimuon mass scale and resolution The collected samples of \({J/\psi \rightarrow \mu \mu }\), \({\Upsilon \rightarrow \mu \mu }\) and \(Z \rightarrow \mu \mu \) decays have been used to study the muon momentum resolution and to validate the momentum corrections obtained with the template fit method described in the previous section with a different methodology. In addition the \(\Upsilon \) sample, not used in the extraction of the corrections, provides an independent validation. Neglecting angular effects, the invariant mass resolution \(\sigma (m_{\mu \mu })\) is related to the momentum resolution by $$\begin{aligned} \frac{\sigma (m_{\mu \mu })}{m_{\mu \mu }} \, = \, \frac{1}{2}\frac{\sigma (p_1)}{p_1} \oplus \frac{1}{2}\frac{\sigma (p_2)}{p_2}, \end{aligned}$$ where \(p_1\) and \(p_2\) are the momenta of the two muons. If the momentum resolution is similar for the two muons then the relative mass resolution is proportional to the relative momentum resolution: $$\begin{aligned} \frac{\sigma (m_{\mu \mu })}{m_{\mu \mu }} \, = \, \frac{1}{\sqrt{2}} \frac{\sigma (p)}{p}. \end{aligned}$$ The mass resolution has been obtained by fitting the width of the invariant mass peaks. In the \({J/\psi \rightarrow \mu \mu }\) and \({\Upsilon \rightarrow \mu \mu }\) decays, the intrinsic width of the resonance is negligible with respect to the experimental resolution. In the \(Z \rightarrow \mu \mu \) case the fits have been performed using a convolution of the true line-shape obtained from the MC simulation with an experimental resolution function. The momentum scale was obtained by comparing the mass peak position in data and in MC. Details of the event selection and of the invariant mass fits are given below. Dimuon invariant mass distribution of \({J/\psi \rightarrow \mu \mu }\) (left), \({\Upsilon \rightarrow \mu \mu }\) (center) and \(Z \rightarrow \mu \mu \) (right) candidate events reconstructed with CB muons. The upper panels show the invariant mass distribution for data and for the signal MC simulation plus the background estimate. The points show the data, the filled histograms show the simulation with the MC momentum corrections applied and the dashed histogram shows the simulation when no correction is applied. Background estimates are added to the signal simulation. The lower panels show the Data/MC ratios. The band represents the effect of the systematic uncertainties on the MC momentum corrections. In the \(J/\psi \) case the background was fitted in a sideband region as described in the text. 
In the \(\Upsilon \) case a simultaneous fit of the normalization of the three simulated \({\Upsilon \rightarrow \mu \mu }\) distributions and of a linear background was performed. In the \(Z\) case, the MC background samples are added to the signal sample according to their expected cross sections. The sum of background and signal MC is normalized to the data 5.2.1 Event selection and mass fitting The \(J/\psi \) and \(\Upsilon \) events are selected online by the dedicated dimuon triggers described in Sect. 3.1. The offline event selection requires in addition that both muons are reconstructed as CB muons and have \(p_{\mathrm{T}}>7\) GeV. The trigger acceptance limits the muons to the region \(|\eta |<2.4\). The resulting data samples consist of \(17\)M and \(4.7\)M candidates for \(J/\psi \) and \(\Upsilon \), respectively. The \(Z \rightarrow \mu \mu \) sample was selected online with the single-muon trigger described in Sect. 4.1. One of the two muons can be outside the trigger acceptance, allowing coverage of the full range \(|\eta |<2.7\). The offline selection requires two opposite-charge muons, one with \(p_{\mathrm{T}}>25\) GeV and one with \(p_{\mathrm{T}}>20\) GeV. The two muons are required to be isolated, to have opposite charges and to be compatible with the primary interaction vertex. The invariant mass distribution of the \({J/\psi \rightarrow \mu \mu }\), \({\Upsilon \rightarrow \mu \mu }\) and \(Z \rightarrow \mu \mu \) samples are shown in Fig. 10 and compared with uncorrected and corrected MC. With the uncorrected MC the signal peaks have smaller width and are slightly shifted with respect to data. After correction, the lineshapes of the three resonances agree very well with the data. For a detailed study, the position \(\langle m_{\mu \mu } \rangle \) and the width \(\sigma (m_{\mu \mu })\) of the mass peaks are extracted in bins of \(\eta \) and \(p_{\mathrm{T}}\) from fits of the invariant mass distributions of the three resonances. In the \(J/\psi \) case, for each bin, the background is obtained from a fit of two sideband regions outside the \(J/\psi \) mass peak (\(2.55<m_{\mu \mu }<2.9\) and \(3.3<m_{\mu \mu }<4.0\) GeV) using a second order polynomial. The background is then subtracted from the signal mass window. The parameters \(\langle m_{\mu \mu } \rangle \) and \(\sigma (m_{\mu \mu })\) of the background subtracted signal distribution are obtained with a Gaussian fit in the range \(\langle m_{\mu \mu } \rangle \pm 1.5 \sigma (m_{\mu \mu })\), obtained using an iterative procedure. Systematic uncertainties associated to the fit are evaluated by repeating the fit using a third order polynomial as the background model and by varying the fit range to \(\pm 1 \times \) and \(\pm 2 \times \sigma (m_{\mu \mu })\). As shown in Fig. 10, the three \(\Upsilon \) resonances (1S, 2S, 3S) partially overlap. Moreover in the \(\Upsilon \) case the mass window imposed by the trigger limits considerably the size of the sidebands available for fixing the background level. Therefore a different fit strategy is adopted in this case. For each bin, the whole invariant mass distribution in the range \(8.5<m_{\mu \mu }<11.5\) GeV is fitted with a linear background plus three Crystal-Ball functions representing the three resonances. The \(\alpha \) and \(n\) parameters that fix the tail of the Crystal-Ball function are fixed to the values obtained from a fit of the signal MC mass distribution. 
The relative mass shifts of the three signal peaks are fixed using the PDG masses of the three resonances, while the widths of the three peaks, divided by the corresponding PDG masses, are constrained to be equal. The remaining free parameters in the fit are the mass scale, the width \(\sigma (m_{\mu \mu })\) of the \(\Upsilon (1S)\), the relative normalizations of the \(\Upsilon (2S)\) and \(\Upsilon (3S)\) distributions with respect to \(\Upsilon (1S)\) and two parameters for the linear background. A similar fit is performed on the MC simulation of the invariant mass distribution obtained by adding the three signal peaks and a flat background distribution. The fit systematic uncertainties have been evaluated by changing the fit range to \(8.25< m_{\mu \mu }< 11.75\) and \(8.75< m_{\mu \mu }< 11.0\) GeV and by varying the \(\alpha \) and \(n\) parameters in the range allowed by fits to the simulation.

Figure 11: Ratio of the fitted mean mass, \(\langle m_{\mu \mu } \rangle \), for data and corrected MC from \(Z\) (top), \(\Upsilon \) (middle), and \(J/\psi \) (bottom) events as a function of the pseudorapidity of the highest-\(p_{\mathrm{T}}\) muon. The ratio is shown for corrected MC (filled symbols) and uncorrected MC (empty symbols). The error bars represent the statistical and the systematic uncertainty on the mass fits added in quadrature. The bands show the uncertainty on the MC corrections calculated separately for the three samples.

In the \(Z \rightarrow \mu \mu \) case, for each bin, the true lineshape predicted by the MC simulation is parametrized with a Breit–Wigner function. The measured dimuon mass spectrum is fitted with a Crystal-Ball function, representing the experimental resolution effects, convoluted with the Breit–Wigner parametrization of the true lineshape. The fit is repeated in different ranges around the mass peak (corresponding approximately to one to two standard deviations) and the spread of the results is used to evaluate the systematic uncertainty of the fit.

Figure 12: Ratio of the fitted mean mass, \(\langle m_{\mu \mu } \rangle \), for data and corrected MC from \(J/\psi \), \(\Upsilon \) and \(Z\) events as a function of the average transverse momentum in three \(|\eta |\) ranges. Both muons are required to be in the same \(|\eta |\) range. The \(J/\psi \) and \(\Upsilon \) data are shown as a function of \(\bar{p}_\mathrm{T} = \frac{1}{2}(p_{\mathrm {T},1}+p_{\mathrm {T},2})\), while the \(Z\) data are plotted as a function of \(p_{\mathrm{T}}^*\) as defined in Eq. (16). The error bars represent the statistical uncertainty and the systematic uncertainty on the fit added in quadrature. The bands show the uncertainty on the MC corrections calculated separately for the three samples.

5.2.2 Mass scale results

Figure 11 shows the Data/MC ratio of the mean mass \(\langle m_{\mu \mu } \rangle \) obtained from the fits to the \(Z\), \(J/\psi \), \(\Upsilon \) samples described above, as a function of the pseudorapidity of the highest-\(p_{\mathrm{T}}\) muon for pairs of CB muons. For the uncorrected MC, the ratio deviates from unity in the large \(|\eta |\) region of the \(J/\psi \) and \(\Upsilon \) cases by up to \(5\,\%\). This is mainly due to imperfections in the simulation of the muon energy loss that have a larger effect at low \(p_{\mathrm{T}}\) and in the forward \(\eta \) region where the MS measurement has a larger weight in the MS-ID combination.
The corrected MC is in very good agreement with the data, well within the scale systematics that are \(\approx 0.035\,\%\) in the barrel region and increase with \(|\eta |\) to reach \(\sim 0.2\,\%\) in the region \(|\eta |>2\) for the \(Z \rightarrow \mu \mu \) case. Figure 12 shows the data/MC ratio for \(\langle m_{\mu \mu } \rangle \) as a function of the transverse momentum \(\langle p_{\mathrm{T}}\rangle \) for muons in three different pseudorapidity regions. For the \(J/\psi \) and \(\Upsilon \) cases, \(\langle p_{\mathrm{T}}\rangle \) is defined as the average momentum \(\bar{p}_T = \frac{1}{2}(p_{\mathrm {T},1}+p_{\mathrm {T},2})\) while in the \(Z\) case it is defined as $$\begin{aligned} p_{\mathrm{T}}^{*} = m_{Z} \sqrt{ \frac{\sin {\theta _1}\sin {\theta _2}}{2 (1-\cos \alpha _{12})}}, \end{aligned}$$ where \(m_{Z}\) is the \(Z\) pole mass [28], \(\theta _1\), \(\theta _2\) are the polar angles of the two muons and \(\alpha _{12}\) is the opening angle of the muon pair. This definition, based on angular variables only, removes the correlation between the measurement of the dimuon mass and of the average \(p_{\mathrm{T}}\) that is particularly relevant around the Jacobian peak at \(p_{\mathrm{T}}=m_Z/2\) in the distribution of muons from \(Z\) decays. The data from the three resonances span from \(\langle p_{\mathrm{T}}\rangle =7\) GeV to \(\langle p_{\mathrm{T}}\rangle =120\) GeV and show that the momentum scale is well known and within the assigned systematic uncertainties in the whole \(p_{\mathrm{T}}\) range. 5.2.3 Resolution results The dimuon mass width \(\sigma (m_{\mu \mu })\) for CB muons is shown as a function of the leading-muon \(\eta \) in Fig. 13 for the three resonances. The width of the uncorrected MC is 5–10 % smaller than that of the data. After correction the MC reproduces the width of the data well within the correction uncertainties. At a given \(\eta \), the relative dimuon mass resolution \(\sigma (m_{\mu \mu })/m_{\mu \mu }\) depends approximately on \(\langle p_{\mathrm{T}}\rangle \) (Eq. 15). This allows a direct comparison of the momentum resolution using different resonances. This is shown in Fig. 14, where the relative mass resolution from \({J/\psi \rightarrow \mu \mu }\), \({\Upsilon \rightarrow \mu \mu }\) and \(Z \rightarrow \mu \mu \) events is compared in three regions of \(|\eta |\). The \({J/\psi \rightarrow \mu \mu }\) and \({\Upsilon \rightarrow \mu \mu }\) resolutions are in good agreement. Dimuon invariant mass resolution for CB muons for \({J/\psi \rightarrow \mu \mu }\) (a), \({\Upsilon \rightarrow \mu \mu }\) (b) and \(Z \rightarrow \mu \mu \) (c) events for data and for uncorrected and corrected MC as a function of the pseudorapidity of the highest-\(p_{\mathrm{T}}\) muon. The upper plots show the fitted resolution parameter for data, uncorrected MC and corrected MC. The lower panels show the data/MC ratio, using uncorrected and corrected MC. The error bars represent the statistical uncertainty and the systematic uncertainty on the fit added in quadrature. The bands in the lower panels represent the systematic uncertainty on the correction Dimuon invariant mass resolution for CB muons measured from \(J/\psi \), \(\Upsilon \) and \(Z\) events as a function of the average transverse momentum in three \(|\eta |\) ranges. Both muons are required to be in the same \(|\eta |\) range. 
The \(J/\psi \) and \(\Upsilon \) data are plotted as a function of \(\bar{p}_\mathrm{T} = \frac{1}{2}(p_{\mathrm {T},1}+p_{\mathrm {T},2})\) while for \(Z\) data are plotted as a function of \(p_{\mathrm{T}}^*\) as defined in Eq. (16). The error bars represent statistical and systematic errors added in quadrature. The lower panel shows the ratio between data and the corrected MC, with bands representing the uncertainty on the MC corrections for the three calibration samples Dimuon invariant mass resolution for muons reconstructed with the ID only, measured from \(J/\psi \), \(\Upsilon \) and \(Z\) events as a function of the average transverse momentum in three \(|\eta |\) ranges. Other details as in Fig. 14 Dimuon invariant mass resolution for muons reconstructed with the MS only, measured from \(J/\psi \), \(\Upsilon \) and \(Z\) events as a function of the average transverse momentum in three \(|\eta |\) ranges. Other details as in Fig. 14 In the \(Z \rightarrow \mu \mu \) sample, due to the decay kinematics, below \(\langle p_{\mathrm{T}}\rangle = m_Z /2\) there is a strong correlation between \(\langle p_{\mathrm{T}}\rangle \) and the pseudorapidity of the muons, in such a way that the lower is the \(\langle p_{\mathrm{T}}\rangle \), the larger is the \(|\eta |\) of the muons. Above \(\langle p_{\mathrm{T}}\rangle = m_Z /2\), the correlation effect is strongly reduced and the \(Z\) measurements are well aligned with those from the lighter resonances. In the barrel region, \(|\eta |<1\), the mass resolution increases from \(\sigma (m_{\mu \mu })/m_{\mu \mu } \approx 1.2\,\%\) at \(p_{\mathrm{T}}<10\) GeV to \(\sigma (m_{\mu \mu })/m_{\mu \mu } \approx 2\,\%\) at \(p_{\mathrm{T}}=100\) GeV. For \(|\eta |>1\) it goes from \(\sigma (m_{\mu \mu })/m_{\mu \mu } \approx 2\,\%\) to \(\approx 3\,\%\) in the same \(p_{\mathrm{T}}\) range. This behavior is very well reproduced by the corrected MC. Following Eq. (15), it is possible to scale \(\sigma (m_{\mu \mu })/m_{\mu \mu }\) by \(\sqrt{2}\) to extract a measurement of the relative momentum resolution \(\sigma (p)/p\), which ranges from \(\approx 1.7\,\%\) in the central region and at low \(p_{\mathrm{T}}\) to \(\approx 4\,\%\) at large \(\eta \) and \(p_{\mathrm{T}}= 100\) GeV. To understand better the \(p_{\mathrm{T}}\) dependence of the momentum resolution of CB muons, it is useful to study separately the resolution of the ID and of the MS measurements, as shown in Figs. 15 and 16. The ID measurement has a better resolution than the MS in the \(p_{\mathrm{T}}\) range under study for \(|\eta |<2\) while the MS has a better resolution at larger \(|\eta |\). The resolution of the CB muons is significantly better than the ID or the MS measurements taken separately in the whole \(|\eta |\) range. The ID resolution has an approximately linear increases with \(p_{\mathrm{T}}\), corresponding to a non-zero \(r_2\) term in Eq. (10). The MS resolution is largest in the region \(1<|\eta |<2\) which contains the areas with the lowest magnetic field integral. In the region \(|\eta |<1\) there is a visible increase at low \(p_{\mathrm{T}}\) that corresponds to the presence of a non-zero \(r_0\) term in Eq. (10). The \(p_{\mathrm{T}}\) dependence of the resolutions for both the ID and the MS measurements is well reproduced by the corrected MC. According to studies based on MC, the MS measurement is expected to dominate over the ID in the whole \(|\eta |\) range for sufficiently large \(p_{\mathrm{T}}\). 
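The translation between dimuon mass resolution and single-muon momentum resolution used above can be written compactly; the numerical values are illustrative only, chosen to be of the order quoted in the text.

```python
import math

def mass_resolution(sigma_p1_over_p1, sigma_p2_over_p2):
    """Eq. (14): relative dimuon mass resolution from the two muon momentum resolutions."""
    return 0.5 * math.hypot(sigma_p1_over_p1, sigma_p2_over_p2)

def momentum_resolution_from_mass(sigma_m_over_m):
    """Eq. (15) inverted: sigma(p)/p = sqrt(2) * sigma(m)/m, valid when the two muons
    have a similar momentum resolution and angular effects are neglected."""
    return math.sqrt(2.0) * sigma_m_over_m

# Illustrative values: a ~1.2 % mass resolution corresponds to a ~1.7 % momentum resolution
print(f"{momentum_resolution_from_mass(0.012):.4f}")
print(f"{mass_resolution(0.017, 0.017):.4f}")
```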
6 Final state radiation recovery

The invariant mass distributions of resonances that decay into muons, such as \(Z \rightarrow \mu \mu \) and \(H\rightarrow ZZ\rightarrow 4\ell \), are affected by QED final-state radiation of photons, which shifts the mass reconstructed from the muons towards lower values. In this section, a dedicated method to include FSR photons in the reconstruction of resonances decaying into muons is introduced and tested with \(Z \rightarrow \mu \mu \) data. This method has been used in several ATLAS publications [6, 29].

Fig. 17: Invariant mass distribution of \(Z \rightarrow \mu \mu \) events with identified FSR in data before (filled triangles) and after (filled circles) FSR correction, for collinear (top) and non-collinear (bottom) FSR. The MC prediction is shown before correction (red histogram) and after correction (blue histogram).

Final-state radiation photons emitted collinearly to muons can be reconstructed with the LAr calorimeter: electromagnetic clusters are searched for within a narrow cone around the axis defined by the muon momentum direction at the interaction point (i.e. the direction that would be followed by an uncharged particle). The longitudinal segmentation of the LAr calorimeter is exploited to reduce fake photon clusters produced by muon energy losses in the calorimeter. This is achieved by using as a discriminant the fraction \(\mathrm {f_1}\) of the cluster energy deposited in the first segment of the calorimeter. Collinear FSR photon candidates are required to have \(E_{T}>1.5\) GeV, \(\Delta R_{\mathrm {cluster},\mu }<0.15\) and \(\mathrm {f_1}>0.1\). In addition, non-collinear FSR photons are recovered using the standard ATLAS photon reconstruction, selecting isolated photons emitted with \(\Delta R_{\mathrm {cluster},\mu }>0.15\) and with \(E_{T}>10\) GeV [30]. The effect of adding a collinear or non-collinear FSR photon to the \(Z \rightarrow \mu \mu \) invariant mass in data is studied in a sample obtained with a dedicated selection of \(Z \rightarrow \mu \mu \) candidates plus at least one radiated photon candidate. The correction for collinear FSR is applied to events in the mass window \(66~\mathrm{GeV} < m_{\mu \mu } < 89~\mathrm{GeV}\), while the correction for non-collinear FSR photons is applied only if the collinear search has failed and the dimuon mass satisfies \(m_{\mu \mu } < 81\) GeV. In Fig. 17 the invariant mass distributions for the sample of \(Z \rightarrow \mu \mu \) events with an FSR photon candidate are shown before and after the addition of collinear and non-collinear FSR photons. Good agreement between data and MC is observed for the corrected \(Z \rightarrow \mu \mu \) events. According to MC studies, the collinear FSR selection has an efficiency of \(70 \pm 4\) % for FSR photons emitted with \(E_{T}>1.5\) GeV and \(\Delta R_{\gamma ,\mu }<0.15\) in the fiducial region defined by requiring \(|\eta |<2.37\) and excluding the calorimeter crack region \(1.37< |\eta | < 1.52\). About \(85\,\%\) of the corrected events have genuine FSR photons, with the remaining photons coming from muon bremsstrahlung or ionization or from random matching with energy depositions from other sources. The fraction of all \(Z \rightarrow \mu \mu \) events corrected with a collinear FSR photon is \(\simeq 4\,\%\). The non-collinear FSR selection has an efficiency of \(60\pm 3\) % in the fiducial region and a purity of \(\ge 95\) %.
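The selection logic quoted above can be summarized in a short, schematic Python sketch. This is our own illustration of the cuts described in the text, not the actual ATLAS reconstruction code; the candidate structure and the function name are hypothetical.

```python
def select_fsr_photon(collinear_candidates, noncollinear_candidates, m_mumu):
    """Schematic FSR recovery logic. Each candidate is a dict with 'et' (GeV),
    'dr' (Delta R between the cluster/photon and the muon) and, for collinear
    clusters, 'f1' (fraction of energy in the first calorimeter segment).
    Returns the selected photon candidate, or None if no correction applies."""
    # Collinear search: applied for events with 66 GeV < m_mumu < 89 GeV.
    if 66.0 < m_mumu < 89.0:
        for c in collinear_candidates:
            if c["et"] > 1.5 and c["dr"] < 0.15 and c["f1"] > 0.1:
                return c
    # Non-collinear search: only if the collinear search failed and m_mumu < 81 GeV.
    if m_mumu < 81.0:
        for c in noncollinear_candidates:
            if c["et"] > 10.0 and c["dr"] > 0.15:
                return c
    return None
```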
The fraction of \(Z \rightarrow \mu \mu \) events corrected with a non-collinear FSR photon is \(\simeq 1\,\%\). The FSR correction may introduce systematic variations in the invariant mass scale and resolution. To study these effects, a Gaussian fit of the \(Z \rightarrow \mu \mu \) distribution has been performed in the mass range \(91.18 \pm 3.00\) GeV. The FSR correction induces a mass shift of \(+40\pm 3\) MeV and an improvement of the resolution of \(3 \pm 1\) % in the full \(Z \rightarrow \mu \mu \) sample. The effects observed in the data are well reproduced by the MC. The systematic uncertainty introduced by the FSR recovery on the inclusive \(Z\) mass scale can be understood by considering a 0.5 % photon energy scale uncertainty, the fact that only 5 % of the \(Z\) events are corrected, and that the fraction of energy carried by the photons is a few %. This leads to a systematic uncertainty smaller than 2 MeV. The effect of pile-up on the FSR correction has been estimated by dividing the data and the MC into three categories based on the average number of interactions per bunch crossing: \(\langle \mu \rangle =0\)–17, 17–23, 23–40. A comparison of the fitted \(Z\) mass between data and MC has been performed in the three categories; no dependence on \(\langle \mu \rangle \) was observed, and good agreement between data and MC within the statistical uncertainties was found.

7 Conclusions

The performance of the ATLAS muon reconstruction has been measured using data from LHC \(pp\) collisions at \(\sqrt{s} = 7\)–8 TeV. The muon reconstruction efficiency is close to \(99\,\%\) over most of the pseudorapidity range of \(|\eta |<2.5\) and for \(p_{\mathrm{T}}>10\) GeV. The large collected sample of 9M \(Z \rightarrow \mu \mu \) decays allows the measurement of the efficiency over the full acceptance of \(|\eta |<2.7\), and with a precision at the 1 per-mille level for \(|\eta |<2.5\). By including \({J/\psi \rightarrow \mu \mu }\) decays, the efficiency measurement has been extended over the transverse momentum range from \(p_{\mathrm{T}}\approx 4\) GeV to \(p_{\mathrm{T}}\approx 100\) GeV. The muon momentum scale and resolution have been studied in detail using large calibration samples of \({J/\psi \rightarrow \mu \mu }\), \({\Upsilon \rightarrow \mu \mu }\) and \(Z \rightarrow \mu \mu \) decays. These studies have been used to correct the MC simulation, improving the data-MC agreement and minimizing the uncertainties in physics analyses. The momentum scale for combined muons is known with an uncertainty of \(\pm 0.05\,\%\) for \(|\eta | <1\), which increases to \(\lesssim 0.2\,\%\) for \(|\eta |>2.3\) for \(Z \rightarrow \mu \mu \) events. The dimuon mass resolution is \(\approx 1.2\,\%\) (\(2\,\%\)) at low \(p_{\mathrm{T}}\), increasing to \(\approx 2\,\%\) (\(3\,\%\)) at \(p_{\mathrm{T}}\approx 100\) GeV for \(|\eta |<1\) (\(|\eta |>1\)). The resolution is reproduced by the corrected simulation within relative uncertainties of \(3\,\%\) to \(10\,\%\) depending on \(\eta \) and \(p_{\mathrm{T}}\). The mass resolution for the \(Z \rightarrow \mu \mu \) resonance was found to improve when photons from QED final-state radiation are recovered. The FSR recovery allows \(\approx 4\,\%\) of the events to be recovered from the low-mass tail to the peak region, improving the dimuon mass resolution by \(\approx 3\,\%\).

Footnotes

ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the \(z\)-axis along the beam pipe.
The \(x\)-axis points from the IP to the centre of the LHC ring, and the \(y\)-axis points upward. Cylindrical coordinates \((r,\phi )\) are used in the transverse plane, \(\phi \) being the azimuthal angle around the beam pipe. The pseudorapidity and the transverse momentum are defined in terms of the polar angle \(\theta \) as \(\eta =-\ln \tan (\theta /2)\) and \(p_{\mathrm{T}}= p \sin \theta \), respectively. The \(\eta -\phi \) distance between two particles is defined as \(\Delta R=\sqrt{\Delta \eta ^2 +\Delta \phi ^2}\). The installation of all the muon chambers in this region has been completed during the 2013–2014 LHC shutdown. Here a muon is considered to be isolated when the sum of the momenta of the other tracks with \(p_{\mathrm{T}}>1\) GeV in a cone of \(\Delta R=0.4\) around the muon track is less than \(0.15\) times the muon momentum itself. Different cone sizes and cuts on the momentum fraction are used in other parts of this paper. This effect is also visible in Fig. 9 at \(\phi \simeq -1\). We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently. We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWF and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Cz
In this section you will: Solve quadratic equations by factoring. Solve quadratic equations by the square root property. Solve quadratic equations by completing the square. Solve quadratic equations by using the quadratic formula.

The computer monitor on the left in (Figure) is a 23.6-inch model and the one on the right is a 27-inch model. Proportionally, the monitors appear very similar. If there is a limited amount of space and we desire the largest monitor possible, how do we decide which one to choose? In this section, we will learn how to solve problems such as this using four different methods.

Solving Quadratic Equations by Factoring

An equation containing a second-degree polynomial is called a quadratic equation. Quadratic equations are used in countless ways in the fields of engineering, architecture, finance, biological science, and, of course, mathematics. Often the easiest method of solving a quadratic equation is factoring. Factoring means finding expressions that can be multiplied together to give the expression on one side of the equation. If a quadratic equation can be factored, it is written as a product of linear terms. Solving by factoring depends on the zero-product property, which states that if [latex]a\cdot b=0[/latex], then [latex]a=0[/latex] or [latex]b=0[/latex], where a and b are real numbers or algebraic expressions. In other words, if the product of two numbers or two expressions equals zero, then one of the numbers or one of the expressions must equal zero, because zero multiplied by anything equals zero. Multiplying the factors expands the equation to a string of terms separated by plus or minus signs. So, in that sense, the operation of multiplication undoes the operation of factoring. For example, expanding a factored expression by multiplying the two factors together produces a quadratic expression; set equal to zero, it becomes a quadratic equation. If we were to factor the equation, we would get back the factors we multiplied. The process of factoring a quadratic equation depends on the leading coefficient, whether it is 1 or another integer. We will look at both situations; but first, we want to confirm that the equation is written in standard form, [latex]ax^2+bx+c=0[/latex], where a, b, and c are real numbers and [latex]a\ne 0[/latex]. We can use the zero-product property to solve quadratic equations in which we first have to factor out the greatest common factor (GCF), and for equations that have special factoring formulas as well, such as the difference of squares, both of which we will see later in this section.

The Zero-Product Property and Quadratic Equations

The zero-product property states that if [latex]a\cdot b=0[/latex], then [latex]a=0[/latex] or [latex]b=0[/latex], where a and b are real numbers or algebraic expressions. A quadratic equation is an equation containing a second-degree polynomial; for example, [latex]ax^2+bx+c=0[/latex], where a, b, and c are real numbers and [latex]a\ne 0[/latex] if it is in standard form.

Solving Quadratics with a Leading Coefficient of 1

In the quadratic equation [latex]x^2+bx+c=0[/latex], the leading coefficient, or the coefficient of [latex]x^2[/latex], is 1. We have one method of factoring quadratic equations in this form. Given a quadratic equation with a leading coefficient of 1, factor it as follows (a short illustrative sketch follows this procedure): Find two numbers whose product equals c and whose sum equals b. Use those numbers to write two factors of the form [latex](x+k)[/latex], where k is one of the numbers found in step 1; use the numbers exactly as they are. Solve using the zero-product property by setting each factor equal to zero and solving for the variable.
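Here is that sketch: a small Python illustration (our own addition, not part of the original text) of the two-number search described above. The function name and the sample equation are invented for the example.

```python
def factor_monic_quadratic(b, c):
    """Factor x^2 + b*x + c over the integers, if possible.
    Returns (p, q) with p*q == c and p + q == b, so that
    x^2 + b*x + c = (x + p)(x + q) and the solutions are x = -p and x = -q."""
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                return p, q
    return None  # no integer factorization (sketch assumes c != 0)

# Example: x^2 + 5x + 6 = 0  ->  (x + 2)(x + 3) = 0  ->  x = -2 or x = -3
print(factor_monic_quadratic(5, 6))  # (2, 3)
```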
Solving a Quadratic Equation by Factoring when the Leading Coefficient is not 1

Factor and solve the equation: To factor, we look for two numbers whose product equals the constant term and whose sum equals 1. Begin by looking at the possible factors of the constant term. The last pair sums to 1, so these are the numbers. Note that only one pair of numbers will work. Then, write the factors. To solve this equation, we use the zero-product property. Set each factor equal to zero and solve. We can see how the two solutions relate to the graph in (Figure); the solutions are the x-intercepts of the graph.

Factor and solve the quadratic equation:

Solve the Quadratic Equation by Factoring

Solve the quadratic equation by factoring: Find two numbers whose product equals the constant term and whose sum equals the coefficient of the x term. List the factors; the numbers that add to 8 are 3 and 5. Then, write the factors, set each factor equal to zero, and solve. [latex]x=7,[/latex]

Using the Zero-Product Property to Solve a Quadratic Equation Written as the Difference of Squares

Solve the difference of squares equation using the zero-product property: Recognizing that the equation represents the difference of squares, we can write the two factors by taking the square root of each term, using a minus sign as the operator in one factor and a plus sign as the operator in the other. Solve using the zero-factor property.

Solve by factoring: [latex]x=-5,[/latex]

When the leading coefficient is not 1, we factor a quadratic equation using the method called grouping, which requires four terms. With the equation in standard form, let's review the grouping procedure: With the quadratic in standard form, multiply [latex]a\cdot c[/latex]. Find two numbers whose product equals [latex]a\cdot c[/latex] and whose sum equals [latex]b[/latex]. Rewrite the equation, replacing the [latex]bx[/latex] term with two terms using the numbers just found as coefficients of x. Factor the first two terms and then factor the last two terms; the expressions in parentheses must be exactly the same to use grouping. Factor out the expression in parentheses. Set the expressions equal to zero and solve for the variable.

Solving a Quadratic Equation Using Grouping

Use grouping to factor and solve the quadratic equation: First, multiply [latex]a\cdot c[/latex]. Then list the factors of the product. The only pair of factors that sums to the middle coefficient is 3 and 12. Rewrite the equation, replacing the b term with two terms using 3 and 12 as coefficients of x. Factor the first two terms, and then factor the last two terms. Solve using the zero-product property. The solutions are and [latex]-3[/latex]. See (Figure).

Solve using factoring by grouping: [latex]x=-\frac{2}{3},[/latex]

Solving a Polynomial of Higher Degree by Factoring

Solve the equation by factoring: This equation does not look like a quadratic, as the highest power is 3, not 2. Recall that the first thing we want to do when solving any equation is to factor out the GCF, if one exists. And it does here. We can factor out the GCF from all of the terms and then proceed with grouping. Use grouping on the expression in parentheses. Now, we use the zero-product property. Notice that we have three factors.
The solutions are [latex]-\frac{2}{3}[/latex] and the values obtained from the other two factors.

Using the Square Root Property

When there is no linear term in the equation, another method of solving a quadratic equation is by using the square root property, in which we isolate the [latex]x^2[/latex] term and take the square root of the number on the other side of the equals sign. Keep in mind that sometimes we may have to manipulate the equation to isolate the [latex]x^2[/latex] term so that the square root property can be used.

The Square Root Property

With the [latex]x^2[/latex] term isolated, the square root property states that if [latex]x^2=k[/latex], then [latex]x=\pm \sqrt{k}[/latex], where k is a nonzero real number. Given a quadratic equation with an [latex]x^2[/latex] term but no [latex]x[/latex] term, use the square root property to solve it: Isolate the [latex]x^2[/latex] term on one side of the equal sign. Take the square root of both sides of the equation, putting a [latex]\pm[/latex] sign before the expression on the side opposite the squared term. Simplify the numbers on the side with the [latex]\pm[/latex] sign.

Solving a Simple Quadratic Equation Using the Square Root Property

Solve the quadratic using the square root property: Take the square root of both sides, and then simplify the radical. Remember to use a [latex]\pm[/latex] sign before the radical symbol. The solutions are [latex]2\sqrt{2}[/latex] and [latex]-2\sqrt{2}.[/latex]

Solving a Quadratic Equation Using the Square Root Property

Solve the quadratic equation: First, isolate the [latex]x^2[/latex] term. Then take the square root of both sides. The solutions are [latex]\frac{\sqrt{6}}{2}[/latex] and [latex]-\frac{\sqrt{6}}{2}.[/latex]

Solve the quadratic equation using the square root property:

Completing the Square

Not all quadratic equations can be factored or can be solved in their original form using the square root property. In these cases, we may use a method for solving a quadratic equation known as completing the square. Using this method, we add or subtract terms to both sides of the equation until we have a perfect square trinomial on one side of the equal sign. We then apply the square root property. To complete the square, the leading coefficient, a, must equal 1. If it does not, then divide the entire equation by a. Then, we can use the following procedure to solve a quadratic equation by completing the square. We will use the example to illustrate each step. Given a quadratic equation that cannot be factored, and with [latex]a=1[/latex], first add or subtract the constant term to the right side of the equal sign. Multiply the b term by [latex]\frac{1}{2}[/latex] and square it. Add [latex]\left(\frac{b}{2}\right)^2[/latex] to both sides of the equal sign and simplify the right side. The left side of the equation can now be factored as a perfect square. Use the square root property and solve. The solutions are [latex]-2+\sqrt{3}[/latex] and [latex]-2-\sqrt{3}.[/latex]

Solving a Quadratic by Completing the Square

Solve the quadratic equation by completing the square: First, move the constant term to the right side of the equal sign. Then, take [latex]\frac{1}{2}[/latex] of the b term and square it. Add the result to both sides of the equal sign. Factor the left side as a perfect square and simplify the right side. The solutions are [latex]\frac{3}{2}+\frac{\sqrt{29}}{2}[/latex] and [latex]\frac{3}{2}-\frac{\sqrt{29}}{2}.[/latex]

Solve by completing the square:

Using the Quadratic Formula

The fourth method of solving a quadratic equation is by using the quadratic formula, a formula that will solve all quadratic equations. Although the quadratic formula works on any quadratic equation in standard form, it is easy to make errors in substituting the values into the formula. Pay close attention when substituting, and use parentheses when inserting a negative number. We can derive the quadratic formula by completing the square.
We will assume that the leading coefficient is positive; if it is negative, we can multiply the equation by [latex]-1[/latex] and obtain a positive a. Given [latex]a\ne 0[/latex], we will complete the square as follows: First, move the constant term to the right side of the equal sign: [latex]ax^2+bx=-c[/latex]. As we want the leading coefficient to equal 1, divide through by a: [latex]x^2+\frac{b}{a}x=-\frac{c}{a}[/latex]. Then, find [latex]\frac{1}{2}[/latex] of the middle term, and add [latex]\left(\frac{b}{2a}\right)^2=\frac{b^2}{4a^2}[/latex] to both sides of the equal sign: [latex]x^2+\frac{b}{a}x+\frac{b^2}{4a^2}=-\frac{c}{a}+\frac{b^2}{4a^2}[/latex]. Next, write the left side as a perfect square. Find the common denominator of the right side and write it as a single fraction: [latex]\left(x+\frac{b}{2a}\right)^2=\frac{b^2-4ac}{4a^2}[/latex]. Now, use the square root property, which gives [latex]x+\frac{b}{2a}=\pm \frac{\sqrt{b^2-4ac}}{2a}[/latex]. Finally, add [latex]-\frac{b}{2a}[/latex] to both sides of the equation and combine the terms on the right side. Thus, [latex]x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}[/latex].

Written in standard form, [latex]ax^2+bx+c=0[/latex], any quadratic equation can be solved using the quadratic formula: [latex]x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}[/latex], where a, b, and c are real numbers and [latex]a\ne 0[/latex]. Given a quadratic equation, solve it using the quadratic formula: Make sure the equation is in standard form. Make note of the values of the coefficients and constant term, [latex]a[/latex], [latex]b[/latex], and [latex]c[/latex]. Carefully substitute the values noted in step 2 into the formula; to avoid needless errors, use parentheses around each number input into the formula. Calculate and solve.

Solve the Quadratic Equation Using the Quadratic Formula

Identify the coefficients: Then use the quadratic formula.

Solving a Quadratic Equation with the Quadratic Formula

Use the quadratic formula to solve the equation. First, we identify the coefficients. Substitute these values into the quadratic formula; the two solutions of the equation follow.

Solve the quadratic equation using the quadratic formula: [latex]x=\frac{1}{3}[/latex]

The Discriminant

The quadratic formula not only generates the solutions to a quadratic equation, it tells us about the nature of the solutions when we consider the discriminant, or the expression under the radical, [latex]b^2-4ac[/latex]. The discriminant tells us whether the solutions are real numbers or complex numbers, and how many solutions of each type to expect. (Figure) relates the value of the discriminant to the solutions of a quadratic equation:
If [latex]b^2-4ac=0[/latex], there is one rational solution (a double solution).
If [latex]b^2-4ac>0[/latex] and it is a perfect square, there are two rational solutions.
If [latex]b^2-4ac>0[/latex] and it is not a perfect square, there are two irrational solutions.
If [latex]b^2-4ac<0[/latex], there are two complex solutions.
For [latex]ax^2+bx+c=0[/latex], where [latex]a[/latex], [latex]b[/latex], and [latex]c[/latex] are real numbers, the discriminant is the expression under the radical in the quadratic formula: [latex]b^2-4ac[/latex]. It tells us whether the solutions are real numbers or complex numbers and how many solutions of each type to expect.

Using the Discriminant to Find the Nature of the Solutions to a Quadratic Equation

Use the discriminant to find the nature of the solutions to the following quadratic equations: Calculate the discriminant for each equation and state the expected type of solutions. There will be one rational double solution. As the discriminant is a perfect square, there will be two rational solutions. There will be two complex solutions.

Using the Pythagorean Theorem

One of the most famous formulas in mathematics is the Pythagorean Theorem. It is based on a right triangle, and states the relationship among the lengths of the sides as [latex]a^2+b^2=c^2[/latex], where [latex]a[/latex] and [latex]b[/latex] refer to the legs of a right triangle adjacent to the [latex]90^{\circ}[/latex] angle, and [latex]c[/latex] refers to the hypotenuse. It has immeasurable uses in architecture, engineering, the sciences, geometry, trigonometry, and algebra, and in everyday applications. We use the Pythagorean Theorem to solve for the length of one side of a triangle when we have the lengths of the other two. Because each of the terms is squared in the theorem, when we are solving for a side of a triangle, we have a quadratic equation.
We can use the methods for solving quadratic equations that we learned in this section to solve for the missing side. The Pythagorean Theorem is given as [latex]a^2+b^2=c^2[/latex], where [latex]a[/latex] and [latex]b[/latex] refer to the legs of a right triangle adjacent to the [latex]90^{\circ}[/latex] angle, and [latex]c[/latex] refers to the hypotenuse, as shown in (Figure).

Finding the Length of the Missing Side of a Right Triangle

Find the length of the missing side of the right triangle in (Figure). As we have measurements for side b and the hypotenuse, the missing side is a. Use the Pythagorean Theorem to solve the right triangle problem: Leg a measures 4 units, leg b measures 3 units. Find the length of the hypotenuse.

Access these online resources for additional instruction and practice with quadratic equations: The Zero-Product Property; Quadratic Formula with Two Rational Solutions; Length of a Leg of a Right Triangle.

Key Equations

quadratic formula: [latex]x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}[/latex]

Key Concepts

Many quadratic equations can be solved by factoring when the equation has a leading coefficient of 1 or if the equation is a difference of squares. The zero-factor property is then used to find solutions. See (Figure), (Figure), and (Figure). Many quadratic equations with a leading coefficient other than 1 can be solved by factoring using the grouping method. See (Figure) and (Figure). Another method for solving quadratics is the square root property. The variable is squared. We isolate the squared term and take the square root of both sides of the equation; this yields a positive and a negative solution. See (Figure) and (Figure). Completing the square is a method of solving quadratic equations when the equation cannot be factored. See (Figure). A highly dependable method for solving quadratic equations is the quadratic formula, based on the coefficients and the constant term in the equation. See (Figure). The discriminant is used to indicate the nature of the roots that the quadratic equation will yield: real or complex, rational or irrational, and how many of each. See (Figure). The Pythagorean Theorem, among the most famous theorems in history, is used to solve right-triangle problems and has applications in numerous fields. Solving for the length of one side of a right triangle requires solving a quadratic equation. See (Figure).

How do we recognize when an equation is quadratic? It is a second-degree equation (the highest variable exponent is 2). When we solve a quadratic equation, how many solutions should we always start out seeking? Explain why, when solving a quadratic equation in the form [latex]ax^2+bx+c=0[/latex], we may graph the equation and have no zeroes (x-intercepts). When we solve a quadratic equation by factoring, why do we move all terms to one side, having zero on the other side? We want to take advantage of the zero property of multiplication, in the fact that if [latex]a\cdot b=0[/latex] then it must follow that each factor separately offers a solution to the product being zero. In the quadratic formula, what is the name of the expression under the radical sign and how does it determine the number and nature of our solutions? Describe two scenarios where using the square root property to solve a quadratic equation would be the most efficient method. One, when no linear term is present (no x term). Two, when the equation is already written as a squared expression equal to a constant.

For the following exercises, solve the quadratic equation by factoring. [latex]x=3[/latex] [latex]x=\frac{-1}{3}[/latex] [latex]x=-5[/latex]

For the following exercises, solve the quadratic equation by using the square root property.
For the following exercises, solve the quadratic equation by completing the square. Show each step. [latex]x=11[/latex] [latex]z=-\frac{1}{2}[/latex]

For the following exercises, determine the discriminant, and then state how many solutions there are and the nature of the solutions. Do not solve. Not real. One rational. Two real; rational.

For the following exercises, solve the quadratic equation by using the quadratic formula. If the solutions are not real, state No Real Solution.

For the following exercises, enter the expressions into your graphing utility and find the zeroes of the equation (the x-intercepts) by using 2nd CALC 2:zero. Recall that finding zeroes will ask for a left bound (move your cursor to the left of the zero, enter), then a right bound (move your cursor to the right of the zero, enter), then a guess (move your cursor between the bounds near the zero, enter). Round your answers to the nearest thousandth.

To solve the quadratic equation we can graph these two equations and find the points of intersection. Recall 2nd CALC 5:intersection. Do this and find the solutions to the nearest tenth.

Beginning with the general form of a quadratic equation, solve for x by using the completing-the-square method, thus deriving the quadratic formula.

Show that the sum of the two solutions to the quadratic equation is

A person has a garden that has a length 10 feet longer than the width. Set up a quadratic equation to find the dimensions of the garden if its area is 119 ft.2. Solve the quadratic equation to find the length and width. 7 ft. and 17 ft.

Abercrombie and Fitch stock had a price given as a function of [latex]t[/latex], the time in months from 1999 to 2001, with the count starting at January 1999. Find the two months in which the price of the stock was $30.

Suppose that an equation is given where one variable represents the number of items sold at an auction and the other is the profit made by the business that ran the auction. How many items sold would make this profit a maximum? Solve this by graphing the expression in your graphing utility and finding the maximum using 2nd CALC maximum. To obtain a good window for the curve, set the horizontal window to [0,200] and the vertical window to [0,10000]. maximum at

A formula for the normal systolic blood pressure for a man of a given age, measured in mmHg, is given. Find the age to the nearest year of a man whose normal blood pressure measures 125 mmHg.

The cost function for a certain company is given, and the revenue is also given. Recall that profit is revenue minus cost. Set up a quadratic equation and find two values of x (production level) that will create a profit of $300. The two values of [latex]x[/latex] are 20 and 60.

A falling object travels a distance given by a formula in ft, where the time is measured in seconds. How long will it take for the object to travel 74 ft?

A vacant lot is being converted into a community garden. The garden and the walkway around its perimeter have an area of 378 ft2. Find the width of the walkway if the garden is 12 ft. wide by 15 ft. long.

An epidemiological study of the spread of a certain influenza strain that hit a small school population found that the total number of students who contracted the flu [latex]t[/latex] days after it broke out is given by a model. Find the day that 160 students had the flu. Recall that the restriction on [latex]t[/latex] is at most 6.
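To tie the exercise sets above back to the methods of this section, here is a small Python sketch (our own illustration, not part of the original text) that applies the quadratic formula and uses the discriminant to classify the solutions; the function name and sample equations are invented for the example.

```python
import cmath
import math

def solve_quadratic(a, b, c):
    """Solve a*x^2 + b*x + c = 0 (a != 0) with the quadratic formula,
    reporting the nature of the solutions via the discriminant b^2 - 4ac."""
    disc = b * b - 4 * a * c
    if disc > 0:
        kind = "two real solutions"
        roots = ((-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a))
    elif disc == 0:
        kind = "one real (double) solution"
        roots = (-b / (2 * a),)
    else:
        kind = "two complex solutions"
        roots = ((-b + cmath.sqrt(disc)) / (2 * a), (-b - cmath.sqrt(disc)) / (2 * a))
    return kind, roots

# Examples: x^2 - 5x + 6 = 0 has two rational solutions (3 and 2),
# while x^2 + x + 1 = 0 has two complex solutions.
print(solve_quadratic(1, -5, 6))  # ('two real solutions', (3.0, 2.0))
print(solve_quadratic(1, 1, 1))
```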
Glossary

completing the square: a process for solving quadratic equations in which terms are added to or subtracted from both sides of the equation in order to make one side a perfect square

discriminant: the expression under the radical in the quadratic formula that indicates the nature of the solutions, real or complex, rational or irrational, single or double roots

Pythagorean Theorem: a theorem that states the relationship among the lengths of the sides of a right triangle, used to solve right triangle problems

quadratic equation: an equation containing a second-degree polynomial; can be solved using multiple methods

quadratic formula: a formula that will solve all quadratic equations

square root property: one of the methods used to solve a quadratic equation, in which the [latex]x^2[/latex] term is isolated so that the square root of both sides of the equation can be taken to solve for x

zero-product property: the property that formally states that multiplication by zero is zero, so that each factor of a quadratic equation can be set equal to zero to solve equations

Quadratic Equations by OpenStax is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
PICO: Pragmatic Compression for Human-in-the-Loop Decision-Making

Siddharth Reddy, Oct 6, 2021

Fig. 1: Given the original image $\mathbf{x}$, we would like to generate a compressed image $\hat{\mathbf{x}}$ such that the user's action $\mathbf{a}$ upon seeing the compressed image is similar to what it would have been had the user seen the original image instead. In a 2D top-down car racing video game with an extremely high compression rate (50%), our compression model learns to preserve bends and discard the road farther ahead.

Imagine remotely operating a Mars rover from a desk on Earth. The low-bandwidth network connection can make it challenging for the teleoperation system to provide the user with high-dimensional observations like images. One approach to this problem is to use data compression to minimize the number of bits that need to be communicated over the network: for example, the rover can compress the pictures it takes on Mars before sending them to the human operator on Earth. Standard lossy image compression algorithms would attempt to preserve the image's appearance. However, at low bitrates, this approach can waste precious bits on information that the user does not actually need in order to perform their current task. For example, when deciding where to steer and how much to accelerate, the user probably only pays attention to a small subset of visual features, such as obstacles and landmarks. Our insight is that we should focus on preserving those features that affect user behavior, instead of features that only affect visual appearance (e.g., the color of the sky). In this post, we outline a pragmatic compression algorithm called PICO that achieves lower bitrates by intentionally allowing reconstructed images to deviate drastically from the visual appearance of their originals, and instead optimizing reconstructions for the downstream tasks that the user wants to perform with them (see Fig. 1).

Pragmatic Compression

The straightforward approach to optimizing reconstructions for a specific task would be to train the compression model to directly minimize the loss function for that task. For example, if the user's task is to classify MNIST digits, then one could train the compression model to generate reconstructions that minimize the cross-entropy loss of the user's image classification policy. However, this approach requires prior knowledge of how to evaluate the utility of the user's actions (e.g., the cross-entropy loss for digit labels), and the ability to fit an accurate model of the user's decision-making policy (e.g., an image classifier). The key idea in our work is that we can avoid these limitations by framing the problem more generally: instead of trying to optimize for a specific task, we aim to produce a compressed image that induces the user to take the same action that they would have taken had they seen the original image. Furthermore, we aim to do so in the streaming setting (e.g., real-time video games), where we do not assume access to ground-truth action labels for the original images, and hence cannot compare the user's action upon seeing the compressed image to some ground-truth action. To accomplish this, we use an adversarial learning procedure that involves training a discriminator to detect whether a user's action was taken in response to the compressed image or the original. We call our method PragmatIc COmpression (PICO).
Maximizing Functional Similarity of Images through Human-in-the-Loop Adversarial Learning

Let $\mathbf{x}$ denote the original image, $\hat{\mathbf{x}}$ the compressed image, $\mathbf{a}$ the user's action, $\pi$ the user's decision-making policy, and $f_{\theta}$ the compression model. PICO aims to minimize the divergence of the user's policy evaluated on the compressed image $\pi(\mathbf{a} | \hat{\mathbf{x}})$ from the user's policy evaluated on the original image $\pi(\mathbf{a} | \mathbf{x})$. Since the user's policy $\pi$ is unknown, we approximately minimize the divergence using conditional generative adversarial networks, where the side information is the original image $\mathbf{x}$, the generator is the compression model $f_{\theta}(\hat{\mathbf{x}} | \mathbf{x})$, and the discriminator $D(\mathbf{a}, \mathbf{x})$ tries to discriminate the action $\mathbf{a}$ that the user takes after seeing the generated image $\hat{\mathbf{x}}$ (see Fig. 1).

To train the action discriminator $D(\mathbf{a}, \mathbf{x})$, we need positive and negative examples of user behavior; in our case, examples of user behavior with and without compression. To collect these examples, we randomize whether the user sees the compressed image or the original before taking an action. When the user sees the original $\mathbf{x}$ and takes action $\mathbf{a}$, we record the pair $(\mathbf{a}, \mathbf{x})$ as a positive example of user behavior. When the user sees the compressed image $\hat{\mathbf{x}}$ and takes action $\mathbf{a}$, we record $(\mathbf{a}, \mathbf{x})$ as a negative example. We then train an action discriminator $D_{\phi}(\mathbf{a}, \mathbf{x})$ to minimize the standard binary cross-entropy loss. Note that this action discriminator is conditioned on the original image $\mathbf{x}$ and the user action $\mathbf{a}$, but not the compressed image $\hat{\mathbf{x}}$—this ensures that the action discriminator captures differences in user behavior caused by compression, while ignoring differences between the original and compressed images that do not affect user behavior.

Distilling the Discriminator and Training the Compression Model

The action discriminator $D_{\phi}(\mathbf{a}, \mathbf{x})$ gives us a way to approximately evaluate the user's policy divergence. However, we cannot train the compression model $f_{\theta}(\hat{\mathbf{x}}|\mathbf{x})$ to optimize this loss directly, since $D_{\phi}$ does not take the compressed image $\hat{\mathbf{x}}$ as input. To address this issue, we distill the trained action discriminator $D_{\phi}(\mathbf{a}, \mathbf{x})$, which captures differences in user behavior caused by compression, into an image discriminator $D_{\psi}(\hat{\mathbf{x}}, \mathbf{x})$ that links the compressed images to these behavioral differences. Details can be found in Section 3.2 of the full paper.

Structured Compression using Generative Models

One approach to representing the compression model $f_{\theta}$ could be to structure it as a variational autoencoder (VAE), and train the VAE end to end on PICO's adversarial loss function instead of the standard reconstruction error loss. This approach is fully general, but requires training a separate model for each desired bitrate, and can require extensive exploration of the pixel output space before it discovers an effective compression model.
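As a concrete illustration of the action-discriminator update described above, here is a minimal PyTorch-style sketch (our own simplification, not the released PICO code). The architecture and all names are illustrative; real images would call for a convolutional encoder rather than flat vectors.

```python
import torch
import torch.nn as nn

# D_phi(a, x): predicts whether action a was taken in response to the
# original image x (label 1) or to a compressed version of x (label 0).
class ActionDiscriminator(nn.Module):
    def __init__(self, image_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, action, image):
        return self.net(torch.cat([action, image], dim=-1))  # logit

def discriminator_loss(disc, pos_actions, pos_images, neg_actions, neg_images):
    """Standard binary cross-entropy on (action, original-image) pairs:
    positives were collected when the user saw the original image,
    negatives when the user saw the compressed image."""
    bce = nn.BCEWithLogitsLoss()
    pos_logits = disc(pos_actions, pos_images)
    neg_logits = disc(neg_actions, neg_images)
    return bce(pos_logits, torch.ones_like(pos_logits)) + \
           bce(neg_logits, torch.zeros_like(neg_logits))
```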
To simplify variable-rate compression and exploration in our experiments, we forgo end-to-end training: we first train a generative model on a batch of images without the human in the loop by optimizing a task-agnostic perceptual loss, then train our compression model to select which subset of latent features to transmit for any given image. We use a variety of different generative models in our experiments, including VAE, $\beta$-VAE, NVAE, and StyleGAN2 models.

User Studies

We evaluate our method through experiments with human participants on four tasks: reading handwritten digits, browsing an online shopping catalogue of cars, verifying photos of faces, and playing a car racing video game. The results show that our method learns to match the user's actions with and without compression at lower bitrates than baseline methods, and adapts the compression model to the user's behavior.

Transcribing Handwritten Digits

For users performing a digit reading task, PICO learned to preserve the digit number, while a baseline compression method that optimizes perceptual similarity learned to preserve task-irrelevant details like line thickness and pose angle.

Fig. 2: Left: the y-axis represents the rate of agreement of user actions (digit labels) upon seeing a compressed image with user actions upon seeing the original version of that image. Right: each of the five columns in the two groups of compressed images represents a different sample from the stochastic compression model $f(\hat{\mathbf{x}}|\mathbf{x})$ at bitrate 0.011.

Car Shopping and Surveying

We asked one group of participants to perform a "shopping" task, in which we instructed them to select pictures of cars that they perceive to be within their budget. For these users, PICO learned to preserve the sportiness and perceived price of the car, while randomizing color and background. To test whether PICO can adapt the compression model to the specific needs of different downstream tasks in the same domain, we asked another group of participants to perform a different task with the same car images: survey paint jobs (while ignoring perceived price and other features). For these users, PICO learned to preserve the color of the car, while randomizing the model and pose of the car.

Photo Attribute Verification

For users performing a photo verification task that involves checking for eyeglasses, PICO learned to preserve eyeglasses while randomizing faces, hats, and other task-irrelevant features. When we changed the task to checking for hats, PICO adapted to preserving hats while randomizing eyeglasses.

Car Racing Video Game

For users playing a 2D car racing video game with an extremely high compression rate (50%), PICO learned to preserve bends in the road better than baseline methods, enabling users to drive more safely and stay off the grass.

Fig. 5: Left: what is actually happening (uncompressed). Right: what the user sees (compressed with PICO).

This work is a proof of concept that uses pre-trained generative models to speed up human-in-the-loop learning during our small-scale user studies. However, end-to-end training of the compression model may be practical for real-world web services and other applications, where large numbers of users already continually interact with the system. PICO's adversarial training procedure, which involves randomizing whether users see compressed or uncompressed images, can be implemented in a straightforward manner using standard A/B testing frameworks.
Furthermore, in our experiments, we evaluate on extremely high compression rates in order to highlight differences between PICO and other methods, which leads to large visual distortions—in real-world settings with lower compression rates, we would likely see smaller distortions. Continued improvements to generative model architectures for video, audio, and text could unlock a wide range of real-world applications for pragmatic compression, including video compression for robotic space exploration, audio compression for hearing aids, and spatial compression for virtual reality. We are especially excited about using PICO to shorten media for human consumption: for example, summarizing text in such a way that a user who only reads the summary can answer reading comprehension questions just as accurately as if they had read the full text, or trimming a podcast to eliminate pauses and filler words that do not communicate useful information.

If you want to learn more, check out our pre-print on arXiv: Siddharth Reddy, Anca D. Dragan, Sergey Levine, Pragmatic Image Compression for Human-in-the-Loop Decision-Making, arXiv, 2021. To encourage replication and extensions, we have released our code. Additional videos are available through the project website.
Integral symmetric bilinear forms and some of their applications

This article is cited in 168 scientific papers.

Abstract: We set up the technique of discriminant-forms, which allows us to transfer many results for unimodular symmetric bilinear forms to the nonunimodular case and is convenient in calculations. Further, these results are applied to Milnor's quadratic forms for singularities of holomorphic functions and also to algebraic geometry over the reals.

UDC: 511+513.6
MSC: Primary 10C05; Secondary 14B05, 14J25

Citation: V. V. Nikulin, "Integral symmetric bilinear forms and some of their applications", Izv. Akad. Nauk SSSR Ser. Mat., 43:1 (1979), 111–177; Math. USSR-Izv., 14:1 (1980), 103–167

Citing articles: Finashin S. Kharlamov V., "Chirality of Real Non-Singular Cubic Fourfolds and Their Pure Deformation Classification", Rev. Mat. Complut. V. V. Nikulin, "On arithmetic groups generated by reflections in Lobachevskii spaces", Math. USSR-Izv., 16:3 (1981), 573–601 A. N. Tyurin, "A local invariant of a Riemannian manifold", Math. USSR-Izv., 19:1 (1982), 125–149 V. M. Kharlamov, "Rigid isotopic classification of real planar curves of degree 5", Funct. Anal. Appl., 15:1 (1981), 73–74 T. Fidler, "Pencils of lines and the topology of real algebraic curves", Math. USSR-Izv., 21:1 (1983), 161–170 V. A. Krasnov, "Harnack–Thom inequalities for mappings of real algebraic varieties", Math. USSR-Izv., 22:2 (1984), 247–275 M. Sh. Farber, "The classification of simple knots", Russian Math. Surveys, 38:5 (1983), 63–117 V. V. Nikulin, "Involutions of integral quadratic forms and their applications to real algebraic geometry", Math. USSR-Izv., 22:1 (1984), 99–172 V. M. Kharlamov, "Classification of nonsingular surfaces of degree $4$ in $\mathbb{RP}^3$ with respect to rigid isotopies", Funct. Anal. Appl., 18:1 (1984), 39–45 V. V. Nikulin, "Filtrations of 2-elementary forms and involutions of integral symmetric and skew-symmetric bilinear forms", Math. USSR-Izv., 27:1 (1986), 159–182 O. Ya. Viro, "Progress in the topology of real algebraic varieties over the last six years", Russian Math. Surveys, 41:3 (1986), 55–82 G. M. Polotovsky, "Connection between the rigid isotopy class of a fifth-order nonsingular curve in $\mathbb{R}P^2$ and its disposition with respect to a line", Funct. Anal. Appl., 20:4 (1986), 330–332 Eduard Looijenga, Jonathan Wahl, "Quadratic functions and smoothing surface singularities", Topology, 25:3 (1986), 261 V. V. Nikulin, "On correspondences between K3 surfaces", Math. USSR-Izv., 30:2 (1988), 375–383 Shigeyuki Kondō, "On the Albanese variety of the moduli space of polarized K3 surfaces", Invent math, 91:3 (1988), 587 Rick Miranda, "Persson's list of singular fibers for a rational elliptic surface", Math Z, 205:1 (1990), 191 Brian Harbourne, Rick Miranda, "Exceptional curves on rational numerically elliptic surfaces", Journal of Algebra, 128:2 (1990), 405 Hans Sterk, "Compactifications of the period space of Enriques surfaces Part I", Math Z, 207:1 (1991), 1 V. A. Krasnov, "Algebraic cycles on a real algebraic GM-manifold and their applications", Russian Acad. Sci. Izv. Math., 43:1 (1994), 141–160 Jürg Fröhlich, Emmanuel Thiran, "Integral quadratic forms, Kac-Moody algebras, and fractional quantum Hall effect. 
AnADE-O classification", J Statist Phys, 76:1-2 (1994), 209 Donald G. James, "Representations by unimodular quadratic ℤ-lattices", Math Z, 215:1 (1994), 465 Rudolf Scharlau, Boris B. Venkov, "The genus of the Barnes-Wall lattice", Comment Math Helv, 69:1 (1994), 322 Hans Sterk, "Lattices and K3 surfaces of degree 6", Linear Algebra and its Applications, 226-228 (1995), 297 D. O. Orlov, "Equivalences of derived categories andK3 surfaces", Journal of Mathematical Sciences (New York), 84:5 (1997), 1361 Ori J. Ganor, David R. Morrison, Nathan Seiberg, "Branes, Calabi-Yau spaces, and toroidal compactification of the N = 1 six-dimensional E8 theory", Nuclear Physics B, 487:1-2 (1997), 93 Gritsenko V.A., Nikulin V.V., "Automorphic forms and Lorentzian Kac-Moody algebras. Part II", International Journal of Mathematics, 9:2 (1998), 201–275 Andrei Mikhailov, "Momentum lattice for CHL string", Nuclear Physics B, 534:3 (1998), 612 A. I. Degtyarev, V. I. Zvonilov, "Rigid isotopy classification of real algebraic curves of bidegree $(3,3)$ on quadrics", Math. Notes, 66:6 (1999), 670–674 V. G. Zhuravlev, "Embedding $p$-elementary lattices", Izv. Math., 63:1 (1999), 73–102 V. A. Krasnov, "Real algebraic varieties without real points", Izv. Math., 63:4 (1999), 757–790 Robbert Dijkgraaf, "Instanton strings and hyper-Kähler geometry", Nuclear Physics B, 543:3 (1999), 545 Degtyarev A., Itenberg I., Kharlamov V., Real Enriques surfaces, Lecture Notes in Math., 1746, Springer-Verlag, Berlin, 2000, xvi+259 pp. Eberhard Freitag, Carl Friedrich Hermann, "Some Modular Varieties of Low Dimension", Advances in Mathematics, 152:2 (2000), 203 Myung-Hwan Kim, Byeong-Kweon Oh, "Generation of Isometries of Certain -Lattices by Symmetries", Journal of Number Theory, 83:1 (2000), 76 Kondo S., "A Complex Hyperbolic Structure for the Moduli Space of Curves of Genus Three", J. Reine Angew. Math., 525 (2000), 219–232 A. I. Degtyarev, V. M. Kharlamov, "Topological properties of real algebraic varieties: du coté de chez Rokhlin", Russian Math. Surveys, 55:4 (2000), 735–814 K. KOIKE, H. SHIGA, N. TAKAYAMA, T. TSUTSUI, "STUDY ON THE FAMILY OF K3 SURFACES INDUCED FROM THE LATTICE (D4)3⊕ <-2> ⊕ < 2>: STUDY ON THE FAMILY OF K3 SURFACES", Int. J. Math, 12:09 (2001), 1049 Degtyarev A., Kharlamov V., "Real rational surfaces are quasi-simple", Journal fur Die Reine und Angewandte Mathematik, 551 (2002), 87–99 Sarah-Marie Belcastro, "PICARD LATTICES OF FAMILIES OF K3 SURFACES", Communications in Algebra, 30:1 (2002), 61 J. Keum, D.-Q. Zhang, "Fundamental groups of open K3 surfaces, Enriques surfaces and Fano 3-folds", Journal of Pure and Applied Algebra, 170:1 (2002), 67 Robert L. Griess, Gerald Höhn, "Virasoro frames and their stabilizers for the E<sub>8</sub> lattice type vertex operator algebra", crll, 2003:561 (2003), 1 C. G. Madonna, V. V. Nikulin, "On a Classical Correspondence between K3 Surfaces", Proc. Steklov Inst. Math., 241 (2003), 120–153 D. O. Orlov, "Derived categories of coherent sheaves and equivalences between them", Russian Math. Surveys, 58:3 (2003), 511–591 Trygve Johnsen, Andreas Leopold Knutsen, "Rational Curves in Calabi-Yau Threefolds", Communications in Algebra, 31:8 (2003), 3917 Catanese F. Frediani P., "Real Hyperelliptic Surfaces and the Orbifold Fundamental Group", J. Inst. Math. Jussieu, 2:2 (2003), 169–233 Shinobu Hosono, B.H.. Lian, Keiji Oguiso, Shing-Tung Yau, "c = 2 Rational Toroidal Conformal Field Theories via the Gauss Product", Commun. Math. 
Phys, 241:2-3 (2003), 245 Paolo Stellari, "Some Remarks about the FM-partners of K3 Surfaces with Picard Numbers 1 and 2", Geom Dedicata, 108:1 (2004), 1 V. V. Nikulin, "On Correspondences of a K3 Surface with Itself. I", Proc. Steklov Inst. Math., 246 (2004), 204–226 Degtyarev A., Itenberg I., Kharlamov V., "Finiteness and quasi-simplicity for symmetric K3-surfaces", Duke Mathematical Journal, 122:1 (2004), 1–49 Nils R. Scheithauer, "Generalized Kac–Moody algebras, automorphic forms and Conway's group I", Advances in Mathematics, 183:2 (2004), 240 Daniel Huybrechts, Paolo Stellari, "Equivalences of twisted K3 surfaces", Math Ann, 332:4 (2005), 901 Bert van Geemen, "Some remarks on Brauer groups of K3 surfaces", Advances in Mathematics, 197:1 (2005), 222 DANIEL HUYBRECHTS, "GENERALIZED CALABI–YAU STRUCTURES, K3 SURFACES, AND B-FIELDS", Int. J. Math, 16:01 (2005), 13 Shigeyuki Kondō, "Maximal subgroups of the Mathieu group M23 and symplectic automorphisms of supersingular K3 surfaces", Internat Math Res Notices, 2006 (2006), 1 Vik. S. Kulikov, V. M. Kharlamov, "Surfaces with DIF$\ne$DEF real structures", Izv. Math., 70:4 (2006), 769–807 D.-Q. Zhang, "The alternating groups and K3 surfaces", Journal of Pure and Applied Algebra, 207:1 (2006), 119 Kharlamov V., "Overview of topological properties of real algebraic surfaces", Algebraic Geometry and Geometric Modeling, Mathematics and Visualization, 2006, 103–117 Paolo Stellari, "Derived categories and Kummer varieties", Math Z, 256:2 (2007), 425 Frédéric Bihan, Frédéric Mangolte, "Topological types of real regular Jacobian elliptic surfaces", Geom Dedicata, 127:1 (2007), 57 Ken-Ichi Yoshikawa, "Real K3 surfaces without real points, equivariant determinant of the Laplacian, and the Borcherds Φ-function", Math Z, 258:1 (2007), 213 Antonio Rapagnetta, "On the Beauville form of the known irreducible symplectic varieties", Math Ann, 340:1 (2007), 77 V. A. Krasnov, "Fano Surfaces of Real Quartics", Math. Notes, 81:1 (2007), 72–84 A. I. Degtyarev, I. V. Itenberg, V. M. Kharlamov, "Deformation finiteness for real hyperkähler manifolds", Mosc. Math. J., 7:2 (2007), 257–263 Adrian Clingher, Charles F. Doran, "On K3 surfaces with large complex structure", Advances in Mathematics, 215:2 (2007), 504 V. V. Nikulin, "On the connected components of moduli of real polarized K3-surfaces", Izv. Math., 72:1 (2008), 91–111 Shamik Banerjee, Ashoke Sen, "Duality orbits, dyon spectrum and gauge theory limit of heterotic string theory on T<sup>6</sup>", J High Energy Phys, 2008:3 (2008), 022 Alessandra Sarti, "Transcendental lattices of someK 3-surfaces", Math Nachr, 281:7 (2008), 1031 Michela Artebani, Alessandra Sarti, "Non-symplectic automorphisms of order 3 on K3 surfaces", Math Ann, 342:4 (2008), 903 Finashin S., Kharlamov V., "Deformation classes of real four-dimensional cubic hypersurfaces", J. Algebraic Geom., 17:4 (2008), 677–707 Degtyarev A., Itenberg I., Kharlamov V., "On deformation types of real elliptic surfaces", Amer. J. Math., 130:6 (2008), 1561–1627 Macri E., Stellari P., "Automorphisms and autoequivalences of generic analytic K3 surfaces", Journal of Geometry and Physics, 58:1 (2008), 133–164 Markman E., "On the monodromy of moduli spaces of sheaves on K3 surfaces", Journal of Algebraic Geometry, 17:1 (2008), 29–99 CANER KOCA, ALİ SİNAN SERTÖZ, "IRREDUCIBLE HEEGNER DIVISORS IN THE PERIOD SPACE OF ENRIQUES SURFACES", Int. J. Math, 19:02 (2008), 209 V. A. Krasnov, "On the Fano Variety of a Class of Real Four-Dimensional Cubics", Math. 
CommonCrawl
In a resistor network, is there any relation between the shortest path and the maximum electric current path?

Consider a shortest path problem between the source $s$ and sink $t$ in an undirected weighted graph. There is a well-known algorithm, such as Dijkstra's algorithm, that solves this problem. Naturally, this graph can be considered as a resistive network where each edge $e$ with distance (or cost) $d_{e}$ corresponds to a resistor with resistance $d_{e}$, or conductance $1/d_{e}$. From a physical point of view, assume that we send one unit of current from $s$ to $t$, say by attaching $s$ to a current source and $t$ to ground. It will induce a certain electric flow in $G$, and this flow is unique. An approximate solution (the approximate electric flow) can be obtained in time $\tilde{O}(m)$, where $m$ is the number of edges in the graph.

My question: what is the relation between the shortest path in a graph and the path in an electric network along which the maximum electric current flows? Is it true that the two paths are always the same when there exists a unique shortest path from $s$ to $t$?

The intuition: the electric current tends to flow through the path with the least resistance, and the least resistance between $s$ and $t$ may be interpreted as the shortest path between $s$ and $t$.

[ASCII diagram of the example resistor network with nodes 0–8 joined by resistors R]

@MarzioDeBiasi: Thank you for the picture and the actual current computation. But ironically, your answer only fortifies my conjecture about the shortest path and the max current path. (I apologize for not explaining my conjecture clearly if that caused some confusion.) As you pointed out, we have a current of 33.9 mA from 4 to 0, but that is due to the superposition of many currents. For example, the current flows 8 - 2 - 3 - 4 - 0, 8 - 3 - 4 - 0, 8 - 5 - 6 - 7 - 4 - 0 and 8 - 7 - 4 - 0 all contribute to the current flow from 4 to 0. And none of them is larger than the current of 25 mA on the path 8 - 1 - 0. On the other hand, there is only one path for the flow 8 - 1 - 0, and it carries the largest amount of current in the circuit.

graph-theory co.combinatorics spectral-graph-theory shortest-path
Federico Magallanez

I think that using Ohm's law you can make the path with least resistance arbitrarily long: suppose that $s$ and $t$ are directly connected with a resistor $R$; then you can make another path $P$ from $s$ to $t$ using $n-1$ "segments", with each segment having $n$ resistors in parallel. The total resistance of $P$ is $(n-1)(R/n) = R-R/n < R$. Perhaps a better model is a flow network. – Marzio De Biasi Jul 25 '12 at 12:55

@MarzioDeBiasi: But we usually assume that there are no parallel edges in a weighted graph when we analyze the shortest path problem. What if there are no parallel edges or no parallel resistors between any two nodes? – Federico Magallanez Jul 25 '12 at 13:39

Ok, do you allow the following 4 edges: $n_1 \leftarrow^R \rightarrow n_A$; $n_A \leftarrow^R \rightarrow n_2$, $n_1 \leftarrow^R \rightarrow n_B$, $n_B \leftarrow^R \rightarrow n_2$? (In this case the reasoning above is still valid.) – Marzio De Biasi Jul 25 '12 at 15:47
@MarzioDeBiasi: Maybe I miss something in your comment, but I guess you are talking about a square-like graph with four edges and four nodes where all the edges have the same weight, which has two shortest paths $n_1 − n_A − n_2$ and $n_1 − n_B − n_2$ of length $2R$. But I'm not sure how it works as a counterexample of my conjecture about the shortest path and the max current path. – Federico Magallanez Jul 25 '12 at 17:12

Your claim that "there is only one path for the flow 8-1-0" is simply not true. Let $x\flat$ denote the node just to the left of node $x$ in your picture, and consider the path 8-7-6-$6\flat$-$5\flat$-5-3-2-$2\flat$-$1\flat$-1-0. Yes, the flow through this path also contributes to the current through 1-0. – Jeffε Jul 27 '12 at 3:02

Just an extended comment to include the picture: if the following (random) resistor network is valid, then the max current flow of 33.9 mA is not on the shortest path from $s$ (+5V) to $t$ (GND). – Marzio De Biasi

I updated my question, and apologize for not explaining my conjecture clearly if that caused some confusion. – Federico Magallanez Jul 26 '12 at 22:51

Your intuition is in a certain sense correct. By straightforward linear algebra we can find the electric potential of each node in the graph, given the net current from source to sink. Now we can calculate the probability distribution of the path taken by the electric current through the graph. By the second law of thermodynamics (that no process may independently cause a decrease in entropy), it is impossible for electric current to independently flow from a lower to a higher potential, and even by the first law (conservation of energy), it is impossible for current to flow between nodes of equal potential through a non-zero resistance. So every edge in the graph is directed from a higher to a lower potential, any edges of zero resistance are identified to a single node, and any non-zero-resistance edges between nodes of equal potential are eliminated. Thus the graph is directed and acyclic, and the path taken by the electric current from the source node to the sink node is a finite Markov chain, because we are confident that an electric current will take an independent and memoryless path through the given graph. The Markov transition probability of each edge in the graph is then given by the ratio of the current through that edge to the total (gross, not net!) current flowing out from the same node: $$P(T_{ij})= \frac{(I_{ij})_+}{\sum_k (I_{ik})_+}= \frac{\frac 1 {R_{ij}}(E_{i}-E_{j})_{+}}{\sum_{k}\frac 1 {R_{ik}}(E_{i}-E_{k})_{+}}$$ where we have denoted the positive part of a parenthesized expression by a subscript "+". If we compute the self-information of each edge $$S(T_{ij})=-\log P(T_{ij})$$ then the shortest path weighted additively by self-information is indeed the most likely path taken by the electric current, and the current that actually traverses that entire path is equal to the total current times the product of the Markov transition probabilities of all the edges along that path. However, as to your question, the shortest path weighted by Markov self-information is not necessarily the shortest path weighted by resistance (because of the effects of "crowding out" by current taking other paths sharing the same edge). – JL344
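To make the linear-algebra step described in this answer concrete, here is a minimal sketch (not part of the original thread): it assembles the weighted graph Laplacian from the edge resistances, solves for the node potentials under a unit $s$–$t$ current injection, and converts the resulting edge currents into the Markov transition probabilities $P(T_{ij})$ defined above. The graph, node names, and resistance values are illustrative assumptions; the example is the square graph from the comments.

# A minimal sketch (not from the thread) of the linear-algebra step described in
# the answer above: solve for node potentials with a unit current injected at s
# and extracted at t, then turn edge currents into Markov transition probabilities.
# Graph, node names and resistances are made-up examples.
import numpy as np

def electric_flow(nodes, edges, s, t, current=1.0):
    """edges: dict {(u, v): resistance}; returns node potentials and edge currents."""
    idx = {n: i for i, n in enumerate(nodes)}
    n = len(nodes)
    L = np.zeros((n, n))                      # weighted graph Laplacian
    for (u, v), r in edges.items():
        g = 1.0 / r                           # conductance of edge (u, v)
        L[idx[u], idx[u]] += g
        L[idx[v], idx[v]] += g
        L[idx[u], idx[v]] -= g
        L[idx[v], idx[u]] -= g
    b = np.zeros(n)
    b[idx[s]], b[idx[t]] = current, -current  # unit current in at s, out at t
    phi = np.linalg.pinv(L) @ b               # potentials (defined up to a constant)
    I = {(u, v): (phi[idx[u]] - phi[idx[v]]) / r for (u, v), r in edges.items()}
    return phi, I

def transition_probabilities(nodes, I):
    """P[u][v] = positive current u->v divided by total current leaving u."""
    P = {u: {} for u in nodes}
    for (u, v), i_uv in I.items():
        if i_uv > 0:
            P[u][v] = i_uv
        elif i_uv < 0:
            P[v][u] = -i_uv
    for u, outs in P.items():
        total = sum(outs.values())
        if total > 0:
            for v in outs:
                outs[v] /= total
    return P

# Example: the square graph n1-nA-n2, n1-nB-n2 discussed in the comments.
nodes = ["n1", "nA", "nB", "n2"]
edges = {("n1", "nA"): 1.0, ("nA", "n2"): 1.0, ("n1", "nB"): 1.0, ("nB", "n2"): 1.0}
phi, I = electric_flow(nodes, edges, s="n1", t="n2")
print(transition_probabilities(nodes, I))   # current splits 1/2 - 1/2 between the two paths

On this square example the unit current splits evenly, so each of the two shortest paths is traversed with probability 1/2, matching the symmetry argument in the comments.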
I know that there's a close relation between current flow and Markov chain probability, which is pointed out in the book by Snell and Doyle. But I'm not sure what you mean by 'crowding out'. Can you give a reference which explains crowding out? – Federico Magallanez Jul 27 '12 at 19:09

It's just my way of trying to explain that the current actually traversing a complete path under a Markov chain analysis may well be less than the minimum of the total currents for each edge on the path. The term "crowding out" as I have used it is very non-technical, not a specific term that can be referenced in the literature. Sorry if that confused you. – JL344 Jul 27 '12 at 21:23
CommonCrawl
A note on time-optimal paths on perturbed spheroid
Piotr Kopacz
Jagiellonian University, Faculty of Mathematics and Computer Science, ul. Prof. St. Lojasiewicza 6, 30-348 Kraków, Poland
Gdynia Maritime University, Faculty of Navigation, Al. Jana Pawla II 3, 81-345 Gdynia, Poland
Journal of Geometric Mechanics, June 2018, 10(2): 139-172. doi: 10.3934/jgm.2018005
Received January 2016; Revised March 2018; Published May 2018
Fund Project: The author is supported by a grant from the Polish National Science Center under research project number 2013/09/N/ST10/02537.

Abstract: We consider Zermelo's problem of navigation on a spheroid in the presence of a space-dependent perturbation $W$ determined by a weak velocity vector field, $|W|_h<1$. The approach is purely geometric, with application of a Finsler metric of Randers type making use of the corresponding optimal control represented by a time-minimal ship's heading $\varphi(t)$ (a steering direction). A detailed exposition including investigation of the navigational quantities is provided under a rotational vector field. This demonstrates, in particular, a preservation of the optimal control $\varphi(t)$ of the time-efficient trajectories in the presence and absence of the acting perturbation. Such navigational treatment of the problem leads to some simple relations between the background Riemannian and the resulting Finsler geodesics, thought of as the deformed Riemannian paths. Also, we show some connections with Clairaut's relation and a collision problem. The study is illustrated with an example considered on an oblate ellipsoid.

Keywords: Time-minimal path, spheroid, ellipsoid, Zermelo problem, navigation, Finsler geometry, Randers metric, rotation.
Mathematics Subject Classification: Primary: 53C22, 53C60; Secondary: 49N90, 49J15.
Citation: Piotr Kopacz. A note on time-optimal paths on perturbed spheroid. Journal of Geometric Mechanics, 2018, 10(2): 139-172. doi: 10.3934/jgm.2018005
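For orientation, the Randers metric referred to in the abstract can be written down explicitly from the navigation data. The display below is the standard construction from the Zermelo navigation literature for a Riemannian metric $h$ and a weak wind $W$, $|W|_h<1$; it is stated here as background and is not quoted from the paper itself:
$$F(x, y) \;=\; \frac{\sqrt{\lambda\, h(y, y) + h(W, y)^{2}} \;-\; h(W, y)}{\lambda}, \qquad \lambda \;=\; 1 - |W|_h^{2}.$$
Its unit vectors are exactly the resultant velocities, i.e., $\big|\tfrac{y}{F(x, y)} - W\big|_h = 1$, so the time-minimal paths under the perturbation $W$ are the geodesics of this Randers metric $F$.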
Figure 1. The Riemannian geodesics on the spheroid $\Sigma^2$ with $a: = \frac{3}{4}$ starting from $(0, \frac{\pi}{2})$, with the increments $\Delta \varphi_0 = \frac{\pi}{8}$. On the right, the corresponding solutions $x(t), y(t), z(t)$ for the point $(1, 0, 0)\in\mathbb{R}^3$; $t\leq3$
Figure 2. The contour plot and the graph of the norm $|W|_h$ in the case of the infinitesimal rotation as the acting mild perturbation, with $c: = \frac{5}{7}$
Figure 28. The geodesics of the new Riemannian metric $\alpha$ starting from $(0, \frac{\pi}{2})$, with the increments $\Delta \varphi_0 = \frac{\pi}{8}$ (16 curves); $t\leq3$
Figure 3. The time-efficient paths on the spheroid with $a: = \frac{3}{4}$ starting from $(0, \frac{\pi}{2})$, with the increments $\Delta \varphi_0 = \frac{\pi}{8}$, $t\leq3$ (left) and divided into time segments with $t\in[0, 1) \text{- red}, t\in[1, 2) \text{- blue}, t\in[2, 3] \text{ - purple}$. Right: "top view"
Figure 4. The transpolar (blue) and circumpolar (red) Randers geodesics starting from $(\phi_0, \theta_0) = (0, \frac{\pi}{2})$ in the presence of the rotational perturbation, with $c: = \frac{5}{7}$, the increments $\Delta \varphi_0 = \frac{\pi}{4}$, $t\leq50$. Right: "top" view
Figure 5. The $F$-isochrones in the presence of the perturbing infinitesimal rotation, with $c: = \frac{5}{7}$ for $t = 1$ (blue), $t = 2$ (red), $t = 3$ (purple), $t = 4$ (magenta) and the starting point in $(\phi_0, \theta_0): = (0, \frac{\pi}{2})$
Figure 6. The $F$-isochrones on the spheroid $\Sigma^2$ under the perturbing infinitesimal rotation (7), with $c: = \frac{5}{7}$, $t = \left\{1\ \text{(blue)}, 2\ \text{(red)}, 3 \ \text{(purple)}, 4\ \text{(magenta)}\right\}$ and the starting point $(0, \frac{\pi}{2})$. Right: "top" view
Figure 7. Comparing the solutions $x(t)$ - blue, $y(t)$ - red, $z(t)$ - black in the absence (dashed) and the presence (solid) of the wind (7), with $\Delta \varphi_0 = \frac{\pi}{8}$ and the starting point $(1, 0, 0)\in\mathbb{R}^3$; $t\leq3$
Figure 8. The comparison of the perturbed (red) and unperturbed (blue) time-efficient paths starting from $(0, \frac{\pi}{2})$, with $\Delta \varphi_0 = \frac{\pi}{4}$, $t\leq1$ (left) and $\Delta \varphi_0 = \frac{\pi}{8}$, $t\leq3$ (middle and "top" view on the right)
Figure 9. The corresponding background $h$-Riemannian (blue), new $\alpha$-Riemannian (green) and $F$-Randers (red) geodesics starting from $(0, \frac{\pi}{2})$, with $\Delta \varphi_0 = \frac{\pi}{4}$, $t\leq1$ (left) and with $\Delta \varphi_0 = \frac{\pi}{8}$, $t\leq3$ (middle: "side" view and right: "top" view)
Figure 15. The initial resulting speed $|{\bf{v}}_0|_h$ (black) as the function of the initial control angle $\varphi_0\in[0, 2\pi )$
Figure 16. The linear (on the left) and angular speeds (on the right) as the functions of time, in the absence (dashed) and in the presence (solid) of the perturbation (7); the resulting linear speed is shown in black; $t\leq7$
Figure 20. The polar plot of the solutions $\phi(t)$ (blue), $\theta(t)$ (red) in the absence (dashed) and presence (solid) of the perturbation (7); $t\leq20$
Figure 21. The distance (Euclidean) between the "Riemannian ship" and "Finslerian ship", with $t\leq 15$ (left) and $t\leq 50$ (right). The red horizontal line indicates collision and the green one - maximum distance, so the ships are positioned in the antipodal points of the spheroid's equator. The time of the first collision is $t_{col_1} = \frac{14\pi}{5}\approx8.8$
Figure 22. The first collision (also, the seventh intersection) of the corresponding time-minimal trajectories, i.e., the background Riemannian (blue) and the Randers preserving the optimal control (red); $t\leq\frac{14\pi}{5}\approx8.8$
Figure 23. The first intersection (no collision) of the Riemannian (blue) and Randers (red) geodesics coming from the starting point in $(0, \frac{\pi}{2})$; with $t\leq1$ (solid), $t\leq 1.66$ (dashed). Right: "top" view
Figure 24. The spherical coordinates $\phi$ (blue), $\theta$ (red) of the corresponding Riemannian (dashed, $t\leq1$) and Randers (solid, $t\leq1.66$) geodesics coming from the starting point in $(0, \frac{\pi}{2})$ till the first intersection (no collision) in the Cartesian (left) and the polar plot (right; time is expressed by length of the radius). The azimuth (longitude) of the first intersection $\phi\approx 45^\circ$ is marked by the black line
Figure 10. The Riemannian background geodesic of the ellipsoid $\Sigma^2$ departing from $(0, \frac{\pi}{2})$ and its planar $xy$-projection; $t\leq25$
Figure 12. The background Riemannian geodesic (blue) versus the corresponding Randers geodesic (red), $t\leq7$, and their $xy$-projection (right), $t\leq20$
Figure 13. Comparing the corresponding geodesics in the absence (blue, $h$-Riemannian) and in the presence (red, $F$-Randers) of the perturbing vector field ($7$), with $c: = \frac{5}{7}$. Right: "top" view; $t\leq25$
Figure 14. The Cartesian solutions in the absence (dashed) and presence (solid) of the applied rotational perturbation (7) in the base $(x, y, z)$ (left) and the corresponding polar plot of the spherical solutions in the base $(\phi, \theta)$ (right); $t\leq7$
Figure 11. The Randers geodesic starting from $(0, \frac{\pi}{2})$ under the rotational perturbation (7), with $c: = \frac{5}{7}$, $t\leq25$, and its $xy$-projection, $t\leq20$
Figure 18. The parametric plots of the solutions $\phi, \theta$ (blue, $t\leq7$, left), their first derivatives (black, $t\leq3$, middle) and second derivatives (right, $t\leq5.35$, overlapped), in the absence (dashed) and presence (solid) of the rotational wind ($7$)
Figure 17. The resulting time-optimal steering angle $\Phi(t)$ ("course over ground") without (dashed blue) and with (solid red) the wind (7) in the Cartesian plot (left) and the corresponding polar plot (right); $t\leq7$
Figure 19. The drift angle $\Psi(t)$ (dashed blue), the optimal control $\varphi(t)$ (black) and the optimal resulting angle ("course over ground") $\Phi(t)$ (red) in the presence of the perturbation (7); $t\leq7$
Figure 26. The first collision (also, the first intersection) in $(\phi, \theta)\approx(45^\circ, 128^\circ)$ of the time-minimal trajectories generated from the same Riemannian geodesic (blue), i.e. the Randers geodesic with the preserved optimal control (red) and with the supplementary optimal control (black). On the right, "top view"; $t\leq3.3$
Figure 27. The first collision (also, the first intersection) in $(\phi, \theta)\approx(138^\circ, 53^\circ)$ of the Riemannian geodesic (blue) and the Randers geodesic with the supplementary optimal control (black). The corresponding Randers geodesic preserving the optimal control is presented in red. Right: "top" view; $t\leq2.06$
Figure 25. Left: the Riemannian geodesic (blue) and its two generated Randers geodesics: with the preserved optimal control (red, "Randers_1") and the supplementary optimal control (black, "Randers_2"). Right: the corresponding distances (Euclidean) between the ships: "Riemannian-Randers_1" (purple), "Riemannian-Randers_2" (dashed black) and "Randers_1-Randers_2" (blue); $t\leq 15$. The red horizontal line indicates collision and the green one the maximum distance, i.e., the ships are located in the antipodal points of the spheroid's equator
CommonCrawl
Modified Talk Test: a Randomized Cross-over Trial Investigating the Comparative Utility of Two "Talk Tests" for Determining Aerobic Training Zones in Overweight and Obese Patients Ignacio Orizola-Cáceres1, Hugo Cerda-Kohler1,2, Carlos Burgos-Jara1, Roberto Meneses-Valdes1, Rafael Gutierrez-Pino1 & Carlos Sepúlveda ORCID: orcid.org/0000-0002-7708-90021 To validate the traditional talk test (TTT) and an alternative talk test (ATT; using a visual analog scale) in overweight/obese (OW-OB) patients and to establish its accuracy in determining the aerobic training zones. We recruited 19 subjects aged 34.9 ± 6.7 years, diagnosed with overweight/obesity (BMI 31.8 ± 5.7). Every subject underwent incremental cycloergometric tests for maximal oxygen consumption, and TTT in a randomized order. At the end of each stage during the TTT, each subject read out loud a 40 words text and then had to identify the comfort to talk in two modalities: TTT which consisted in answering "Yes," "I don't know," or "No" to the question Was talking comfortable?, or ATT through a 1 to 10 numeric perception scale (visual analog scale (VAS)). The magnitude of differences was interpreted in comparison to the smallest worthwhile change and was used to determine agreement. There was an agreement between the power output at the VAS 2–3 of ATT and the power output at the ventilatory threshold 1 (VT1) (very likely equivalent; mean difference − 1.3 W, 90% confidence limit (CL) (− 8.2; 5.6), percent chances for higher/similar/lower values of 0.7/99.1/0.2%). Also, there was an agreement between the power output at the VAS 6–7 of ATT and the power output at the ventilatory threshold 2 (VT2) (very likely equivalent; mean difference 11.1 W, 90% CL (2.8; 19.2), percent chances for higher/similar/lower values of 0.0/97.6/2.4%). ATT is a tool to determine exercise intensity and to establish aerobic training zones for exercise prescription in OW-OB patients. Aerobic training zones delimited by ventilatory threshold 1 (VT1) and ventilatory threshold 2 (VT2) were established through the alternative talk test (ATT) in overweight and obese people. Visual analog scale (VAS) 2–3 and VAS 6–7 identified VT1 and VT2, respectively, in overweight and obese people. ATT is a simple tool that could be applied to large populations due to its low cost and easy application and could be used for exercise prescription in community health centers. Worldwide, low levels of physical fitness are associated with an increased risk of all-cause and cardiovascular disease [1], and physical inactivity is responsible for a substantial economic burden [2]. Also, poor cardiorespiratory fitness is an independent risk factor for developing non-communicable diseases (NCDs) and cardiovascular disease [3, 4]. Noteworthily, regular physical activity reduces the all-cause mortality risk by ~ 14% and is one of the leading non-pharmacologic strategies for preventing and treating obesity [1]. However, exercise prescription is complicated, depending on duration, frequency, intensity, and exercise type [5]. Moreover, under the same standardized training prescription, there are evident individual variations in post-training adaptations [6, 7]. Possibly, these individual training responses variations are dose-dependent. Therefore, the control of the physical training load is essential to maximize the benefits associated with health-related exercise [8]. 
There are several ways of prescribing exercise intensity, and some of these guidelines rely on objective criteria, such as percentages of absolute values of the heart rate reserve (%HRR) or the maximal oxygen uptake (VO2max). However, there is a convincing body of evidence suggesting that the relative distribution of training intensity is regulated more effectively based on the individual physiological and metabolic response to training [9, 10]. Exercise intensity determined by the individual physiologic and metabolic response (e.g., ventilatory threshold 1 [VT1], ventilatory threshold 2 [VT2], or lactate threshold) induces homogeneous training response and adaptations to training programs [11]. Besides, it is beneficial to identify these thresholds since it allows establishing the three classical training zones: zone 1, intensity < VT1; zone 2, intensities between VT1 and VT2; and zone 3, intensity > VT2 [12], favoring the control of external and internal training load [13, 14]. Despite the usefulness of laboratory evaluations individualizing exercise prescriptions, access to these measurements at the healthcare level is limited due to the cost of its implementation [15]. Therefore, research needs to develop low-cost and easy-to-apply training load control methods to improve access to individualized training programs in a specific population. In several countries, community healthcare centers do not have the equipment, infrastructure, and qualified professionals to assess these physiological variables. Moreover, probably the control of exercise intensity during a session only is evaluated with HR, %HRR, or rating of perceived exertion (RPE). However, if these methods are not associated with physiological and metabolic variables, the effectiveness of physical exercise could be diminished. Low-cost methods, such as the talk test (TT) and the RPE, have been demonstrated to be of value relative to both performance diagnostics and prescription. Within the last years, the TT has been suggested as a useful surrogate of gas exchange thresholds in a variety of populations [16]. The TT is an easy-to-apply and low-cost tool for intensity monitoring [17] and involves an individual reading a similar text during exercise and then being asked if he or she can still speak comfortably [16]. However, the traditional talk test (TTT) only has three options for the question "was talking comfortable?" and lacks quantitative psychometric properties, being questioned as a substitute for objective physiological measures for prescribing individual training exercise [18]. The main objective of the study was to validate the TTT and an alternative TT (ATT; using a visual analog scale) in overweight/obese (OW-OB) patients and to establish its accuracy in determining the aerobic training zones, previously described. We hypothesized that ATT is a valid tool to establish aerobic training zones in OW-OB patients. Participant Characteristics A total of 19 subjects with a nutritional diagnosis of OW-OB according to BMI ≥ 25 kg/m2, physically inactive according to the World Health Organization (WHO) classification [19], and no diagnosis of NCDs participate in the study. Before the evaluations, the subjects signed an informed consent approved by the Scientific Ethics Committee of the Universidad Finis Terrae (resolution no. 21/2017). All procedures were performed in compliance with the Helsinki Declaration principles for human experiments. 
The participants visited the laboratory on five separate, nonconsecutive days (each evaluation day was separated by 3 rest days to prevent unexpected side effects such as delayed-onset muscle soreness, "DOMS"). On the first day, the subjects signed the informed consent. On the following days, the subjects arrived between 08:00 and 10:00 am and were evaluated in a randomized order for the following procedures: body composition, cardiorespiratory fitness test, TTT, and ATT. Fat, lean, and fat-free body mass were measured by dual-energy X-ray absorptiometry (DEXA) using manufacturer-supplied algorithms (Total Body Analysis, version 3.6; Lunar, Madison, WI, USA). The general characterization of the subjects is presented in Table 1.
Table 1 Participants' characteristics
Cardiorespiratory Fitness
After a 5-min warm-up at 50 W and a constant cadence of 55 ± 5 rpm, the participants performed a maximal incremental test on an electronically automated cycle ergometer (Cyclus2, Germany). An initial workload of 40 W was used, with increments of 15 W (women) and 20 W (men) every 1 min until exhaustion. The test was performed at a constant cadence of 55 ± 5 revolutions per minute (rpm). Gas exchange was recorded continuously with a portable breath-to-breath gas analyzer (Cortex Metalyzer 3B, Leipzig, Germany), which was calibrated according to the manufacturer's instructions before each trial. Pulmonary ventilation (VE), oxygen uptake (VO2), expired carbon dioxide (VCO2), and respiratory exchange ratio (RER) were averaged over 10 s in the mixing chamber mode, with the highest 30 s value (i.e., three consecutive 10 s averages) used in the analysis. VO2max was determined according to previously established criteria [20]: (i) plateau in VO2 (i.e., increase < 150 ml min−1), (ii) RER > 1.1, and (iii) ≥ 90% of theoretical maximal heart rate. VO2max was expressed both as absolute values (L min−1) and relative to body mass (ml kg−1 min−1). The power output at VO2max (pVO2max) was determined as the minimum workload at which VO2max was reached. Ventilatory threshold 1 (VT1) and ventilatory threshold 2 (VT2) were identified separately by three researchers according to the following criteria [21]: an increase in VE/VO2 and end-tidal PO2 (PETO2) without a concomitant increase in VE/VCO2 for VT1, and an increase in VE/VO2 and VE/VCO2 and a decrease in end-tidal PCO2 (PETCO2) for VT2. The cardiorespiratory fitness and ventilatory threshold data are shown in Table 2.
Table 2 Cardiorespiratory fitness and ventilatory threshold
Talk Test
After a 10-min warm-up, subjects performed an incremental test on an electronically automated cycle ergometer (Cyclus2, Germany). The protocol used load (W) increments every 3 min, the time necessary to stabilize ventilation, the primary variable for voice production [22, 23]. During the last 30 s of each stage, subjects were asked to read aloud 40 words from the text "Lectura del Abuelo". Two methods evaluated the ability to converse during exercise: (i) the traditional talk test (TTT), by answering "yes," "no," or "I do not know" to the question "Was talking comfortable?", and (ii) the alternative talk test (ATT), using a 1 to 10 visual analog scale (VAS) [24]. Both the text "Lectura del Abuelo" and the VAS are shown in Table 3.
Table 3 Text "Lectura del Abuelo" and visual analog scale
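The following minimal sketch (not part of the original article; the stage data are invented for illustration) shows how per-stage talk-test records from this protocol can be reduced to the intensity landmarks analyzed below: the power output at the "last yes" and "first no" of the TTT, and the power output at the last VAS 2–3 stage and the first VAS 6–7 stage of the ATT (the stages that the Results later relate to VT1 and VT2).

# A minimal sketch (not part of the original article) of how per-stage talk-test
# records could be reduced to the intensity landmarks analyzed in this study:
# the power at the "last yes" / "first no" of the TTT and the power at the last
# VAS 2-3 / first VAS 6-7 stage of the ATT. Stage data below are invented.

def ttt_landmarks(stages):
    """stages: list of (watts, answer) with answer in {'yes', 'i do not know', 'no'}."""
    last_yes = max((w for w, a in stages if a == "yes"), default=None)
    first_no = min((w for w, a in stages if a == "no"), default=None)
    return {"last_yes_W": last_yes, "first_no_W": first_no}

def att_landmarks(stages):
    """stages: list of (watts, vas) with vas on the 1-10 visual analog scale."""
    last_easy = max((w for w, v in stages if v <= 3), default=None)   # VAS 2-3 region
    first_hard = min((w for w, v in stages if v >= 6), default=None)  # VAS 6-7 region
    return {"last_vas2_3_W": last_easy, "first_vas6_7_W": first_hard}

# Invented example: 3-min stages on the cycle ergometer.
ttt = [(40, "yes"), (60, "yes"), (80, "yes"), (100, "i do not know"), (120, "no")]
att = [(40, 2), (60, 3), (80, 4), (100, 5), (120, 6), (140, 8)]
print(ttt_landmarks(ttt))   # {'last_yes_W': 80, 'first_no_W': 120}
print(att_landmarks(att))   # {'last_vas2_3_W': 60, 'first_vas6_7_W': 120}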
Statistical Analyses
Data in the text and figures are presented as mean ± SD and 90% confidence limit/interval (CL/CI). A 90% confidence interval (CI; 1 − 2α) is used instead of a 95% CI (1 − α) because magnitude-based inference (MBI) performs two one-sided tests (each with an α of 5%). All data were first log-transformed to reduce bias arising from non-uniformity of error. The magnitude of differences was interpreted in comparison to the smallest worthwhile change (SWC) (Cohen's d = 0.6) [21]. Cohen's d for within-subjects designs is calculated using the average standard deviation of both repeated measures as the standardizer, with a Hedges' correction to minimize bias (Cohen's $d_{\mathrm{adj}}$) [25]:
$$ d_{\mathrm{adj}}=\frac{M_{\mathrm{diff}}}{\left( SD_1+ SD_2\right)/2} $$
This SWC of 0.6 was set as the equivalence region, representing about one stage of difference during the incremental test, and was used to determine agreement. The probability of any substantial difference or realistic equivalence relative to the predefined target values was interpreted using the following scale: < 0.5%, most unlikely; 0.5–5%, very unlikely; 5–25%, unlikely; 25–75%, possibly; 75–95%, likely; 95–99.5%, very likely; > 99.5%, most likely [26]. Effects were declared relevant if the outcome probability was likely (≥ 75%) (i.e., the methods were considered in agreement and, therefore, interchangeable). Statistical analysis was performed with the "mbir" package of the R software [27]. Statistical significance was set at p < 0.05.
Results
We recruited 34 volunteer participants, of whom 6 were excluded because they did not meet the criteria for entering the study and 9 did not complete all testing procedures. The final analysis therefore included 19 patients who completed the evaluations, of whom 13 were men and 6 were women.
Agreement Between Traditional Talk Test and Ventilatory Thresholds
Results of the equivalence tests between the TTT and the ventilatory thresholds are presented in Fig. 1 and Table 4. Evidence for an agreement was observed between the power output at the "first no" (FN) and the power output at the ventilatory threshold 2 (most likely equivalent; mean difference − 2.9 W, 90% CL (− 10.9; 5.1)). There was no agreement between the power output at the "last yes" (LY) and the power output at the ventilatory threshold 1 (unlikely equivalent; mean difference − 22.4 W, 90% CL (− 1.3; − 13.3)). As represented in Fig. 1a, there was an agreement between the power output at the LY and the power output at the VAS 4–5 of ATT (very likely equivalent; mean difference 7.1 W, 90% CL (0.4; 13.7)).
Fig. 1 Mean difference and uncertainty for the difference (90% confidence interval) between (a) the traditional talk test and the ventilatory thresholds and (b) the alternative talk test and the ventilatory thresholds. The unshaded area represents our statistical equivalence region. Abbreviations: SWC, smallest worthwhile change; WTTT/LY, watts of traditional talk test in the last stage where the answer was "yes"; WVT1, watts at ventilatory threshold 1; WATT, watts of alternative talk test; WTTT/FN, watts of traditional talk test in the first stage where the answer was "no"; WVT2, watts at ventilatory threshold 2; W, watts
Table 4 Analysis of agreement
Agreement Between Alternative Talk Test and Ventilatory Thresholds
Figure 1b shows an agreement between the power output at the VAS 2–3 of ATT and the power output at the ventilatory threshold 1 (very likely equivalent; mean difference − 1.3 W, 90% CL (− 8.2; 5.6)).
There was no agreement between the power output at the VAS 4–5 of ATT and the power output at the ventilatory threshold 1 (most unlikely equivalent; mean difference − 29.5 W, 90% CL (− 37.6; − 21.2)). Results showed no agreement between the power output at the VAS 4–5 of ATT and the power output at the ventilatory threshold 2 (most unlikely equivalent; mean difference 38.7 W, 90% CL (27.1; 50.2)). As represented in Fig. 1b, there was an agreement between the power output at the VAS 6–7 of ATT and the power output at the ventilatory threshold 2 (very likely equivalent; mean difference 11.1 W, 90% CL (2.8; 19.2)). There was an agreement between the power output at the VAS 6–7 of ATT and the power output at the "first no" of TTT (very likely equivalent; mean difference − 13.9 W, 90% CL (− 18.7; − 9.1)). The regulation and control of exercise intensity are some of the most challenging parts of exercise prescription. There are several ways of prescribing exercise intensity, and some of these recommendations are based on objective criteria, such as percentages of absolute values of the %HRR or VO2max. However, recent investigations propose a more individualized exercise prescription to personalize a training regime based on individual metabolic responses and, therefore, enhance the potential benefits of regular physical activity [11]. Therefore, the goal of this investigation was to prove the usefulness of the TTT and/or ATT as a low-cost tool to determine exercise intensity and establish aerobic training zones for exercise prescription in OW-OB patients. Our main finding shows that the three aerobic training zones delimited by VT1 and VT2 could be established through the TT, primarily through the ATT. Regarding the TTT, previous studies have shown an association between VT1 and the last stage of the TT where talking was comfortable in healthy subjects [16, 28, 29] and between VT2 with TT stages where comfort to talk is lost in patients with heart diseases [30, 31]. However, these previous studies used the VO2 values to compare intensities between the TTT and ventilatory thresholds. This methodology does not allow obtaining external load values (e.g., watts) that can be used for prescribing and controlling aerobic training. In our study, the TTT failed to determine the transition from zone 1 to zone 2 because we found no agreement between the power output of the different answers related to the TTT and the power output at VT1. The transition threshold between zone 2 and zone 3 could be established with the power output at the first stage where the answer was "no," which was most likely equivalent to the power output at VT2. This lack of consistency with previous results could be related to the statistical analysis. The previous studies used correlation analysis (e.g., Pearson correlation) [16, 28], which focuses on the association of changes in two outcomes that often measure quite different constructs [32]. Our study used agreement analysis, which measures the degree of concordance in the results between two or more assessments of the variable of interest and assumes that the variables measure the same construct [32], being agreement analysis better to assess if methods are interchangeable [33]. To our knowledge, there is only one study in well-trained cyclists that measure agreement between workload at VTs and TT. Rodriguez et al. found agreement between the power output at the first stage where the answer was "I do not know" and the power output associated with VT1 [34]. 
Also, they found an agreement between the power output at the first stage where the answer was "no" and the power output associated with VT2, results that partially disagree with our recent findings in OW-OB patients. Regarding the ATT, the absence of psychometric properties of the TTT may induce an under- or overestimation of the degree of talking comfort during physical exercise in physically inactive persons. The ATT allows the "difficulty to talk" to be identified with numeric magnitudes, giving a quantitative character to the TT [18, 35, 36]. Speech production during exercise is associated with changes in exercise-related physiological variables, as a consequence of the need to adapt the breathing pattern to one compatible with speech production. Accordingly, Rotstein et al. found a significant association between VO2, HR, and VE responses and the ratings of perceived speech production difficulty. Our results agree with these previous findings, showing that the intensity (power output) associated with VT1 is very likely equivalent to the last stage where talking was "very easy" (VAS 2–3), which allows the threshold delimiting zone 1 to be determined. The power output at the first stage where talking was "hard" (VAS 6–7) was very likely equivalent to the power output at VT2 (Table 4), which allows the threshold delimiting zone 3 to be determined. The power output where talking was "somewhat hard" (VAS 4–5) was most likely higher than the power output at VT1 and most likely lower than the power output at VT2, thus representing the intensity related to zone 2 (Fig. 1b). Taken together, these results show that the ATT could be used to determine exercise intensity and establish aerobic training zones for exercise prescription in OW-OB patients. A main limitation in the clinical context is the lack of a low-cost tool to prescribe physical exercise. To address this problem, the ATT is a simple tool that could be applied to large populations due to its low cost and easy application. However, further research is needed to determine the effect of endurance training controlled with the ATT in obese people. Some limitations exist in this study, such as the low number of subjects recruited. Several studies have similar limitations [16, 34, 37]. Possibly, the specific characteristics of these studies (e.g., participants with OW/OB) reduced participant adherence. Another important issue to resolve in future studies is replicating the study on a treadmill with the TTT and ATT in OW/OB patients with and without comorbidities. Interestingly, these tools could help healthcare professionals promote the large-scale practice of physical exercise in populations where technology to control the training program load is absent. Finally, low-cost tools would help to improve the capacity of healthcare professionals to control exercise intensity, improving the health benefits of exercise and physical activity.
Conclusion
The ATT is a low-cost and easy-to-apply tool to determine exercise intensity and to establish aerobic training zones for exercise prescription in OW-OB patients. The TTT could under- or overestimate the physical effort in patients diagnosed with OW-OB, specifically at training zone 1.
Availability of Data and Materials
Please contact the author for data requests.
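As a rough illustration of the agreement analysis described in the Statistical Analyses section, the sketch below (not the authors' code; the paired power outputs are invented) computes the within-subject adjusted Cohen's d and an approximate MBI-style probability that the true standardized difference lies inside the ±0.6 SWC equivalence region. The original analysis used the R "mbir" package; this is a simplified stand-in in Python/NumPy.

# A rough sketch (not the authors' code) of the agreement logic described in
# "Statistical Analyses": adjusted within-subject Cohen's d and the chance that
# the true standardized difference lies inside the +/-0.6 SWC equivalence region.
# Paired data below are invented; the original analysis used the R "mbir" package.
import numpy as np
from scipy import stats

def equivalence_summary(x, y, swc=0.6, ci=0.90):
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x - y
    n = diff.size
    sd_avg = (x.std(ddof=1) + y.std(ddof=1)) / 2          # average SD as standardizer
    d_adj = diff.mean() / sd_avg                           # Cohen's d (adjusted)
    se_d = diff.std(ddof=1) / (sd_avg * np.sqrt(n))        # crude SE of the standardized mean diff
    t = stats.t(df=n - 1, loc=d_adj, scale=se_d)           # approximate sampling distribution
    lo, hi = t.ppf((1 - ci) / 2), t.ppf(1 - (1 - ci) / 2)  # 90% confidence limits
    p_equiv = t.cdf(swc) - t.cdf(-swc)                     # chance of a trivially small difference
    p_higher, p_lower = 1 - t.cdf(swc), t.cdf(-swc)
    return d_adj, (lo, hi), (p_higher, p_equiv, p_lower)

# Invented paired powers (W): e.g. watts at VAS 2-3 vs watts at VT1 for each subject.
w_att = [105, 120, 98, 140, 110, 125, 132, 101, 118, 127]
w_vt1 = [108, 118, 100, 138, 113, 122, 135, 104, 117, 126]
d, cl, probs = equivalence_summary(w_att, w_vt1)
print(f"d_adj = {d:.2f}, 90% CL = ({cl[0]:.2f}, {cl[1]:.2f})")
print("chances higher/equivalent/lower: %.1f%%/%.1f%%/%.1f%%" % tuple(100 * p for p in probs))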
Abbreviations
%HRR: Percentages of absolute values of the heart rate reserve
ATT: Alternative talk test
CL: Confidence limit
FN: First no
LY: Last yes
METs: Metabolic equivalent
NCDs: Non-communicable diseases
OB: Obesity
OW: Overweight
PETCO2: End-tidal PCO2
PETO2: End-tidal PO2
pVO2max: Power output at VO2max
RER: Respiratory exchange ratio
RPE: Rating of perceived exertion
SWC: Smallest worthwhile change
TT: Talk test
TTT: Traditional talk test
VAS: Visual analog scale
VCO2: Expired carbon dioxide
VE: Pulmonary ventilation
VO2: Oxygen uptake
VO2max: Maximal oxygen uptake
VT1: Ventilatory threshold 1
WATT: Watts of the alternative talk test
WTTT/LY: Watts of the traditional talk test in the last stage where the answer was "yes"
WVT1: Watts at ventilatory threshold 1
WTTT/FN: Watts of the traditional talk test in the first stage where the answer was "no"
WVT2: Watts at ventilatory threshold 2
References
Blair SN, Kohl HW, Barlow CE, Paffenbarger RS, Gibbons LW, Macera CA. Changes in physical fitness and all-cause mortality. A prospective study of healthy and unhealthy men. JAMA. 1995;273(14):1093–8. https://doi.org/10.1001/jama.1995.03520380029031.
Lear SA, Hu W, Rangarajan S, Gasevic D, Leong D, Iqbal R, Casanova A, Swaminathan S, Anjana RM, Kumar R, Rosengren A, Wei L, Yang W, Chuangshi W, Huaxing L, Nair S, Diaz R, Swidon H, Gupta R, Mohammadifard N, Lopez-Jaramillo P, Oguz A, Zatonska K, Seron P, Avezum A, Poirier P, Teo K, Yusuf S. The effect of physical activity on mortality and cardiovascular disease in 130 000 people from 17 high-income, middle-income, and low-income countries: the PURE study. Lancet. 2017;390(10113):2643–54. https://doi.org/10.1016/S0140-6736(17)31634-3.
Ross R, Blair SN, Arena R, Church TS, Després J-P, Franklin BA, Haskell WL, Kaminsky LA, Levine BD, Lavie CJ, Myers J, Niebauer J, Sallis R, Sawada SS, Sui X, Wisløff U, American Heart Association Physical Activity Committee of the Council on Lifestyle and Cardiometabolic Health, Council on Clinical Cardiology, Council on Epidemiology and Prevention, Council on Cardiovascular and Stroke Nursing, Council on Functional Genomics and Translational Biology, Stroke Council. Importance of assessing cardiorespiratory fitness in clinical practice: a case for fitness as a clinical vital sign: a scientific statement from the American Heart Association. Circulation. 2016;134(24):e653–99. https://doi.org/10.1161/CIR.0000000000000461.
Nyberg ST, Batty GD, Pentti J, Virtanen M, Alfredsson L, Fransson EI, Goldberg M, Heikkilä K, Jokela M, Knutsson A, Koskenvuo M, Lallukka T, Leineweber C, Lindbohm JV, Madsen IEH, Magnusson Hanson LL, Nordin M, Oksanen T, Pietiläinen O, Rahkonen O, Rugulies R, Shipley MJ, Stenholm S, Suominen S, Theorell T, Vahtera J, Westerholm PJM, Westerlund H, Zins M, Hamer M, Singh-Manoux A, Bell JA, Ferrie JE, Kivimäki M. Obesity and loss of disease-free years owing to major non-communicable diseases: a multicohort study. Lancet Public Health. 2018;3(10):e490–7. https://doi.org/10.1016/S2468-2667(18)30139-7.
Burgos C, Henríquez-Olguín C, Ramírez-Campillo R, Mahecha Matsudo S, Cerda-Kohler H, Burgos C, et al. Exercise as a tool to reduce body weight. Rev Méd Chil. 2017;145(6):765–74. https://doi.org/10.4067/s0034-98872017000600765.
Bouchard C, Leon AS, Rao DC, Skinner JS, Wilmore JH, Gagnon J. The HERITAGE family study. Aims, design, and measurement protocol. Med Sci Sports Exerc. 1995;27(5):721–9.
Timmons JA, Knudsen S, Rankinen T, Koch LG, Sarzynski M, Jensen T, Keller P, Scheele C, Vollaard NBJ, Nielsen S, Åkerström T, MacDougald OA, Jansson E, Greenhaff PL, Tarnopolsky MA, van Loon LJC, Pedersen BK, Sundberg CJ, Wahlestedt C, Britton SL, Bouchard C. Using molecular classification to predict gains in maximal aerobic capacity following endurance exercise training in humans. J Appl Physiol. 2010;108(6):1487–96. https://doi.org/10.1152/japplphysiol.01295.2009. Montero D, Lundby C. Refuting the myth of non-response to exercise training: "non-responders" do respond to higher dose of training. J Physiol Lond. 2017;595(11):3377–87. https://doi.org/10.1113/JP273480. Scharhag-Rosenberger F, Meyer T, Gässler N, Faude O, Kindermann W. Exercise at given percentages of VO2max: heterogeneous metabolic responses between individuals. J Sci Med Sport. 2010;13(1):74–9. https://doi.org/10.1016/j.jsams.2008.12.626. Condello G, Reynolds E, Foster C, de Koning JJ, Casolino E, Knutson M, et al. A simplified approach for estimating the ventilatory and respiratory compensation thresholds. J Sports Sci Med. 2014;13:309–14. Weatherwax RM, Ramos JS, Harris NK, Kilding AE, Dalleck LC. Changes in metabolic syndrome severity following individualized versus standardized exercise prescription: a feasibility study. Int J Environ Res Public Health. 2018;15(11). https://doi.org/10.3390/ijerph15112594. Skinner JS, McLellan TM, McLellan TH. The transition from aerobic to anaerobic metabolism. Res Q Exerc Sport. 1980;51(1):234–48. https://doi.org/10.1080/02701367.1980.10609285. Pallarés JG, Morán-Navarro R, Ortega JF, Fernández-Elías VE, Mora-Rodriguez R. Validity and reliability of ventilatory and blood lactate thresholds in well-trained cyclists. PLoS ONE. 2016;11(9):e0163389. https://doi.org/10.1371/journal.pone.0163389. Zapata-Lamana R, Henríquez-Olguín C, Burgos C, Meneses-Valdés R, Cigarroa I, Soto C, Fernández-Elías VE, García-Merino S, Ramirez-Campillo R, García-Hermoso A, Cerda-Kohler H. Effects of polarized training on cardiometabolic risk factors in young overweight and obese women: a randomized-controlled trial. Front Physiol. 2018;9:1287. https://doi.org/10.3389/fphys.2018.01287. Jamnick NA, Botella J, Pyne DB, Bishop DJ. Manipulating graded exercise test variables affects the validity of the lactate threshold and [Formula: see text]. PLoS ONE. 2018;13(7):e0199794. https://doi.org/10.1371/journal.pone.0199794. Quinn TJ, Coons BA. The talk test and its relationship with the ventilatory and lactate thresholds. J Sports Sci. 2011;29(11):1175–82. https://doi.org/10.1080/02640414.2011.585165. Jeanes EM, Jeans EA, Foster C, Porcari JP, Gibson M, Doberstein S. Translation of exercise testing to exercise prescription using the talk test. J Strength Cond Res. 2011;25(3):590–6. https://doi.org/10.1519/JSC.0b013e318207ed53. Rotstein A, Meckel Y, Inbar O. Perceived speech difficulty during exercise and its relation to exercise intensity and physiological responses. Eur J Appl Physiol. 2004;92:431–6. Nuttall FQ. Body mass index. Nutr Today. 2015;50(3):117–28. https://doi.org/10.1097/NT.0000000000000092. Howley ET, Bassett DR, Welch HG. Criteria for maximal oxygen uptake: review and commentary. Med Sci Sports Exerc. 1995;27:1292–301. Batterham AM, Hopkins WG. Making meaningful inferences about magnitudes. Int J Sports Physiol Perform. 2006;1(1):50–7. https://doi.org/10.1123/ijspp.1.1.50. Meckel Y, Rotstein A, Inbar O. The effects of speech production on physiologic responses during submaximal exercise. 
This research received no external funding. Unidad de Fisiología Integrativa, Laboratorio de Ciencias del Ejercicio, Clínica MEDS, Santiago, Chile: Ignacio Orizola-Cáceres, Hugo Cerda-Kohler, Carlos Burgos-Jara, Roberto Meneses-Valdes, Rafael Gutierrez-Pino & Carlos Sepúlveda. Applied Sports Science Unit, High-Performance Center, National Institute of Sports, Santiago, Chile: Hugo Cerda-Kohler. IOC has carried out the conceptualization, methodology, recruitment of participants, and data collection, as well as the statistical study and writing of the original draft. 
HCK reviewed the work, assisting with the statistical analysis and with refining the methodological aspects. CBJ, RMV, and RGP carried out data collection and data curation. CS collaborated in project administration and methodology, in writing and guiding the editing of the manuscript, and in refining the methodological aspects. All authors contributed to proof-reading the manuscript and approved the final article. Correspondence to Carlos Sepúlveda. The Scientific Council of University Finis Terrae approved the research (Nº21/2017) (58076/14-11-2018). All the participants gave written consent. All the participants gave written consent for the publication of data. The authors, Ignacio Orizola-Cáceres, Hugo Cerda-Kohler, Carlos Burgos-Jara, Roberto Meneses-Valdes, Rafael Gutierrez-Pino, and Carlos Sepúlveda, declare that they have no competing interests. Orizola-Cáceres, I., Cerda-Kohler, H., Burgos-Jara, C. et al. Modified Talk Test: a Randomized Cross-over Trial Investigating the Comparative Utility of Two "Talk Tests" for Determining Aerobic Training Zones in Overweight and Obese Patients. Sports Med - Open 7, 23 (2021). https://doi.org/10.1186/s40798-021-00315-9. Aerobic training zones
CommonCrawl
High intensity exercise downregulates FTO mRNA expression during the early stages of recovery in young males and females Jessica Danaher1, Christos G. Stathis2, Robin A. Wilson3,4, Alba Moreno-Asso2,4, R. Mark Wellard5 & Matthew B. Cooke ORCID: orcid.org/0000-0002-4978-42944,6 Physical exercise and activity status may modify the effect of the fat mass- and obesity-associated (FTO) genotype on body weight and obesity risk. To understand the interaction between FTO's effect and physical activity, the present study investigated the effects of high and low intensity exercise on FTO mRNA and protein expression, and potential modifiers of exercise-induced changes in FTO in healthy-weight individuals. Twenty-eight untrained males and females (25.4 ± 1.1 years; 73.1 ± 2.0 kg; 178.8 ± 1.4 cm; 39.0 ± 1.2 mL·kg−1·min−1 VO2peak) were genotyped for the FTO rs9939609 (T > A) polymorphism and performed isocaloric (400 kcal) cycle ergometer exercise on two separate occasions at different intensities: 80% (High Intensity (HI)) and 40% (Low Intensity (LO)) VO2peak. Skeletal muscle biopsies (vastus lateralis) and blood samples were taken pre-exercise and following 10 and 90 mins passive recovery. FTO mRNA expression was significantly decreased after HI intensity exercise (p = 0.003). No differences in basal and post-exercise FTO protein expression were evident between FTO genotypes. Phosphorylated adenosine monophosphate-activated protein kinase (AMPK) and Akt substrate of 160 kDa (AS160) were significantly increased following HI intensity exercise (p < 0.05). Multivariate models of metabolomic data (orthogonal two partial least squares discriminant analysis (O2PLS-DA)) were unable to detect any significant metabolic differences between genotypes with either exercise trial (p > 0.05). However, skeletal muscle glucose accumulation at 10 mins following HI (p = 0.021) and LO (p = 0.033) intensity exercise was greater in AA genotypes compared to TT genotypes. Our novel data provide preliminary evidence regarding the effects of exercise on FTO expression in skeletal muscle. Specifically, high intensity exercise downregulates expression of FTO mRNA, suggesting that in addition to nutritional regulation, FTO could also be regulated by exercise. Trial registration ACTRN12612001230842. Registered 21 November 2012 – Prospectively registered, https://www.anzctr.org.au/ The fat mass- and obesity-associated (FTO) gene was initially considered an "obesity gene" when early human studies demonstrated significant associations between its genetic polymorphism and body mass index (BMI) [1, 2]. Subsequent studies using transgenic overexpression and gene knockout models have sought to understand these associations by determining FTO's biological role within various tissues [3,4,5]. While it is evident that FTO targets neural tissue and its energy homeostatic functions, whole body loss-of-function mutations may mask other tissue-specific activities attributed to FTO. Studies have shown that homozygous knockout of FTO leads to loss of lean mass [4, 5], which does not occur in targeted hypothalamic knockout models [5], suggesting FTO can promote its biological effects through pathways and tissues other than neural. Skeletal muscle is the largest energy producing and consuming organ within the body and influences energy expenditure and whole-body metabolism. 
Recent evidence suggests that FTO may act as an 'energy sensor' within skeletal muscle regulating important cellular metabolic pathways for substrate utilisation, storage and growth [6, 7]. This role could be via 5′ adenosine monophosphate-activated protein kinase (AMPK) regulation, as AMPK is recognized as a key energy sensor that can regulate energy status [8]. Activation of AMPK increases glucose uptake in skeletal muscle [9], enhances lipid oxidation [10] and reduces fatty acid incorporation into triacylglycerol [11]. Conversely, ablation of AMPK reduces fatty acid oxidation and enhances skeletal muscle lipid accumulation, leading to elevated triglyceride content [12]. Recent work by Wu and colleagues [7] suggests a relationship between FTO and AMPK and molecular regulation of lipid metabolism and metabolic diseases. Specifically, AMPK's regulation of lipid accumulation in skeletal muscle could be via FTO-dependent demethylation of mRNA, with inhibition of AMPK upregulating FTO expression and enhancing lipid accumulation, while activation of AMPK downregulates FTO expression and reduces lipid accumulation [7]. Exercise is perhaps the most powerful physiological activator of AMPK [13]. When activated, AMPK stimulates energy generating processes such as glucose uptake and fatty acid oxidation and decreases energy consuming processes such as protein and lipid synthesis. In addition, AMPK activation in peripheral tissues seems to counteract many of the cellular abnormalities observed in animal models of metabolic syndrome including insulin resistance, inflammation and ectopic lipid deposition. These observations could explain in part how higher physical activity levels attenuate the influence of FTO variation on obesity risk [14], and why lifestyle interventions demonstrate greater efficacy in promoting weight loss in FTO risk A-allele carriers compared to those carrying the TT genotype [15]. It is possible that exercise-induced activation of AMPK downregulates FTO expression and/or leads to reduced FTO-dependent demethylation of mRNA which subsequently enhances lipid oxidation and reduces fat deposition. Over the long term this could result in weight loss. However, no study to date has looked at the effect of exercise on skeletal muscle FTO mRNA and protein levels and whether such effects are mediated via changes in AMPK. Additionally, using a metabolomics approach to analyse relative concentrations of multiple metabolites can help characterise core metabolic changes (i.e. metabolite "signature") that may otherwise be missed when analysing single or multiple metabolite changes. Importantly, this technology can assist in identifying potential variations in metabolic pathways between genotypes which could assist in elucidating the mechanisms by which exercise regulates FTO. Thus, the present study investigated the effect of exercise-induced metabolic perturbations on skeletal muscle FTO mRNA and protein expression, and determined whether these changes are genotype variant specific. In addition, this study sought to identify potential metabolic modifiers of FTO expression. It was hypothesized that higher exercise intensity would cause larger metabolic perturbations and AMPK activation, leading to greater downregulation of skeletal muscle FTO mRNA expression in variants encompassing the risk A-allele (AA and AT genotypes) compared to individuals homozygous for the non-risk allele (TT genotypes). 
A total of 28, apparently healthy, sedentary males and females (25.4 ± 1.1 years; 73.1 ± 2.0 kg; 178.8 ± 1.4 cm; 39.0 ± 1.2 mL·kg−1·min−1 peak oxygen uptake (VO2peak)) volunteered to take part in this study. Participants were excluded from participating if they had diagnosed diabetes (fasting blood glucose greater than 7.0 mmol·L−1), were performing any regular fitness training (> 30 mins, 3 × per week) for 6 months prior, were taking contraindicated prescription medications that influence metabolism (including thyroid, hyperlipidemic, hypoglycemic, or antihypertensive agents), or were pregnant. Participants believed to meet the eligibility criteria were asked to provide written informed consent based on documents previously approved by the Victoria University Human Research Ethics Committee (HRETH 12/197) and all procedures were performed in accordance with the ethical standards set out in the 1964 Declaration of Helsinki. Preliminary testing Prior to the experimental exercise trials, cells from inside each participant's cheek were collected using a standard buccal swab, with QuickExtract solution (Illumina) used to extract DNA from these swabs. Genotyping of the rs9939609 (T > A) polymorphism of the FTO gene was performed using a Taqman allelic discrimination assay (Life Technologies, VIC, Australia) and a CFX96 Real-Time thermal cycler (Bio-Rad Laboratories, VIC, Australia) as per manufacturer's instructions. For quality control purposes, a positive and negative control was used. The context sequence for the SNP tested was [VIC/FAM] GGTTCCTTGCGACTGCTGTGAATTT [A/T]GTGATGCACTTGGATAGTCTCTGTT. The overall genotyping efficiency was 100%. Body composition assessment Dual Energy X-ray Absorptiometry (DEXA; Hologic Discovery W, MA, USA) was used to assess body composition. Calibrations were performed the morning of DEXA analysis, and participants were in a standardised supine position throughout the duration of the scan. A whole-body scan was used (~ 1.5 mSv) to identify total body mass, fat mass, lean muscle mass and bone mineral content. Height, hip and waist circumference, and blood pressure were measured using a stadiometer, tape measure and sphygmomanometer (Omron HEM7322; Omron Healthcare, VIC, Australia), respectively. Graded exercise test To ascertain the fitness level of participants, VO2peak was measured approximately one week prior to the first experimental exercise trial. A standard graded exercise protocol on an Excalibur Lode Cycle ergometer (Netherlands) was performed: Males, 3 × 3 min sub-maximal workloads at 50, 100 and 150 W followed by successive 1-min workload increments of 25 W until volitional exhaustion; Females, 3 × 3 min sub-maximal workloads at 25, 50 and 75 W followed by successive 1-min workload increments of 25 W until volitional exhaustion. Participants were encouraged to maintain a pedal frequency between 80 and 100 revolutions per minute (rpm) and the test was terminated when this could not be maintained for a period of 5 s. Expired air was directed by a Hans Rudolph valve via a ventilometer into a mixing chamber and analysed for oxygen and carbon dioxide content (Moxus; AEI Technologies, PA, USA). Prior to each VO2peak test the gas analyser was calibrated using commercially prepared gas mixtures (BOC Gases, Australia). Data obtained from the graded exercise test were used to calculate the workload each participant required for the subsequent experimental exercise trials at 80 and 40% of their VO2peak. 
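The article does not detail how the 80% and 40% VO2peak workloads were derived from the graded test data; a minimal sketch of one common approach, assuming an approximately linear VO2–power relationship across the recorded sub-maximal stages (the function name and all numbers below are illustrative, not study data):

```python
import numpy as np

def target_workload(stage_watts, stage_vo2, vo2peak, fraction):
    """Estimate the cycling workload (W) expected to elicit a given fraction of
    VO2peak, assuming VO2 rises linearly with power across the sub-maximal stages."""
    slope, intercept = np.polyfit(stage_vo2, stage_watts, 1)  # W per (L/min), offset in W
    return slope * (fraction * vo2peak) + intercept

# Illustrative male protocol: sub-maximal stages at 50, 100 and 150 W
stage_watts = np.array([50.0, 100.0, 150.0])
stage_vo2 = np.array([1.1, 1.7, 2.3])   # measured VO2 (L/min) at each stage
vo2peak = 3.0                            # L/min from the graded test

print(round(target_workload(stage_watts, stage_vo2, vo2peak, 0.80)))  # HI trial, ~158 W
print(round(target_workload(stage_watts, stage_vo2, vo2peak, 0.40)))  # LO trial, ~58 W
```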
Experimental exercise trial protocol Participants were asked to complete two isocaloric acute exercise trials in a non-randomised order, separated by at least 1 week for males and 1 month for females: i) High Intensity (HI), 80% VO2peak (AA, 127.6 ± 13.1 W; AT, 126.0 ± 15.0 W; TT, 113.9 ± 15.1 W), and ii) Low Intensity (LO), 40% VO2peak (AA, 63.8 ± 6.6 W; AT, 62.9 ± 7.5 W; TT, 57.1 ± 7.6 W). Similar to the VO2peak test, these protocols were performed on an Excalibur Lode Cycle ergometer (Netherlands) and participants were encouraged to maintain a pedal frequency between 80 and 100 rpm. Exercise was stopped once each participant had expended 400 kcal as estimated via indirect calorimetry (Moxus; AEI Technologies, PA, USA). Substrate utilisation was calculated using standard stoichiometric equations [16], with the assumption that protein oxidation was minor and constant. Energy expenditure was calculated based on the following formula, with respiratory values in L·min−1: Energy Expenditure (kJ·min−1) = 16.318 × VO2 − 4.602 × VCO2. Respiratory Exchange Ratio (RER) data were used to examine the response to metabolic demand by measuring the area under the curve (AUC) for RER transition from the beginning to the end of exercise. Borg Scale Ratings of Perceived Exertion (RPE 6–20 scale) were recorded every 10 mins throughout the exercise bouts, and immediately upon cessation of exercise, to determine perceived physical demand between allelic variants of FTO. Participants were asked to refrain from consuming caffeine and alcohol, and from undertaking strenuous exercise 24 h prior to attending the experimental exercise trials. Participants recorded their dietary intake for 24 h before the first experimental exercise trial and were asked to replicate meals the day prior to the subsequent trial. Experimental exercise trials were conducted in the morning, approximately 10–12 h after the last meal to produce basal state conditions. Exercise was preceded by a rest period and followed by 90 mins of passive recovery in a supine position. Plasma analysis Each participant had an intravenous cannula inserted into a vein in the antecubital space to obtain blood samples throughout each experimental exercise protocol, and this was kept patent with isotonic saline (0.9% NaCl, Pfizer). Blood was sampled pre-exercise (0 mins), and at 10 and 90 mins throughout the post-exercise passive recovery period. Samples were immediately placed into lithium heparin (BD Vacutainer) tubes and centrifuged at 12,000 rpm for 2 min. Plasma samples were decanted and analysed for glucose concentration (YSI 2300 STAT; Yellow Springs Instruments, OH, USA). Plasma albumin was measured using a commercially available Bromocresol Green Albumin Assay Kit (Sigma Aldrich, Australia) and used as an indirect marker and estimation of plasma volume changes during exercise [17]. Skeletal muscle analysis Skeletal muscle biopsies Skeletal muscle biopsies were collected from the vastus lateralis tissue under local anaesthesia pre-exercise (0 mins), and at 10 and 90 mins throughout the post-exercise passive recovery period. A fresh incision was made for each biopsy, which was taken distal to proximal (at least 1 cm apart) in the middle of the muscle belly, approximately 5–8 cm above the left kneecap. Muscle sampling was performed using a Bergström needle with suction [18]. 
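Returning to the isocaloric design described above: the energy-expenditure equation can be evaluated on minute-averaged gas data to decide when the 400 kcal target has been reached, and the same arrays yield the RER AUC measure. A minimal sketch with illustrative steady-state values (not study data):

```python
import numpy as np

KJ_PER_KCAL = 4.184

def minutes_to_expend(vo2, vco2, target_kcal=400.0):
    """Return the minute at which cumulative energy expenditure first reaches
    target_kcal, using EE (kJ/min) = 16.318*VO2 - 4.602*VCO2 with minute-averaged
    VO2/VCO2 in L/min. Caller should check the target was actually reached."""
    ee_kj = 16.318 * np.asarray(vo2) - 4.602 * np.asarray(vco2)
    cum_kcal = np.cumsum(ee_kj) / KJ_PER_KCAL
    idx = np.searchsorted(cum_kcal, target_kcal)  # first minute at or above target
    return idx + 1  # 1-based minute index

# Illustrative HI trial at steady state: ~2.4 L/min VO2, RER ~0.95
vo2 = np.full(70, 2.40)
vco2 = np.full(70, 2.28)
print(minutes_to_expend(vo2, vco2))  # ~59 min to reach 400 kcal

# RER transition summarised as area under the curve (trapezoidal rule, dx = 1 min),
# mirroring the RER AUC measure described above
rer = vco2 / vo2
rer_auc = np.sum((rer[1:] + rer[:-1]) / 2.0)
```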
Muscle samples were immediately snap-frozen in liquid nitrogen and stored at − 80 °C until analysis. Metabolomics analysis – GC-MS Muscle metabolite extraction and preparation Approximately 20 mg wet weight of each skeletal muscle sample was diluted with 250 μl of methanol (MeOH) [spiked with 4% 13C6-Sorbitol as an extraction internal standard (ISTD)]. Supernatant from completely homogenized samples was separated and transferred into 6 mm diameter conical bottom glass vial inserts (Phenomenex, NSW, Australia). Pooled biological quality control (PBQC) samples were created using 10 μl of supernatant from each extracted sample. Samples were then dried in vacuo (RVC 2–33, John Morris, Australia) at a temperature of − 55 °C and pressure of 3 mbar for 3 h, prior to being placed into glass vials for Gas Chromatography-Mass Spectroscopy (GC-MS) analysis. Additional glass vials containing methyloxime (MeOX) (10 μl per sample) and trimethylsilane (TMS) (20 μl per sample) were prepared for derivatisation. Instrumentation and data handling The GC-MS system comprised a 7000B Agilent GC triple-quadrupole and a 5975C Agilent triple-axis MS detector (Agilent Technologies, CA, USA). A MPS2XL GC-MS autosampler (Gerstal Technologies, Mülheim, Germany) was set to select samples for analysis in a randomised order. MeOX and TMS derivatised samples were injected onto the GC column using a hot needle technique. The injection was operated in splitless (1 μl sample) and split (0.20 μl sample) modes to avoid overloaded chromatogram peaks. The instrumentation conditions and data handling procedures (including mass spectra and peak verification processes) were as previously described [19]. Overloaded peaks (lactate, glucose, mannose, sucrose, fructose, urea and cholesterol) were analysed separately from the split mode. Muscle metabolite concentrations (expressed as arbitrary units (AU)) for each metabolite detected in each sample were normalised to the ISTD (13C6-Sorbitol) and to muscle sample wet weight. Skeletal muscle mRNA expression Total RNA was isolated from ~ 20 mg skeletal muscle using TRIzol reagent. Total RNA concentration and purity of each sample was determined using a Nanodrop Spectrophotometer (Thermo Scientific, VIC, Australia). RNA (1 μg) was reverse transcribed to cDNA using an iScript cDNA Synthesis Kit (Bio-Rad Laboratories, VIC, Australia), as per manufacturer's instructions. Relative mRNA expression was determined by QuantStudio 7 Flex (Applied Biosystems, CA, USA) using 20X PrimePCR Assays and SsoAdvanced Universal SYBR Green Supermix (Bio-Rad Laboratories, VIC, Australia). mRNA sequences of the oligonucleotide primers used are listed in Supplementary Table S-1. β-Actin (ACTB) was used as an internal control standard for each reaction due to its previous verification as a constitutively expressed housekeeping gene in human skeletal muscle following acute exercise [20]. The relative amount of the target mRNA was calculated using the 2−ΔΔCT fold-change method [21]. Skeletal muscle protein expression Twenty-five cryosections of skeletal muscle (30 μm sections) were lysed in homogenisation buffer (0.125 M Tris HCl, 10% Glycerol, 4% SDS, 10 mM EGTA, 0.1 M DTT [pH 8.0]). Total protein concentration of muscle lysate was measured using Pierce BCA protein estimation (Abcam, VIC, Australia) and RED 660 Protein Assay with SDS neutralizer (G Bioscience, MO, USA), as per manufacturer's instructions. 
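The 2−ΔΔCT calculation referred to above reduces to a few lines; a minimal sketch, assuming raw CT values for the target gene and the ACTB housekeeping gene at each time point (all numbers are hypothetical):

```python
def fold_change_ddct(ct_target, ct_actb, ct_target_pre, ct_actb_pre):
    """Relative expression by the 2^-ddCT method: the target gene is first normalised
    to the ACTB housekeeping gene (dCT), then expressed relative to the pre-exercise
    (calibrator) sample (ddCT)."""
    d_ct = ct_target - ct_actb              # normalise to housekeeping gene
    d_ct_pre = ct_target_pre - ct_actb_pre  # calibrator: pre-exercise sample
    return 2.0 ** -(d_ct - d_ct_pre)

# Hypothetical CT values for FTO at 10 min post-exercise versus pre-exercise
print(fold_change_ddct(ct_target=26.8, ct_actb=18.0,
                       ct_target_pre=26.2, ct_actb_pre=18.1))  # ~0.62-fold
```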
Protein was resolved on 7.5% or 12% Mini-PROTEAN TGX Stain-Free Gels (Bio-Rad Laboratories, VIC, Australia) and transferred to PVDF membrane. Membranes were blocked with skim milk in Tris-Buffered Saline-Tween (TBST) and incubated overnight with the following primary antibodies: FTO (GeneTex #GTX63821), pan Actin (NeoMakers #MS-1295-P0), Phospho Ser588 AS160 (Akt substrate of 160 kDa) (Cell Signalling Technology (CST) #8730), total AS160 (CST #2447), Phospho AMPKα (CST #2535) and total AMPKα (CST #2603). After incubation, membranes were washed and incubated for 1 h at 4 °C with horseradish peroxidase-linked secondary antibody (CST #7074). Proteins were detected via chemiluminescence using Clarity Western ECL Substrate within a VersaDoc Imager (Bio-Rad Laboratories, VIC, Australia). Densitometry was performed using Image Lab Software (Bio-Rad Laboratories, VIC, Australia) with the total proteins in each lane of the stain free PVDF membrane normalised to internal controls run on each gel. FTO content was expressed relative to Actin, phosphorylated AS160 content was expressed relative to total AS160, and phosphorylated AMPK was reported relative to total AMPK. Data analysis and statistical methods Sample size calculation The estimated sample size for the main outcome measures of gene expression was based on an assumed correlation of 0.7 between the pre and post exercise outcome measures, and an effect size of Cohen's d = 0.62 for fold change in skeletal muscle mRNA expression of metabolic genes following acute high and/or low intensity exercise [22, 23]. In order to achieve 80% power at a 5% significance level, a total sample size of 28 was estimated, taking into account a predicted ~ 20% dropout rate. An orthogonal two partial least squares discriminant analysis (O2PLS-DA) multivariate model was used as an analogous extension of the common PLS-DA model. This multivariate analysis model was selected due to its previously shown suitability in combining 'omics' data [24]. Model quality was reported for O2PLS-DA using R2X(cum) and Q2, which represent the measure of fit (i.e. the explained variation in metabolites) and the goodness of prediction (i.e. the variation in genotype that can be predicted by the model), respectively, as estimated by cross-validation (SIMCA statistical modelling, version 14, MKS, Sweden). The maximum possible Q2 value is 1.0 as it is a fraction of the total variability; therefore Q2 ≥ 0.7 can be considered a good predictor and Q2 < 0.5 insignificant. Likewise, the maximum possible R2X(cum) value is 1.0, with this representing a perfectly fitting model, whilst a R2X(cum) value of 0.0 would indicate no model fit at all. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was used to determine the overall accuracy and separation performance of the genotypes in each O2PLS-DA model. MetaboAnalyst 3.0 [25] was used to generate an additional PLS-DA on the whole set of metabolites (variables) at each time point, with normalisation via a generalised log transformation applied to the data matrix to improve symmetry. This multivariate pattern analysis model determined the metabolites with variable importance for projection (VIP) values ≥1.0. The VIP value was used to reflect variable importance, and the metabolite subset with values ≥1.0 is herein referred to as 'VIP metabolites'. 
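MetaboAnalyst computes VIP scores internally; for readers who want to reproduce the idea outside that tool, the usual VIP formula can be implemented on top of scikit-learn's PLSRegression. A sketch with placeholder data (this is not the authors' code; it only assumes NumPy and scikit-learn are available):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in projection for a fitted PLSRegression model."""
    t = pls.x_scores_    # (n_samples, n_components)
    w = pls.x_weights_   # (n_features, n_components)
    q = pls.y_loadings_  # (n_targets,  n_components)
    n_features = w.shape[0]
    ss_comp = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)  # Y-variance per component
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(n_features * (w_norm ** 2 @ ss_comp) / ss_comp.sum())

# Placeholder data: 28 samples x 48 metabolites, dummy-coded genotype for PLS-DA
rng = np.random.default_rng(0)
X = rng.normal(size=(28, 48))
Y = np.eye(3)[rng.integers(0, 3, size=28)]   # AA / AT / TT membership (illustrative)

pls = PLSRegression(n_components=2).fit(X, Y)
vip = vip_scores(pls)
vip_metabolites = np.where(vip >= 1.0)[0]    # indices of 'VIP metabolites'
```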
Univariate analysis VIP metabolites were selected for further analysis (to reduce variability) and analysed using SPSS software (IBM SPSS Statistics for Windows, Version 20, NY, USA). Unpaired two-way ANOVAs with repeated measures were used to calculate individual significance for each genotype with time as the within-group factor and genotype as the between-group factor. Where univariate analysis revealed any significant main effects for time, subsequent pairwise comparisons were performed to detect differences over time. Where a genotype by time interaction was detected, multiple comparisons with Tukey's post hoc tests were completed to identify differences. One-way ANOVAs were performed for participant characteristic and indirect calorimetry (substrate utilisation, energy expenditure and RER) data, with unpaired t-tests completed when interactions between factors were found. Linear regression and covariance analysis (ANCOVA) were used to determine the effect of age and sex (both known to influence associations between FTO rs9939609 and obesity-related traits) on allelic representation of dependent variables. Regression analysis was used to observe relationships between skeletal muscle mRNA expression of metabolic genes and muscle metabolites showing genotype by time interactions. Data are expressed as mean ± SEM unless otherwise stated. The level of significance was set at p < 0.05. Participant characteristics Participant characteristics for each FTO rs9939609 genotype were similar with no significant differences in total body mass, height, BMI, hip and waist circumference, fat mass, lean muscle mass, bone mineral content, blood pressure or VO2peak noted (p > 0.05) (Table 1). A genotype effect was detected for age (p = 0.038), with AT genotypes significantly older than TT genotypes (p = 0.019). A trend towards significance for a genotype effect was detected for fasting plasma glucose concentrations (p = 0.057). Table 1 Participant characteristics when separated by FTO genotype of the rs9939609 polymorphism Participants' physiological responses to the HI and LO exercise trials Workloads (W) performed during the HI and LO intensity exercise protocols were similar between genotypes (p > 0.05) (Table 2). Both HI and LO intensity exercise trials elicited an increase in heart rate, with higher elevations during the HI trial compared to the LO trial (data not shown). Heart rate was similar between genotypes before, during and following HI and LO intensity exercise within each trial (data not shown). Additionally, RPE (considered on a numerical scale and presented as median (interquartile range)) was similar between genotypes at the completion of each exercise protocol (HI: AA, 16 (14–17), AT 16 (15–18), TT, 17 (15–19) (representing "Hard – Very Hard") (p = 0.254); LO: AA, 12 (11–13), AT 11 (11–12), TT, 13 (11–14) (representing "Fairly Light – Somewhat Hard") (p = 0.456)). There were no significant differences between genotypes in time to expend 400 kcal during the HI (p = 0.511) and LO intensity (p = 0.472) exercise protocol, or for average RER (HI, p = 0.323; LO, p = 0.603), glucose utilisation (g·kgLBM−1·T.I−1) (HI, p = 0.740; LO, p = 0.310) and fat utilisation (g·kgLBM−1·T.I−1) (HI, p = 0.709; LO, p = 0.498) measured during each exercise protocol (Table 2). The transition in substrate utilisation during exercise (RER AUC) was not significantly different between genotypes for each exercise protocol (HI, p = 0.206; LO, p = 0.410) (Table 2). 
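Referring back to the 'Univariate analysis' procedure described at the start of this subsection, the genotype-by-time models correspond to a two-way mixed-design ANOVA (time within-subject, genotype between-subject). A minimal sketch, assuming the pingouin package is available and using entirely fabricated long-format data (column names are illustrative):

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumption: pingouin is installed for mixed-design ANOVA

rng = np.random.default_rng(1)
rows = []
for subject, genotype in zip(range(1, 11), ["AA"] * 5 + ["TT"] * 5):
    for time in ["pre", "10min", "90min"]:
        rows.append({"subject": subject, "genotype": genotype, "time": time,
                     "glucose": rng.normal(1.5 if time == "10min" else 1.0, 0.1)})
df = pd.DataFrame(rows)

# Two-way mixed ANOVA: time as within-subject factor, genotype as between-subject factor
aov = pg.mixed_anova(data=df, dv="glucose", within="time",
                     subject="subject", between="genotype")
print(aov[["Source", "F", "p-unc"]])
```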
The absence of an effect of age or sex on respiratory gas exchange measurements between FTO genotypes was confirmed by ANCOVA (p > 0.05). Table 2 Respiratory gas exchange measurements, and calculated fat and glucose utilisation, between FTO rs9939609 genotypes after isocaloric HI (80% VO2peak) and LO (40% VO2peak) intensity exercise Metabolite analysis Skeletal muscle metabolites: multivariate analysis Analysis of the chromatogram resulted in the detection of 48 identifiable metabolites (see Supplementary Table S-2 for metabolite identification details). Unpaired multivariate data models, O2PLS-DA with Pareto data scaling, were used to determine participant variation during the HI and LO intensity exercise trials, regardless of time (Fig. 1a & c). The O2PLS-DA modelling method demonstrated a similar metabolic signature between genotypes in the HI intensity exercise trial (p = 0.999), with very good validation metrics for data goodness of fit, R2X(cum) = 0.914, and very poor validation metrics for goodness of prediction, Q2 = 0.084 (Fig. 1a). Orthogonal variation in metabolites (X) accounted for 44% of the variation, and orthogonal variation between genotypes (Y) accounted for 32% of the variation. The O2PLS-DA modelling method also demonstrated a similar metabolic signature between genotypes in the LO intensity exercise trial (p = 0.982), with moderate validation metrics for data goodness of fit, R2X(cum) = 0.511, and very poor validation metrics for goodness of prediction, Q2 = 0.017 (Fig. 1c). Orthogonal variation in metabolites (X) accounted for 27% of the variation, and orthogonal variation between genotypes (Y) accounted for 17% of the variation. Fig. 1 O2PLS-DA models of genotype variation for the (a) HI and (c) LO intensity exercise trials. (a & c) O2PLS-DA models: green represents those homozygous for the risk A-allele (AA genotypes), blue represents heterozygous participants (AT genotypes), and red represents those who had not inherited the risk A-allele (TT genotypes). The ellipse represents a 95% confidence interval. Component 1 describes the orthogonal metabolite variation (within-group variation) and Component 2 shows the primary variation between genotypes. Components are scaled proportionally to R2X (A, R2X[1] = 0.099, R2X[2] = 0.052; B, R2X[1] = 0.346, R2X[2] = 0.092). (b & d) Loading plots of the O2PLS-DA models: genotype shown in blue and metabolites shown in green. The metabolites associated with each genotype can be extracted from the loadings scatter plot (Fig. 1b & d). Distribution of metabolites in the direction of each genotype signifies their contribution to model variation due to the respective genotype, whilst metabolites with the least importance are clustered in the centre. The metabolites likely to contribute most to each genotype in the HI intensity exercise trial model were, AA: alanine, glutamate and glycine; AT: proline, adenosine monophosphate (AMP), urea and myoinositol; TT: glycerol-3-phosphate (glycerol-3-P), glycerate-3-phosphate (glycerate-3-P) and pyrophosphate. Data from the LO intensity exercise trial did not provide sufficient power to differentiate metabolite variation in relation to genotype using loading plot observations, or to generate a secondary predictive component. AUC of the ROC curve showed a poorer fit in the LO intensity exercise trial compared to the HI intensity exercise trial, with the AA genotype better described by the model than the TT genotype (see Supplementary Figure S-1). 
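SIMCA derives Q2 from its own cross-validation scheme; as a rough (non-equivalent) illustration of the underlying idea, the cross-validated fraction of Y-variance explained can be computed for an ordinary PLS model with scikit-learn. With random placeholder data, Q2 comes out near zero or negative, i.e. a non-predictive model, which is the pattern the values reported above describe:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(28, 48))               # placeholder: 28 samples x 48 metabolites
Y = np.eye(3)[rng.integers(0, 3, size=28)]  # dummy-coded genotype membership

pls = PLSRegression(n_components=2)

# Q2 = 1 - PRESS/TSS, with PRESS taken from leave-one-out predictions of Y
Y_cv = cross_val_predict(pls, X, Y, cv=LeaveOneOut())
press = np.sum((Y - Y_cv) ** 2)
tss = np.sum((Y - Y.mean(axis=0)) ** 2)
q2 = 1.0 - press / tss
print(round(q2, 3))
```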
Correlation coefficient scores based on the weighted sum of the PLS regression were used to identify the top 10 metabolites with the greatest influence on the components at each time point, regardless of exercise intensity (Supplementary Figure S-2). PLS-DA cross validation determined 27 metabolites in total with VIP scores ≥1. These VIP metabolites were used for subsequent univariate analysis to determine metabolic changes over time and between genotypes for the HI and LO intensity exercise trials. Skeletal muscle metabolites: Univariate analysis HI intensity exercise A significant main effect for time was observed for skeletal muscle alanine, erythronate, fumarate, gamma hydroxybutyric acid (GHB), glucose, glutamate, glycine, glycolate, lactate, leucine, malate, maltose, mannose, monopalmitoglycerol, nicotinamide, phenylalanine, proline, tyrosine and uric acid following HI intensity exercise (p < 0.05) (see Supplementary Table S-3). Time as a main effect approached significance for muscle ß-alanine (p = 0.052) and glycerate-3-P (p = 0.056) following HI intensity exercise. At 10 mins post HI intensity exercise, muscle alanine, erythronate, fumarate, GHB, glucose, glycolate, lactate, malate, maltose, mannose, monopalmitoglycerol and tyrosine were significantly elevated compared to pre-exercise (p < 0.05), whereas muscle glutamate and proline were significantly decreased (p < 0.05). A trend for lower muscle nicotinamide was detected at 10 mins post HI intensity exercise compared to pre-exercise (p = 0.065). At 90 mins post HI intensity exercise, muscle erythronate and maltose were significantly elevated compared to pre-exercise (p < 0.05), with a trend towards significance for higher levels for glucose (p = 0.066), glycolate (p = 0.089) and uric acid (p = 0.060). Conversely, muscle fumarate, glutamate, glycine, leucine, phenylalanine and proline were significantly lower at 90 mins post HI intensity exercise compared to pre-exercise (p < 0.05). No main effect for genotype was identified for any of the VIP muscle metabolites (p > 0.05). A significant genotype by time interaction was observed for muscle glucose (p = 0.036), with subsequent analysis revealing a significantly higher level of muscle glucose in AA genotypes compared to TT genotypes at 10 mins following HI intensity exercise (p = 0.021). LO intensity exercise A significant main effect for time was observed for skeletal muscle alanine, erythronate, fumarate, glucose, glutamate, glycolate, glycerate-3-P, lactate, malate, maltose, monopalmitoglycerol, pyrophosphate and tyrosine following LO intensity exercise (p < 0.05) (see Supplementary Table S-3). Time as a main effect approached significance for muscle mannose (p = 0.068), uric acid (p = 0.074) and phenylalanine (p = 0.086). At 10 mins post LO intensity exercise, muscle alanine, erythronate, fumarate, glucose, glycolate, lactate, malate, monopalmitoglycerol and tyrosine were significantly elevated compared to pre-exercise (p < 0.05), with a trend for elevated muscle maltose (p = 0.060). At 90 mins post LO intensity exercise, muscle erythronate, glutamate, glycerate-3-P, glycolate, lactate, monopalmitoglycerol and pyrophosphate were significantly elevated compared to pre-exercise (p < 0.05), while only a trend towards significance for elevated fumarate (p = 0.064), maltose (p = 0.068) and glucose (p = 0.074) were observed compared to pre-exercise. No main effect for genotype was identified for any of the VIP muscle metabolites (p > 0.05). 
Similar to the HI intensity exercise trial, a genotype by time interaction was observed for muscle glucose (p = 0.035), with subsequent analysis revealing a significantly higher level of muscle glucose in AA genotypes compared to AT (p = 0.028) and TT (p = 0.033) genotypes at 10 mins post LO intensity exercise. The absence of an effect of age or sex on muscle metabolite responses between FTO genotypes for both the HI and LO exercise trial was confirmed by ANCOVA (p > 0.05). Plasma metabolites: Univariate analysis Exercise-induced changes in plasma albumin concentrations were similar between genotypes at all observed time points during both exercise trials (data not shown) (p > 0.05). A significant main effect for time for plasma glucose was observed in the HI intensity exercise trial (p < 0.001). Subsequent pairwise comparisons revealed significantly higher plasma glucose at 10 mins (p = 0.001) and 90 mins (p = 0.042) post HI intensity exercise compared to pre-exercise (Fig. 2a). No main effect for time for plasma glucose was detected in the LO intensity exercise trial (p = 0.533) (Fig. 2b). No genotype main effect (HI, p = 0.656; LO, p = 0.196), or genotype by time interaction (HI, p = 0.681; LO, p = 0.932) was identified for either exercise trial. Fig. 2 Plasma glucose concentrations between genotypes of the FTO rs9939609 polymorphism, sampled prior to and following (during passive recovery) isocalorically matched HI (80% VO2peak) and LO (40% VO2peak) intensity exercise. Values expressed as mean ± SEM. Skeletal muscle mRNA expression analysis A significant main effect for time was observed for FTO (p = 0.002), AMPK (p = 0.009) and mTOR (mammalian target of rapamycin) (p = 0.001) mRNA expression following HI intensity exercise at 80% VO2peak (Fig. 3). Time as a main effect approached significance for GLUT4 (glucose transporter type 4) mRNA expression (p = 0.054). Subsequent pairwise comparisons revealed a significant decrease in FTO (p < 0.001) and mTOR (p = 0.001) mRNA expression from pre-exercise to 10 mins post-exercise, and in mTOR (p = 0.002) from pre-exercise to 90 mins post-exercise. A significant increase in AMPK mRNA expression was observed from pre-exercise to 90 mins post-exercise (p = 0.009). Fig. 3 mRNA expression in human vastus lateralis skeletal muscle. Data are expressed as mean ± SEM of mRNA expression fold change over pre-exercise (set arbitrarily to 1) when normalised to β-Actin at 10 and 90 mins following HI (80% VO2peak) and LO (40% VO2peak) intensity exercise for FTO, AMPK, GLUT4 and mTOR. Significant changes over time (from pre-exercise) represented by b = p < 0.01, and c = p < 0.001. A weak trend for a genotype by time interaction was observed for FTO (p = 0.095). No genotype by time interactions were identified for the mRNA expression of AMPK (p = 0.304), GLUT4 (p = 0.366) or mTOR (p = 0.377). No genotype main effects were identified for FTO (p = 0.894), AMPK (p = 0.606), GLUT4 (p = 0.310) or mTOR (p = 0.611) mRNA expression. The absence of an effect of age or sex on FTO mRNA expression during the HI intensity exercise trial was confirmed by ANCOVA (p > 0.05). A significant main effect for time was observed for AMPK mRNA expression following LO intensity exercise at 40% VO2peak (p = 0.009), with subsequent pairwise comparisons revealing a significant increase in AMPK mRNA from pre-exercise to 10 mins post-exercise (p = 0.005) and from pre-exercise to 90 mins post-exercise (p = 0.004) (Fig. 3). 
Time as a main effect approached significance for GLUT4 (p = 0.093) mRNA expression following LO intensity exercise, with no main effect for time observed for FTO (p = 0.505) or mTOR (p = 0.642) mRNA expression. No genotype main effects (FTO, p = 0.931; AMPK, p = 0.804; GLUT4, p = 0.164; mTOR, p = 0.280), or genotype by time interactions (FTO, p = 0.970; AMPK, p = 0.803; GLUT4, p = 0.277; mTOR, p = 0.528) were identified. The absence of an effect of age or sex on FTO mRNA expression during the LO intensity exercise trial was confirmed by ANCOVA (p > 0.05). Regression analysis of mRNA Regression analysis was used to determine if a relationship between skeletal muscle FTO mRNA and muscle glucose existed, as glucose was the only metabolite to demonstrate a genotype by time interaction. Further regression analyses between FTO mRNA and mRNA of other metabolic genes are in Supplementary Figure S-3. A negative correlation was detected between skeletal muscle glucose levels and mRNA expression of FTO during the HI (r = − 0.234, p = 0.033) and LO intensity exercise trial (r = − 0.264, p = 0.017), regardless of time and genotype (see Supplementary Figure S-4). When accounting for genotype, the negative correlation remained between skeletal muscle glucose levels and mRNA expression of FTO in AA genotypes (HI, r = − 0.370, p = 0.044; LO, r = − 0.395, p = 0.031), during both exercise trials. No relationship between skeletal muscle glucose levels and mRNA expression of FTO was observed in AT genotypes (HI, r = − 0.205, p = 0.306; LO, r = − 0.291, p = 0.141) or TT genotypes (HI, r = − 0.100, p = 0.621; LO, r = 0.027, p = 0.899), during either the HI or LO intensity exercise trials. Further regression analyses between mRNA of other metabolic genes (AMPK, mTOR and GLUT4) and muscle glucose are in Supplementary Figure S-4. Skeletal muscle protein expression analysis No main effect for time (p = 0.128), genotype main effect (p = 0.181), or genotype by time interaction (p = 0.485) was detected for skeletal muscle FTO protein expression in response to HI intensity exercise (Fig. 4b). Similarly, LO intensity exercise did not have a significant effect on skeletal muscle FTO protein expression, with no main effect for time (p = 0.544), genotype main effect (p = 0.378) or genotype by time interaction (p = 0.650) observed (Fig. 4c). Fig. 4 Expression levels of FTO, AS160 Ser588 phosphorylation, and phosphorylated AMPK in the vastus lateralis muscle. (a) Representative Western Blots of pAS160Ser588, total AS160, pAMPK, total AMPK, FTO and Actin for each genotype measured during pre-exercise rest (R), and at 10 mins and 90 mins during passive recovery following HI intensity exercise. (b & c) Relative expression of FTO protein when normalised to Actin following HI and LO intensity exercise respectively. (d) Relative expression of AS160 Ser588 phosphorylation (pAS160Ser588) relative to total AS160 following HI intensity exercise. (e) Relative expression of phosphorylated AMPK relative to total AMPK following HI intensity exercise. Data are expressed as mean ± SEM of the fold change at 10 and 90 mins compared to pre-exercise measurements (set arbitrarily to 1). 'a' represents a significant main effect for time (p < 0.05). A significant main effect for time was observed for pAS160Ser588 (p = 0.049; Fig. 4d) and pAMPK (p = 0.035; Fig. 4e) in skeletal muscle following HI intensity exercise. 
Subsequent pairwise comparisons revealed a significant increase in pAMPK relative to total AMPK from pre-exercise to 10 mins (p = 0.010) post-exercise, and a significant increase in pAS160Ser588 relative to total AS160 from pre-exercise to 10 mins (p = 0.011) and 90 mins (p = 0.046) post-exercise. No genotype main effects (pAMPK, p = 0.563; pAS160Ser588, p = 0.252), or genotype by time interactions (pAMPK, p = 0.490; pAS160Ser588, p = 0.386) were identified. The absence of an effect of age or sex on FTO and pAMPK protein expression between FTO genotypes for both the HI and LO exercise trial was confirmed by ANCOVA (p > 0.05). However, there was a significant sex effect for pAS160Ser588, with females demonstrating a greater increase at 10 min recovery (p = 0.042) compared to males. No effect of age was evident. No relationships were detected between skeletal muscle levels of pAS160Ser588, pAMPK or FTO expression and the mRNA expression of FTO, AMPK, mTOR or GLUT4 during the exercise trials, regardless of time and genotype (see Supplementary Figure S-5). The present study provides a potential mechanism by which exercise may attenuate the influence of the FTO rs9939609 polymorphism on obesity risk. To the best of our knowledge, this is the first preliminary report on the effects of two isocaloric bouts of high and low intensity exercise on skeletal muscle FTO gene expression. The current findings demonstrate that an acute bout of high intensity exercise significantly downregulates skeletal muscle FTO mRNA during the early stages of recovery. This was not observed for lower intensity exercise. Downregulation of FTO mRNA expression was associated with elevated muscle glucose levels, but only in those individuals carrying the at risk AA genotype. Despite higher intensity exercise inducing greater metabolic perturbations compared to the low intensity trial, metabolomics analysis was unable to identify any unique metabolic differences between the FTO genotypes. This study suggests that in addition to nutritional regulation, FTO is also regulated by exercise and may be involved in exercise's role in reducing obesity risk. The acute and significant downregulation of FTO mRNA following high intensity exercise is a major novel finding of this investigation. A weak trend for genotype by time interaction was also identified suggesting greater downregulation of FTO mRNA in the AA genotype compared to the other genotypes (AT and TT). Indeed, AA genotypes demonstrated a 0.32-fold decrease in FTO mRNA expression compared to a 0.21-fold and 0.11-fold decrease for AT and TT genotypes respectively, at 10 min post-exercise. Previous studies have shown that FTO gene activity is nutritionally regulated with high fat diet, fasting and glucose ingestion all having effects on FTO mRNA levels [26,27,28]. Only one other study has investigated the effect of high intensity training on FTO mRNA expression and showed lifestyle changes (diet and exercise) did not impact FTO gene expression in peripheral blood mononuclear cells [29]. However, when FTO genotype was considered, FTO expression was up-regulated in AA genotype carriers and down-regulated in AG/GG genotype carriers in the intervention group [29]. Though the direction of FTO mRNA change was opposite to the present study observations, different cell type, age, sex and intervention period, may explain such differences. The downregulation of FTO mRNA following exercise in the present study could be due to its role as an 'energy sensor'. 
FTO gene activity is sensitive to the energy status of the cell [30], and it is possible that FTO is responding to the change in energy status and increased energy demand of the muscle as a result of exercise. While a change in energy status is quite complex, the current study used an untargeted metabolomics approach to see if the high and/or low intensity bout of exercise could unmask any differences in metabolic responses between FTO genotypes that would otherwise not be seen at rest. High intensity exercise induced a significant increase in muscle alanine, erythronate, fumarate, GHB, glucose, glycolate, lactate, malate, maltose, mannose, monopalmitoglycerol and tyrosine during the first 10 min of recovery. LO intensity exercise caused similar, but fewer, metabolite changes. Despite greater metabolic perturbations following the high intensity compared to the low intensity exercise, O2PLS-DA multivariate regression analysis was unable to distinguish between FTO allelic variants based on metabolic profiles following the exercise bouts. A limitation of the O2PLS-DA model is that baseline (resting) and post-exercise measurements are grouped together. By incorporating a time point in which energy demand is at a minimum and under tight homeostatic regulation, the ability to identify differences may be confounded. Indeed, previous investigations have found similar metabolic profiles between allelic variants of FTO at rest [31, 32]. While the current study was unable to identify any specific metabolite(s) and/or metabolic by-product(s), previous research has suggested the Krebs cycle intermediate fumarate as a potential modifier of FTO. Gerken and colleagues [26] demonstrated inhibition of Fto-catalyzed 1-meA demethylation by fumarate within 2OG decarboxylation assays. While the Gerken study [26] examined FTO function and not gene regulation, it is possible that elevated levels of fumarate (as seen during exercise) are also inhibiting its expression. It is clear that further functional studies are needed to explore other metabolites, especially those significantly impacted by exercise, as possible modifier candidates. Recent studies have suggested that AMPK may also regulate FTO expression and function in skeletal muscle and could explain another mechanism by which exercise downregulates FTO mRNA. Using C2C12 cells, Wu and colleagues [7] showed that inhibition of AMPK upregulates FTO expression and activity and lipid accumulation, while activation of AMPK downregulates FTO expression and activity and reduces lipid accumulation. The current study showed that phosphorylated AMPKα was significantly increased during the early stages of recovery following high intensity exercise, however, no genotype by time interaction was identified. Further, no relationship was found between elevated AMPKα levels and the downregulation of FTO. While phosphorylated AMPKα may not be driving the downregulation of FTO mRNA in AA genotypes, it could still be impacting FTO function. Observations from Wu et al. [7] and others [4, 5] suggest that inhibition of FTO function drives higher fat oxidation and lower fat accumulation possibly via FTO-dependent demethylation of mRNA m6A. In the current study, fat oxidation and/or markers of lipid accumulation were not measured post-exercise, however, individuals homozygous for the risk A-allele demonstrated greater muscle glucose levels compared to those homozygous for the non-risk T-allele at 10 min recovery following both high and low intensity exercise. 
Higher intramuscular glucose levels post-exercise could reflect a metabolic shift towards greater lipid oxidation and away from glucose oxidation, potentially via the AMPK activation and FTO-dependent demethylation of N6-methyladenosine mechanism mentioned above. However, acute higher post-exercise intramuscular glucose levels observed in AA genotypes could be due to a number of processes involved in glucose metabolism including glucose delivery and uptake into the muscle, and the resynthesis of glycogen levels post-exercise. Plasma glucose concentrations were measured in the present study and were found to be similar between FTO genotypes in response to high intensity exercise and thus it is also unlikely that differences in plasma glucose levels could be responsible for the greater muscle glucose uptake. Glucose uptake into the sarcoplasm depends on the skeletal muscle expression of GLUT4 (an insulin and contraction regulated glucose transport isoform) [33], which is normally increased following exercise and can facilitate post-exercise glucose uptake [34]. Although the current study did not measure GLUT4 translocation directly, we did measure phosphorylation of AS160, an insulin dependent and independent regulator of GLUT4 vesicle movement to, and/or fusion with, the plasma membrane [35]. Despite exercise increasing pAS160Ser588 relative to total AS160 in the early stages of recovery, there were no significant differences between genotypes. Though not statistically significant, the AA genotype group did complete their trials on average about 3–7 min faster and produced higher total workloads in both exercise trials compared to AT and TT genotypes. Further research is needed to determine whether higher intramuscular glucose levels are due to genetic factors inherent in AA genotypes or influences from the aforementioned factors. Several limitations do exist in the current study. Firstly, we acknowledge that our final sample size (n = 28) is relatively low. The average partial eta-squared for observed skeletal muscle variables (VIP metabolites, and protein and mRNA expression levels) was found to be of a medium effect size when performing high and low intensity exercise data analysis (η2 = 0.068 and η2 = 0.051 respectively). Secondly, we studied both males and females who were young and within a "healthy" weight range (BMI range 24–26 kg/m2). It is apparent that substrate metabolism is subject to sex-specific regulation. Sex differences exist in muscle fibre type distribution and in substrate availability to, and within, skeletal muscle [36], including molecular differences in skeletal muscle glucose and lipid metabolism. We used sex as well as age (another known factor) as covariates within our ANCOVA analysis. When sex or age were considered, the reported significant findings were still present. The influence of age was not anticipated given our relatively similar age distribution across alleles. Sex has been shown to influence the effect of the FTO polymorphism on obesity-related traits. However, it was clear from our study that despite sex differences within our cohort (nearly a 50/50 sex split), downregulation of FTO mRNA still occurred in the at-risk AA genotype and within both sexes. Thirdly, while the vastus lateralis muscle is the most common muscle of choice for biopsies because of its accessibility, it is of mixed fibre type composition and thus we cannot comment on fibre type specific differences. 
Furthermore, it is possible that the sampling window of up to 90 min was insufficient to detect any significant changes in FTO protein content resulting from exercise, with previous research demonstrating changes in the expression levels of other muscle proteins occurring at greater than 3 h post-exercise [37]. To our knowledge, this is the first study to suggest that the metabolic perturbations induced by exercise, especially the larger perturbations during high intensity exercise, create an environment that favours downregulation of FTO mRNA. The risk FTO allelic variant may be impacted more, with higher intramuscular glucose levels observed in the AA group compared to AT and TT. We hope that further work extends our preliminary findings to determine if chronic repetitive stimuli (i.e. exercise training) lower the obesity risk in individuals with the FTO risk variant by modulating FTO protein and/or function. Graphpad Prism software 7.02 and SIMCA Statistical Modeling software 14 were used to create artwork. The datasets used and/or analysed during this study are available from the corresponding author on reasonable request.
ACTB: Beta-Actin
AMP: Adenosine Monophosphate
BMI: Body Mass Index
FTO: Fat Mass- and Obesity-Associated Gene
GC-MS: Gas Chromatography-Mass Spectroscopy
GHB: Gamma Hydroxybutyric Acid
GLUT4: Glucose Transporter Type 4
HI: High Intensity Exercise
LO: Low Intensity Exercise
mRNA: Messenger Ribonucleic Acid
mTOR: Mammalian Target of Rapamycin
O2PLS-DA: Orthogonal two Partial Least Squares Discriminant Analysis
RER: Respiratory Exchange Ratio
ROC: Receiver Operating Characteristic
RPE: Ratings of Perceived Exertion
T. I:
VIP: Variable Importance for Projection
W: Workload (watts)
Dina C, Meyre D, Gallina S, Durand E, Körner A, Jacobson P, et al. Variation in FTO contributes to childhood obesity and severe adult obesity. Nat Genet. 2007;39:724–6. Frayling TM, Timpson NJ, Weedon MN, Zeggini E, Freathy RM, Lindgren CM, et al. A common variant in the FTO gene is associated with body mass index and predisposes to childhood and adult obesity. Science. 2007;316:889–94. Church C, Moir L, McMurray F, Girard C, Banks GT, Teboul L, et al. Overexpression of Fto leads to increased food intake and results in obesity. Nat Genet. 2010;42:1086–92. Fischer J, Koch L, Emmerling C, Vierkotten J, Peters T, Brüning JC, et al. Inactivation of the Fto gene protects from obesity. Nature. 2009;458:894–8. McMurray F, Church CD, Larder R, Nicholson G, Wells S, Teboul L, et al. Adult onset global loss of the Fto gene alters body composition and metabolism in the mouse. PLoS Genet. 2013. https://doi.org/10.1371/journal.pgen.1003166. Wang X, Huang N, Yang M, Wei D, Tai H, Han X, et al. FTO is required for myogenesis by positively regulating mTOR-PGC1α pathway-mediated mitochondria biogenesis. Cell Death Dis. 2017. https://doi.org/10.1038/cddis.2017.122. Wu W, Feng J, Jiang D, Zhou X, Jiang Q, Cai M, et al. AMPK regulates lipid accumulation in skeletal muscle cells through FTO-dependent demethylation of N6-methyladenosine. Sci Rep. 2017. https://doi.org/10.1038/srep41606. Hardie DG, Ross FA, Hawley SA. AMPK: a nutrient and energy sensor that maintains energy homeostasis. Nat Rev Mol Cell Biol. 2012;13:251–62. Merrill GF, Kurth EJ, Hardie DG, Winder WW. AICA riboside increases AMP-activated protein kinase, fatty acid oxidation, and glucose uptake in rat muscle. Am J Phys. 1997;273:1107–12. O'Neill HM, Lally JS, Galic S, Thomas M, Azizi PD, Fullerton MD, et al. AMPK phosphorylation of ACC2 is required for skeletal muscle fatty acid oxidation and insulin sensitivity in mice. Diabetologia. 
2014;57:1693–702. Collier CA, Bruce CR, Smith AC, Lopaschuk G, Dyck DJ. Metformin counters the insulin-induced suppression of fatty acid oxidation and stimulation of triacylglycerol storage in rodent skeletal muscle. Am J Physiol Endocrinol Metab. 2006;291:182–9. Fujii N, Ho RC, Manabe Y, Jessen N, Toyoda T, Holland WL, et al. Ablation of AMP-activated protein kinase alpha2 activity exacerbates insulin resistance induced by high-fat feeding of mice. Diabetes. 2008;57:2958–66. Richter EA, Ruderman NB. AMPK and the biochemistry of exercise: implications for human health and disease. Biochem J. 2009;418:261–75. Kilpeläinen TO, Qi L, Brage S, Sharp SJ, Sonestedt E, Demerath E, et al. Physical activity attenuates the influence of FTO variants on obesity risk: a meta-analysis of 218,166 adults and 19,268 children. PLoS Med. 2011. https://doi.org/10.1371/journal.pmed.1001116. Xiang L, Wu H, Pan A, Patel B, Xiang Q, Qi L, et al. FTO genotype and weight loss in diet and lifestyle interventions: a systematic review and meta-analysis. Am J Clin Nutr. 2016;103:1162–70. Jeukendrup AE, Wallis GA. Measurement of substrate oxidation during exercise by means of gas exchange measurements. Int J Sports Med. 2005;26:28–37. Maughan RJ, Whiting PH. Davidson RJ 1985. Estimation of plasma volume changes during marathon running. Br J Sports Med. 1985;19:138–41. Evans WS, Phinney S, Young V. Suction applied to a muscle biopsy maximised sample size. Med Sci Sports Exerc. 1982;14:101–2. Danaher J, Gerber T, Wellard RM, Stathis CG, Cooke MB. The use of metabolomics to monitor simultaneous changes in metabolic variables following supramaximal low volume high intensity exercise. Metabolomics. 2016;12:1–13. Willoughby DS, Stout JR, Wilborn CD. Effects of resistance training and protein plus amino acid supplementation on muscle anabolism, mass, and strength. Amino Acids. 2007;32:467–77. Livak KJ, Schmittgen TD. Analysis of relative gene expression data using real-time quantitative PCR and the 2-∆∆CT method. Methods. 2001;25:402–8. Egan B, Carson BP, Garcia-Roves PM, Chibalin AV, Sarsfield FM, Barron B, et al. Exercise intensity-dependent regulation of peroxisome proliferator-activated receptor γ coactivator-1α mRNA abundance is associated with differential activation of upstream signalling kinases in human skeletal muscle. J Physiol. 2010;558:1779–90. Gibala MJ, McGee SL, Garnham AP, Howlett KF, Snow RJ, et al. Brief intense interval exercise activates AMPK and p38 MAPK signaling and increases the expression of PGC-1α in human skeletal muscle. J Appl Physiol. 2009;106:929–34. Bylesjö M, Eriksson D, Kusano M, Moritz T, Trygg J. Data integration in plant biology: the O2PLS method for combined modeling of transcript and metabolite data. Plant J. 2007;52:1181–91. Xia J, Sinelnikov IV, Han B, Wishart DS. MetaboAnalyst 3.0 – making metabolomics more meaningful. Nucleic Acids Res. 2015;43:W251–7. Gerken T, Girard CA, Tung YCL, Webby CJ, Saudek V, Hewitson KS, et al. The obesity-associated FTO gene encodes a 2-oxoglutarate-dependent nucleic acid demethylase. Science. 2007;318:1469–72. Poritsanos NJ, Lew PS, Fischer J, Mobbs CV, Nagy JI, Wong D, et al. Impaired hypothalamic Fto expression in response to fasting and glucose in obese mice. Nutr Diabetes. 2011. https://doi.org/10.1038/nutd.2011.15. Tung YCL, Ayuso E, Shan X, Bosch F, O'Rahilly S, Coll AP, et al. Hypothalamic specific manipulation of Fto, the ortholog of the human obesity gene FTO, affects food intake in rats. PLoS One. 2010. 
https://doi.org/10.1371/journal.pone.0008771. Doaei S, Kalantari N, Izadi P, Salonurmi T, Jarrahi AM, Rafieifar S, et al. Changes in FTO and IRX3 gene expression in obese and overweight male adolescents undergoing an intensive lifestyle intervention and the role of FTO genotype in this interaction. J Transl Med. 2019. https://doi.org/10.1186/s12967-019-1921-4. Olszewski PK, Fredriksson R, Olszewska AM, Stephansson O, Alsiö J, Radomska KJ, et al. Hypothalamic FTO is associated with the regulation of energy intake not feeding reward. BMC Neurosci. 2009;10:129. Kjeldahl K, Rasmussen MA, Hasselbalch AL, Kyvik KO, Christiansen L, Rezzi S, et al. No genetic footprints of the fat mass and obesity associated (FTO) gene in human plasma 1H CPMG NMR metabolic profiles. Metabolomics. 2014;10:132–40. Wahl S, Krug S, Then C, Kirchhofer A, Kastenmüller G, Brand T, et al. Comparative analysis of plasma metabolomics response to metabolic challenge tests in healthy subjects and influence of the FTO obesity risk allele. Metabolomics. 2014;10:386–401. Charron MJ, Brosius FC 3rd, Alper SL, Lodish HF. A glucose transport protein expressed predominately in insulin-responsive tissues. Proc Natl Acad Sci U S A. 1989;86:2535–9. McCoy M, Proietto J, Hargreaves M. Skeletal muscle GLUT-4 and postexercise muscle glycogen storage in humans. J Appl Physiol. 1996;80:411–5. Treebak JT, Glund S, Deshmukh A, Klein DK, Long YC, Jensen TE, et al. AMPK-mediated AS160 phosphorylation in skeletal muscle is dependent on AMPK catalytic and regulatory subunits. Diabetes. 2006;55:2051–8. Lundsgaard AM, Kiens B. Gender differences in skeletal muscle substrate metabolism - molecular mechanisms and insulin sensitivity. Front Endocrinol. 2014;5:195. Egan B, O'Connor PL, Zierath JR, O'Gorman DJ. Time course analysis reveals gene-specific transcript and protein kinetics of adaptation to short-term aerobic exercise training in human skeletal muscle. PLoS One. 2013. https://doi.org/10.1016/j.cmet.2012.12.012. We greatly appreciate the assistance and support of technical staff at Metabolomics Australia (University of Melbourne), Victoria University's Institute for Health and Sport, and the Western Centre of Health, Research and Education. Victoria University Researcher Development Grants Scheme. School of Science, College of Science, Engineering and Health, RMIT University, Melbourne, Australia Jessica Danaher Institute for Health and Sport, Victoria University, Melbourne, Australia Christos G. Stathis & Alba Moreno-Asso School of Medicine, NYU Langone Health, New York, USA Robin A. Wilson Australian Institute for Musculoskeletal Science (AIMSS), Department of Medicine-Western Health, Melbourne Medical School, The University of Melbourne, Melbourne, Australia Robin A. Wilson, Alba Moreno-Asso & Matthew B. Cooke Science and Engineering Faculty, Queensland University of Technology, Brisbane, Australia R. Mark Wellard Department of Health and Medical Sciences, Faculty of Health, Arts and Design, Swinburne University of Technology, Melbourne, VIC, 3122, Australia Matthew B. Cooke Christos G. Stathis Alba Moreno-Asso Conceptualisation, M.B.C.; Formal Analysis, J.D., A.M-A., R.A.W., R.M.W., M.B.C.; Data Curation, J.D., A.M-A.; Writing – Original Draft Preparation, J.D., C.G.S., M.B.C; Writing – Rewriting and final review, M.B.C; Writing – Editing, A.M-A., R.A.W., R.M.W., C.G.S; Supervision, M.B.C. and C.G.S.; Project Administration, J.D.; Funding Acquisition, M.B.C., C.G.S., J.D. The authors read and approved the final manuscript. 
Correspondence to Matthew B. Cooke. Participants believed to meet the eligibility criteria were asked to provide written informed consent based on documents previously approved by the Victoria University Human Research Ethics Committee (HRETH 12/197) and all procedures were performed in accordance with the ethical standards set out in the 1964 Declaration of Helsinki. Additional file 1: Figure S-1 AUC of the ROC curve. Top 10 Features. Correlations FTO mRNA and other mRNA. Correlations Muscle Glucose and mRNA. Correlations Protein and mRNA. Additional file 6: Table S-1 mRNA Assay ID's. Metabolite Parameters. Metabolites (VIP). Danaher, J., Stathis, C.G., Wilson, R.A. et al. High intensity exercise downregulates FTO mRNA expression during the early stages of recovery in young males and females. Nutr Metab (Lond) 17, 68 (2020). https://doi.org/10.1186/s12986-020-00489-1
CommonCrawl
Pointwise estimate for elliptic equations in periodic perforated domains
Li-Ming Yeh, Department of Applied Mathematics, National Chiao Tung University, Hsinchu, 30050, Taiwan
Received November 2014 Revised February 2015 Published June 2015
This work is concerned with pointwise estimates for the solutions of elliptic equations in periodic perforated domains. Let $\epsilon$ denote the size ratio of the period of a periodic perforated domain to the whole domain. It is known that even if the given functions of the elliptic equations are bounded uniformly in $\epsilon$, the $C^{1,\alpha}$ norm and the $W^{2,p}$ norm of the elliptic solutions may not be bounded uniformly in $\epsilon$. It is also known that as $\epsilon$ tends to $0$, the elliptic solutions in the periodic perforated domains approach a solution of some homogenized elliptic equation. In this work, the Hölder uniform bound in $\epsilon$ and the Lipschitz uniform bound in $\epsilon$ for the elliptic solutions in perforated domains are proved. The $L^\infty$ and the Lipschitz convergence estimates for the difference between the elliptic solutions in the perforated domains and the solution of the homogenized elliptic equation are derived.
Keywords: Periodic perforated domains, homogenized elliptic equation, two-phase media.
Mathematics Subject Classification: Primary: 35J05, 35J15, 35J2.
Citation: Li-Ming Yeh. Pointwise estimate for elliptic equations in periodic perforated domains. Communications on Pure & Applied Analysis, 2015, 14 (5) : 1961-1986. doi: 10.3934/cpaa.2015.14.1961
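For orientation, a typical model problem behind statements of this kind looks as follows; this is an illustrative formulation of my own, since the abstract does not reproduce the exact equations and boundary conditions treated in the paper.
$$ -\nabla\cdot\Big(a\big(\tfrac{x}{\epsilon}\big)\nabla u_\epsilon\Big)=f \quad\text{in } \Omega^\epsilon, $$
where the coefficient $a$ is $1$-periodic and takes different values in the two phases (the matrix and the inclusions), and $\Omega^\epsilon$ is the domain perforated with period $\epsilon$. As $\epsilon\to 0$ one expects $u_\epsilon$ to approach the solution $u^{0}$ of a homogenized problem
$$ -\nabla\cdot\big(a^{0}\nabla u^{0}\big)=f, $$
with a constant effective matrix $a^{0}$ determined by cell problems. Uniform-in-$\epsilon$ Hölder and Lipschitz bounds for $u_\epsilon$, and $L^\infty$ and Lipschitz rates for $u_\epsilon-u^{0}$, are precisely the kind of statements announced in the abstract above.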
CommonCrawl
Is it pions or gluons that mediate the strong force between nucleons?
From my recent experience teaching high school students I've found that they are taught that the strong force between nucleons is mediated by virtual-pion exchange, whereas between quarks it's gluons. They are not, however, taught anything about colour or quark-confinement. At a more sophisticated level of physics, is it just that the maths works equally well for either type of boson, or is one (type of boson) in fact more correct than the other?
particle-physics quantum-chromodynamics quarks pions
qftme
See the answer by Lubos at physics.stackexchange.com/q/9661 . The correct type is the gluon. – anna v May 10 '11 at 12:04
@anna I posed this question after having read @Lubosh's answer. I don't feel that it answers my question and, either way, I was kind of hoping for a slightly more expansive answer. When I get a chance I'll add an edit, containing some LaTeX, that should better describe why I posted this query. – qftme May 10 '11 at 12:08
Lubos gave a complete answer, but one could add that nuclear forces are in analogy with the electromagnetic forces between molecules, the Van der Waals forces. There the mediator is the photon, but the moments of the charge distributions are what control the forces exerted between molecules. In a similar way the strong nuclear forces are such a spillover, except that in contrast to the photon the gluon carries color and couples to itself so it is much more complicated. – anna v May 10 '11 at 13:42
Yes. Depending on the energy and distance scale in question. – dmckee --- ex-moderator kitten May 10 '11 at 14:18
Dear qftme, I agree that your question deserves a more expansive answer. The answer, "pions" or "gluons", depends on the accuracy with which you want to describe the strong force. Historically, people didn't know about quarks and gluons in the 1930s when they began to study the forces in the nuclei for the first time. In 1935, Hideki Yukawa made the most important early contribution of Japanese science to physics when he proposed that there may be short-range forces otherwise analogous to long-range electromagnetism whose potential is $$V(r) = K\frac{e^{-\mu r}}{r} $$ The Fourier transform of this potential is simply $1/(p^2+\mu^2)$, which is natural - an inverted propagator of a massive particle. (The exponential was added relative to the Coulomb potential; and in the Fourier transform, it's equivalent to the addition of $\mu^2$ in the denominator.) The Yukawa particle (a spinless boson) was mediating a force between particles that was only significantly nonzero for short enough distances. The description agreed with the application to protons, neutrons, and the forces among them. So the mediator of the strong force was thought to be a pion and the model worked pretty well. (In the 1930s, people were also confusing muons and pions in the cosmic rays, using names that sound bizarre to the contemporary physicists' ears - such as a mesotron, a hybrid of pion and muon, but that's another story.) The pion model was viable even when the nuclear interactions were understood much more quantitatively in the 1960s. The pions are "pseudo-Goldstone bosons".
They're spinless (nearly) massless bosons whose existence is guaranteed by the existence of a broken symmetry - in this case, it was the $SU(3)$ symmetry rotating the three flavors we currently know as flavors of the $u,d,s$ light quarks. The symmetry is approximate which is why the pseudo-Goldstone bosons, the pions (and kaons), are not exactly massless. But they're still significantly lighter than the protons and neutrons. However, the theory with the fundamental pion fields is not renormalizable - it boils down to the Lagrangian's being highly nonlinear and complicated. It inevitably produces absurd predictions at short enough distances or high enough energies - distances that are shorter than the proton radius. A better theory was needed. Finally, it was found in Quantum Chromodynamics that explains all protons, neutrons, and even pions and kaons (and hundreds of others) as bound states of quarks (and gluons and antiquarks). In that theory, all the hadrons are described as complicated composite particles and all the forces ultimately boil down to the QCD Lagrangian where the force is due to the gluons. So whenever you study the physics at high enough energy or resolution so that you see "inside" the protons and you see the quarks, you must obviously use gluons as the messengers. Pions as messengers are only good in approximate theories in which the energies are much smaller than the proton mass. This condition also pretty much means that the velocities of the hadrons have to be much smaller than the speed of light.
Luboš Motl
So shouldn't it be possible to derive the pion model as a low energy approximation of QCD? Do you know of a paper doing that? – lalala May 30 '17 at 19:47
I think that the statement that "the pion model is an approximation of QCD" is valid morally but not in any systematic, exact sense. There is no meaningful limit in which the pions would describe all the degrees of freedom etc. So there isn't and there can't be any rigorous derivation as far as I can say. All such argumentation has to be incomplete, heuristic etc. – Luboš Motl May 31 '17 at 5:56
I just want to add that this is what renormalization is in principle. The resolution of your model depends on how high order of interactions you include. – Liam Clink Dec 7 '18 at 18:18
Gluons mediate the strong force between quarks. Pions mediate the nuclear force or nucleon-nucleon interaction or RESIDUAL strong force. So, the answer to your question is BOTH. In different measure, but both. See Wikipedia: http://en.wikipedia.org/wiki/Nuclear_force
Manuel Cabedo
These are some nice answers! Wanted to add that, as you know, how strongly the quarks couple to each other (or interact with each other) is momentum-dependent. So within the nucleons (protons and neutrons) quark coupling is very strong (which is why the quarks are confined in the nucleons). Because the interquark interaction is so strong at these energies, it is impossible to treat it perturbatively (that is, in terms of gluon-exchange). This is why, in the regime of nucleons, we talk instead about meson exchange like pion exchange (work by Witten and Weinberg), not gluon exchange. In summary: QCD has momentum-dependent coupling. So at low energies it is impossible to treat it perturbatively (as quarks exchanging gluons). We change our view to treating it as baryons (like nucleons) exchanging mesons.
RosieK
If you look at the standard model you will only find gluons. This is very clear and should settle any doubts. (Pions are a historical relic of the middle of the twentieth century which only provide an approximation.)
Allan Maher
This answer misses all the subtlety of the question. Nuclear physics is still done using effective models explicitly including meson exchange based models. It's not a historic relic but a regime of interest just like every other effective theory. – dmckee --- ex-moderator kitten Sep 30 '15 at 17:59
Yeah, gluons are only the mediators of the strong force between quarks "inside" protons or neutrons. Pions are actually the mediators between protons and neutrons. It may be that they have their origin in a gluon, sure. But it is the pions that mediate the strong force. – Roghan Arun May 18 '20 at 14:26
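As a side note to the Yukawa potential quoted in Luboš Motl's answer above: the range $1/\mu$ of the force is set by the mass of the exchanged particle, and plugging in the pion mass gives the familiar ~1.4 fm range of the nuclear force. A minimal Python sketch (the strength constant K and the sample radii are arbitrary illustrative choices; the values of $\hbar c$ and the charged-pion mass are rounded PDG numbers):

    import math

    HBARC_MEV_FM = 197.327   # hbar*c in MeV*fm
    M_PION_MEV = 139.570     # charged-pion mass in MeV/c^2

    range_fm = HBARC_MEV_FM / M_PION_MEV   # 1/mu, the Yukawa range, about 1.41 fm
    print("Yukawa range 1/mu = %.2f fm" % range_fm)

    def yukawa(r_fm, K=1.0):
        # V(r) = K * exp(-mu*r) / r with mu = 1/range_fm; the units of K are arbitrary here
        return K * math.exp(-r_fm / range_fm) / r_fm

    for r in (0.5, 1.0, 2.0, 5.0):
        # compare the screened potential with the bare Coulomb-like 1/r falloff
        print(r, yukawa(r), 1.0 / r)

At 5 fm the exponential screening has already suppressed the potential by a factor of roughly $e^{-3.5}\approx 0.03$ relative to $1/r$, which is why the pion-mediated nucleon-nucleon force is short-ranged even though the photon-mediated Coulomb force is not.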
CommonCrawl
Quick way to iterate multiples of a prime N that are not multiples of primes X, Y, Z, …?
Is there a way to quickly iterate multiples of some prime $N$ while avoiding multiples of blacklisted primes $X$, $Y$, $Z$, ...? By quickly I mean is there a faster way than:
1. Increment current number by N.
2. Check if the current number is divisible by any number in the blacklist (M checks if there are M primes in the blacklist). If so, jump to 1; else return the current number as the next number in the iteration.
My end goal would be an O(1) algorithm that lets me answer queries like, "Give me the 5th number that is a multiple of N but not a multiple of X, Y, or Z." But the above algorithm requires computing the 1-4th numbers first, so it's linear time.
I suspect the answer is no because it seems that if there is a way it would allow speeding up the Sieve of Eratosthenes. Although in some cases the sieve avoids redundant marking by starting at a prime square, it still redundantly marks numbers sometimes. For example the sieve will wastefully mark 60 three times, once for 2, once for 3, and once for 5. You can see this by watching 60 light up three different colors in the linked Wikipedia article's first graphic:
elementary-number-theory prime-numbers sieve-theory
Joseph Garvin
Up to $kN$ (inclusive) there are $k$ positive integer multiples of $N$, of which $\lfloor k/X \rfloor$ are multiples of $X$, etc. By inclusion-exclusion, the number that escape a "blacklist" of $3$ primes is
$$ k - \left\lfloor \frac{k}{X} \right\rfloor - \left\lfloor \frac{k}{Y} \right\rfloor - \left\lfloor \frac{k}{Z} \right\rfloor + \left\lfloor \frac{k}{XY} \right\rfloor + \left\lfloor \frac{k}{XZ} \right\rfloor + \left\lfloor \frac{k}{YZ}\right\rfloor - \left\lfloor \frac{k}{XYZ}\right\rfloor $$
Approximate it without the $\lfloor \cdot \rfloor$'s, and you're off by at most $4$, so then adjust...
EDIT: For example, suppose you want the $35$'th positive integer multiple of $N$ (any prime other than $5$, $7$ or $11$) that is not a multiple of $X=5$, $Y=7$ or $Z=11$. Thus you want $k$ so that
$$f(k) = k - \left\lfloor \frac{k}{X} \right\rfloor - \left\lfloor \frac{k}{Y} \right\rfloor - \left\lfloor \frac{k}{Z} \right\rfloor + \left\lfloor \frac{k}{XY} \right\rfloor + \left\lfloor \frac{k}{XZ} \right\rfloor + \left\lfloor \frac{k}{YZ}\right\rfloor - \left\lfloor \frac{k}{XYZ}\right\rfloor = 35$$
Now
$$\eqalign{k &- \frac{k}{X} - \frac{k}{Y} - \frac{k}{Z} + \frac{k}{XY} + \frac{k}{XZ} + \frac{k}{YZ} - \frac{k}{XYZ} \cr &= k \left(1-\frac{1}{X}\right)\left(1 -\frac{1}{Y}\right) \left( 1 - \frac{1}{Z}\right) = \frac{48 k}{77}}$$
which would be $35$ for $k = 56.14\ldots$. Now $f(56) = 34$ so we try $f(57)$ and find that this is $35$. Thus $57N$ is the multiple of $N$ we are looking for.
Robert Israel
If I understand right, this computes the size of the set of numbers I want to iterate, but it doesn't help with iterating or computing e.g. the 5th number, or am I not thinking hard enough yet? – Joseph Garvin Jan 8 '13 at 3:09
No, you don't understand. I'll edit my answer to include an example. – Robert Israel Jan 9 '13 at 5:06
Ah, that makes more sense, thanks. – Joseph Garvin Jan 10 '13 at 20:24
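Here is a small Python sketch of the recipe in the answer above (my own illustrative code, not from the thread): count the surviving multiples with inclusion-exclusion, start from the real-valued guess, and adjust by a few steps.

    from itertools import combinations
    from math import prod

    def count_good(k, blacklist):
        # number of m in 1..k divisible by none of the blacklisted primes (inclusion-exclusion)
        total = 0
        for r in range(len(blacklist) + 1):
            for combo in combinations(blacklist, r):
                total += (-1) ** r * (k // prod(combo))   # prod(()) == 1
        return total

    def nth_good_multiple(n, N, blacklist):
        # n-th positive multiple of N that is divisible by no blacklisted prime
        # (assumes N itself is not in the blacklist)
        density = 1.0
        for p in blacklist:
            density *= 1.0 - 1.0 / p
        k = max(1, round(n / density))                         # real-valued guess for k
        while count_good(k, blacklist) < n:                    # adjust upward ...
            k += 1
        while k > 1 and count_good(k - 1, blacklist) >= n:     # ... or downward
            k -= 1
        return k * N

    print(nth_good_multiple(35, 13, (5, 7, 11)))   # 57 * 13 = 741, matching k = 57 above

Because the initial guess is off by a bounded amount, the adjustment loops run only a handful of times, so each query costs on the order of $2^M$ subset terms rather than a walk over all earlier multiples.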
CommonCrawl
Effectiveness of mirtazapine as add-on to paroxetine v. paroxetine or mirtazapine monotherapy in patients with major depressive disorder with early non-response to paroxetine: a two-phase, multicentre, randomized, double-blind clinical trial
Le Xiao, Xuequan Zhu, Amy Gillespie, Yuan Feng, Jingjing Zhou, Xu Chen, Yuanyuan Gao, Xueyi Wang, Xiancang Ma, Chengge Gao, Yunshi Xie, Xiaoping Pan, Yan Bai, Xiufeng Xu, Gang Wang, Runsen Chen
Journal: Psychological Medicine, First View. Published online by Cambridge University Press: 14 January 2020, pp. 1-9
This study aimed to examine the efficacy of combining paroxetine and mirtazapine v. switching to mirtazapine, for patients with major depressive disorder (MDD) who have had an insufficient response to SSRI monotherapy (paroxetine) after the first 2 weeks of treatment. This double-blind, randomized, placebo-controlled, three-arm study recruited participants from five hospitals in China. Eligible participants were aged 18–60 years with MDD of at least moderate severity. Participants received paroxetine during a 2-week open-label phase and patients who had not achieved early improvement were randomized to paroxetine, mirtazapine or paroxetine combined with mirtazapine for 6 weeks. The primary outcome was improvement in Hamilton Rating Scale for Depression 17-item (HAMD-17) scores 6 weeks after randomization. A total of 204 patients who showed early non-response to paroxetine monotherapy were randomly assigned to receive either mirtazapine and placebo (n = 68), paroxetine and placebo (n = 68) or mirtazapine and paroxetine (n = 68), with 164 patients completing the outcome assessment. At week 8, the least squares (LS) mean change of HAMD-17 scores did not significantly differ among the three groups: 12.98 points in the mirtazapine group, 12.50 points in the paroxetine group and 13.27 points in the mirtazapine plus paroxetine combination group. Participants in the paroxetine monotherapy group were least likely to experience adverse effects. After 8 weeks of follow-up, paroxetine monotherapy, mirtazapine monotherapy and paroxetine/mirtazapine combination therapy were equally effective in non-improvers at 2 weeks. The results of this trial do not support a recommendation to routinely offer additional treatment or a switch in treatment strategies for MDD patients who do not show early improvement after 2 weeks of antidepressant treatment.
$L^{q}$-SPECTRUM OF SELF-SIMILAR MEASURES WITH OVERLAPS IN THE ABSENCE OF SECOND-ORDER IDENTITIES
Classical measure theory
SZE-MAN NGAI, YUANYUAN XIE
Journal: Journal of the Australian Mathematical Society / Volume 106 / Issue 1 / February 2019. Published online by Cambridge University Press: 22 August 2018, pp. 56-103
For the class of self-similar measures in $\mathbb{R}^{d}$ with overlaps that are essentially of finite type, we set up a framework for deriving a closed formula for the $L^{q}$-spectrum of the measure for $q\geq 0$. This framework allows us to include iterated function systems that have different contraction ratios and those in higher dimension. For self-similar measures with overlaps, closed formulas for the $L^{q}$-spectrum have only been obtained earlier for measures satisfying Strichartz's second-order identities.
We illustrate how to use our results to prove the differentiability of the $L^{q}$-spectrum, obtain the multifractal dimension spectrum, and compute the Hausdorff dimension of the measure.
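For readers meeting the $L^{q}$-spectrum for the first time, a commonly used definition (conventions differ slightly between authors, and this is not a quotation from the paper) is
$$ \tau(q)=\liminf_{\delta\to 0^{+}}\frac{\log \sup \sum_{i}\mu\big(B(x_{i},\delta)\big)^{q}}{\log\delta},\qquad q\ge 0, $$
where the supremum is taken over countable families of disjoint closed balls of radius $\delta$ centred in the support of $\mu$. The multifractal dimension spectrum is then obtained from $\tau$ by a Legendre transform, and, roughly speaking, differentiability of $\tau$ at $q=1$ yields the dimension of the measure, which is why the differentiability and Hausdorff dimension statements in this abstract go together.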
CommonCrawl
Approximation by pseudo-linear discrete operators
İsmail Aslan 1 and Türkan Yeliz Gökçer 2
1. Hacettepe University, Department of Mathematics, Çankaya, TR-06800, Ankara, Turkey
2. İstanbul Gedik University, Faculty of Engineering, Department of Computer Engineering, 34876, İstanbul, Turkey
* Corresponding author: İsmail Aslan
Received July 2021 Revised October 2021 Early access December 2021
Fund Project: This study is supported financially by the Scientific and Technological Research Council of Turkey (TÜBİTAK; project number: 119F262), for which we are thankful
In this note, we construct a pseudo-linear kind of discrete operator based on a continuous and nondecreasing generator function. Then, we obtain an approximation to uniformly continuous functions through this new operator. Furthermore, we calculate the error estimate of this approach with a modulus of continuity based on a generator function. The obtained results are supported by visualizations of an explicit example. Finally, we investigate the relation between discrete operators and generalized sampling series.
Keywords: Discrete operators, pseudo-linear operators, uniform g-distance, rate of approximation, g-modulus of continuity.
Mathematics Subject Classification: Primary: 41A35, 41A25; Secondary: 41A99.
Citation: İsmail Aslan, Türkan Yeliz Gökçer. Approximation by pseudo-linear discrete operators. Mathematical Foundations of Computing, doi: 10.3934/mfc.2021037
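To give a feel for what a pseudo-linear discrete operator can look like, here is a Python sketch of a max-product sampling operator of the kind studied in the max-product literature this note builds on; the operator actually constructed in the paper, with a general generator function, may well differ, so treat this as an assumption-laden illustration rather than the authors' definition. The ordinary sum of the classical (linear) sampling operator is replaced by a maximum, the kernel is a simple hat function, and the samples are assumed nonnegative.

    import numpy as np

    def hat(u):
        # piecewise-linear kernel with support (-1, 1)
        return np.maximum(1.0 - np.abs(u), 0.0)

    def max_product_sampling(f, n, x):
        # S_n(f)(x) = max_k [ hat(n*x - k) * f(k/n) ] / max_k hat(n*x - k),  for x in [0, 1]
        k = np.arange(n + 1)
        x = np.atleast_1d(np.asarray(x, dtype=float))
        w = hat(n * x[:, None] - k[None, :])            # kernel weights at the sample nodes
        num = np.max(w * f(k / n)[None, :], axis=1)     # pseudo-linear "sum" = max of products
        den = np.max(w, axis=1)                         # normalisation, at least 1/2 on [0, 1]
        return num / den

    f = lambda t: t * (1.0 - t)                         # a uniformly continuous test function
    xs = np.linspace(0.0, 1.0, 201)
    for n in (10, 50, 250):
        err = np.max(np.abs(max_product_sampling(f, n, xs) - f(xs)))
        print(n, err)                                   # the uniform error shrinks as n grows

The point of the example is the algebra rather than the kernel: replacing the usual sum by a maximum (or, more generally, by operations generated by a function g) is what makes the operator pseudo-linear, and uniform convergence is then controlled by a modulus of continuity adapted to those operations.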
Figure 1. Approximation to $f$ given in (5) by pseudo-linear discrete operators
CommonCrawl
Convexity of the free boundary for an exterior free boundary problem involving the perimeter
Hayk Mikayelyan 1 and Henrik Shahgholian 2
1. Xi'an Jiaotong-Liverpool University, Mathematical Sciences, 111 Ren'ai Road, Suzhou 215123, Jiangsu Prov., China
2. Institutionen för Matematik, Kungliga Tekniska Högskolan, 100 44 Stockholm, Sweden
May 2013, 12(3): 1431-1443. doi: 10.3934/cpaa.2013.12.1431
Received November 2011 Revised March 2012 Published September 2012
We prove that if the given compact set $K$ is convex then a minimizer of the functional
\begin{eqnarray*} I(v)=\int_{B_R} |\nabla v|^p dx+ Per(\{v>0\}), 1 < p < \infty, \end{eqnarray*}
over the set $\{v\in W^{1,p}_0 (B_R)| v\equiv 1 \ \text{on} \ K\subset B_R\}$ has a convex support, and as a result all its level sets are convex as well. We derive the free boundary condition for the minimizers and prove that the free boundary is analytic and the minimizer is unique.
Keywords: Free boundary problems, mean curvature.
Mathematics Subject Classification: Primary: 35R35; Secondary: 49K2.
Citation: Hayk Mikayelyan, Henrik Shahgholian. Convexity of the free boundary for an exterior free boundary problem involving the perimeter. Communications on Pure & Applied Analysis, 2013, 12 (3) : 1431-1443. doi: 10.3934/cpaa.2013.12.1431
CommonCrawl
The $L^{2}$ boundedness condition in nonamenable percolation
Tom Hutchcroft
Electron. J. Probab. 25: 1-27 (2020). DOI: 10.1214/20-EJP525
Let $G=(V,E)$ be a connected, locally finite, transitive graph, and consider Bernoulli bond percolation on $G$. In recent work, we conjectured that if $G$ is nonamenable then the matrix of critical connection probabilities $T_{p_{c}}(u,v)=\mathbb {P}_{p_{c}}(u\leftrightarrow v)$ is bounded as an operator $T_{p_{c}}:L^{2}(V)\to L^{2}(V)$ and proved that this conjecture holds for several classes of graphs, including all transitive, nonamenable, Gromov hyperbolic graphs. In notation, the conjecture states that $p_{c}<p_{2\to 2}$, where for each $q\in [1,\infty ]$ we define $p_{q\to q}$ to be the supremal value of $p$ for which the operator norm $\|T_{p}\|_{q\to q}$ is finite. We also noted in that work that the conjecture implies two older conjectures, namely that percolation on transitive nonamenable graphs always has a nontrivial nonuniqueness phase, and that critical percolation on the same class of graphs has mean-field critical behaviour. In this paper we further investigate the consequences of the $L^{2}$ boundedness conjecture. In particular, we prove that the following hold for all transitive graphs: i) The two-point function decays exponentially in the distance for all $p<p_{2\to 2}$; ii) If $p_{c}<p_{2\to 2}$, then the critical exponent governing the extrinsic diameter of a critical cluster is $1$; iii) Below $p_{2\to 2}$, percolation is "ballistic" in the sense that the intrinsic (a.k.a. chemical) distance between two points is exponentially unlikely to be much larger than their extrinsic distance; iv) If $p_{c}<p_{2\to 2}$, then $\|T_{p_{c}}\|_{q\to q} \asymp (q-1)^{-1}$ and $p_{q\to q}-p_{c} \asymp q-1$ as $q\downarrow 1$; v) If $p_{c}<p_{2\to 2}$, then various 'multiple-arm' events have probabilities comparable to the upper bound given by the BK inequality. In particular, the probability that the origin is a trifurcation point is of order $(p-p_{c})^{3}$ as $p \downarrow p_{c}$. All of these results are new even in the Gromov hyperbolic case. Finally, we apply these results together with duality arguments to compute the critical exponents governing the geometry of intrinsic geodesics at the uniqueness threshold of percolation in the hyperbolic plane.
Tom Hutchcroft. "The $L^{2}$ boundedness condition in nonamenable percolation." Electron. J. Probab. 25 1 - 27, 2020. https://doi.org/10.1214/20-EJP525
Received: 12 April 2019; Accepted: 22 September 2020; Published: 2020. First available in Project Euclid: 16 October 2020
MathSciNet: MR4162843. Digital Object Identifier: 10.1214/20-EJP525
Primary: 60B99, 60K35
Keywords: Critical exponents, nonamenable, percolation
Rights: Creative Commons Attribution 4.0 International License.
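A standard example (mine, not taken from the abstract above) that makes the threshold $p_{2\to 2}$ concrete is the $(b+1)$-regular tree. Two vertices are connected if and only if every edge of the unique path joining them is open, so $T_{p}(u,v)=p^{d(u,v)}$ and
$$ T_{p}=\sum_{n\ge 0}p^{n}A_{n}, $$
where $A_{n}$ is the distance-$n$ adjacency operator. On the one hand $\|A_{n}\|_{2\to 2}\ge b^{n/2}$ (apply $A_{n}$ to a point mass and compute the $\ell^{2}$ norm of the result); on the other hand a Haagerup-type inequality gives $\|A_{n}\|_{2\to 2}\le C\,(n+1)\,b^{n/2}$. The series therefore converges in operator norm whenever $p\sqrt{b}<1$, while the operator norm is infinite whenever $p\sqrt{b}>1$ (since $\|T_{p}\|\ge p^{n}\|A_{n}\|$ for every $n$). Hence $p_{2\to 2}=b^{-1/2}$, strictly larger than $p_{c}=b^{-1}$: on the $3$-regular tree, $p_{c}=1/2$ while $p_{2\to 2}=1/\sqrt{2}\approx 0.71$. The conjecture discussed above asserts that this strict inequality $p_{c}<p_{2\to 2}$ persists on every transitive nonamenable graph.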
CommonCrawl
Journal of NeuroEngineering and Rehabilitation
Ambulatory assessment of walking balance after stroke using instrumented shoes
Fokke B. van Meulen1, Dirk Weenk1,2, Jaap H. Buurke1,3, Bert-Jan F. van Beijnum1,2 & Peter H. Veltink1
Journal of NeuroEngineering and Rehabilitation volume 13, Article number: 48 (2016)
For optimal guidance of walking rehabilitation therapy of stroke patients in an in-home setting, a small and easy to use wearable system is needed. In this paper we present a new shoe-integrated system that quantifies walking balance during activities of daily living and is not restricted to a lab environment. Quantitative parameters were related to clinically assessed level of balance in order to assess the additional information they provide.
Data of 13 participants who suffered a stroke were recorded while walking 10 meter trials and wearing special instrumented shoes. The data from 3D force and torque sensors, 3D inertial sensors and ultrasound transducers were fused to estimate 3D (relative) position, velocity, orientation and ground reaction force of each foot. From these estimates, center of mass and base of support were derived together with a dynamic stability margin, which is the (velocity) extrapolated center of mass with respect to the front-line of the base of support in walking direction. Additionally, for each participant step lengths and stance times for both sides as well as asymmetries of these parameters were derived.
Using the proposed shoe-integrated system, a complete reconstruction of the kinematics and kinetics of both feet during walking can be made. Dynamic stability margin and step length symmetry were not significantly correlated with Berg Balance Scale (BBS) score, but participants with a BBS score below 45 showed a small positive dynamic stability margin and more asymmetrical step lengths. More affected participants, having a lower BBS score, have a lower walking speed, take smaller steps, have longer stance times and have more asymmetrical stance times.
The proposed shoe-integrated system and data analysis methods can be used to quantify daily-life walking performance and walking balance in an ambulatory setting, without the use of a lab-restricted system. The presented system provides additional insight into the balance mechanism, via parameters describing walking patterns of an individual subject. This information can be used for patient-specific and objective evaluation of walking balance and a better guidance of therapies during the rehabilitation. The study protocol is a subset of a larger protocol and registered in the Netherlands Trial Registry, number NTR3636.
Impaired walking balance commonly follows a stroke, which reduces the patient's ability to walk and hence their independence in daily life [1]. Clinical assessment methods of walking balance have been developed to grade a patient's ability to walk (independently) after stroke [2]. Frequently used assessment scales result in ordinal values, which do not objectively and quantitatively describe balance during walking. These assessment scales only quantify walking balance during prescribed conditions, while knowledge about underlying balance mechanisms is often lacking [3]. This knowledge is essential for better guidance during the rehabilitation of walking and subsequent assessment of walking balance performance during daily life. However, existing systems for quantitative assessment of balance during walking are lab-restricted or can only be used for a limited number of steps. For a better guidance during the rehabilitation of walking in a daily life setting, a wearable sensing system that qualitatively evaluates walking balance is needed [4]. This system should quantitatively estimate parameters to describe the movements of the patients' feet and body center of mass (CoM) during walking in a daily life setting [5, 6]. Preferably, such a system has small embedded sensors which do not interfere with daily life body movements and behavior [7].
During walking, the CoM is moving within the area between both feet (i.e., base of support, BoS). To evaluate a person's stability during walking the extrapolated center of mass (XCoM) can be calculated, which is the position of the CoM extrapolated using the velocity of the CoM. A person will be dynamically stable when the vertical projection of the XCoM on the ground is within the BoS [8–10]. Moments of dynamic instability need to be followed by another step to prevent a fall [8, 10]. These moments of instability normally occur during walking and are necessary for forward progression. A decrease of the distance between BoS and the vertical projection of the XCoM is related to a lower walking speed or a more affected walking pattern [8]. Objective evaluation of walking balance parameters during daily life contributes to insight in underlying mechanisms of balance during community ambulation.
To continuously assess the dynamic stability of a person, information on the position of the XCoM relative to the BoS is necessary. For a continuous evaluation of the BoS, information on movement of both feet relative to each other is required. A feasible method for movement assessment in a daily life setting is the use of inertial measurement units (IMUs). This allows easy assessment of foot movements in a daily life setting without the use of an external physical reference system [11]. Previous studies reported on the use of IMUs for the estimation of qualitative and quantitative parameters of walking and balance performance, such as cadence, stride length and velocity [5, 12, 13]. However, using only IMUs it is not possible to accurately evaluate parameters which depend on the relative position of both feet, such as step length, step width and size of the BoS. By their physical working principle, IMUs do not provide information about relative positions between sensors, only about changes of position of the same sensor. This problem can be solved by fusing data of IMUs and feet distance estimates of an ultrasound sensor system [5]. For a continuous evaluation of CoM position as well as the XCoM, ground reaction forces (GRF) beneath both feet should be known [14] in addition to relative positions of both feet. For the estimation of the GRF beneath both feet, traditionally, multiple force plates or sensorised walkways are used in a lab situation [8, 15, 16]. These systems mostly cause restriction in walking or are only able to measure one or two steps. For the evaluation of forces underneath both feet during daily life activities, shoes instrumented with force or pressure sensors have been investigated in several studies [17–21]. These shoe-integrated sensor systems allow ambulatory estimation of ground reaction forces, making them suitable for monitoring multiple steps and walking with changes in walking direction.
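To make the dynamic stability margin introduced above concrete, the following minimal Python sketch computes an extrapolated centre of mass along the walking direction using Hof's inverted-pendulum extrapolation, XCoM = CoM + v/omega0 with omega0 = sqrt(g/l); the exact estimator, sign convention and BoS construction used in this study may differ, and the numbers in the example are invented purely for illustration.

    import numpy as np

    def dynamic_stability_margin(com, v_com, bos_front, leg_length, g=9.81):
        # 1-D coordinates along the walking direction (metres, metres per second).
        # XCoM = CoM + v_CoM / omega0, with omega0 = sqrt(g / l)  (Hof's extrapolated CoM).
        # A positive margin means the XCoM is still behind the front line of the BoS.
        omega0 = np.sqrt(g / leg_length)
        xcom = com + v_com / omega0
        return bos_front - xcom

    # toy numbers: CoM 5 cm behind the BoS front line, walking at 0.8 m/s, leg length 0.9 m
    m = dynamic_stability_margin(com=0.55, v_com=0.8, bos_front=0.60, leg_length=0.9)
    print(m)   # negative here: the XCoM lies ahead of the BoS, so a next step is needed

A negative value is not pathological in itself: as described above, such moments of instability occur in every normal stride and drive forward progression; what matters for the analysis is how this margin behaves over repeated steps and how it relates to clinical balance scores.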
However, existing systems for quantitative assessment of balance during walking are lab-restricted or can only be used for a limited number of steps. For better guidance during the rehabilitation of walking in a daily-life setting, a wearable sensing system that quantitatively evaluates walking balance is needed [4]. This system should quantitatively estimate parameters describing the movements of the patient's feet and body center of mass (CoM) during walking in a daily-life setting [5, 6]. Preferably, such a system has small embedded sensors which do not interfere with daily-life body movements and behavior [7]. During walking, the CoM moves within the area between both feet (i.e., the base of support, BoS). To evaluate a person's stability during walking, the extrapolated center of mass (XCoM) can be calculated, which is the position of the CoM extrapolated using the velocity of the CoM. A person is dynamically stable when the vertical projection of the XCoM on the ground is within the BoS [8–10]. Moments of dynamic instability need to be followed by another step to prevent a fall [8, 10]. These moments of instability normally occur during walking and are necessary for forward progression. A decrease of the distance between the BoS and the vertical projection of the XCoM is related to a lower walking speed or a more affected walking pattern [8]. Objective evaluation of walking balance parameters during daily life contributes to insight into the underlying mechanisms of balance during community ambulation. To continuously assess the dynamic stability of a person, information on the position of the XCoM relative to the BoS is necessary. For a continuous evaluation of the BoS, information on the movement of both feet relative to each other is required. A feasible method for movement assessment in a daily-life setting is the use of inertial measurement units (IMUs). These allow easy assessment of foot movements in a daily-life setting without the use of an external physical reference system [11]. Previous studies reported on the use of IMUs for the estimation of qualitative and quantitative parameters of walking and balance performance, such as cadence, stride length and velocity [5, 12, 13]. However, using only IMUs it is not possible to accurately evaluate parameters which depend on the relative position of both feet, such as step length, step width and the size of the BoS. By their physical working principle, IMUs do not provide information about relative positions between sensors, only about changes in position of the same sensor. This problem can be solved by fusing IMU data with inter-foot distance estimates from an ultrasound sensor system [5]. For a continuous evaluation of the CoM position as well as the XCoM, the ground reaction forces (GRF) beneath both feet should be known [14], in addition to the relative positions of both feet. For the estimation of the GRF beneath both feet, traditionally multiple force plates or sensorised walkways are used in a lab setting [8, 15, 16]. These systems typically restrict walking or can only measure one or two steps. For the evaluation of forces underneath both feet during daily-life activities, shoes instrumented with force or pressure sensors have been investigated in several studies [17–21]. These shoe-integrated sensor systems allow ambulatory estimation of ground reaction forces, making them suitable for monitoring multiple steps and walking with changes in walking direction.
However, no system is available that allows the assessment of dynamic stability in a daily-life setting and over multiple steps. Such a system would require simultaneous ambulatory estimation of foot orientations, relative foot positions and ground reaction forces. The objective of this study is to develop a method to assess balance dynamics during gait in stroke patients in an ambulatory setting and to relate our balance metrics to standardized clinical stability parameters in order to assess the additional information they provide. For this purpose, shoes integrated with force and inertial sensing and ultrasound transducers were combined into a wearable gait measurement system. Quantitative parameters such as the dynamic stability margin, as well as additional temporal, kinematic and kinetic gait parameters, were estimated using the system. Finally, these parameters were related to a frequently used clinical assessment scale of balance, the Berg balance scale (BBS), to evaluate how well the different parameters can be predicted from clinically assessed levels of balance.

The ambulatory measurement system used in this study consists of Xsens ForceShoes™ (Xsens Technologies B.V., Enschede, The Netherlands) additionally equipped with ultrasound sensors. All sensors are integrated into an extra sole underneath a pair of sandals. Per foot, each forefoot and heel segment contains one inertial measurement unit (IMU) and one 3D force/moment sensor (see Fig. 1). Only data of the IMUs in the forefoot segments were used. Data of the two IMUs and four force/moment sensors were collected at a sample frequency of 50 Hz. The distance between the feet was estimated using two ultrasound transducers mounted near the IMUs in the forefoot segments (Fig. 1). The distance between both shoes was estimated by measuring the time of flight of a 40 kHz ultrasound pulse sent from one shoe to the other. Accurate distance measurements were made approximately 13 times per second [22].

Fig. 1 Measurement setup. In this study Xsens ForceShoes™ were used: sandals with, underneath each heel and forefoot segment, one inertial measurement unit (IMU, dashed orange square) and one force/moment sensor (FMS, dashed green square). Near the IMU in each forefoot an ultrasound transducer (US, dashed red circle) was mounted. Kinematic and kinetic data were used to estimate the position of the center of mass (CoM) relative to the position of both feet, the projection of the center of mass on the ground (CoM', blue circle) within the base of support (BoS) and the extrapolated CoM (XCoM', green circle).

For this study, seventeen stroke patients from Roessingh rehabilitation hospital in Enschede, the Netherlands, were recruited. Recruited participants were between 35 and 75 years of age and had a hemiparesis as a result of a single unilateral ischemic or hemorrhagic stroke, diagnosed at least six months earlier. Exclusion criteria were inability to follow instructions, inability to understand questionnaires, a medical history with more than one stroke, or another medical history that might negatively influence the participant's walking pattern. The study protocol is a subset of a larger protocol approved by the local medical ethics committee (METC Twente, the Netherlands, P12-27) [11]. The whole study is registered in the Netherlands Trial Registry, NTR3636. All participants signed written informed consent before participating.
Two participants with severely affected lower-extremity function were not able to complete the task without assistance due to unstable walking patterns; the corresponding measurements were excluded from the analysis. Data of two other participants were not fully recorded because of a broken cable during the session or sensors that were not functioning properly. Thirteen participants remained (8 male, 5 female) with an average age of 64.1 (SD ± 8.7) years, 2.4 (SD ± 1.8) years post stroke. Participant-specific information is reported in Table 1 and includes gender, age, number of years post stroke, dominant and affected side, weight, height, BBS score and whether or not a walking aid is normally used during activities of daily living. Participants were ranked from low to high BBS score.

Table 1 General participant characteristics

Experimental protocol
Participants performed a timed 10-meter walk twice, at a self-selected comfortable pace along a 10-meter path [23], while wearing the instrumented shoes and without the use of any walking aid. To relate the results of the new setup to a frequently used clinical assessment scale of balance, participants' balance was evaluated using the Berg balance scale (BBS) [24]. All assessments were performed by the same technical physician, who has adequate clinical expertise to perform the assessment.

Kinematic data
All data were processed offline and analyzed using MATLAB® (MathWorks Inc., Natick, MA). Three-dimensional (3D) positions (p), velocities (v) and orientations (R) were estimated using an extended Kalman filter (upper part of Fig. 2). The filter fuses ultrasound range estimates (\(d_{US}\)), essential for estimating relative foot positions, with 3D accelerations (\(y_{Acc}\)) and 3D angular velocities (\(y_{Gyr}\)), with the goal of estimating the state vector:

$$ \boldsymbol{x} = \big(\ \boldsymbol{p}_{r}\ \ \boldsymbol{p}_{l}\ \ \boldsymbol{v}_{r}\ \ \boldsymbol{v}_{l}\ \ \boldsymbol{\theta}_{\epsilon,r}\ \ \boldsymbol{\theta}_{\epsilon,l}\ \ \boldsymbol{b}_{\epsilon,r}\ \ \boldsymbol{b}_{\epsilon,l}\ \big)^{T} $$

with the position, velocity, orientation error (\(\boldsymbol{\theta}_{\epsilon}\)) and gyroscope bias error (\(\boldsymbol{b}_{\epsilon}\)) of each IMU. The subscripts r and l indicate the right and left foot, respectively.

Fig. 2 Sensor fusion. The upper part (kinematics) is an extended Kalman filter that fuses the signals from the accelerometer (\(y_{Acc}\)) and gyroscope (\(y_{Gyr}\)) and applies zero-velocity, height and ultrasound range measurement updates (\(d_{US}\)). Outputs are 3D position (p), velocity (v) and orientation (R) estimates of the forefoot segments. For kinetic estimation, data from the 3D force/moment sensors (\(y_{F}\) and \(y_{M}\)) are used to estimate the 3D CoM. Subscript k indicates the sample index. The estimation frequency is 50 Hz and ultrasound range updates are applied at approximately 13 Hz.

The filter starts with an initialization in which the initial positions and orientations are set based on the accelerometer signal and the initial ultrasound range, assuming the patient is standing with both feet flat on the floor. When a step is made, the 3D position, velocity and orientation of the right and left forefoot are predicted using the IMU data. After this prediction, two measurement updates are performed. First, height and velocity are updated to be zero when the foot is in contact with the ground, which is detected using the method presented by Skog and others [25].
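To give a concrete sense of how the zero-velocity and height updates constrain integration drift, the following is a minimal Python sketch (the published analysis was done in MATLAB and uses a full error-state extended Kalman filter that also fuses the ultrasound ranges; none of that machinery is reproduced here). It dead-reckons a single foot from gravity-compensated, global-frame accelerations sampled at 50 Hz and simply resets velocity and height to zero whenever a stance detector flags ground contact. The function name, the synthetic signals and the hard-reset style of update are illustrative assumptions, not part of the original pipeline.

```python
# Minimal sketch (not the authors' full EKF): dead-reckoning of one foot from
# gravity-compensated global-frame accelerations at 50 Hz, with zero-velocity
# and zero-height resets whenever a stance detector flags ground contact.
import numpy as np

FS = 50.0            # sample frequency [Hz], as in the described system
DT = 1.0 / FS

def integrate_foot_trajectory(acc_global, stance):
    """acc_global: (N, 3) free acceleration in the global frame [m/s^2].
    stance: (N,) boolean, True while the foot is detected on the ground.
    Returns (N, 3) position and (N, 3) velocity estimates."""
    n = len(acc_global)
    vel = np.zeros((n, 3))
    pos = np.zeros((n, 3))
    for k in range(1, n):
        # Prediction: strapdown integration of acceleration.
        vel[k] = vel[k - 1] + acc_global[k - 1] * DT
        pos[k] = pos[k - 1] + vel[k - 1] * DT + 0.5 * acc_global[k - 1] * DT**2
        if stance[k]:
            # Measurement updates during ground contact: the paper applies
            # zero-velocity and zero-height updates; here they are hard resets.
            vel[k] = 0.0
            pos[k, 2] = 0.0   # height measured from the floor
    return pos, vel

# Example with synthetic data: standing, one brief forward push, standing again.
t = np.arange(0, 4, DT)
acc = np.zeros((len(t), 3))
acc[100:110, 0] = 2.0              # brief forward acceleration during swing
stance = np.ones(len(t), dtype=bool)
stance[100:130] = False            # foot off the ground during the "step"
pos, vel = integrate_foot_trajectory(acc, stance)
print("final forward displacement [m]:", round(pos[-1, 0], 3))
```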
Second, when an accurate ultrasound range estimate is available, estimated using

$$ d_{US} = v_{s} \cdot t_{ToF} $$

based on the speed of sound (\(v_{s}\)) and the time of flight (\(t_{ToF}\)) of an ultrasound pulse between both transducers, the position of the (last) moving foot is updated according to the estimated range. This estimated range is equal to the distance between both feet:

$$ ||\ \boldsymbol{p}_{r} - \boldsymbol{p}_{l}\ || = d_{US} $$

Subsequently, the orientation and gyroscope bias are updated based on the error states. More details can be found in [5]. These algorithms were validated in healthy subjects using an optical reference system. Mean absolute differences in estimated step lengths and step widths were below 2 cm [5] and mean absolute differences in estimated feet distances were below 1 cm [22].

Kinetic data
The trajectory of the center of pressure per foot (\(CoP_{r}\) or \(CoP_{l}\)) in the global frame was estimated using the measured forces (\(y_{F}\)) and moments (\(y_{M}\)) of the two force/moment sensors of one foot:

$$ CoP_{i} = \left(\begin{array}{c} -\frac{M_{y,i}}{F_{z,i}} \\ \frac{M_{x,i}}{F_{z,i}} \\ 0 \end{array}\right) $$

in which subscript i indicates the right or the left foot, \(F_{z,i}\) is the vertical component of the GRF, and \(M_{x,i}\) and \(M_{y,i}\) are the horizontal components of the moments [14]. After combining the (relative) foot positions (p) and the estimated CoP trajectories of each foot, the total CoP was estimated by weighting the CoP trajectories of the right (\(CoP_{r}\)) and left (\(CoP_{l}\)) foot by the relative magnitude of the GRF of the right (\(\boldsymbol{F}_{r}\)) and left (\(\boldsymbol{F}_{l}\)) foot:

$$ CoP = \frac{||\boldsymbol{F}_{r}||}{||\boldsymbol{F}_{l} + \boldsymbol{F}_{r}||}{CoP}_{r} + \frac{||\boldsymbol{F}_{l}||}{||\boldsymbol{F}_{l} + \boldsymbol{F}_{r}||}{CoP}_{l} $$

Knowing the relative foot positions and the position of the total CoP, the position of the CoM was obtained using the method of Schepers and others [14]. In this method the CoM position estimate is the sum of the low-pass-filtered component of the total CoP movement and the high-pass-filtered component of the double-integrated CoM acceleration (lower part of Fig. 2). Schepers and others evaluated their method, which assumes a known relative distance between both feet (\(||\boldsymbol{p}_{r} - \boldsymbol{p}_{l}||\)), by comparing it with an optical reference system in seven stroke patients. They found small positional differences between methods; RMS values were equal to or below 2 cm (± 0.7 cm) in all directions [14].

Parameter selection
Hemiparetic stroke patients use different walking strategies to stay comfortable and in balance. To compensate for the reduced coordination of their affected side, they often reduce their walking speed, take shorter steps, spend a longer stance time on their non-affected side and lean more towards their non-affected side [26–29]. This results in a more asymmetrical walking pattern in the more affected patients. Using the complete kinematic and kinetic reconstruction during walking, temporal, kinematic and kinetic parameters can be calculated to quantify these typical walking patterns. First, walking speed was calculated as the average velocity of both feet during walking, estimated with the extended Kalman filter.
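As an illustration of the kinetic equations above, the sketch below implements the per-foot CoP and the GRF-weighted total CoP in Python. It assumes the forces and moments are already expressed in a common frame and that each foot's CoP is translated into the global frame by adding that foot's estimated position; the variable names and the numbers in the example are made up, not study data.

```python
# Minimal sketch of the per-foot CoP and the GRF-weighted total CoP. Assumes
# forces/moments are expressed in a common frame; per-foot CoP coordinates are
# translated to the global frame by adding the estimated foot position p_i.
import numpy as np

def cop_single_foot(force, moment):
    """force, moment: length-3 arrays (Fx, Fy, Fz) and (Mx, My, Mz).
    Returns the CoP in the horizontal plane of that foot's frame."""
    fz = force[2]
    return np.array([-moment[1] / fz, moment[0] / fz, 0.0])

def cop_total(force_r, moment_r, p_r, force_l, moment_l, p_l):
    """Weight the per-foot CoP positions by the relative GRF magnitudes."""
    cop_r = p_r + cop_single_foot(force_r, moment_r)
    cop_l = p_l + cop_single_foot(force_l, moment_l)
    w_r = np.linalg.norm(force_r)
    w_l = np.linalg.norm(force_l)
    return (w_r * cop_r + w_l * cop_l) / (w_r + w_l)

# Example: most of the load on the right foot during double stance.
force_r, moment_r = np.array([0, 0, 600.0]), np.array([6.0, -12.0, 0])
force_l, moment_l = np.array([0, 0, 200.0]), np.array([2.0, -4.0, 0])
p_r, p_l = np.array([0.6, -0.1, 0.0]), np.array([0.0, 0.1, 0.0])
print(cop_total(force_r, moment_r, p_r, force_l, moment_l, p_l))
```

With measured 50 Hz force/moment streams, the same two functions would be applied sample by sample to obtain the total CoP trajectory that feeds the CoM estimate.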
As a reference for the current protocol, walking speed was also estimated by measuring the duration of the 10-meter walk with a stopwatch; this estimate includes gait initiation. Next, stance times were calculated for the affected and non-affected side. Stance time was defined as the period from first contact of the foot (heel or forefoot) with the ground until the end of contact of the foot. Contact of the forefoot and heel segments with the ground was evaluated per segment at any time by thresholding the magnitude of the 3D force at 20 N. From the estimated 3D positions of the left and right foot, step lengths (LSL and RSL, respectively) were calculated using the method described by Huxham and others [30] (see Fig. 3).

Fig. 3 Top-down view of foot positions from four steps of participant number 3. For both the left and right foot, the step lengths (LSL and RSL, respectively) are calculated using triangles obtained from the positions during stance, indicated in the left part of the figure. The CoM' and its trajectory (blue) and the XCoM' (green), together with the front line of the BoS just before heel-off (pink), are shown on the right. The shortest distance from the XCoM' to the front line of the BoS (in the walking direction, to the right in this figure) is calculated (DSM) and indicated in the figure with a black line.

Asymmetries in stance times and step lengths between the non-affected and the affected side were calculated using

$$ SI = \frac{p_{A} - p_{N}}{p_{N}} $$

with SI the symmetry index value, \(p_{A}\) the parameter value for the affected side and \(p_{N}\) the parameter value for the non-affected side. Larger positive and negative values indicate a greater asymmetry towards the affected and non-affected side, respectively. SI values equal to zero indicate perfect symmetry.

Furthermore, the position and velocity of the CoM relative to the BoS were evaluated. The participant's BoS was defined as the area between all foot segments in contact with the ground. Knowing the position of the CoM (relative to the BoS) and the velocity of the CoM, the XCoM' was calculated as [10]:

$$ \text{XCoM'} = \text{CoM'} + \frac{v_{\text{CoM}}}{\omega_{0}} $$

with CoM' the position of the vertical projection of the CoM on the ground, \(v_{\text{CoM}}\) the velocity of the CoM in the transversal plane and \(\omega_{0} = \sqrt{g/l_{0}}\), in which g = 9.81 m/s² (earth gravitational acceleration) and \(l_{0}\) the greater trochanter height, which we estimated as a proportion of total body height [31]. Knowing the XCoM' relative to the BoS, a dynamic stability margin (DSM) was calculated as the shortest distance from the XCoM' to the front line (in the walking direction) of the BoS. When one foot is in swing phase, i.e., when the BoS is restricted to the size of only the other foot, no DSM estimate was made. We define negative DSM values as dynamically stable (the XCoM' is within the BoS) and positive DSM values as dynamically unstable (the XCoM' is outside the BoS). Figure 3 shows a top-down view of four consecutive steps of a walking trial of participant #3. In this figure the XCoM' and the front line of the BoS just before heel-off are indicated, including the shortest distance between them. To exclude gait initiation and termination steps, the first two and last two steps of each of the two walking trials per participant were removed. For both walking trials of each participant, the mean of all parameters was calculated per side.
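To make the symmetry index, the extrapolated CoM and the DSM sign convention concrete, here is a small Python sketch. It assumes walking along the +x axis, so the front line of the BoS reduces to a single forward coordinate; the leg length, CoM state and parameter values are illustrative and not taken from the study data.

```python
# Minimal sketch of the symmetry index, the extrapolated CoM and the DSM sign
# convention (negative DSM: XCoM' inside the BoS). Walking is assumed along
# the +x axis; all numbers below are illustrative placeholders.
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def symmetry_index(p_affected, p_nonaffected):
    """SI = (p_A - p_N) / p_N; 0 means perfect symmetry."""
    return (p_affected - p_nonaffected) / p_nonaffected

def extrapolated_com(com_xy, vcom_xy, leg_length):
    """XCoM' = CoM' + v_CoM / omega_0, with omega_0 = sqrt(g / l0)."""
    omega0 = np.sqrt(G / leg_length)
    return np.asarray(com_xy) + np.asarray(vcom_xy) / omega0

def dynamic_stability_margin(xcom_xy, bos_front_x):
    """Negative: XCoM' behind the front line of the BoS (dynamically stable).
    Positive: XCoM' ahead of it (an extra step is needed)."""
    return xcom_xy[0] - bos_front_x

l0 = 0.92                        # greater trochanter height [m], assumed
xcom = extrapolated_com(com_xy=[0.35, 0.02], vcom_xy=[1.1, 0.0], leg_length=l0)
print("DSM [m]:", round(dynamic_stability_margin(xcom, bos_front_x=0.72), 3))
print("stance-time SI:", round(symmetry_index(0.78, 0.66), 2))
```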
The mean DSM was not calculated per side. To be able to compare different participants, parameters were normalized to body size as described by Hof and others [32]: velocity values were normalized to \(v_{0} = \sqrt{g l_{0}}\), stance times to \(t_{0} = \sqrt{l_{0}/g}\), and step length and DSM values to \(l_{0}\). Linear regression analysis using Pearson's correlation coefficient (r) was performed to relate the different parameters to the clinically assessed levels of balance. The temporal and kinematic parameters (assessed using the instrumented shoes) were taken as dependent variables, and the BBS score as the independent variable. When investigating the correlation between the BBS and symmetry indices, the absolute value was used, to disregard the side towards which the asymmetry occurs. Statistical significance was defined as a p-value of less than 0.05. The explained variance (R²) was calculated and considered low when below 0.5, i.e., when less than 50 % of the variance can be explained by the linear regression model.

For all participants the normalized walking speed in the walking direction (\(v_{n}\)) was estimated. Table 2 shows the mean velocity during the selected steps for each participant, as estimated by the extended Kalman filter (v). As a reference, the velocity estimated with the stopwatch (\(v_{ref}\)) over the complete timed 10-meter walk is also listed. More-affected participants with a lower BBS score show a significantly lower walking speed (r = 0.71, p < 0.01). All correlation values (r) of the different parameters with the BBS, their significance levels and the explained variance (R²) are presented in Table 3.

Table 2 Velocity for each participant
Table 3 Relation between quantifying parameters and BBS

Figure 4 shows the dynamic stability margin of participant #3 over time, during the selected steps of a single walking trial. If one foot is in swing phase no DSM estimate is made, which is represented as a gap in Fig. 4. The four steps shown in Fig. 3 are indicated by the rectangular box and a zoom of these steps is shown in the inset of the figure. The mean DSM of this trial is 0.00 m.

Fig. 4 Example of the dynamic stability margin over time: the DSM evaluated during the double stance phases for one walking trial of participant number 3. Negative values indicate that the XCoM' is inside the BoS. Mean DSM for this trial was 0.00 m. The inset is a magnification (indicated by the box) and corresponds to the time window (12.00–14.75 s) of the steps shown in Fig. 3.

The normalized mean DSM values were estimated for both walking trials of each participant and related to the participant's average normalized walking speed (\(v_{n}\)), as shown in Fig. 5. The average DSM is positive, i.e., the XCoM' is outside the BoS. Especially participants with lower BBS scores show a lower walking speed and small positive mean DSM values. No significant correlation between BBS and DSM was found (r = 0.41, p = 0.167).

Fig. 5 Mean DSM versus velocity. Mean DSM (normalized to \(l_{0}\)) versus velocity (normalized to \(v_{0}\)) estimated by the filter (\(v_{n}\) in Table 2), for all 13 participants. Numbers indicate participant identification numbers, which are ranked from low to high BBS score. Filled data markers are those participants with a BBS score below or equal to 45.

Results of the mean normalized stance times for the affected side versus the non-affected side are shown in Fig. 6.
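The normalization and the correlation analysis can be summarized in a few lines of Python. The sketch below applies Hof's dimensionless scaling and computes Pearson's r (with R² = r²) using scipy; the BBS and speed arrays are placeholder values, not the values reported in Tables 2 and 3.

```python
# Minimal sketch of the body-size normalization (Hof) and the correlation
# analysis used here (Pearson's r, p < 0.05, R^2 = r^2). The numeric arrays
# below are illustrative placeholders, not values from Tables 2 and 3.
import numpy as np
from scipy.stats import pearsonr

G = 9.81

def normalize(v=None, t=None, length=None, l0=0.92):
    """Return dimensionless velocity, time and length following Hof [32]:
    v_n = v / sqrt(g*l0), t_n = t / sqrt(l0/g), length_n = length / l0."""
    out = {}
    if v is not None:
        out["v_n"] = v / np.sqrt(G * l0)
    if t is not None:
        out["t_n"] = t / np.sqrt(l0 / G)
    if length is not None:
        out["length_n"] = length / l0
    return out

# Illustrative per-participant values: BBS score vs normalized walking speed.
bbs = np.array([34, 40, 44, 46, 50, 52, 54, 55, 56, 56, 56, 56, 56])
v_n = np.array([0.18, 0.22, 0.25, 0.28, 0.30, 0.33, 0.31, 0.35,
                0.36, 0.34, 0.38, 0.40, 0.39])

r, p = pearsonr(bbs, v_n)
print(f"r = {r:.2f}, p = {p:.3f}, R^2 = {r**2:.2f}")
print(normalize(v=1.0, t=0.7, length=0.55))
```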
Overall, participants show a longer stance time on their non-affected leg, and participants with lower BBS scores (below 45) show longer stance times on both sides. Furthermore, more asymmetry in stance times is visible for the participants with a lower BBS score (see also Table 2). Although participants 6 and 9 (who have higher BBS scores) show large asymmetries in stance time as well, the asymmetry in stance times correlates significantly with the BBS score (r = −0.58, p < 0.05).

Fig. 6 Stance time of the affected side versus the non-affected side. Mean stance time for the affected versus the non-affected side (normalized to \(t_{0} = \sqrt{l_{0}/g}\)). Numbers indicate participant identification numbers, which are ranked from low to high BBS score. Filled data markers are those participants with a BBS score below or equal to 45. Both trials of a patient are averaged. The line x = y is plotted to indicate a symmetric walking pattern.

The mean normalized step lengths for the affected side versus the non-affected side are shown in Fig. 7. The step lengths are relatively symmetric, except for the two trials of participant number 3 (approximately 0.4 versus 0.6 normalized step length for the affected and non-affected side, respectively). Overall, the step lengths are smaller for participants with a lower BBS score. The asymmetry in step length (see Table 2) is not significantly correlated with the BBS score (r = −0.51, p = 0.074).

Fig. 7 Step length of the affected side versus the non-affected side. Mean step length for the affected versus the non-affected side (normalized to leg length \(l_{0}\)). Numbers indicate participant identification numbers, which are ranked from low to high BBS score. Filled data markers are those participants with a BBS score below or equal to 45. Both trials of a patient are averaged. The line x = y is plotted to indicate a symmetric walking pattern.

The objective of this study was to develop a method to quantitatively assess balance dynamics during gait in stroke patients in an ambulatory setting. Our balance metrics were related to standardized clinical stability parameters (i.e., BBS scores) in order to assess the additional information they provide. By combining Xsens ForceShoes™ and ultrasound modules, we were able to completely reconstruct the kinetics and kinematics of both feet, as well as the position of the CoM relative to both feet, during walking, without the use of a lab-restricted setup. All underlying physical parameters of the presented system have been validated against a gold standard [5, 14, 22]. Although no part of the BBS includes an assessment of walking, a high correlation was found between walking speed and BBS scores. As in previous studies [27, 33], participants show a higher walking speed with increasing BBS score. During walking, step lengths and stance times for both the affected and the non-affected side show correlations with the BBS scores. Participants with a higher BBS score show an increase of step length on both sides and a decrease of stance times on both sides. A significant negative relation between stance-time asymmetry and BBS was found: stance times of both sides become more symmetrical with a higher BBS score. However, no significant relation between step-length symmetry and BBS was found. These results contradict earlier findings of Lewek and others [16].
Compared to our study, they report slightly different correlation values, from which they conclude that there is a negative (weak-to-moderate) correlation between BBS and step-length asymmetry and no relation between BBS and stance-time asymmetry, measured using a sensorised walkway. These different outcomes could be related to, for example, differences in the average age and the number of months post stroke of the participant groups, or to their larger sample size compared to our study. The sample size in our study is relatively small and the participants' BBS scores are narrowly distributed, which is a limitation of our study.

By extrapolating the CoM' using its velocity, the XCoM' was estimated. This XCoM' can be used to examine stability during walking by evaluating the distance between the XCoM' and the front line of the BoS (i.e., the DSM). Overall, participants with higher BBS scores show larger (as expected, although not significantly larger) average DSM values. More-affected participants (especially those with a BBS score of 45 or lower, who have a higher risk of falling) show lower velocities and smaller, almost negative average DSM values. In the case of a negative mean DSM value during walking, a person is dynamically stable, which means that after each step made, no extra step is needed to prevent a fall (on average). Nevertheless, positive DSM values, i.e., moments of dynamic instability, are necessary for forward progression. Participants with a lower BBS score might decrease their average DSM value as a conservative balance strategy in order to be more stable during walking. However, this may cause interruptions in walking and a possible risk of falling backwards [34]. Furthermore, a lower walking velocity may be less efficient [35].

By estimating walking speed, asymmetry in walking and especially the size of a patient's DSM during walking, it may be possible to objectively follow improvement or deterioration of daily-life ambulation. These parameters offer additional information not only on the activity level (as with the BBS) but also on the level of body function. This information may be of importance during rehabilitation training, because it provides extra information at the impairment level (during a functional task). Monitoring these parameters adds insight into whether or not changes on the ability level are associated with changes on the impairment level, thus providing insight into whether improvement is due to restoration of body function or whether these changes are related to compensation and adaptive strategies used to overcome problems at the impairment level. The ability to control the position of the XCoM' with respect to the BoS might, for instance, be a compensatory mechanism for preventing falls during walking [34]. Furthermore, using the presented system it is possible to gather patient-specific information. Although most parameters are significantly related to the BBS when evaluated in a group of stroke patients, the explained variances (R², see Table 3) are low. Therefore, this patient-specific information estimated using the described setup cannot accurately be predicted by just evaluating the BBS score. The additional information, such as the average walking speed and DSM value during walking, along with clinically evaluated balance scores, can be used as guidance for patient-specific clinical practice.
For instance, if a patient shows high clinically evaluated balance scores but small DSM values, an increased walking speed may be advised. Alternatively, a patient who shows large DSM values but low clinical balance scores might have a higher risk of falling and should be advised to adapt their walking pattern to their balance capacity. This approach should be evaluated in future research to demonstrate the effectiveness of using these parameters for guidance in rehabilitation practice. In addition to the average DSM values over multiple double stance phases, the time course of the DSM value (as shown in Fig. 4) may provide additional insight into walking balance and the continuity of the walking pattern. In the case of a negative DSM value at the beginning of a double stance phase, gait can be terminated without an additional step. However, if the minimum value of the DSM during the double stance phase is positive, an additional step is always needed before gait can be terminated.

Future research should focus on reducing the size of the sensors integrated in the instrumented shoes. Although previous research found only a limited influence on the walking patterns of patients with knee osteoarthritis while wearing these instrumented shoes [21], walking might be more exhausting and the chance of tripping higher due to the design of the shoes. Currently the shoes are relatively heavy (approximately 1 kg per shoe) and the sole is relatively high (approximately 2.5 cm). New technical developments in the use of smaller and lighter force/moment sensors [36] integrated in shoes, together with the already widely available smaller inertial sensors, may result in instrumented shoes that can be used in daily life [7]. In addition, the number of sensors may be reduced depending on the actual research question. We presented a system using one IMU, two force/moment sensors and an ultrasound transducer per shoe (data of the inertial sensors in the heel part of the shoes were not used), which are all required for the dynamic balance parameters shown in Figs. 4 and 5. Using a reduced set of sensors, several relevant objective parameters can still be determined. For example, for the estimation of stance and swing times (as in Fig. 6), a system with only inertial sensors or only 1D force sensors could be used. For the evaluation of step lengths and step widths (as in Fig. 7), a system with inertial sensors in combination with ultrasound transducers suffices, as was shown in [37]. Although not used in the presented methods, the inertial sensor data of the heel segments can be used to additionally evaluate the orientations of the heel segments, the rolling of the feet or heel contact times. Besides the evaluation of straight-line walking, it is possible to evaluate other phases of walking, such as gait initiation and termination, standing, transfers, turning and non-repetitive walking patterns. Especially in an ambulatory setting, the stability of a stroke patient during these phases might be of interest, because of the high incidence of falls during these non-stationary walking phases [38]. In addition to stroke patients, the presented system might be of interest for other groups of patients with difficulties in walking (e.g. before and after knee or hip surgery).

We demonstrated a method to assess walking balance in stroke patients under ambulatory conditions. Using the described setup, objective evaluation of walking is no longer restricted to a lab setting. Quantitative parameters can be used to describe the walking patterns of the individual patient.
DSM values and the asymmetry in step lengths are not significantly correlated with participants' BBS scores. Walking velocity, step lengths of both feet, stance times of both feet and the asymmetry in stance time are significantly correlated with participants' BBS score, although the explained variance of walking velocity, stance time on the affected side and asymmetry in stance time is limited to approximately 0.5. The presented system provides important information about walking balance in addition to parameters describing the walking pattern of an individual subject, which is only partly predictable in the individual person using the BBS.

Abbreviations: BBS: Berg balance scale; BoS: base of support; CoM: center of mass; CoP: center of pressure; DSM: dynamic stability margin; GRF: ground reaction force; IMU: inertial measurement unit; LSL: left step length; RSL: right step length; XCoM: extrapolated center of mass.

Tyson SF, Hanley M, Chillala J, Selley A, Tallis RC. Balance disability after stroke. Phys Ther. 2006; 86(1):30–8. Pollock CLa, Eng JJb, Garland SJc. Clinical measurement of walking balance in people post stroke: a systematic review. Clin Rehabil. 2011; 25(8):693–708. doi:10.1177/0269215510397394. Kwakkel G, Kollen B, Lindeman E. Understanding the pattern of functional recovery after stroke: facts and theories. Restor Neurol Neurosci. 2004; 22(3–5):281–300. Veltink PH, van Meulen FB, van Beijnum B-JF, Klaassen B, Hermens HJ, Droog E, Weusthof M, Lorussi F, Tognetti A, Reenalda J, Nikamp CDM, Baten C, Buurke JH, Held J, Luft A, Luinge H, Toma GD, Mancusso C, Paradiso R. Daily-life tele-monitoring of motor performance in stroke survivors. In: 13th International Symposium on 3D Analysis of Human Movement, 3D-AHM 2014. Lausanne, Switzerland: EPFL: 2014. p. 159–62. http://doc.utwente.nl/91439/. Accessed 12 Oct 2015. Weenk D, Roetenberg D, van Beijnum B-JF, Hermens HJ, Veltink PH. Ambulatory estimation of relative foot positions by fusing ultrasound and inertial sensor data. IEEE Trans Neural Syst Rehabil Eng. 2015; 23(5):817–26. doi:10.1109/TNSRE.2014.2357686. Gutierrez-Farewik EM, Bartonek A, Saraste H. Comparison and evaluation of two common methods to measure center of mass displacement in three dimensions during gait. Hum Mov Sci. 2006; 25(2):238–56. doi:10.1016/j.humov.2005.11.001. Bergmann JHM, McGregor AH. Body-worn sensor design: What do patients and clinicians want? Ann Biomed Eng. 2011; 39(9):2299–312. doi:10.1007/s10439-011-0339-9. Lugade V, Lin V, Chou LS. Center of mass and base of support interaction during gait. Gait & Posture. 2011; 33(3):406–11. doi:10.1016/j.gaitpost.2010.12.013. Lugade V, Kaufman K. Dynamic stability margin using a marker based system and tekscan: A comparison of four gait conditions. Gait & Posture. 2014; 40(1):252–4. doi:10.1016/j.gaitpost.2013.12.023. Hof AL, Gazendam MGJ, Sinke WE. The condition for dynamic stability. J Biomech. 2005; 38(1):1–8. doi:10.1016/j.jbiomech.2004.03.025. van Meulen FB, Reenalda J, Buurke JH, Veltink PH. Assessment of daily-life reaching performance after stroke. Ann Biomed Eng. 2015; 43(2):478–86. doi:10.1007/s10439-014-1198-y. Sabatini AM, Martelloni C, Scapellato S, Cavallo F. Assessment of walking features from foot inertial sensing. IEEE Trans Biomed Eng. 2005; 52(3):486–94. doi:10.1109/TBME.2004.840727. Rebula JR, Ojeda LV, Adamczyk PG, Kuo AD. Measurement of foot placement and its variability with inertial sensors. Gait & Posture. 2013; 38(4):974–80. doi:10.1016/j.gaitpost.2013.05.012. Schepers HM, van Asseldonk E, Buurke JH, Veltink PH.
Ambulatory estimation of center of mass displacement during walking. IEEE Trans Biomed Eng. 2009; 56(4):1189–1195. doi:10.1109/TBME.2008.2011059. Menz HB, Latt MD, Tiedemann A, Kwan MMS, Lord SR. Reliability of the gaitrite\(^{\circledR }\) walkway system for the quantification of temporo-spatial parameters of gait in young and older people. Gait & Posture. 2004; 20(1):20–5. doi:10.1016/S0966-6362(03)00068-7. Lewek MD, Bradley CE, Wutzke CJ, Zinder SM. The relationship between spatiotemporal gait asymmetry and balance in individuals with chronic stroke. J Appl Biomech. 2014; 30(1):31–6. doi:10.1123/jab.2012-0208. Schepers HM, Koopman H, Veltink PH. Ambulatory assessment of ankle and foot dynamics. IEEE Trans Biomed Eng. 2007; 54(5):895–902. doi:10.1109/TBME.2006.889769. Liu T, Inoue Y, Shibata K. A wearable ground reaction force sensor system and its application to the measurement of extrinsic gait variability. Sensors. 2010; 10(11):10240–10255. doi:10.3390/s101110240. Rouhani H, Favre J, Crevoisier X, Aminian K. Ambulatory assessment of 3d ground reaction force using plantar pressure distribution. Gait & Posture. 2010; 32(3):311–6. doi:10.1016/j.gaitpost.2010.05.014. Cordero AF, Koopman HJFM, van der Helm FCT. Use of pressure insoles to calculate the complete ground reaction forces. J Biomech. 2004; 37(9):1427–1432. doi:10.1016/j.jbiomech.2003.12.016. van den Noort J, van der Esch M, Steultjens M, Dekker J, Schepers M, Veltink P, Harlaar J. Influence of the instrumented force shoe on gait pattern in patients with osteoarthritis of the knee. Med Biol Eng Comput. 2011; 49(12):1381–1392. doi:10.1007/s11517-011-0818-z. Weenk D, van Beijnum B-JF, Droog A, Hermens HJ, Veltink PH. Ultrasonic range measurements on the human body. In: Seventh International Conference on Sensing Technology, ICST 2013. Wellington, New Zealand: IEEE: 2013. p. 151–6, doi:10.1109/ICSensT.2013.6727633. Smith MT, Baer GD. Achievement of simple mobility milestones after stroke. Arch Phys Med Rehabil. 1999; 80(4):442–7. doi:10.1016/s0003-9993(99)90283-6. Berg K. Measuring balance in the elderly: preliminary development of an instrument. Physiother Can. 1989; 41(6):304–11. doi:10.3138/ptc.41.6.304. Skog I, Handel P, Nilsson JO, Rantakokko J. Zero-velocity detection – an algorithm evaluation. IEEE Trans Biomed Eng. 2010; 57(11):2657–666. doi:10.1109/TBME.2010.2060723. Olney SJ, Richards C. Hemiparetic gait following stroke. part i: Characteristics. Gait & Posture. 1996; 4(2):136–48. doi:10.1016/0966-6362(96)01063-6. Chen CL, Chen HC, Tang SF-T, Wu CY, Cheng PT, Hong WH. Gait performance with compensatory adaptations in stroke patients with different degrees of motor recovery. Am J Phys Med Rehabil. 2003; 82(12):925–35. doi:10.1097/01.phm.0000098040.13355.b5. Geurts ACH, de Haart M, van Nes IJW, Duysens J. A review of standing balance recovery from stroke. Gait & Posture. 2005; 22(3):267–81. doi:10.1016/j.gaitpost.2004.10.002. Hsu AL, Tang PF, Jan MH. Analysis of impairments influencing gait velocity and asymmetry of hemiplegic patients after mild to moderate stroke. Arch Phys Med Rehabil. 2003; 84(8):1185–1193. doi:10.1016/S0003-9993(03)00030-3. Huxham F, Gong J, Baker R, Morris M, Iansek R. Defining spatial parameters for non-linear walking. Gait & Posture. 2006; 23(2):159–63. doi:10.1016/j.gaitpost.2005.01.001. Drillis R, Contini R, Bluestein M. Body segment parameters. Artif Limbs. 1964; 8(1):44–66. Hof AL. Scaling gait data to body size. Gait & Posture. 1996; 4(3):222–3. doi:10.1016/0966-6362(95)01057-2. 
Liston RAL, Brouwer BJ. Reliability and validity of measures obtained from stroke patients using the balance master. Arch Phys Med Rehabil. 1996; 77(5):425–30. doi:10.1016/S0003-9993(96)90028-3. Hak L, Houdijk H, van der Wurff P, Prins MR, Mert A, Beek PJ, van Dieën JH. Stepping strategies used by post-stroke individuals to maintain margins of stability during walking. Clin Biomech. 2013; 28(9–10):1041–1048. doi:10.1016/j.clinbiomech.2013.10.010. Orendurff MS, Segal AD, Klute GK, Berge JS, Rohr ES, Kadel NJ. The effect of walking speed on center of mass displacement. J Rehabil Res Dev. 2004; 41(6A):829–34. doi:10.1682/JRRD.2003.10.0150. Brookhuis RA, Sanders RGP, Ma K, Lammerink TSJ, de Boer MJ, Krijnen GJM, Wiegerink RJ. Miniature large range multi-axis force-torque sensor for biomechanical applications. J Micromech Microeng. 2015; 25(2):025012. doi:10.1088/0960-1317/25/2/025012. Weenk D, van Meulen FB, van Beijnum B-JF, Veltink PH. Ambulatory gait analysis in stroke patients using ultrasound and inertial sensors. In: 13th International Symposium on 3D Analysis of Human Movement, 3D-AHM 2014. Lausanne, Switserland: EPFL: 2014. p. 385–8. http://doc.utwente.nl/91437/. Accessed 12 Oct 2015. Czernuszenko A, Czlonkowska A. Risk factors for falls in stroke patients during inpatient rehabilitation. Clin Rehabil. 2009; 23(2):176–88. doi:10.1177/0269215508098894. The authors would like to thank all study participants from Roessingh Rehabilitation Hospital for participating in this research. Biomedical Signals and Systems, MIRA - Institute for Biomedical Technology and Technical Medicine, University of Twente, PO Box 217, Enschede, 7500, AE, The Netherlands Fokke B. van Meulen, Dirk Weenk, Jaap H. Buurke, Bert-Jan F. van Beijnum & Peter H. Veltink Centre for Telematics and Information Technology, University of Twente, PO Box 217, Enschede, 7500, AE, The Netherlands Dirk Weenk & Bert-Jan F. van Beijnum Roessingh Research and Development, Roessingh Rehabilitation Hospital, Roessinghsbleekweg 33b, Enschede, 7522, AH, The Netherlands Jaap H. Buurke Fokke B. van Meulen Dirk Weenk Bert-Jan F. van Beijnum Peter H. Veltink Correspondence to Fokke B. van Meulen. FM and PV participated in the design of the study. FM and DW provided the data of the stroke participants. FM and DW developed and tested the algorithms and drafted the manuscript. JB, BB and PV assisted with data interpretation, helped to develop the algorithm and to draft the manuscript. PV supervised the research. All authors read and approved the final manuscript. This study was part of the INTERACTION project, which was partially funded by the European Commission under the Seventh Framework Programme (FP7-ICT-2011-7-287351) and the FUSION project financed by PIDON, the Dutch ministry of economic affairs and the Provinces Overijssel and Gelderland. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. 
van Meulen, F.B., Weenk, D., Buurke, J.H. et al. Ambulatory assessment of walking balance after stroke using instrumented shoes. J NeuroEngineering Rehabil 13, 48 (2016). https://doi.org/10.1186/s12984-016-0146-5
Keywords: Ambulatory assessment, Walking balance
Gravitationally Lensed Quasars in Gaia: II. Discovery of 24 Lensed Quasars (1803.07601) Cameron A. Lemon, Matthew W. Auger, Richard G. McMahon, Fernanda Ostrovski March 20, 2018 astro-ph.GA We report the discovery, spectroscopic confirmation and preliminary characterisation of 24 gravitationally lensed quasars identified using Gaia observations. Candidates were selected in the Pan-STARRS footprint with quasar-like WISE colours or as photometric quasars from SDSS, requiring either multiple detections in Gaia or a single Gaia detection near a morphological galaxy. The Pan-STARRS grizY images were modelled for the most promising candidates and 60 candidate systems were followed up with the William Herschel Telescope. 13 of the lenses were discovered as Gaia multiples and 10 as single Gaia detections near galaxies. We also discover 1 lens identified through a quasar emission line in an SDSS galaxy spectrum. The lenses have median image separation 2.13 arcsec and the source redshifts range from 1.06 to 3.36. 4 systems are quadruply-imaged and 20 are doubly-imaged. Deep CFHT data reveal an Einstein ring in one double system. We also report 12 quasar pairs, 10 of which have components at the same redshift and require further follow-up to rule out the lensing hypothesis. We compare the properties of these lenses and other known lenses recovered by our search method to a complete sample of simulated lenses to show the lenses we are missing are mainly those with small separations and higher source redshifts. The initial Gaia data release only catalogues all images of ~ 30% of known bright lensed quasars, however the improved completeness of Gaia data release 2 will help find all bright lensed quasars on the sky. A Window On The Earliest Star Formation: Extreme Photoionization Conditions of a High-Ionization, Low-Metallicity Lensed Galaxy at z~2 (1803.02340) Danielle A. Berg, Dawn K. Erb, Matthew W. Auger, Max Pettini, Gabriel B. Brammer March 6, 2018 astro-ph.GA We report new observations of SL2SJ021737-051329, a lens system consisting of a bright arc at z=1.84435, magnified ~17x by a massive galaxy at z=0.65. SL2SJ0217 is a low-mass (M <10^9 M*), low-metallicity (Z~1/20 Z*) galaxy, with extreme star-forming conditions that produce strong nebular UV emission lines in the absence of any apparent outflows. Here we present several notable features from rest-frame UV Keck/LRIS spectroscopy: (1) Very strong narrow emission lines are measured for CIV 1548,1550, HeII 1640, OIII] 1661,1666, SiIII] 1883,1892, and CIII] 1907,1909. (2) Double-peaked LyA emission is observed with a dominant blue peak and centered near the systemic velocity. (3) The low- and high-ionization absorption features indicate very little or no outflowing gas along the sightline to the lensed galaxy. The relative emission line strengths can be reproduced with a very high-ionization, low-metallicity starburst with binaries, with the exception of HeII, which indicates an additional ionization source is needed. We rule out large contributions from AGN and shocks to the photoionization budget, suggesting that the emission features requiring the hardest radiation field likely result from extreme stellar populations that are beyond the capabilities of current models.
Therefore, SL2S0217 serves as a template for the extreme conditions that are important for reionization and thought to be more common in the early Universe. The Sloan Lens ACS Survey. XIII. Discovery of 40 New Galaxy-Scale Strong Lenses (1711.00072) Yiping Shu, Joel R. Brownstein, Adam S. Bolton, Léon V. E. Koopmans, Tommaso Treu, Antonio D. Montero-Dorta, Matthew W. Auger, Oliver Czoske, Raphaël Gavazzi, Philip J. Marshall, Leonidas A. Moustakas Jan. 12, 2018 astro-ph.GA We present the full sample of 118 galaxy-scale strong-lens candidates in the Sloan Lens ACS (SLACS) Survey for the Masses (S4TM) Survey, which are spectroscopically selected from the final data release of the Sloan Digital Sky Survey. Follow-up Hubble Space Telescope (HST) imaging observations confirm that 40 candidates are definite strong lenses with multiple lensed images. The foreground lens galaxies are found to be early-type galaxies (ETGs) at redshifts 0.06 to 0.44, and background sources are emission-line galaxies at redshifts 0.22 to 1.29. As an extension of the SLACS Survey, the S4TM Survey is the first attempt to preferentially search for strong-lens systems with relatively lower lens masses than those in the pre-existing strong-lens samples. By fitting HST data with a singular isothermal ellipsoid model, we find total projected mass within the Einstein radius of the S4TM strong-lens sample ranges from $3 \times10^{10} M_{\odot}$ to $2 \times10^{11} M_{\odot}$. In [Shu15], we have derived the total stellar mass of the S4TM lenses to be $5 \times10^{10} M_{\odot}$ to $1 \times10^{12} M_{\odot}$. Both total enclosed mass and stellar mass of the S4TM lenses are on average almost a factor of 2 smaller than those of the SLACS lenses, which also represent typical mass scales of the current strong-lens samples. The extended mass coverage provided by the S4TM sample can enable a direct test, with the aid of strong lensing, for transitions in scaling relations, kinematic properties, mass structure, and dark-matter content trends of ETGs at intermediate-mass scales as noted in previous studies. Evidence for radial variations in the stellar mass-to-light ratio of massive galaxies from weak and strong lensing (1801.01883) Alessandro Sonnenfeld, Matthew W. Auger, Yutaka Komiyama Institute of Astronomy University of Cambridge, Institut d'Astrophysique de Paris, National Astronomical Observatory of Japan, Jan. 5, 2018 astro-ph.CO, astro-ph.GA The Initial Mass Function (IMF) for massive galaxies can be constrained by combining stellar dynamics with strong gravitational lensing. However, this method is limited by degeneracies between the density profile of dark matter and the stellar mass-to-light ratio. In this work we reduce this degeneracy by combining weak lensing together with strong lensing and stellar kinematics. Our analysis is based on two galaxy samples: 45 strong lenses from the SLACS survey and 1,700 massive quiescent galaxies from the SDSS main spectroscopic sample with weak lensing measurements from the Hyper Suprime-Cam survey. We use a Bayesian hierarchical approach to jointly model all three observables. We fit the data with models of varying complexity and show that a model with a radial gradient in the stellar mass-to-light ratio is required to simultaneously describe both galaxy samples. Models with no gradient result in too small dark matter masses when fitted to the strong lens sample, at odds with weak lensing constraints. 
Our measurements are unable to determine whether $M_*/L$ gradients are due to variations in stellar population parameters at fixed IMF, or to gradients in the IMF itself. The inclusion of $M_*/L$ gradients decreases dramatically the inferred IMF normalisation, compared to previous lensing-based studies. The main effect of strong lensing selection is to shift the stellar mass distribution towards the high mass end, while the halo mass and stellar IMF distribution at fixed stellar mass are not significantly affected. The Discovery of a Five-Image Lensed Quasar at z = 3.34 using PanSTARRS1 and Gaia (1709.08975) Fernanda Ostrovski, Cameron A. Lemon, Matthew W. Auger, Richard G. McMahon, Christopher D. Fassnacht, Geoff C.-F. Chen, Andrew J. Connolly, Sergey E. Koposov, Estelle Pons, Sophie L. Reed, Cristian E. Rusu Oct. 16, 2017 astro-ph.GA We report the discovery, spectroscopic confirmation, and mass modelling of the gravitationally lensed quasar system PS J0630-1201. The lens was discovered by matching a photometric quasar catalogue compiled from Pan-STARRS and WISE photometry to the Gaia DR1 catalogue, exploiting the high spatial resolution of the latter (FWHM $\sim $0.1") to identify the three brightest components of the lens. Follow-up spectroscopic observations with the WHT confirm the multiple objects are quasars at redshift $z_{q}=3.34$. Further follow-up with Keck AO high-resolution imaging reveals that the system is composed of two lensing galaxies and the quasar is lensed into a $\sim$2.8" separation four-image cusp configuration with a fifth image clearly visible, and a 1.0" arc due to the lensed quasar host galaxy. The system is well-modelled with two singular isothermal ellipsoids, reproducing the position of the fifth image. We discuss future prospects for measuring time delays between the images and constraining any offset between mass and light using the faintly detected Einstein arcs associated with the quasar host galaxy. Gravitationally Lensed Quasars in Gaia: I. Resolving Small-Separation Lenses (1709.08976) Cameron A. Lemon, Matthew W. Auger, Richard G. McMahon, Sergey E. Koposov Sept. 26, 2017 astro-ph.GA Gaia's exceptional resolution (FWHM $\sim$ 0.1$^{\prime\prime}$) allows identification and cataloguing of the multiple images of gravitationally lensed quasars. We investigate a sample of 49 known lensed quasars in the SDSS footprint, with image separations less than 2$^{\prime\prime}$, and find that 8 are detected with multiple components in the first Gaia data release. In the case of the 41 single Gaia detections, we generally are able to distinguish these lensed quasars from single quasars when comparing Gaia flux and position measurements to those of Pan-STARRS and SDSS. This is because the multiple images of these lensed quasars are typically blended in ground-based imaging and therefore the total flux and a flux-weighted centroid are measured, which can differ significantly from the fluxes and centroids of the individual components detected by Gaia. We compare the fluxes through an empirical fit of Pan-STARRS griz photometry to the wide optical Gaia bandpass values using a sample of isolated quasars. The positional offsets are calculated from a recalibrated astrometric SDSS catalogue. Applying flux and centroid difference criteria to spectroscopically confirmed quasars, we discover 4 new sub-arcsecond-separation lensed quasar candidates which have two distinct components of similar colour in archival CFHT or HSC data. 
Our method based on single Gaia detections can be used to identify the $\sim$ 1400 lensed quasars with image separation above 0.5$^{\prime\prime}$, expected to have only one image bright enough to be detected by Gaia. H0LiCOW VII. Cosmic evolution of the correlation between black hole mass and host galaxy luminosity (1703.02041) Xuheng Ding, Tommaso Treu, Sherry H. Suyu, Kenneth C. Wong, Takahiro Morishita, Daeseong Park, Dominique Sluse, Matthew W. Auger, Adriano Agnello, Vardha N. Bennert, Thomas E. Collett Sept. 1, 2017 astro-ph.GA Strongly lensed active galactic nuclei (AGN) provide a unique opportunity to make progress in the study of the evolution of the correlation between the mass of supermassive black holes ($\mathcal M_{BH}$) and their host galaxy luminosity ($L_{host}$). We demonstrate the power of lensing by analyzing two systems for which state-of-the-art lens modelling techniques have been applied to Hubble Space Telescope imaging data. We use i) the reconstructed images to infer the total and bulge luminosity of the host and ii) published broad-line spectroscopy to estimate $\mathcal M_{BH}$ using the so-called virial method. We then enlarge our sample with new calibration of previously published measurements to study the evolution of the correlation out to z~4.5. Consistent with previous work, we find that without taking into account passive luminosity evolution, the data points lie on the local relation. Once passive luminosity evolution is taken into account, we find that BHs in the more distant Universe reside in less luminous galaxies than today. Fitting this offset as $\mathcal M_{BH}$/$L_{host}$ $\propto$ (1+z)$^{\gamma}$, and taking into account selection effects, we obtain $\gamma$ = 0.6 $\pm$ 0.1 and 0.8$\pm$ 0.1 for the case of $\mathcal M_{BH}$-$L_{bulge}$ and $\mathcal M_{BH}$-$L_{total}$, respectively. To test for systematic uncertainties and selection effects we also consider a reduced sample that is homogeneous in data quality. We find consistent results but with considerably larger uncertainty due to the more limited sample size and redshift coverage ($\gamma$ = 0.7 $\pm$ 0.4 and 0.2$\pm$ 0.5 for $\mathcal M_{BH}$-$L_{bulge}$ and $\mathcal M_{BH}$-$L_{total}$, respectively), highlighting the need to gather more high-quality data for high-redshift lensed quasar hosts. Our result is consistent with a scenario where the growth of the black hole predates that of the host galaxy. H0LiCOW IV. Lens mass model of HE 0435-1223 and blind measurement of its time-delay distance for cosmology (1607.01403) Kenneth C. Wong, Sherry H. Suyu, Matthew W. Auger, Vivien Bonvin, Frederic Courbin, Christopher D. Fassnacht, Aleksi Halkola, Cristian E. Rusu, Dominique Sluse, Alessandro Sonnenfeld, Tommaso Treu, Thomas E. Collett, Stefan Hilbert, Leon V. E. Koopmans, Philip J. Marshall, Nicholas Rumbaugh Dec. 19, 2016 astro-ph.CO Strong gravitational lenses with measured time delays between the multiple images allow a direct measurement of the time-delay distance to the lens, and thus a measure of cosmological parameters, particularly the Hubble constant, $H_{0}$. We present a blind lens model analysis of the quadruply-imaged quasar lens HE 0435-1223 using deep Hubble Space Telescope imaging, updated time-delay measurements from the COSmological MOnitoring of GRAvItational Lenses (COSMOGRAIL), a measurement of the velocity dispersion of the lens galaxy based on Keck data, and a characterization of the mass distribution along the line of sight. 
HE 0435-1223 is the third lens analyzed as a part of the $H_{0}$ Lenses in COSMOGRAIL's Wellspring (H0LiCOW) project. We account for various sources of systematic uncertainty, including the detailed treatment of nearby perturbers, the parameterization of the galaxy light and mass profile, and the regions used for lens modeling. We constrain the effective time-delay distance to be $D_{\Delta t} = 2612_{-191}^{+208}~\mathrm{Mpc}$, a precision of 7.6%. From HE 0435-1223 alone, we infer a Hubble constant of $H_{0} = 73.1_{-6.0}^{+5.7}~\mathrm{km~s^{-1}~Mpc^{-1}}$ assuming a flat $\Lambda$CDM cosmology. The cosmographic inference based on the three lenses analyzed by H0LiCOW to date is presented in a companion paper (H0LiCOW Paper V). VDES J2325-5229 a z=2.7 gravitationally lensed quasar discovered using morphology independent supervised machine learning (1607.01391) Fernanda Ostrovski, Richard G. McMahon, Andrew J. Connolly, Cameron A. Lemon, Matthew W. Auger, Manda Banerji, Johnathan M. Hung, Sergey E. Koposov, Christopher E. Lidman, Sophie L. Reed, Sahar Allam, Aurélien Benoit-Lévy, Emmanuel Bertin, David Brooks, Elizabeth Buckley-Geer, Aurelio Carnero Rosell, Matias Carrasco Kind, Jorge Carretero, Carlos E. Cunha, Luiz N. da Costa, Shantanu Desai, H. Thomas Diehl, Jörg P. Dietrich, August E. Evrard, David A. Finley, Brenna Flaugher, Pablo Fosalba, Josh Frieman, David W. Gerdes, Daniel A. Goldstein, Daniel Gruen, Robert A. Gruendl, Gaston Gutierrez, Klaus Honscheid, David J. James, Kyler Kuehn, Nikolay Kuropatkin, Marcos Lima, Huan Lin, Marcio A. G. Maia, Jennifer L. Marshall, Paul Martini, Peter Melchior, Ramon Miquel, Ricardo Ogando, Andrés Plazas Malagón, Kevin Reil, Kathy Romer, Eusebio Sanchez, Basilio Santiago, Vic Scarpine, Ignacio Sevilla-Noarbe, Marcelle Soares-Santos, Flavia Sobreira, Eric Suchyta, Gregory Tarle, Daniel Thomas, Douglas L. Tucker, Alistair R. Walker Nov. 15, 2016 astro-ph.GA We present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift $z_{s}=2.74$ and image separation of $2.9"$ lensed by a foreground $z_{l}=0.40$ elliptical galaxy. Since the images of gravitationally lensed quasars are the superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian Mixture Models (GMM) supervised machine learning. Using this technique and $gi$ multicolour photometric observations from the Dark Energy Survey (DES), near IR $JK$ photometry from the VISTA Hemisphere Survey (VHS) and WISE mid IR photometry, we have identified a candidate system with two catalogue components with $i_{AB}=18.61$ and $i_{AB}=20.44$ comprised of an elliptical galaxy and two blue point sources. Spectroscopic follow-up with NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission line redshift of $z=2.739\pm0.003$ and a foreground early type galaxy with $z=0.400\pm0.002$. We model the system as a single isothermal ellipsoid and find the Einstein radius $\theta_E \sim 1.47"$, enclosed mass $M_{enc} \sim 4 \times 10^{11}$M$_{\odot}$ and a time delay of $\sim$52 days. The relatively wide separation, month scale time delay duration and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1. H0LiCOW VI. 
Testing the fidelity of lensed quasar host galaxy reconstruction (1610.08504) Xuheng Ding, Kai Liao, Tommaso Treu, Sherry H. Suyu, Geoff C.-F. Chen, Matthew W. Auger, Philip J. Marshall, Adriano Agnello, Frederic Courbin, Anna M. Nierenberg, Cristian E. Rusu, Dominique Sluse, Alessandro Sonnenfeld, Kenneth C. Wong The empirical correlation between the mass of a super-massive black hole (MBH) and its host galaxy properties is widely considered to be evidence of their co-evolution. A powerful way to test the co-evolution scenario and learn about the feedback processes linking galaxies and nuclear activity is to measure these correlations as a function of redshift. Unfortunately, currently MBH can only be estimated in active galaxies at cosmological distances. At these distances, bright active galactic nuclei (AGN) can outshine the host galaxy, making it extremely difficult to measure the host's luminosity. Strongly lensed AGNs provide in principle a great opportunity to improve the sensitivity and accuracy of the host galaxy luminosity measurements as the host galaxy is magnified and more easily separated from the point source, provided the lens model is sufficiently accurate. In order to measure the MBH-L correlation with strong lensing, it is necessary to ensure that the lens modelling is accurate, and that the host galaxy luminosity can be recovered to at least a precision and accuracy better than that of the typical MBH measurement. We carry out extensive and realistic simulations of deep Hubble Space Telescope observations of lensed AGNs obtained by our collaboration. We show that the host galaxy luminosity can be recovered with better accuracy and precision than the typical uncertainty on MBH(~ 0.5 dex) for hosts as faint as 2-4 magnitudes dimmer than the AGN itself. Our simulations will be used to estimate bias and uncertainties on the actual measurements to be presented in a future paper. SHARP - III: First Use Of Adaptive Optics Imaging To Constrain Cosmology With Gravitational Lens Time Delays (1601.01321) Geoff C.-F. Chen, Sherry H. Suyu, Kenneth C. Wong, Christopher D. Fassnacht, Tzihong Chiueh, Aleksi Halkola, I Shing Hu, Matthew W. Auger, Leon V. E. Koopmans, David J. Lagattuta, John P. McKean, Simona Vegetti Aug. 14, 2016 astro-ph.CO, astro-ph.GA Accurate and precise measurements of the Hubble constant are critical for testing our current standard cosmological model and revealing possibly new physics. With Hubble Space Telescope (HST) imaging, each strong gravitational lens system with measured time delays can allow one to determine the Hubble constant with an uncertainty of $\sim 7\%$. Since HST will not last forever, we explore adaptive-optics (AO) imaging as an alternative that can provide higher angular resolution than HST imaging but has a less stable point spread function (PSF) due to atmospheric distortion. To make AO imaging useful for time-delay-lens cosmography, we develop a method to extract the unknown PSF directly from the imaging of strongly lensed quasars. In a blind test with two mock data sets created with different PSFs, we are able to recover the important cosmological parameters (time-delay distance, external shear, lens mass profile slope, and total Einstein radius). Our analysis of the Keck AO image of the strong lens system RXJ1131-1231 shows that the important parameters for cosmography agree with those based on HST imaging and modeling within 1-$\sigma$ uncertainties. 
Most importantly, the constraint on the model time-delay distance by using AO imaging with $0.045"$resolution is tighter by $\sim 50\%$ than the constraint of time-delay distance by using HST imaging with $0.09"$when a power-law mass distribution for the lens system is adopted. Our PSF reconstruction technique is generic and applicable to data sets that have multiple nearby point sources, enabling scientific studies that require high-precision models of the PSF. Broad Hbeta Emission-Line Variability in a Sample of 102 Local Active Galaxies (1603.00035) Jordan N. Runco, Maren Cosens, Vardha N. Bennert, Bryan Scott, S. Komossa, Matthew A. Malkan, Mariana S. Lazarova, Matthew W. Auger, Tommaso Treu, Daeseong Park Feb. 29, 2016 astro-ph.GA A sample of 102 local (0.02 < z < 0.1) Seyfert galaxies with black hole masses MBH > 10^7 M_sun was selected from the Sloan Digital Sky Survey (SDSS) and observed using the Keck 10-m telescope to study the scaling relations between MBH and host galaxy properties. We study profile changes of the broad Hbeta emission line within the ~3-9 year time-frame between the two sets of spectra. The variability of the broad Hbeta emission line is of particular interest, not only since it is used to estimate MBH, but also since its strength and width is used to classify Seyfert galaxies into different types. At least some form of broad-line variability (in either width or flux) is observed in the majority (~66%) of the objects, resulting in a Seyfert-type change for ~38% of the objects, likely driven by variable accretion and/or obscuration. The broad Hbeta line virtually disappears in 3/102 (~3%) extreme cases. We discuss potential causes for these changing-look AGNs. While similar dramatic transitions have previously been reported in the literature, either on a case-by-case basis or in larger samples focusing on quasars at higher redshifts, our study provides statistical information on the frequency of H$\beta$ line variability in a sample of low-redshift Seyfert galaxies. Precision cosmology with time delay lenses: high resolution imaging requirements (1506.07640) Xiao-Lei Meng, Tommaso Treu, Adriano Agnello, Matthew W. Auger, Kai Liao, Philip J. Marshall Sept. 1, 2015 astro-ph.CO, astro-ph.IM Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys with the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope $\gamma'$ of the total mass density profile $\rho_{tot}\propto r^{-\gamma'}$ for the main deflector can be measured. Ideally, we require that the statistical error on $\gamma'$ be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. 
We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems, that will be discovered by current and future surveys, targeted follow-up will be required. However, the exposure time required with upcoming facilitites such as JWST, the Keck Next Generation Adaptive Optics System, and TMT, will only be of order a few minutes per system, thus making the follow-up of hundreds of systems a practical and efficient cosmological probe. Discovery of two gravitationally lensed quasars in the Dark Energy Survey (1508.01203) Adriano Agnello, Tommaso Treu, Fernanda Ostrovski, Paul L. Schechter, Elizabeth J. Buckley-Geer, Huan Lin, Matthew W. Auger, Frederic Courbin, Christopher D. Fassnacht, Josh Frieman, Nikolay Kuropatkin, Philip J. Marshall, Richard G. McMahon, Georges Meylan, Anupreeta More, Sherry H. Suyu, Cristian E. Rusu, David Finley, Tim Abbott, Filipe B. Abdalla, Sahar Allam, James Annis, Manda Banerji, Aurélien Benoit-Lévy, Emmanuel Bertin, David Brooks, David L. Burke, Aurelio Carnero Rosell, Matias Carrasco Kind, Jorge Carretero, Carlos E. Cunha, Chris B. D'Andrea, Luiz N. da Costa, Shantanu Desai, H. Thomas Diehl, Jörg P. Dietrich, Peter Doel, Tim F. Eifler, Juan Estrada, Angelo Fausti Neto, Brenna Flaugher, Pablo Fosalba, David W. Gerdes, Daniel Gruen, Gaston Gutierrez, Klaus Honscheid, David J. James, Kyler Kuehn, Ofer Lahav, Marco Lima, Marcio A.G. Maia, Marina March, Jennifer L. Marshall, Paul Martini, Peter Melchior, Christopher J. Miller, Ramon Miquel, Robert C. Nichol, Ricardo Ogando, Andres A. Plazas, Kevin Reil, A. Kathy Romer, Aaron Roodman, Masao Sako, Eusebio Sanchez, Basilio Santiago, Vic Scarpine, Michael Schubnell, Ignacio Sevilla-Noarbe, R. Chris Smith, Marcelle Soares-Santos, Flavia Sobreira, Eric Suchyta, Molly E. C. Swanson, Gregory Tarle, Jon Thaler, Douglas Tucker, Alistair R. Walker, Risa H. Wechsler, Yuanyuan Zhang Aug. 5, 2015 astro-ph.CO, astro-ph.GA We present spectroscopic confirmation of two new lensed quasars via data obtained at the 6.5m Magellan/Baade Telescope. The lens candidates have been selected from the Dark Energy Survey (DES) and WISE based on their multi-band photometry and extended morphology in DES images. Images of DES J0115-5244 show two blue point sources at either side of a red galaxy. Our long-slit data confirm that both point sources are images of the same quasar at $z_{s}=1.64.$ The Einstein Radius estimated from the DES images is $0.51$". DES J2200+0110 is in the area of overlap between DES and the Sloan Digital Sky Survey (SDSS). Two blue components are visible in the DES and SDSS images. The SDSS fiber spectrum shows a quasar component at $z_{s}=2.38$ and absorption compatible with Mg II and Fe II at $z_{l}=0.799$, which we tentatively associate with the foreground lens galaxy. The long-slit Magellan spectra show that the blue components are resolved images of the same quasar. The Einstein Radius is $0.68$" corresponding to an enclosed mass of $1.6\times10^{11}\,M_{\odot}.$ Three other candidates were observed and rejected, two being low-redshift pairs of starburst galaxies, and one being a quasar behind a blue star. These first confirmation results provide an important empirical validation of the data-mining and model-based selection that is being applied to the entire DES dataset. A Local Baseline of the Black Hole Mass Scaling Relations for Active Galaxies. III. 
The BH mass - $\sigma$ relation (1409.4428) Vardha N. Bennert, Tommaso Treu, Matthew W. Auger, Maren Cosens, Daeseong Park, Rebecca Rosen, Chelsea E. Harris, Matthew A. Malkan, Jong-Hak Woo June 24, 2015 astro-ph.GA We create a baseline of the black hole (BH) mass (MBH) - stellar-velocity dispersion (sigma) relation for active galaxies, using a sample of 66 local (0.02<z<0.09) Seyfert-1 galaxies, selected from the Sloan Digital Sky Survey (SDSS). Analysis of SDSS images yields AGN luminosities free of host-galaxy contamination and morphological classification. 51/66 galaxies have spiral morphology. 28 bulges have Sersic index n<2 and are considered candidate pseudo bulges, with eight being definite pseudo bulges based on multiple classification criteria met. Only 4/66 galaxies show sign of interaction/merging. High signal-to-noise ratio Keck spectra provide the width of the broad Hbeta emission line free of FeII emission and stellar absorption. AGN luminosity and Hbeta line widths are used to estimate MBH. The Keck-based spatially-resolved kinematics is used to determine stellar-velocity dispersion within the spheroid effective radius. We find that sigma can vary on average by up to 40% across definitions commonly used in the literature, emphasizing the importance of using self-consistent definitions in comparisons and evolutionary studies. The MBH-sigma relation for our Seyfert-1 galaxies has the same intercept and scatter as that of reverberation-mapped AGNs as well as quiescent galaxies, consistent with the hypothesis that our single epoch MBH estimator and sample selection do not introduce significant biases. Barred galaxies, merging galaxies, and those hosting pseudo bulges do not represent outliers in the MBH-sigma relation. This is in contrast with previous work, although no firm conclusion can be drawn due to the small sample size and limited resolution of the SDSS images. High resolution imaging and spectroscopy of the gravitational lens SDSSJ1206+4332: a natural coronagraph at $z=1.789$ and a standard ruler at $z=0.745$ (1506.02720) Adriano Agnello, Alessandro Sonnenfeld, Sherry H. Suyu, Tommaso Treu, Christopher D. Fassnacht, Charlotte Mason, Maruša Bradač, Matthew W. Auger June 8, 2015 astro-ph.CO We present spectroscopy and laser guide star adaptive optics (LGSAO) images of the doubly imaged lensed quasar SDSS J1206+4332. We revise the deflector redshift proposed previously to $z_{d}=0.745,$ and measure for the first time its velocity dispersion $\sigma=(290\pm30)$ km/s. The LGSAO data show the lensed quasar host galaxy stretching over the astroid caustic thus forming an extra pair of merging images, which was previously thought to be an unrelated galaxy in seeing limited data. Owing to the peculiar geometry, the lens acts as a natural coronagraph on the broad-line region of the quasar so that only narrow [O III] emission is found in the fold arc. We use the data to reconstruct the source structure and deflector potential, including nearby perturbers. We reconstruct the point-spread function (PSF) from the quasar images themselves, since no additional point source is present in the field of view. From gravitational lensing and stellar dynamics, we find the slope of the total mass density profile to be $\gamma^{\prime}=-\log\rho/\log r =1.93\pm0.09.$ We discuss the potential of SDSS J1206+4332 for measuring time delay distance (and thus H$_0$ and other cosmological parameters), or as a standard ruler, in combination with the time delay published by the COSMOGRAIL collaboration. 
We conclude that this system is very promising for cosmography. However, in order to achieve competitive precision and accuracy, an independent characterization of the PSF is needed. Spatially resolved kinematics of the deflector would reduce the uncertainties further. Both are within the reach of current observational facilities. The Sloan Lens ACS Survey. XII. Extending Strong Lensing to Lower Masses (1407.2240) Yiping Shu, Adam S. Bolton, Joel R. Brownstein, Antonio D. Montero-Dorta, Léon V. E. Koopmans, Tommaso Treu, Raphaël Gavazzi, Matthew W. Auger, Oliver Czoske, Philip J. Marshall, Leonidas A. Moustakas We present observational results from a new Hubble Space Telescope (HST) Snapshot program to extend the methods of the Sloan Lens ACS (SLACS) Survey to lower lens-galaxy masses. We discover 40 new galaxy-scale strong lenses, which we supplement with 58 previously discovered SLACS lenses. In addition, we determine the posterior PDFs of the Einstein radius for 33 galaxies (18 new and 15 from legacy SLACS data) based on single lensed images. We find a less-than-unity slope of $0.64\pm0.06$ for the $\log_{10} {\sigma}_*$-$\log_{10} {\sigma}_{\rm SIE}$ relation, which corresponds to a 6-$\sigma$ evidence that the total mass-density profile of early-type galaxies varies systematically in the sense of being shallower at higher lens-galaxy velocity dispersions. The trend is only significant when single-image systems are considered, highlighting the importance of including both "lenses" and "non-lenses" for an unbiased treatment of the lens population when extending to lower mass ranges. By scaling simple stellar population models to the HST I-band data, we identify a strong trend of increasing dark-matter fraction at higher velocity dispersions, which can be alternatively interpreted as a trend in the stellar initial mass function (IMF) normalization. Consistent with previous findings and the suggestion of a non-universal IMF, we find that a Salpeter IMF is ruled out for galaxies with velocity dispersion less than $180$ km/s. Considered together, our mass-profile and dark-matter-fraction trends with increasing galaxy mass could both be explained by an increasing relative contribution on kiloparsec scales from a dark-matter halo with a spatial profile more extended than that of the stellar component. Cosmological Constraints from the double source plane lens SDSSJ0946+1006 (1403.5278) Thomas E. Collett, Matthew W. Auger Sept. 12, 2014 astro-ph.CO, astro-ph.GA We present constraints on the equation of state of dark energy, $w$, and the total matter density, $\Omega_{\mathrm{M}}$, derived from the double-source-plane strong lens SDSSJ0946+1006, the first cosmological measurement with a galaxy-scale double-source-plane lens. By modelling the primary lens with an elliptical power-law mass distribution, and including perturbative lensing by the first source, we are able to constrain the cosmological scaling factor in this system to be $\beta^{-1}=1.404 \pm 0.016$, which implies $\Omega_{\mathrm{M}}= 0.33_{-0.26}^{+0.33}$ for a flat $\Lambda$ cold dark matter ($\Lambda$CDM) cosmology. Combining with a cosmic microwave background prior from Planck, we find $w$ = $-1.17^{+0.20}_{-0.21}$ assuming a flat $w$CDM cosmology. 
This inference shifts the posterior by 1${\sigma}$ and improves the precision by 30 per cent with respect to Planck alone, and demonstrates the utility of combining simple, galaxy-scale multiple-source-plane lenses with other cosmological probes to improve precision and test for residual systematic biases. The evolution of late-type galaxies from CASSOWARY lensing systems (1406.1114) Zuzanna Kostrzewa-Rutkowska, Łukasz Wyrzykowski, Matthew W. Auger, Thomas E. Collett, Vasily Belokurov June 4, 2014 astro-ph.GA We explore the properties of lensing galaxies and lensed faint sources at redshifts between 1.5 and 3.0. Our sample consists of 9 carefully selected strongly-lensed galaxies discovered by the CASSOWARY survey in the Sloan Digital Sky Survey (SDSS) data. We find that, despite some limitations of the original SDSS data, the homogeneous sample of lensing systems can provide a useful insight into lens and source properties. We also explore the limitations of using low-resolution data to model and analyse galaxy-galaxy lensing. We derive the relative alignment of mass and light in fitted lens profiles. The range of magnification extends above 5, hence we are able to analyse potentially small and low-mass galaxies at high redshifts. We confirm the likely evolution of the size-luminosity relation for blue star-forming galaxies as a function of redshift Testing metallicity indicators at z~1.4 with the gravitationally lensed galaxy CASSOWARY 20 (1311.5092) Bethan L. James, Max Pettini, Lise Christensen, Matthew W. Auger, George D. Becker, Lindsay J. King, Anna M. Quider, Alice E. Shapley, Charles C. Steidel Feb. 24, 2014 astro-ph.CO, astro-ph.GA We present X-shooter observations of CASSOWARY 20 (CSWA 20), a star-forming (SFR ~6 Msol/yr) galaxy at z=1.433, magnified by a factor of 11.5 by the gravitational lensing produced by a massive foreground galaxy at z=0.741. We analysed the integrated physical properties of the HII regions of CSWA 20 using temperature- and density-sensitive emission lines. We find the abundance of oxygen to be ~1/7 of solar, while carbon is ~50 times less abundant than in the Sun. The unusually low C/O ratio may be an indication of a particularly rapid timescale of chemical enrichment. The wide wavelength coverage of X-shooter gives us access to five different methods for determining the metallicity of CSWA 20, three based on emission lines from HII regions and two on absorption features formed in the atmospheres of massive stars. All five estimates are in agreement, within the factor of ~2 uncertainty of each method. The interstellar medium of CSWA 20 only partially covers the star-forming region as viewed from our direction; in particular, absorption lines from neutrals and first ions are exceptionally weak. We find evidence for large-scale outflows of the interstellar medium (ISM) with speeds of up 750 km/s, similar to the values measured in other high-z galaxies sustaining much higher rates of star formation. The SWELLS Survey. VI. hierarchical inference of the initial mass functions of bulges and discs (1310.5177) Brendon J. Brewer, Philip J. Marshall, Matthew W. Auger, Tommaso Treu, Aaron A. Dutton, Matteo Barnabè Oct. 18, 2013 physics.data-an, stat.AP, astro-ph.GA, astro-ph.IM The long-standing assumption that the stellar initial mass function (IMF) is universal has recently been challenged by a number of observations. 
Several studies have shown that a "heavy" IMF (e.g., with a Salpeter-like abundance of low mass stars and thus normalisation) is preferred for massive early-type galaxies, while this IMF is inconsistent with the properties of less massive, later-type galaxies. These discoveries motivate the hypothesis that the IMF may vary (possibly very slightly) across galaxies and across components of individual galaxies (e.g. bulges vs discs). In this paper we use a sample of 19 late-type strong gravitational lenses from the SWELLS survey to investigate the IMFs of the bulges and discs in late-type galaxies. We perform a joint analysis of the galaxies' total masses (constrained by strong gravitational lensing) and stellar masses (constrained by optical and near-infrared colours in the context of a stellar population synthesis [SPS] model, up to an IMF normalisation parameter). Using minimal assumptions apart from the physical constraint that the total stellar mass within any aperture must be less than the total mass within the aperture, we find that the bulges of the galaxies cannot have IMFs heavier (i.e. implying high mass per unit luminosity) than Salpeter, while the disc IMFs are not well constrained by this data set. We also discuss the necessity for hierarchical modelling when combining incomplete information about multiple astronomical objects. This modelling approach allows us to place upper limits on the size of any departures from universality. More data, including spatially resolved kinematics (as in paper V) and stellar population diagnostics over a range of bulge and disc masses, are needed to robustly quantify how the IMF varies within galaxies. Reconstructing the Lensing Mass in the Universe from Photometric Catalogue Data (1303.6564) Thomas E. Collett, Philip J. Marshall, Matthew W. Auger, Stefan Hilbert, Sherry H. Suyu, Zachary Greene, Tommaso Treu, Christopher D. Fassnacht, Léon V.E. Koopmans, Maruša Bradač, Roger D. Blandford March 26, 2013 astro-ph.CO High precision cosmological distance measurements towards individual objects such as time delay gravitational lenses or type Ia supernovae are affected by weak lensing perturbations by galaxies and groups along the line of sight. In time delay gravitational lenses, "external convergence," kappa, can dominate the uncertainty in the inferred distances and hence cosmological parameters. In this paper we attempt to reconstruct kappa, due to line of sight structure, using a simple halo model. We use mock catalogues from the Millennium Simulation, and calibrate and compare our reconstructed P(kappa) to ray-traced kappa "truth" values; taking into account realistic observational uncertainties. We find that the reconstruction of kappa provides an improvement in precision of ~50% over galaxy number counts. We find that the lowest-kappa lines of sight have the best constrained P(kappa). In anticipation of large future samples of lenses, we find that selecting the third of the systems with the highest precision kappa estimates gives a sample of unbiased time delay distance measurements with just ~1% uncertainty due to line of sight external convergence effects. Photometric data are sufficient to pre-select the best-constrained lines of sight, and can be done before investment in light-curve monitoring. Conversely, we show that selecting lines of sight with high external shear could, with the reconstruction model presented, induce biases of up to 1% in time delay distance. 
We find that a major potential source of systematic error is uncertainty in the high mass end of the stellar mass-halo mass relation; this could introduce ~2% biases on the time-delay distance if completely ignored. We suggest areas for the improvement of this general analysis framework (including more sophisticated treatment of high mass structures) that should allow yet more accurate cosmological inferences to be made. Improving the precision of time-delay cosmography with observations of galaxies along the line of sight (1303.3588) Zach S. Greene, Sherry H. Suyu, Tommaso Treu, Stefan Hilbert, Matthew W. Auger, Thomas E. Collett, Philip J. Marshall, Christopher D. Fassnacht, Roger D. Blandford, Maruša Bradač, Léon V.E. Koopmans In order to use strong gravitational lens time delays to measure precise and accurate cosmological parameters the effects of mass along the line of sight must be taken into account. We present a method to achieve this by constraining the probability distribution function of the effective line of sight convergence k_ext. The method is based on matching the observed overdensity in the weighted number of galaxies to that found in mock catalogs with k_ext obtained by ray-tracing through structure formation simulations. We explore weighting schemes based on projected distance, mass, luminosity, and redshift. This additional information reduces the uncertainty of k_ext from sigma_k ~0.06 to ~0.04 for very overdense lines of sight like that of the system B1608+656. For more common lines of sight, sigma_k is reduced to ~<0.03, corresponding to an uncertainty of ~<3% on distance. This uncertainty has comparable effects on cosmological parameters to that arising from the mass model of the deflector and its immediate environment. Photometric redshifts based on g, r, i and K photometries are sufficient to constrain k_ext almost as well as with spectroscopic redshifts. As an illustration, we apply our method to the system B1608+656. Our most reliable k_ext estimator gives sigma_k=0.047 down from 0.065 using only galaxy counts. Although deeper multi-band observations of the field of B1608+656 are necessary to obtain a more precise estimate, we conclude that griK photometry, in addition to spectroscopy to characterize the immediate environment, is an effective way to increase the precision of time-delay cosmography. Constraining the dark energy equation of state with double source plane strong lenses (1203.2758) Thomas E. Collett, Matthew W. Auger, Vasily Belokurov, Philip J. Marshall, Alex C. Hall May 30, 2012 gr-qc, astro-ph.CO We investigate the possibility of constraining the dark energy equation of state by measuring the ratio of Einstein radii in a strong gravitational lens system with two source planes. This quantity is independent of the Hubble parameter and directly measures the growth of angular diameter distances as a function of redshift. We investigate the prospects for a single double source plane system and for a forecast population of systems discovered by re-observing a population of single source lenses already known from a photometrically selected catalogue such as CASSOWARY or from a spectroscopically selected catalogue such as SLACS. We find that constraints comparable to current data-sets (15% uncertainty on the dark energy equation of state at 68% CL) are possible with a handful of double source plane systems.
We also find that the method's degeneracy between Omega_M and w is almost orthogonal to that of CMB and BAO measurements, making this method highly complementary to current probes. Evidence for dark matter contraction and a Salpeter IMF in a massive early-type galaxy (1111.4215) Alessandro Sonnenfeld, Tommaso Treu, Raphael Gavazzi, Philip J. Marshall, Matthew W. Auger, Sherry H. Suyu, Leon V. E. Koopmans, Adam S. Bolton April 30, 2012 astro-ph.CO Stars and dark matter account for most of the mass of early-type galaxies, but uncertainties in the stellar population and the dark matter profile make it challenging to distinguish between the two components. Nevertheless, precise observations of the stellar and dark matter components are extremely valuable for testing the many models of structure formation and evolution. We present a measurement of the stellar mass and inner slope of the dark matter halo of a massive early-type galaxy at $z=0.222$. The galaxy is the foreground deflector of the double Einstein ring gravitational lens system SDSSJ0946+1006, also known as the "Jackpot". By combining the tools of lensing and dynamics, we first constrain the mean slope of the total mass density profile ($\rho_{\rm{tot}}\propto r^{-\gamma'}$) within the radius of the outer ring to be $\gamma' = 1.98\pm0.02\pm0.01$. Then we obtain a bulge-halo decomposition, assuming a power-law form for the dark matter halo. Our analysis yields $\gamma_{\rm{DM}} = 1.7\pm0.2$ for the inner slope of the dark matter profile, in agreement with theoretical findings on the distribution of dark matter in ellipticals, and a stellar mass from lensing and dynamics $M_*^{\rm{LD}} = 5.5_{-1.3}^{+0.4}\times10^{11}M_\odot$. By comparing this measurement with stellar masses inferred from stellar population synthesis fitting we find that a Salpeter IMF provides a good description of the stellar population of the lens while the probability of the IMF being heavier than Chabrier is 95%. Our data suggest that growth by accretion of small systems from a compact red nugget is a plausible formation scenario for this object.
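Several of the abstracts above quote a mass enclosed within an Einstein radius (for example VDES J2325-5229 and DES J2200+0110). As a hedged illustration of that arithmetic only, and not code from any of these papers, the following minimal sketch uses astropy with an assumed flat LCDM cosmology (H0 = 70 km/s/Mpc, Om0 = 0.3):

```python
# Minimal sketch (not from the papers above): mass enclosed within the Einstein radius,
# M_E = (c^2 / 4G) * theta_E^2 * D_d * D_s / D_ds, for assumed flat LCDM parameters.
import astropy.units as u
from astropy.constants import c, G
from astropy.cosmology import FlatLambdaCDM

def einstein_mass(theta_e_arcsec, z_lens, z_source, cosmo=FlatLambdaCDM(H0=70, Om0=0.3)):
    """Mass enclosed within the Einstein radius of a galaxy-scale lens."""
    theta_e = (theta_e_arcsec * u.arcsec).to(u.rad).value   # angle in radians (dimensionless)
    d_d = cosmo.angular_diameter_distance(z_lens)           # observer-deflector distance
    d_s = cosmo.angular_diameter_distance(z_source)         # observer-source distance
    d_ds = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)  # deflector-source distance
    m = (c**2 / (4 * G)) * theta_e**2 * d_d * d_s / d_ds
    return m.to(u.Msun)

# Numbers quoted for VDES J2325-5229: theta_E ~ 1.47", z_l = 0.40, z_s = 2.74
print(einstein_mass(1.47, 0.40, 2.74))
```

With these inputs the sketch returns a mass of order 4 x 10^11 solar masses, consistent with the value quoted in the VDES J2325-5229 abstract; the exact figure depends on the assumed cosmology.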
CommonCrawl
Two successive reactions, D $\longrightarrow \mathrm{E}$ and $\mathrm{E} \longrightarrow \mathrm{F}$, have yields of 48$\%$ and 73$\%$, respectively. What is the overall percent yield for conversion of $\mathrm{D}$ to $\mathrm{F}$?

What is the percent yield of a reaction in which 45.5 g of tungsten(VI) oxide $\left(\mathrm{WO}_{3}\right)$ reacts with excess hydrogen gas to produce metallic tungsten and 9.60 mL of water $(d=1.00 \mathrm{~g} / \mathrm{mL})$?

What is the percent yield of a reaction in which 200. g of phosphorus trichloride reacts with excess water to form 128 g of $\mathrm{HCl}$ and aqueous phosphorous acid $\left(\mathrm{H}_{3} \mathrm{PO}_{3}\right)$?

When 20.5 g of methane and 45.0 g of chlorine gas undergo a reaction that has a 75.0$\%$ yield, what mass (g) of chloromethane $\left(\mathrm{CH}_{3} \mathrm{Cl}\right)$ forms? Hydrogen chloride also forms.

When 56.6 g of calcium and 30.5 g of nitrogen gas undergo a reaction that has a 93.0$\%$ yield, what mass (g) of calcium nitride forms?

Two successive reactions, $\mathrm{A} \longrightarrow \mathrm{B}$ and $\mathrm{B} \longrightarrow \mathrm{C}$, have yields of 73$\%$ and 68$\%$, respectively. What is the overall percent yield for conversion of $\mathrm{A}$ to $\mathrm{C}$?

The aspirin substitute, acetaminophen $\left(\mathrm{C}_{8} \mathrm{H}_{9} \mathrm{O}_{2} \mathrm{N}\right)$, is produced by the following three-step synthesis:
$$\mathrm{I.} \quad \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{O}_{3} \mathrm{N}(s)+3 \mathrm{H}_{2}(g)+\mathrm{HCl}(aq) \longrightarrow \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{ONCl}(s)+2 \mathrm{H}_{2} \mathrm{O}(l)$$
$$\mathrm{II.} \quad \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{ONCl}(s)+\mathrm{NaOH}(aq) \longrightarrow \mathrm{C}_{6} \mathrm{H}_{7} \mathrm{ON}(s)+\mathrm{H}_{2} \mathrm{O}(l)+\mathrm{NaCl}(aq)$$
$$\mathrm{III.} \quad \mathrm{C}_{6} \mathrm{H}_{7} \mathrm{ON}(s)+\mathrm{C}_{4} \mathrm{H}_{6} \mathrm{O}_{3}(l) \longrightarrow \mathrm{C}_{8} \mathrm{H}_{9} \mathrm{O}_{2} \mathrm{N}(s)+\mathrm{HC}_{2} \mathrm{H}_{3} \mathrm{O}_{2}(l)$$
The first two reactions have percent yields of 87$\%$ and 98$\%$ by mass, respectively. The overall reaction yields 3 moles of acetaminophen product for every 4 moles of $\mathrm{C}_{6} \mathrm{H}_{5} \mathrm{O}_{3} \mathrm{N}$ reacted.
a. What is the percent yield by mass for the overall process?
b. What is the percent yield by mass of Step III?

The reactions shown here can be combined to make the overall reaction $\mathrm{C}(\mathrm{s})+\mathrm{H}_{2} \mathrm{O}(g) \longrightarrow \mathrm{CO}(g)+\mathrm{H}_{2}(g)$ by reversing some and/or dividing all the coefficients by a number. As a group, determine how the reactions need to be modified to sum to the overall process. Then have each group member determine the value of $K$ for one of the reactions to be combined. Finally, combine all the values of $K$ to determine the value of $K$ for the overall reaction.
$$\begin{array}{ll}{\text { a. } \mathrm{C}(s)+\mathrm{O}_{2}(g) \longrightarrow \mathrm{CO}_{2}(g)} & {K=1.363 \times 10^{69}} \\ {\text { b. } 2 \mathrm{H}_{2}(g)+\mathrm{O}_{2}(g) \longrightarrow 2 \mathrm{H}_{2} \mathrm{O}(g)} & {K=1.389 \times 10^{80}} \\ {\text { c. } 2 \mathrm{CO}(g)+\mathrm{O}_{2}(g) \longrightarrow 2 \mathrm{CO}_{2}(g)} & {K=1.477 \times 10^{90}}\end{array}$$
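A short worked example for the first problem, added here for illustration (it is not part of the original problem set): for successive reactions, the fractional yields multiply, so

$$\text{overall yield} = 0.48 \times 0.73 = 0.35 \approx 35\%.$$

The same rule applied to the $\mathrm{A} \longrightarrow \mathrm{B} \longrightarrow \mathrm{C}$ problem gives $0.73 \times 0.68 \approx 0.50$, i.e. about 50$\%$.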
CommonCrawl
Dynamics of fault motion and the origin of contrasting tectonic style between Earth and Venus

Shun-ichiro Karato (ORCID: orcid.org/0000-0002-1483-4589) & Sylvain Barbot (ORCID: orcid.org/0000-0003-4257-7409)

Scientific Reports volume 8, Article number: 11884 (2018)

Plate tectonics is one mode of mantle convection that occurs when the surface layer (the lithosphere) is relatively weak. When plate tectonics operates on a terrestrial planet, substantial exchange of materials occurs between the planetary interior and its surface. This is likely a key to maintaining the habitable environment on a planet. Therefore it is essential to understand under which conditions plate tectonics operates on a terrestrial planet. One of the puzzling observations in this context is the fact that plate tectonics occurs on Earth but not on Venus despite their similar size and composition. Factors such as the difference in water content or in grain-size have been invoked, but these models cannot easily explain the contrasting tectonic styles between Earth and Venus. We propose that strong dynamic weakening in friction is a key factor. Fast unstable fault motion is found in cool Earth, while slow and stable fault motion characterizes hot Venus, leading to substantial dynamic weakening on Earth but not on Venus. Consequently, the tectonic plates are weak on Earth allowing for their subduction, while the strong plates on Venus promote the stagnant lid regime of mantle convection.

Earth and Venus have similar size, density, and chemical composition. Consequently, one might expect that both planets evolved in a similar way. However, these planets show markedly different tectonic styles in addition to different surface temperatures and atmospheric composition. Although there are rich topographical observations on Venus showing widespread short-wavelength (~10s km) deformation similar to the Tibetan plateau on Earth1,2, the distribution of crater density shows nearly homogeneous ages (~500 Myrs) of the surface.
This implies that these short wavelength deformation features were formed at ~500 Myrs or before and that there has been little or no large-scale materials exchange between the surface and the interior of the kind expected from plate tectonics, at least for the last ~500 Myr3. It is generally considered that the stagnant-lid style of convection operates on Venus while plate tectonics operates on Earth4 (Fig. 1), implying that the surface layer (the lithosphere) on Venus is substantially stronger than that on Earth.

Figure 1. Two modes of mantle convection. When the surface layer (the lithosphere) is relatively weak, subduction occurs and plate tectonics will operate (a). In contrast, when the surface layer is strong, subduction cannot be initiated, and the stagnant lid mode of convection will occur (b), where the surface layer is stagnant and there is no large-scale materials circulation. Tectonics of Venus is considered to be stagnant lid convection at least for the last ~500 Myrs4. The threshold strength of the lithosphere between these two regimes is ~100 MPa depending on the details of mass distribution such as the heterogeneity of the crustal thickness4,23,24,26.

The topography-geoid correlation provides another constraint on the strength of the lithosphere. On Earth, topography and geoid have poor correlation on the scale of 100s-1000s km, which is interpreted to imply shallow compensation (thin lithosphere)5. In contrast, topography and geoid have strong correlation on Venus at a similar scale, and this can be interpreted by a thick lithosphere if the variation in the crustal thickness was created in the past (~500 Myrs ago) as suggested by the crater density distribution4,6. Similarly, based on observations of surface topography on Earth (East African rift) and Venus (Beta Regio), Foster and Nimmo7 concluded that the faults of Venus are stronger than those of Earth.

The Venusian atmosphere is much hotter than that of Earth and is made largely of carbon dioxide8. Consequently, temperatures in the near surface layer of Venus are higher than those of Earth. The strength of rocks in the ductile regime decreases with temperature9,10. Indeed, extensive short-wavelength ductile deformation (folding) on Venus can be attributed to high near surface temperatures (not only caused by the high atmospheric temperature but also caused by the transport of hot materials some ~500 Myrs ago) (e.g.11,12). But as reviewed before, other observations such as the topography-gravity correlation over 100s to 1000s km scale suggest that the lithosphere on Venus is thicker than that of Earth, implying that the deep lithosphere of Venus is stronger than that of Earth. It is this puzzle that we focus our attention on in this paper.

One popular idea to explain these paradoxical observations is to connect them through the loss of water (e.g.13): the high temperature of Venus was likely caused by a slightly higher initial surface temperature than Earth's that led to a runaway instability that resulted in a hot atmosphere promoting further loss of water14. The loss of water leads to a strong lithosphere preventing plate tectonics from occurring13,15. Another idea is that a high surface temperature leads to extensive grain-growth, making the lithosphere strong16,17. In both cases, a key issue is to explain why the oceanic lithosphere on Earth is weak but the lithosphere of Venus is strong.
However, when the basics of the materials science of deformation and the geological observations are reviewed, it becomes clear that both of these models have some fundamental difficulties. For example, even if the oceanic lithosphere on Earth is covered with water, the main parts of the oceanic lithosphere are dry18,19. There have been some models suggesting relatively deep penetration of water into the oceanic lithosphere20, but most of these models assume that subduction is already happening and therefore they do not explain how subduction initiates. Similarly, there are several fundamental issues for the grain-size (or "damage") model, including the fact that exceedingly small grain-size is needed to obtain a sufficiently weak lithosphere and that grain-size reduction is difficult if the lithosphere is initially strong, as will be discussed later21.

Before evaluating the plausibility of these models, let us first review the results of geodynamic modeling22. Geodynamic studies show that if the lithosphere is too strong, then it will stay at the surface, and the "stagnant lid" style of convection will operate23 (Fig. 1). The lithosphere must be weak enough for plate tectonics to start and be maintained. When the resistance against deformation of the lithosphere is characterized by the average strength (critical stress to deform a rock at a given strain-rate) for a given thickness, the threshold strength for plate tectonics to be initiated is ~100 MPa24 (within a factor of 2) depending on the distribution of mass at the surface25, corresponding to a friction coefficient of ~0.126. So any model must explain why the strength of the lithosphere of Venus is higher than this critical value but that of Earth is smaller than or comparable to this value. More precisely, in order for the plate tectonics style of convection to occur, the lithosphere should be strong at most places but it must lose its strength locally and temporarily at plate boundaries to reduce the average strength to ~100 MPa or less. In this paper, we will investigate how such a mechanical behavior of the lithosphere is possible on Earth but not on Venus.

In all models on the presence or absence of plate tectonics, a key issue is the subduction of the (oceanic) lithosphere. Plate tectonics occurs only when subduction is possible27,28. The types of resistance against subduction are schematically shown in Fig. 2. They include friction between the subducting lithosphere and the overlying materials, and the resistance against bending of the lithosphere itself by brittle failure (faulting) or ductile flow.

Figure 2. Schematic diagrams showing (a) the processes associated with subduction of the oceanic lithosphere and (b) a corresponding strength profile. (a) Subduction of the oceanic lithosphere needs to overcome frictional resistance against an overlying lithosphere, and resistance for bending of the oceanic lithosphere by brittle failure (faulting) in the shallow region and by ductile flow in the deep region. (b) A schematic strength profile corresponding to the processes shown in (a) (for a more detailed model, see Fig. 3).

The focus of this paper is to investigate the strength of the lithosphere itself to understand how the initiation of subduction is possible on Earth but not on Venus. In the next sections, we lay out a critical review of the existing models that explain the difference of convection style between the two planets.
We then discuss the importance of strong weakening by shear heating during earthquakes for this debate. We describe the thermal conditions under which frictional instabilities can develop. Finally, we show how the dynamics of fault slip with strong weakening is compatible with low seismic stress drops, large rock yield stress, but overall a low strength of the oceanic lithosphere on Earth.

Sensitivity of the strength profile to various factors

In order to illustrate a few key points, let us first consider the strength-depth profile based on the results of rock mechanics studies. To illustrate the influence of various factors (other than temperature), we calculate the strength profile for the thermal model of the 60 Myrs old oceanic geotherm on Earth (Fig. 3) (the influence of the temperature gradient and of the surface temperature will be explored later).

Figure 3. The strength profile of the lithosphere. A temperature-depth profile corresponding to 60 Myrs old oceanic lithosphere is used. Differential stress (σ1 − σ3) (σ1: the maximum compressional stress, σ3: the minimum compressional stress) needed for deformation at a \(10^{-14}\,{\rm s}^{-1}\) strain-rate is plotted (shear stress is given by \(\tau =\frac{{\sigma }_{1}-{\sigma }_{3}}{2}\)). In the shallow part, the strength is controlled by friction (μ: friction coefficient), and in the deep part by plastic flow. We consider diffusion creep (numbers correspond to grain size in micron), (power law) dislocation creep and the Peierls mechanism (low-temperature plasticity). Grain-size reduction in the plastic flow regime reduces the strength, but even for extremely small grain size (1 micron), the shallow lithosphere is strong if the friction coefficient is large (0.6). The reduction of the friction coefficient (to ~0.1) is an efficient way to reduce the strength of the lithosphere. The stress level below which plate tectonics would occur is shown by the green hatched region. (For the data source, see Supplementary Information.)

Such models are based on the idea that the resistance to deformation in the shallow part is controlled by the resistance for sliding on pre-existing faults, whereas in the deeper part it is controlled by plastic deformation29,30. In the deep region, where the strength is controlled by plastic deformation, the strength is sensitive to rock type, strain-rate, temperature and grain-size (the effect of pressure is only moderate for a typical activation volume of 10 cc/mol31). We will assume that the lithosphere of both Earth and Venus is made of peridotite and use the experimental results on dry olivine based on the results suggesting a relatively dry oceanic lithosphere19,32 (for the Venusian lithosphere where the crust is thick (~30 km11) we use the dry flow law of diabase12. For the oceanic lithosphere on Earth, the contribution from the crust to the lithosphere strength is negligible). We consider three deformation mechanisms: (i) diffusion creep, (ii) power-law dislocation creep and (iii) the Peierls mechanism (low-temperature plasticity). Grain-size of typical upper mantle rocks is several mm33,34 (for the processes by which grain-size is determined, see35). However, in shear zones, much smaller grain-sizes are observed, typically ~10s of microns but in some cases down to a few microns36,37. Therefore grain-sizes of 1, 10, 100 micron as well as 5 mm are assumed. For a grain-size of 5 mm, the dominant mechanism of deformation is either power-law dislocation creep or the Peierls mechanism, both of which are insensitive to grain-size.
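To make this construction concrete, the sketch below computes a strength envelope as the weaker of a frictional branch (the relation τ = μ(σn − Ppore) used in the next paragraph, with the normal stress approximated as lithostatic) and a ductile branch (a generic power-law creep law). This is only an illustration: the densities, geotherm, and flow-law parameters are placeholder values assumed here, not those behind Fig. 3, and diffusion creep and the Peierls mechanism are omitted.

```python
# Illustrative strength-envelope sketch (not the calculation behind Fig. 3).
# Brittle branch: tau = mu * (sigma_n - P_pore), with sigma_n taken as lithostatic and the
# differential stress approximated as 2*tau (using the paper's tau = (sigma1 - sigma3)/2).
# Ductile branch: generic power-law creep, sigma = (edot / (A*exp(-E/(R*T))))**(1/n), with
# placeholder dry-olivine-like parameters; low-temperature plasticity is omitted, so the
# ductile branch is unrealistically strong at shallow depth (the frictional branch controls there).
import numpy as np

g, rho, rho_w = 9.8, 3300.0, 1000.0       # gravity (m/s^2), rock and water densities (kg/m^3)
edot = 1e-14                              # strain rate (1/s), the value assumed in the text
A, n, E, R = 1.1e5, 3.5, 530e3, 8.314     # placeholder flow-law constants (MPa^-n/s, -, J/mol, J/mol/K)

def brittle_MPa(z, mu, hydrostatic_pore=False):
    """Differential stress supported by friction at depth z (m)."""
    sigma_n = rho * g * z                             # lithostatic normal stress (Pa), a simplification
    p_pore = rho_w * g * z if hydrostatic_pore else 0.0
    return 2.0 * mu * (sigma_n - p_pore) / 1e6        # MPa

def ductile_MPa(z, T_surf=273.0, dTdz=0.015):
    """Power-law creep strength at depth z (m) for a crude linear geotherm."""
    T = T_surf + dTdz * z
    return (edot / (A * np.exp(-E / (R * T)))) ** (1.0 / n)

depth = np.linspace(1e3, 60e3, 120)                   # 1-60 km
for mu in (0.6, 0.1):                                 # static vs. thermally weakened friction
    envelope = np.minimum(brittle_MPa(depth, mu), ductile_MPa(depth))
    print(f"mu = {mu:3.1f}: peak strength ~ {envelope.max():7.0f} MPa")
```

Run with these placeholder numbers, the envelope for μ = 0.6 peaks well above the ~100 MPa threshold discussed above, while for μ = 0.1 it is several times lower; this reproduces the qualitative point of Fig. 3 rather than its actual curves.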
In the shallow region where the strength is controlled by the resistance for motion of pre-existing faults, we used the following relation38,

$$\tau =\mu ({\sigma }_{n}-{P}_{pore})$$

where \(\tau \) is the shear stress needed for fault motion, μ is the friction coefficient, σn is the normal stress, and Ppore is the pressure of pore fluid. We consider the strength corresponding to normal faulting that is relevant to plate bending near the trench. To illustrate the range of strength one can get for different assumptions, several cases will be considered. For the friction coefficient, a canonical value is ~0.6. This is based on a large number of experimental studies that show that the static friction coefficient is nearly independent of rock type (including serpentinite), sliding velocity (for small sliding velocities, <0.1 m/sec)39,40 and temperature (to T < 600 °C; the influence of shear heating is small for less than ~0.1 m/sec41). However, we also show the result for a friction coefficient of 0.1 for comparison. As to the pore pressure, we consider two cases: (i) zero pore pressure (no fluid on the fault plane) and (ii) pore pressure = hydrostatic pressure of water. The latter is a case where the fault is filled with water that is connected to the surface. As can be seen in Fig. 3, the influence of pore pressure is small as long as it does not exceed the hydrostatic pressure. In contrast, the influence of the friction coefficient is large, if there are mechanisms to change the friction coefficient.

Difficulties in reducing the strength in the ductile regime

Another important point of Fig. 3 is the fact that the degree to which small grain-size reduces the strength is limited. This latter point may need an elaboration because there have been many publications where grain-size effects were considered to play a key role in controlling the strength of the lithosphere. For example, based on geological observations, weakening due to grain-size reduction is often proposed to explain shear localization36,37. Also, detailed theoretical models have been developed where the main mechanism for weakening of the lithosphere is grain-size reduction by dynamic recrystallization22,42. One of the obvious difficulties of the grain-size ("damage") model is the fact that because the temperature in the shallow lithosphere is so low, one would need exceedingly small grain-size (sub-micron) to make the lithosphere weak enough (see Fig. 3). Geological observations show that the grain-size of upper mantle rocks is typically a few mm33,34. In very rare cases, grain-size of a few microns (not sub-microns) is observed, but these are only in very thin layers (~1 cm thickness)36,43 (in these cases, the local strain rate will be much higher than the average strain rate, and the influence of fine grains on the average strength of the lithosphere will be limited). More fundamentally, it is difficult to produce small grains by dynamic recrystallization under small ambient strain rates36, as grain-size reduction requires substantial plastic strain (~10% or more strain)9,21,44. If the lithosphere were strong to begin with, not much plastic strain can be produced and small grains would not form. For example, assuming 100 MPa stress, the strain in olivine at 20 km depth in the oceanic lithosphere (P = 0.7 GPa, T = 300–500 °C) in 100 Myrs would be \(\sim {10}^{-16}\)–\({10}^{-6}\), too small for dynamic recrystallization to occur.
Extremely small grains (a few microns) found in some pseudotachylites are formed by high local stress associated with faulting36, not by purely plastic deformation. Furthermore, the degree to which small grain-size affects the strength of a plate is unclear because the distribution of weak regions, for example, the spacing of shear zones, is not well defined in the previous models (see a discussion presented in45). We conclude that although grain-size reduction does often lead to shear localization as seen in many mylonites on the continents36,37, the degree to which this makes the oceanic lithosphere weak and initiates subduction is limited.

Recently, Kumamoto et al.46 invoked a "size effect" and proposed that the strength of olivine in the low-temperature plasticity regime might be substantially lower than previously thought. This would reduce the strength in this regime somewhat, but Kumamoto et al.46 recognized that this effect is not enough to reduce the strength to ~100 MPa. However, the physical basis for this "size effect" is unclear as discussed in Supplementary Information. Furthermore, if this were the cause for the weakening of the lithosphere on Earth, it would be difficult to explain why the Venusian lithosphere is so strong because the strength in this regime decreases with temperature. Consequently, the "size effect" is not included in our model.

Reduction in friction coefficient by high velocity fault motion

Compared to reducing the strength in the ductile regime, it would be much more effective to reduce the brittle strength by either reducing the friction coefficient or by increasing the pore pressure beyond the hydrostatic pressure47 (Fig. 3). Because pore pressure in excess of hydrostatic pressure is most likely caused by heating47, essentially it is heating that is responsible for the reduction in frictional resistance. Recent experimental studies showed that the friction coefficient can be substantially reduced when the velocity of fault motion exceeds a threshold value of VC ~ 1 m/s41,48,49 (in the laboratory where the normal stress is ~10 MPa). But this reduction in friction coefficient does not occur instantaneously. At high velocities, the friction coefficient evolves with the sliding distance (x) from the initial static friction coefficient, \({\mu }_{o}\), to a reduced value, \({\mu }_{\infty }\), after a slip greater than the characteristic slip distance, Dth, viz.41,

$$\mu (x)={\mu }_{\infty }+({\mu }_{o}-{\mu }_{\infty }){e}^{-\tfrac{x}{{D}_{th}}}.$$

Using this relation, it can be shown that if the total slip distance (D) far exceeds the critical distance for thermal weakening (Dth), the effective friction coefficient defined by \({\int }_{o}^{D}\mu (x)\cdot {\sigma }_{n}\cdot S\cdot dx\equiv {\mu }_{eff}{\int }_{o}^{D}{\sigma }_{n}\cdot S\cdot dx\) (S: area of the fault, x: displacement) is reduced to \({\mu }_{\infty }\) (~0.1 or less41), viz.,

$${\mu }_{eff}={\mu }_{\infty }+\tfrac{{D}_{th}}{D}({\mu }_{o}-{\mu }_{\infty })(1-{e}^{-\tfrac{D}{{D}_{th}}})\Rightarrow \,{\mu }_{\infty }\,as\,D\gg {D}_{th}.$$

Because the integral defined above is the work done by friction, this definition implies that with this small effective friction coefficient, the condition for plate tectonics is satisfied from the energetics point of view. The causes of strong weakening at high-velocity fault motion are not fully understood, but they may involve several mechanisms. In some cases, melt is observed on the fault when the friction coefficient is reduced substantially (e.g.48).
However, weakening may not always involve melting. Some thermally activated processes such as decarbonation or dehydration reactions producing nano-size particles or high-pressure fluids might play some role41,50,51. However, in all these cases, high temperature from shear heating causes a reduction in the frictional strength. We therefore refer to these processes collectively as "thermal weakening". Thermal weakening occurs when the velocity of frictional sliding exceeds a threshold value (V > VC, V: velocity of sliding, VC: threshold velocity for thermal weakening). The threshold velocity for thermal weakening is on the order of 1 m/s41,48 for typical experimental conditions where the normal stress is ~10 MPa52. Another important condition for thermal weakening is a large enough slip distance, D ≫ Dth. When these two conditions are met, and if \({\mu }_{\infty }\) is smaller than ~0.1, then thermal weakening would lead to a substantial reduction in the resistance to plate subduction and would allow plate tectonics to occur.

Since all laboratory results used here are obtained at low normal stress (<~40 MPa), applications of these results to friction in the deep lithosphere where the normal stress is ~1,000 MPa or more require some scaling analysis. The scaling analysis summarized in Supplementary Information shows that thermal weakening is enhanced at higher normal stress and the conditions for VC and Dth are likely met for friction in the deep lithosphere, and the friction coefficient (\({\mu }_{\infty }\)) is substantially lower than the static friction coefficient (\({\mu }_{o}\)), particularly at high confining pressures. This is essentially due to the fact that at a high normal stress, more work is done by friction ((work/unit area) = (normal stress) × (displacement)). At a greater depth, the style of deformation changes from localized brittle behavior to distributed ductile behavior, and in the latter regime, intense shear heating will not occur. The distribution of seismicity in the old oceanic lithosphere (e.g.53) shows that localized deformation continues to ~50 km depth, and therefore our model will work to that depth.

Let us now consider under which conditions fast fault motion could occur. The velocity of fault motion spans a wide range54. A fault starts to move at a slow rate (\(\sim {10}^{-9}\) m/s), but when fault motion is unstable, the sliding velocity will be accelerated to a high value. The velocity of fault motion associated with a typical earthquake is ~1 m/s (e.g.55). Thermal weakening occurs only at the high end of slip rate (>~1 m/s) corresponding to regular earthquakes. However, this high-speed fault motion spontaneously occurs only when fault motion is unstable and accelerated. Experimental studies show that unstable, fast fault motion occurs at relatively low temperatures, below ~400 °C for crustal rocks56 and below ~600 °C for mantle rocks57. The latter agrees well with the maximum depth of intra-plate earthquakes in the oceanic lithosphere57. Given these conditions, it is clear that on Venus, the conditions for unstable fast fault motion are not met, but they are met in the shallow regions of Earth's lithosphere (Fig. 4). This leads to a small effective friction coefficient on Earth, but not on Venus.

Figure 4. Temperature profiles (solid and broken lines) for (a) Venus and (b) Earth compared with the conditions for unstable fast fault motion (hatched regions). On Earth, the oceanic geotherm for 60 Myr old ocean is assumed.
For Venus, three models of temperature-depth profiles are shown (dT/dz = 6, 12, 18 K/km). The conditions for unstable fast fault motion leading to a small effective friction coefficient are met at depths shallower than ~50 km in Earth (this depth depends on age). On Venus, the temperature in all regions exceeds the threshold temperature for unstable fast fault motion. Consequently, the strength in the shallow regions of Venus is characterized by the static friction coefficient of ~0.6, leading to a high strength that would not allow plate tectonics to occur. Strength profiles for Earth and Venus Assuming a small effective friction coefficient for Earth (0.1) and a large one for Venus (0.6), we calculated the strength profiles for these two planets (Fig. 5). For Earth, the strength of the oceanic lithosphere (with an age of 60 Myrs) is calculated. To account for a possible effect of pore pressure, we consider two cases, one is a case where the fault is filled with water that has the hydrostatic pressure (the fault rocks has the lithostatic pressure), and another is a case of no pore pressure. A key feature of the strength profile of Earth's lithosphere is that because fault motion is unstable, the brittle strength is not constant but evolves. At a static condition the friction coefficient is high (\({\mu }_{o}\) ~ 0.6), but it evolves to a low value (\({\mu }_{\infty }\) ~ 0.1 or less at high pressure) when slip velocity is high and slip distance far exceeds Dth. The strength profiles of (a) Venus and (b) Earth. For Venus, three temperature profiles dT/dz = 6, 12, 18 K/km are considered and the pore pressure was assumed to be zero. The lowest temperature gradient (dT/dz = 6 K/km) corresponds to much of the recent temperature profiles11. Higher temperature gradients (dT/dz = 12, 18 K/km) would represent the period soon after the large-scale over-turn. Venus has a much higher surface temperature than Earth leading to a higher friction coefficient (0.6) and hence a higher strength in the shallow part. For Earth, the oceanic geotherm corresponding to the age of 60 Myr is used. Both zero pore pressure and hydrostatic pore pressure are considered (same as Fig. 3). The strength of Earth's lithosphere is heterogeneous and evolves with time due to thermal weakening caused by unstable fast fault motion (Fig. 6): initial static high friction to dynamic low friction. The effective (average) strength corresponds to a dynamic low friction coefficient (see equation (3); for the details on calculating the strength for evolving friction, see Supplementary Information). The link between the reduced friction by thermal weakening and the average strength of the lithosphere depends on the entire history of stress evolution during the seismic cycle. A key issue in this connection is that the static stress drop associated with earthquakes is substantially smaller (~10 MPa or less) than the peak stress associated with static friction (~100 MPa or higher). As far as we accept this, we conclude that the strength in the brittle regime in Earth is approximately represented by the profile corresponding to \(\mu \simeq {\mu }_{\infty }=\) 0.1, and with this friction coefficient, Earth's lithosphere is weak enough for plate tectonics to operate. On Venus, the pore pressure is assumed to be zero, and three possible temperature-depth profile models are considered (dT/dz = 6, 12, 18 K/km). 
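The overall shape of the brittle (frictional) part of these strength profiles can be sketched with a simple Coulomb-type estimate, shear resistance ≈ friction coefficient × (lithostatic normal stress − pore pressure). The snippet below is only such a sketch: the densities, gravities and the Coulomb form itself are our own simplifying assumptions, whereas the profiles in Fig. 5 are computed from the full constitutive relations described in the Methods and the Supplementary Information.

    g_earth, g_venus = 9.8, 8.9            # surface gravity, m/s^2 (assumed values)
    rho_rock, rho_water = 3300.0, 1000.0   # densities, kg/m^3 (assumed values)

    def brittle_strength_mpa(depth_km, mu, g, pore="none"):
        """Frictional resistance ~ mu * (lithostatic normal stress - pore pressure), in MPa."""
        z = depth_km * 1e3
        sigma_n = rho_rock * g * z
        p_pore = rho_water * g * z if pore == "hydrostatic" else 0.0
        return mu * (sigma_n - p_pore) / 1e6

    for z_km in [10, 20, 40]:
        earth = brittle_strength_mpa(z_km, 0.1, g_earth, pore="hydrostatic")  # Earth, dynamic friction
        venus = brittle_strength_mpa(z_km, 0.6, g_venus, pore="none")         # Venus, static friction, dry
        print(f"{z_km:2d} km: Earth (mu~0.1, hydrostatic pore) ~{earth:4.0f} MPa | Venus (mu~0.6, dry) ~{venus:4.0f} MPa")

With these crude numbers the frictional resistance of Earth's lithosphere with the dynamic friction coefficient stays near or below the ~100 MPa level discussed in the next section, whereas dry Venus with static friction is several times stronger at the same depth; at greater depths the ductile flow laws, not friction, limit the strength in both cases.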
The small temperature gradient model is preferred based on the analysis of thermal structures based on Venusian deformation based partly on the results of laboratory data on dry diabase deformation by Mackwell et al.11,12, but we also use the higher temperature gradient that would correspond to the period soon after the large scale over-turn12. The lithosphere strength for Venus exceeds the critical strength for plate tectonics. In contrast, the calculated strength for Earth is compatible with plate tectonics. The threshold strength of the lithosphere for plate tectonics The preceding discussions are based on the presumption that the strength of the lithosphere needs to be ~100 MPa (within a factor of 2) or less for plate tectonics to operate. This presumption is based on the results of a large number of numerical modelings4,16,23,24,25,26,58. We note, however, that Buffett and Becker59 suggested that subduction could continue with the standard strength model of the lithosphere (average the strength of ~500 MPa or more). The difference between these two sets of studies may be caused by the fact that the initiation of subduction is more difficult than maintaining subduction. Once subduction has started, a large negative buoyancy force is available making it easier to maintain it. Consequently, we believe that the threshold strength of the lithosphere of ~100 MPa (within a factor of 2) is appropriate in investigating whether plate tectonics operates or not for a given planet. The role of water (or hydrous minerals) An alternative model for the weak lithosphere on Earth is the role of hydrous minerals. Indeed, some hydrous minerals (e.g., talc) reduce the friction coefficient60, and low friction coefficients are reported for samples from the San Andreas fault61 and from the fault in the Japan trench where the 2011 Mw 9.1 Tohoku earthquake took place62. However, it is not clear if a substantial amount of hydrous minerals is available in the deep oceanic lithosphere (~20–40 km depth) where plate bending must occur to initiate subduction. The oceanic lithosphere is depleted with water and other volatiles63, and it is difficult to imagine the presence of hydrous minerals in the deep lithosphere (for more details, see Supplementary Information). Furthermore, unlike talc or smectite, the hydrous minerals that would be present in the mantle portion of the lithosphere such as serpentine have a friction coefficient not much different from other materials below ~400 °C64. We conclude that strong weakening from fast, unstable fault slip is a more likely mechanism to reduce the strength of tectonic plates. A comparison to other observations How does our model explain other observations? The largest strike-slip earthquake sequence of the 2012 Mw 8.6 Indian Ocean earthquake occurred on near-perpendicular conjugate faults, indicating a low effective friction coefficient65. The Indian Ocean earthquake cut the entire lithosphere, showing that rupture processes may reduce fault strength deep into the lithosphere. Therefore this observation suggests that the dynamic friction coefficient in most of the oceanic lithosphere is small. However, such a model raises an issue of how to explain the initiation of fault slip on the strong asperity controlled by static friction because the strength corresponding to the static friction coefficient exceeds the tectonic stress level. 
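The inference of a low effective friction coefficient from the near-perpendicular conjugate faults mentioned above can be quantified under standard Coulomb faulting theory (our assumption; the text does not spell this out): conjugate failure planes form at an angle of 90° − φ to each other across the maximum compressive stress, where tan φ is the friction coefficient, so the friction implied by an observed conjugate angle follows directly.

    import math

    def implied_friction(conjugate_angle_deg):
        """Friction coefficient implied by the acute angle between conjugate faults
        (the angle containing the maximum compressive stress), per Coulomb theory."""
        phi_deg = 90.0 - conjugate_angle_deg
        return math.tan(math.radians(phi_deg))

    for angle in [60, 75, 85, 90]:
        print(f"conjugate angle {angle:3d} deg  ->  implied friction ~ {implied_friction(angle):.2f}")

An angle of ~60° corresponds to Byerlee-type friction of ~0.6, whereas a near-perpendicular pair implies a friction coefficient of ~0.1 or less, consistent with the interpretation of the 2012 Indian Ocean earthquake given above.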
Also, if the static and dynamic friction coefficients are so different, one may also ask how to explain inferred low stress drop (~10 MPa or less) from the analyses of earthquake mechanisms66. We believe that a key to solve this question is the heterogeneity of strength on a fault (Fig. 6) as discussed by Rice47 and others67,68. A model of a fault plane and corresponding stress distribution. (a) A fault plane is made of weak part (green background) and strong parts (red regions (large asperities)). (b) A large asperity initially deforms elastically when weak regions creep or slide and stress at a large asperity increases with time until the local stress reaches the critical stress for the asperity to break. The critical stress to break an asperity is approximately the same as the stress corresponding to static friction and depends nearly linearly on depth (this is why the stress corresponding to "static friction" has a broad range). Stress at a given point on a fault is also expected to be time dependent. After the break of an asperity, this region becomes weak (due to shear weakening) and stress is re-distributed (Figs S2, S3). If strong enough dynamic weakening is activated in regions of large static strength, the resulting long-term strength may be at the dynamic level. As a result, the stress may be at the static level (blue curve) most of the time, but at the dynamic level (purple band) for most of the fault slip. A fault contains various regions with different strength (an asperity model69). Fault motion occurs in weak regions under the tectonic stress that leads to stress concentration in the strong regions. This eventually breaks a strong asperity. When slip occurs in an unstable manner, stress concentration on a strong asperity occurs rapidly when the propagating slip front reaches close to the strong asperity. The strength threshold is reached dynamically, leading to a large dynamic (peak-to-peak) stress drop, but a low static (before minus after the seismic event) stress drop. Such a behavior leads to a low slip-averaged stress while still compatible with small seismic stress drops67,70. We performed a numerical modeling of stress evolution in such a heterogeneous fault, and show that for a certain choice of parameters characterizing the fault properties, we can reproduce the fault behavior that is consistent with low static stress drop and high static strength, i.e., a high time-averaged stress coeval with a low slip-averaged stress (for details see Supplementary Information). Are there enough earthquakes near the trench to accommodate deformation that occurs during subduction? Shear strain of an oceanic lithosphere caused by faulting is given by \(\varepsilon =\tfrac{h}{B}\) (see Fig. 7) where h is the displacement associated with faulting, B is the mean spacing of normal fault near trenches (\({B}={\upsilon }\cdot {\rm{\Delta }}{t}\) where v is the velocity of plate motion and \({\rm{\Delta }}t\) is the mean interval of earthquakes associated with these faults). Then the strain rate caused by these faults is \(\dot{\varepsilon }=\tfrac{{h}}{{\upsilon }\cdot {({\rm{\Delta }}{t})}^{2}}\). Given a typical value of h ~1 m estimated from that for 1933 M = 8.2 Sanriku earthquake71, and the plate velocity of 10−9 m/s, we should have Δt ≈ 104 year for this mechanism to make strain rate of 10−14 s−1 (strain rate associated with plate bending at a trench). 
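The required recurrence interval quoted above follows from rearranging \(\dot{\varepsilon} = h/(v\,(\Delta t)^{2})\) into \(\Delta t = \sqrt{h/(v\,\dot{\varepsilon})}\); a quick numerical check with the values given in the text:

    import math

    h = 1.0               # displacement per normal-faulting event, m (1933 Sanriku-type event)
    v = 1e-9              # plate velocity, m/s
    strain_rate = 1e-14   # bending strain rate at the trench, 1/s

    dt_seconds = math.sqrt(h / (v * strain_rate))
    dt_years = dt_seconds / (365.25 * 24 * 3600.0)
    print(f"required mean recurrence interval: {dt_years:.1e} years")  # ~1e4 years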
Chapple and Forsyth72 estimated the frequency of normal fault earthquakes near trenches for the whole Earth, and found that those with magnitude 8 occur every ~30 years. Given the total length of trench of ~50,000 km, and assuming a typical length of normal faults along the trench of ~100 km (corresponding to a M (magnitude) = 8 earthquake73), this can be translated to the mean time interval of Δt ≈ 104 years. This result agrees well with the present model. A diagram showing a relation between bending strain and faulting in a bending plate. h: vertical displacement associated with a normal fault, B: the mean spatial interval of faults, v: velocity of plate motion, \({\rm{\Delta }}t\): mean time interval of normal fault earthquakes Summary and perspectives Our model provides a possible explanation for the operation (initiation) of plate tectonics on Earth but not on Venus. Indeed, there are several observations on Venus that suggest a high strength of the near-surface layer compared to that of Earth7 including the positive correlation between topography and gravity field74, lack of subduction for ~500 Myrs3. However, the causes for the different style of convection (stagnant lid convection) on Venus are not entirely clear. In addition to the strong faults, other factors may also contribute to the lack of plate tectonics on Venus such as the absence of a low viscosity asthenosphere75 and the presence of a thick, buoyant crust11. Also, although evidence of large-scale tectonics such as plate tectonics is lacking on Venus, Venus shows evidence of extensive short-wavelength (~10s of km scale) deformation shown by widespread distribution of folding (e.g.1,2,76). There are several models to explain these short-wavelength tectonic features on Venus2,76. However, most of these features are old (~500 Myrs), and therefore one would need to consider different thermal structures than the current one. For example, soon after a large-scale over-turn at ~500 Myrs ago, near surface temperature would be substantially higher than the current temperatures that might have facilitated small-scale deformation (this would corresponding to the strength profile for a higher dT/dz in Fig. 5a). The implications of the strength of the lithosphere on the global dynamics and thermal evolution of a planet remain unclear. Moresi and Solomatov26 discussed that even if the strength of the near-surface layer plays an important role, the net heat loss and hence the thermal evolution of a planet is still controlled by mantle flow. In contrast, Conrad and Hager77 argued that the lithosphere strength controls thermal evolution (see also28,78). Our model suggests that the strength of faults plays a key role in controlling not only the nature of near-surface tectonics but also the global dynamics, such as the style of mantle convection. However, the nature of friction is not well understood at high normal stress relevant to faulting in the deep lithosphere (~20–30 km depth). Experimental studies on high velocity friction need to be extended to higher normal stress conditions. Also, the dynamics of heterogeneous fault needs further detailed studies including a broad range of parameters characterizing the slip behavior. Since the effective strength of faults depends on near-surface temperature, the coupling between climate and internal dynamics may be important in analyzing the geological evolution of planets (e.g.16). Finally, we point out that our proposed model predicts the absence of large quakes on Venus. 
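As a closing numerical note, the agreement with the Chapple and Forsyth frequency estimate quoted above amounts to the following arithmetic, using only the numbers given in the text: one magnitude-8 normal-fault event somewhere along the global trench system every ~30 years, distributed over ~50,000 km of trench in ~100 km long rupture segments, gives the recurrence interval for any individual segment.

    trench_length_km = 50_000.0    # total trench length (from the text)
    segment_length_km = 100.0      # rupture length of an M = 8 normal-fault earthquake (from the text)
    global_interval_yr = 30.0      # one such event somewhere on Earth every ~30 years (from the text)

    n_segments = trench_length_km / segment_length_km
    dt_segment_yr = global_interval_yr * n_segments
    print(f"{n_segments:.0f} segments -> recurrence per segment ~ {dt_segment_yr:.1e} years")  # ~1.5e4 years

This is of order 10^4 years, the interval required for trench normal faulting to accommodate the bending strain rate estimated above.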
Strength of rocks Strength of rocks in both brittle and ductile regimes is calculated using the standard definition of the strength in these two regimes. Strength is the differential stress needed to deform a rock at a given strain rate. We choose a strain rate of 10−14 s−1 appropriate for deformation near a trench. For the ductile regime, we consider three deformation mechanisms, i.e., power-law dislocation creep, the Peierls mechanism (low-temperature plasticity) and diffusion creep. In the ductile regime, the strength depends on materials. We assume olivine-rich rocks for the mantle of both planets. The oceanic crust makes little contribution to ductile strength on Earth, but for Venus thick crust (~30 km) makes some contributions. We assumed basaltic rocks (e.g., diabase) for the Venusian crust. For the brittle regime, we assume that many faults exist and the strength is controlled by the stress needed to move the pre-existing faults. However, the resistance against the fault motion is not constant when the velocity of fault motion becomes high enough caused by unstable fault slip (unstable flip occurs at low temperature on Earth but not at high temperature on Venus). In these cases, the lithosphere strength in the brittle regime evolves with time and both the initial and final friction coefficients are used to calculate the strength in the brittle regime. The details of used constitutive relations and the data source are given in Supplementary Information. Earth and Venus structure For Earth, 7 km thick oceanic crust and underlying upper mantle (made mainly of olivine) is assumed. For Venus, 30 km crust and upper mantle below is assumed based on a model by Nimmo and McKenzie11. A strength-depth profile also depends on the temperature-depth profile. We use a model oceanic geotherm corresponding to the age of 60 Myr for Earth. Temperature-depth relation in Venus is not well constrained. The crater density observations suggest that there was a large-scale over-turn of materials from the interior to the surface at ~500 Myrs ago, after that there was no major large-scale tectonics on Venus. Soon after the large-scale over-turn, the surface temperature was high whereas after the over-turn thermal gradient is likely reduced because cold materials are brought into the deep interior. Based on the review by Nimmo and McKenzie11, we use a model with dT/dz = 6 K/km (T: temperature, z: depth) for a representative thermal gradient, but we also use a higher gradient, dT/dz = 12 and 18 K/km to explore the strength profile corresponding to the period in which shallow regions are hotter. Uncertainties in these models are discussed in Supplementary Information. All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Information. Additional data related to this paper may be requested from the authors. Solomon, S. C. & Head, J. W. Fundamental issues in the geology and geophysics of Venus. Science 252, 252–260 (1991). Article ADS PubMed CAS Google Scholar Brown, C. D. & Grimm, R. E. Recent tectonic and lithospheric thermal evolution in Venus. Icarus 139, 40–48 (1999). Schaber, G. G. et al. Geology and distribution of impact craters on Venus: What are they telling us? Journal of Geophysical Research 97, 13257–13301 (1992). Solomatov, V. S. & Moresi, L. N. Stagnant lid convection on Venus. Journal of Geophysical Research 101, 4737–4753 (1996). Watts, A. B. Isostasy and Flexure of the Lithosphere. (Cambridge University Press, 2001). Simons, M., Hager, B. H. 
& Solomon, S. C. Global variations in the geoid/topography admittance of Venus. Science 264, 798–803 (1994). Foster, A. & Nimmo, F. Comparisons between the rift systems of East Africa, Earth and Beta Regio, Venus. Earth and Planetary Science Letters 143, 183–195 (1996). Schubert, G. et al. Structure and circulation of the Venus atmosphere. Journal of Geophysical Research 85, 8007–8025 (1980). Poirier, J.-P. Creep of Crystals. (Cambridge University Press, 1985). Karato, S. Deformation of Earth Materials: Introduction to the Rheology of the Solid Earth. (Cambridge University Press, 2008). Nimmo, F. & McKenzie, D. Volcanism and tectonics on Venus. Annual Review of Earth and Planetary Sciences 26, 23–51 (1998). Mackwell, S. J., Zimmerman, M. E. & Kohlstedt, D. L. High-temperature deformation of dry diabase with application to tectonics on Venus. Journal of Geophysical Research 103, 975–984 (1998). Kaula, W. M. The tectonics of Venus. Philosophical Transaction of the Royal Society of London A 349, 345–355 (1994). Hartmann, W. K. Moons and Planets. 4th edition edn, (Wadsworth Publishing Company, 1999). Kaula, W. M. Venus: A contrast in evolution to Earth. Science 247 (1990). Foley, B. J., Bercovici, D. & Landuyt, W. The conditions for plate tectonics on super-Earths: Inference from convection models with damage. Earth and Planetary Science Letters 331–332, 281–290 (2012). Bercovici, D. & Ricard, Y. Plate tectonics, damage and ingeritance. Nature 508, 513–516 (2014). Peslier, A. H. & Bizimis, M. Water in Hawaiian peridotites: A case for a dry metasomatized oceanic mantle lithosphere. Geochemistry, Geophysics, Geosystems 16, 1211–1232 (2015). Hirth, G. & Kohlstedt, D. L. Water in the oceanic upper mantle - implications for rheology, melt extraction and the evolution of the lithosphere. Earth and Planetary Science Letters 144, 93–108 (1996). Faccenda, M., Gerya, T. V. & Burlini, L. Deep slab hydration induced by bending-related variations in tectonic pressure. Nature Geoscience 2, 790–793 (2009). Karato, S., Toriumi, M. & Fujii, T. Dynamic recrystallization of olivine single crystals during high temperature creep. Geophysical Research Letters 7, 649–652 (1980). Bercovici, D., Tackley, P. J. & Ricard, Y. In Treatise on Geophysics Vol. 7 (ed Schubert, G.) 271–318 (Elsevier, 2015). Solomatov, V. S. & Moresi, L. N. Three regimes of mantle convection with non-Newtonian viscosity and stagnant lid convection on the terrestrial planets. Geophysical Research Letters 24, 1907–1910 (1997). van Heck, H. J. & Tackley, P. J. Planform of self-consistently generated plates in 3D spherical geometry. Geophysical Research Letters 35, https://doi.org/10.1029/2009GL035190 (2008). Lourenço, D., Rozel, A. & Tackley, P. J. Melting-induced crustal production helps plate tectonics on planets. Earth and Planetary Science Letters 439, 18–28 (2016). Moresi, L. & Solomatov, V. Mantle convection with a brittle lithosphere: thoughts on the global tectonic styles of the Earth and Venus. Geophysical Journal International 133, 669–682 (1998). McKenzie, D. P. In Island Arcs, Deep Sea Trenches and Back-Arc Basins (eds Talwani, M. & Pitman, W. C. III) 57–61 (American Geophysical Union, 1977). Korenaga, J. Initiation and evolution of plate tectonics on Earth: Theories and observations. Annual Review of Earth and Planetary Sciences 41, 117–151 (2013). Goetze, C. & Evans, B. Stress and temperature in the bending lithosphere as constrained by experimental rock mechanics. Geophysical Journal of Royal Astronomical Society 59, 463–478 (1979). 
Kohlstedt, D. L., Evans, B. & Mackwell, S. J. Strength of the lithosphere: constraints imposed by laboratory measurements. Journal of Geophysical Research 100, 17587–17602 (1995). Karato, S. Rheology of the deep upper mantle and its implications for the preservation of the continental roots: A review. Tectonophysics 481, 82–98 (2010). Peslier, A. H., Schönbächler, M., Busemann, H. & Karato, S. Water in the Earth's interior: Distribution and origin. Space Science Reviews in press (2018). Mercier, J.-C. C. Magnitude of the continental lithospheric stresses inferred from rheomorphic petrology. Journal of Geophysical Research 85, 6293–6303 (1980). Avé Lallemant, H. G., Mercier, J.-C. C. & Carter, N. L. Rheology of the upper mantle: inference from peridotite xenoliths. Tectonophysics 70, 85–114 (1980). Karato, S. Grain-size distribution and rheology of the upper mantle. Tectonophysics 104, 155–176 (1984). Jin, D., Karato, S. & Obata, M. Mechanisms of shear localization in the continental lithosphere: inference from the deformation microstructures of peridotites from the Ivrea zone, northern Italy. Journal of Structural Geology 20, 195–209 (1998). Handy, M. R. Deformation regimes and the rheological evolution of fault zones in the lithosphere: the effects of pressure, temperature, grain size and time. Tectonophysics 163, 119–152 (1989). Paterson, M. S. & Wong, T.-F. Experimental Rock Deformation - The Brittle Field. (Springer, 2005). Byerlee, J. D. Friction of rocks. Pure and Applied Geophysics 116, 615–626 (1978). Stesky, R. M. Mechanisms of high temperature frictional sliding in Westerly granite. Canadian Journal of Earth Sciences 15, 361–375 (1978). Di Toro, G. et al. Fault lubrication during earthquakes. Nature 471, 494–498 (2011). Bercovici, D. & Ricard, Y. Grain-damage hysteresis and plate tectonic states. Physics of the Earth and Planetary Interiors 253 (2016). Obata, M. & Karato, S. Ultramafic pseudotachylyte from Balmuccia peridotite, Ivrea-Verbana zone, northern Italy. Tectonophysics 242, 313–328 (1995). Derby, B. & Ashby, M. F. On dynamic recrystallization. Scripta Metallurgica 21, 879–884 (1987). Karato, S. Some remarks on the models of plate tectonics on terrestrial planets: From the view-point of mineral physics. Tectonophysics 631, 4–13 (2014). Kumamoto, K. M. et al. Size effects resolve discrepancies in 40 years of work on low-temperature plasticity. Science Advances 3, e1701338 (2017). Article ADS PubMed PubMed Central CAS Google Scholar Rice, J. R. Heating and weakening of faults during earthquake slip. Journal of Geophysical Research 111, https://doi.org/10.1029/2005JB004006 (2006). Tsutsumi, A. & Shimamoto, T. High-velocity frictional properties of gabbro. Geophysical Research Letters 24, 699–702 (1997). Tullis, T. E. In Treatise on Geophysics Vol. 4 (ed Schubert, G.) 139–159 (Elsevier, 2015). Ferri, F., Di Toro, G., Hirose, T. & Shimamoto, T. Evidence of thermal pressurization in high‐velocity friction experiments on smectite‐rich gouges. Terra Nova 22, 347–353 (2010). Brantut, N., Passelègue, F. X., Deldicque, D., Rouzaud, J.-N. & Schubnel, A. Dynamic weakening and amorphization in serpentine during laboratory earthquakes. Geology 44, 607–610 (2016). Niemeijer, A. R., Di Toro, G., Nielsen, S. & Di Felice, F. Frictional melting of gabbro under extreme experimental conditions of normal stress, acceleration, and sliding velocity. Journal of Geophysical Research 116, https://doi.org/10.1029/2010JB008181 (2011). McKenzie, D., Jackson, J. A. & Priestley, K. 
Thermal structure of oceanic and continental lithosphere. Earth and Planetary Science Letters 233, 337–349 (2005). Beroza, G. C. & Ide, S. Slow earthquakes and nonvolcanic tremor. Annual Review of Earth and Planetary Sciences 39, 271–296 (2011). Scholz, C. H. Mechanics of faulting. Annual Review of Earth and Planetary Sciences 17, 309–334 (1989). Scholz, C. H. Earthquakes and friction laws. Nature 391, 37–42 (1998). Boettcher, M. S., Hirth, G. & Evans, B. Olivine friction at the base of oceanic seismogenic zones. Journal of Geophysical Research 112, https://doi.org/10.1029/2006JB004301 (2007). Tackley, P. J. Self-consistent generation of tectonic plates in three-dimensional mantle convection. Earth and Planetary Science Letters 157, 9–22 (1998). Buffett, B. A. & Becker, T. W. Bending stress and dissipation in subducted lithosphere. Journal of Geophysical Research 117, https://doi.org/10.1029/2012JB009205 (2012). Hirauchi, K.-i. Fukushima, K., Kido, M., Muto, J. & Okamoto, A. Reaction-induced rheological weakening enables oceanic plate subduction. Nature Communications 7, 12550 (2016). Fulton, P. M. & Saffer, D. M. Potential role of mantle-derived fluids in weakening the San Andreas Fault. Journal of Geophysical Research 114, https://doi.org/10.1029/2008JB006087 (2009). Fulton, P. M. et al. Low coseismic friction on the Tohoku-Oki fault determined from temperature measurements. Science 342, 1214–1217 (2013). Plank, T. & Langmuir, A. H. Effects of melting regime on the composition of the oceanic crust. Journal of Geophysical Research 97, 19749–19770 (1992). Chernak, L. J. & Hirth, G. Deformation of antigorite serpentine at high temperature and pressure. Earth and Planetary Science Letters 296, 23–33 (2010). Masuti, S., Barbot, S., Karato, S., Feng, L. & Banerjee, P. Upper-mantle water stratification inferred from the observations of the 2012 Indian Ocean earthquake. Nature 538, 373–377 (2016). Kanamori, H. & Anderson, D. L. Theoretical basis of some empirical relations in seismology. Bulletin of Seismological Soceity of America 65, 1073–1095 (1975). Noda, H., Dunham, E. M. & Rice, J. R. Earthquake ruptures with thermal weakening and the operation of major faults at low overall stress levels. Journal of Geophysical Research 114, https://doi.org/10.1029/2008JB006143 (2009). Rice, J. R. Flash heating at asperity contacs and earthquake instabilities. EOS, Transactions of American Geophysical Union 80, F6811 (1999). Lay, T., Kanamori, H. & Ruff, L. The asperity model and the nature of large subduction zone earthquakes. Earthquake Prediction Research 1, 3–71 (1982). Yamashita, T. On the dynamical process of fault motion in the presence of friction and inhomogeneous initial stress Part I. Rupture propagation. Journal of Physics of the Earth 24, 417–444 (1976). Kanamori, H. Seismological evidence for a lithospheric normal faulting - The Sanriku earthquake of 1933. Physics of the Earth and Planetary Interiors 4, 289–300 (1971). Chapple, W. M. & Forsyth, D. W. Earthquakes and bending of plates at trenches. Journal of Geophysical Research 84, 6729–6749 (1979). Lay, T. & Wallace, T. C. Modern Global Seismology. (Academic Press, 1995). Solomon, S. C. Venus tectonics: An over-view of Magellan observations. Journal of Geophysical Research 97, 13199–13255 (1992). Richards, M. A., Yang, W. S., Baumgardner, J. R. & Bunge, H.-P. Role of a low-viscosity zone in stabilizing plate tectonics: Implications for comparative planetology. 
Geochemistry, Geophysics, Geosystems 2, https://doi.org/10.1029/2000GC000115 (2001). Head, J. W. Processes of crustal formation and evolution on Venus: An analysis of topography, hypsometry, and crustal thickness variation. Earth, Moon, and Planets 50/51, 25–55 (1990). Conrad, C. P. & Hager, B. H. The thermal evolution of an Earth with strong subduction zones. Geophysical Research Letters 26, 3041–3044 (1999). Korenaga, J. Energetics of mantle convection and the fate of fossil heat. Geophysical Research Letters 30, https://doi.org/10.1029/2003GL016982 (2003).

This work was supported by the National Research Foundation of Singapore under the NRF Fellowship scheme (National Research Fellow Awards No. NRF-NRFF2013-04) and by the Earth Observatory of Singapore, the National Research Foundation, and the Singapore Ministry of Education under the Research Centres of Excellence initiative. We thank David Bercovici, Peter Bunge, Giulio Di Toro, Taras Gerya, Jun Korenaga, Chris Marone, Jun Muto, Jerry Schubert, Tetsuzo Seno, Toshihiko Shimamoto, Paul Tackley and Teruo Yamashita for discussions. Comments by anonymous reviewers were helpful in improving the paper. This work comprises Earth Observatory of Singapore contribution no. 208.

Yale University, Department of Geology & Geophysics, New Haven, CT, USA: Shun-ichiro Karato. Earth Observatory of Singapore, Nanyang Technological University, Singapore: Sylvain Barbot.

S.K. and S.B. had a discussion on thermal weakening during the visit of S.K. to the Earth Observatory of Singapore. S.K. connected the concept of thermal weakening to the tectonic contrast between Venus and Earth and formulated the model. S.B. conducted a numerical simulation of stress evolution during the seismic cycle, and provided the background on friction, thermal weakening and the key literature in seismology. Correspondence to Shun-ichiro Karato.

Karato, Si., Barbot, S. Dynamics of fault motion and the origin of contrasting tectonic style between Earth and Venus. Sci Rep 8, 11884 (2018). https://doi.org/10.1038/s41598-018-30174-6
Homoclinic tangencies with infinitely many asymptotically stable single-round periodic solutions

Sishu Shankar Muni, Robert I. McLachlan and David J. W. Simpson
School of Fundamental Sciences, Massey University, Palmerston North, New Zealand
* Corresponding author: Sishu Shankar Muni
Received May 2020, Revised November 2020, Published January 2021

We consider a homoclinic orbit to a saddle fixed point of an arbitrary $ C^\infty $ map $ f $ on $ \mathbb{R}^2 $ and study the phenomenon that $ f $ has an infinite family of asymptotically stable, single-round periodic solutions. From classical theory this requires $ f $ to have a homoclinic tangency. We show it is also necessary for $ f $ to satisfy a 'global resonance' condition and for the eigenvalues associated with the fixed point, $ \lambda $ and $ \sigma $, to satisfy $ |\lambda \sigma| = 1 $. The phenomenon is codimension-three in the case $ \lambda \sigma = -1 $, but codimension-four in the case $ \lambda \sigma = 1 $ because here the coefficients of the leading-order resonance terms associated with $ f $ at the fixed point must add to zero. We also identify conditions sufficient for the phenomenon to occur, illustrate the results for an abstract family of maps, and show numerically computed basins of attraction.

Keywords: Multistability, homoclinic tangency, smooth maps, asymptotically stable, periodic solutions.
Mathematics Subject Classification: Primary: 37G25; Secondary: 37G15, 39A23.
Citation: Sishu Shankar Muni, Robert I. McLachlan, David J. W. Simpson. Homoclinic tangencies with infinitely many asymptotically stable single-round periodic solutions. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2021010
Figure 1. A sketch of tangentially intersecting stable [blue] and unstable [red] manifolds of a saddle fixed point of a two-dimensional map. Note that the tangential intersections form a homoclinic orbit
Figure 2. A sketch of the stable [blue] and unstable [red] manifolds of the origin for a $ C^\infty $ map $ f $ satisfying (3). A homoclinic orbit is indicated with black dots in the case $ 0<\lambda<1 $ and $ \sigma > 1 $
Figure 3. Selected points of an $ {\rm SR}_k $-solution (single-round periodic solution satisfying Definition 2.1) in the case $ \lambda > 0 $ and $ \sigma > 0 $. The region $ {\mathcal{N}}_\eta $ is shaded
Figure 4. The stability of a period-$ n $ solution of $ f $ in terms of the trace $ \tau $ and determinant $ \delta $ of the Jacobian matrix $ D f^n $ evaluated at one point of the solution
Figure 5. The basic structure of the phase space of the map (15)
Figure 6. The function (19) (with (20)) that we use as a convex combination parameter in (15)
Figure 7. Parts of the stable [blue] and unstable [red] manifolds of the origin for the map (15) with (16)–(21). Panels (a)–(d) correspond to (22)–(25) respectively. In each panel the region $ h_0 < y < h_1 $ is shaded
Figure 8. Asymptotically stable $ {\rm SR}_k $-solutions of (15) with (16)–(21). Panels (a)–(d) correspond to (22)–(25) respectively. Points of the stable $ {\rm SR}_k $-solutions are indicated by triangles and coloured by the value of $ k $ (as indicated in the key). In panels (a) and (b) the solutions are shown for $ k = 0 $ (a fixed point in $ y > h_1 $) up to $ k = 15 $. In panel (c) the solutions are shown for $ k = 0,2,4,\ldots,14 $ and in panel (d) the solutions are shown for $ k = 1,3,5,\ldots,15 $. In each panel one saddle $ {\rm SR}_k $-solution is shown with circles (with $ k = 14 $ in panel (c) and $ k = 15 $ in the other panels). In panels (b) and (c) asymptotically stable double-round periodic solutions are shown with diamonds
Figure 9. Basins of attraction for the asymptotically stable $ {\rm SR}_k $-solutions shown in Fig. 8. Specifically each point in a $ 1000 \times 1000 $ grid is coloured by that of the $ {\rm SR}_k $-solution to which its forward orbit under $ f $ converges
Improved optical performance of multi-layer MoS2 phototransistor with see-through metal electrode Junghak Park1 na1, Dipjyoti Das2 na1, Minho Ahn1, Sungho Park3, Jihyun Hur4 & Sanghun Jeon2 In recent years, MoS2 has emerged as a prime material for photodetector as well as phototransistor applications. Usually, the higher density of state and relatively narrow bandgap of multi-layer MoS2 give it an edge over monolayer MoS2 for phototransistor applications. However, MoS2 demonstrates thickness-dependent energy bandgap properties, with multi-layer MoS2 having indirect bandgap characteristics and therefore possess inferior optical properties. Herein, we investigate the electrical as well as optical properties of single-layer and multi-layer MoS2-based phototransistors and demonstrate improved optical properties of multi-layer MoS2 phototransistor through the use of see-through metal electrode instead of the traditional global bottom gate or patterned local bottom gate structures. The see-through metal electrode utilized in this study shows transmittance of more than 70% under 532 nm visible light, thereby allowing the incident light to reach the entire active area below the source and drain electrodes. The effect of contact electrodes on the MoS2 phototransistors was investigated further by comparing the proposed electrode with conventional opaque electrodes and transparent IZO electrodes. A position-dependent photocurrent measurement was also carried out by locally illuminating the MoS2 channel at different positions in order to gain better insight into the behavior of the photocurrent mechanism of the multi-layer MoS2 phototransistor with the transparent metal. It was observed that more electrons are injected from the source when the beam is placed on the source side due to the reduced barrier height, giving rise to a significant enhancement of the photocurrent. Molybdenum disulfide (MoS2), a typical transition metal dichalcogenide (TMDC) material, is attracting significant attention from researchers in the field of future optoelectronic devices due to its excellent optical as well as electrical properties, such as a high absorption coefficient, narrow bandgap, and high carrier mobility [1,2,3,4,5]. In recent years, MoS2 has been studied extensively in relation to thin film transistor (TFT) technology, and TFTs composed of multi-layer MoS2 have been found to exhibit useful features of future switching devices, such as large on/off current ratios, high field effect mobility values (μFE), low temperature processes, and low power consumption [6, 7]. Especially due to its narrow bandgap, MoS2 has emerged as a prime material for photodetector as well as phototransistor applications, demonstrating the potential to outperform graphene by demonstrating better light responsiveness [8,9,10,11,12]. MoS2, however, exhibits a direct or an indirect energy bandgap property based on the number of layers [13]; therefore, its carrier mobility, absorbance and luminescence properties as well as its structural properties all strongly depend on the number of layers [14,15,16]. Multi-layer MoS2 has a higher density of state and a relatively narrow bandgap as compared to mono-layer MoS2 which can be advantageous for phototransistor applications [17]. However, unlike mono-layer MoS2, the multi-layer MoS2 has an indirect bandgap characteristic and therefore multi-layer MoS2 phototransistors possess inferior optical properties as compared to mono-layer MoS2 phototransistors [9]. 
The optical properties of the multi-layer MoS2 phototransistors can be improved significantly by utilizing transparent electrodes. However, the high work function of conventional transparent electrodes often gives rise to a large Schottky barrier and limits device performance [18]. The proper choice of transparent electrode is therefore of utmost importance to achieve high performance from MoS2 phototransistors. In this study, we investigate the electrical as well as the optical properties of single-layer as well as multi-layer MoS2-based phototransistors and demonstrate improved optical properties of multi-layer MoS2 phototransistors through the use of see-through metal electrodes instead of traditional global bottom gate or patterned local bottom gate structures. An increase in the dark-state ON current as well as the photocurrent in an illuminated state was observed when increasing the MoS2 thickness from the monolayer to the bulk due to the increase in the carrier concentration along with an increase in the decay time, as revealed by persistent photoconductivity (PPC) measurements. The see-through metal electrode utilized in this study was found to exhibit transmittance of more than 70% under visible light at 532 nm, thereby allowing the incident light to reach the entire active area below the source and drain electrodes. To investigate the effect of the contact electrodes on MoS2 phototransistors further, phototransistors with conventional opaque electrodes and transparent IZO electrodes were fabricated and compared to the proposed electrode. To gain better insight into the behavior of the photocurrent mechanism of the multi-layer MoS2 phototransistor with the see-through metal electrode, position-dependent photocurrent measurements were also carried out by locally illuminating the MoS2 channel at different positions. The MoS2 phototransistors were fabricated in a conventional inverted staggered gate structure. Each MoS2 flake was mechanically exfoliated from a bulk MoS2 crystal and transferred to the top of a highly doped p-type Si wafer with a SiO2 thickness of 3000 Å. Highly doped p++ silicon and silicon dioxide layers were used as the back gate and gate insulator, respectively. After transferring the MoS2 onto the substrate, source and drain electrodes were patterned by conventional photolithography. Finally, the Ti/Au metal was deposited by electron beam evaporation as a contact electrode. The formation of single-layer as well as multi-layer MoS2 was confirmed with the help of AFM and Raman spectroscopy. The electrical and the optical properties of the TFTs were measured using a semiconductor device analyzer. A monochromator with wavelengths within the visible light region (400–900 nm) was employed to measure the optical characteristics of the individual phototransistors. In particular, a beam with a wavelength of 532 nm and with a radius of 1 µm in the Raman spectroscope was utilized to carry out the position-dependent photocurrent measurements of the TFTs. Figure 1a shows the schematic diagram of the fabricated MoS2 photo-TFT with the conventional bottom gate structure. The number of MoS2 layers in the as-fabricated phototransistor was confirmed from the AFM height profiles, as depicted in Fig. 1b. As shown in the figure, the height of the mono-layer MoS2 on the substrate is around 0.7 nm or more, which is slightly higher than the theoretical thickness of 6.15 Å due to the absorber on the MoS2 surface. Figure 1c presents the Raman spectrum of MoS2 at different thicknesses. 
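As a brief aside on the layer assignment from the AFM data above: a common rough estimate is to divide the measured step height by the ~0.615 nm thickness of a single MoS2 layer, allowing for the fact that the first layer reads high (here ~0.7 nm) because of adsorbates between the flake and the substrate. The snippet below is only such a rough estimate with an assumed offset, not the exact procedure used in this work; the Raman data of Fig. 1c, discussed next, provide an independent cross-check.

    MONOLAYER_NM = 0.615   # theoretical thickness of one MoS2 layer (from the text)

    def estimate_layers(step_height_nm, adsorbate_offset_nm=0.1):
        """Rough layer count from an AFM step height, using a nominal adsorbate offset (assumed)."""
        effective = max(step_height_nm - adsorbate_offset_nm, 0.0)
        return max(1, round(effective / MONOLAYER_NM))

    for h in [0.7, 1.4, 2.7, 4.3]:   # example step heights in nm (illustrative)
        print(f"step height {h:.1f} nm  ->  ~{estimate_layers(h)} layer(s)")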
The \( {\text{E}}^{ 1}_{{ 2 {\text{g}}}} \) mode close to 383 cm−1 and the A1g mode close to 408 cm−1 are observed from the mono-layer to the bulk MoS2. As shown in Fig. 1d, with an increase in the number of MoS2 layers, the frequency of the \( {\text{E}}^{ 1}_{{ 2 {\text{g}}}} \) peak decreases whereas that of the A1g peak increases. An increase in the number of MoS2 layers resulted in a decrease in the Van der Waals force [19] between adjacent layers, causing a red shift of the \( {\text{E}}^{ 1}_{{ 2 {\text{g}}}} \) peak. Moreover, the Van der Waals force at each MoS2 layer suppresses the vibration as the number of layers is increased. This produces a higher force constant [20], resulting in a blue shift of the A1g modes. a Schematic diagram of the conventional bottom gate structure MoS2 photo-TFT, b height profile of the MoS2 layers, c Raman spectra of MoS2 samples with different numbers of layers. The left and right red lines indicate the positions of the \( {\text{E}}^{ 1}_{{ 2 {\text{g}}}} \) and A1g peaks in the mono-layer MoS2, respectively. d Frequencies of the \( {\text{E}}^{ 1}_{{ 2 {\text{g}}}} \) and A1g Raman peaks (left side axis) and their difference (right side axis) with different numbers of MoS2 layers Figure 2a presents the IDS–VGS characteristics of the MoS2 phototransistors with different layer thicknesses under dark and illuminated conditions. An increase in the dark-state ON current as well as the photocurrent in the illuminated state was observed with an increase in the MoS2 thickness from the monolayer to the bulk. Figure 2b shows the photocurrent (IPhoto) to dark current (IDark) ratio in the off state and the drain current in the on state as a function of the layer thickness. The increase in the drain current with the layer thickness can be explained by the increased carrier concentration. The effects of the layer thickness on the persistent photoconductivity (PPC) of the fabricated MoS2 phototransistors are shown in Fig. 2c. The PPC measurements were carried out by exposing the MoS2 TFTs to light pulses at a wavelength of 400 nm with a fixed intensity (5 mW/cm2). Figure 2d shows the decay time with the maximum photocurrent for the different layers obtained from Fig. 2c. Here, the decay time represents the time required for the photocurrent to decrease from the maximum level to one-fifth of its maximum value. It can be seen that the decay time and the magnitude of the maximum photocurrent of the phototransistors increase with an increase in the layer thickness. a IDS–VGS characteristics under dark and illuminated conditions (λ = 400 nm and power ∼ 50 mW/cm2), b ratio between IPhoto and IDark (left side axis) at VGS = − 20 V and IDS (right side axis) at VGS 20 V, c dynamic photosensitivity under pulsed illumination, and d decay time and maximum photocurrent (obtained from Fig. 4.2c) of MoS2 photo-TFT samples with different numbers of MoS2 layers The optical property of the as fabricated MoS2 phototransistors were significantly enhanced by adapting see-through transparent electrodes instead of the traditional global bottom gate or patterned local bottom gate structures and were compared with those of conventional opaque electrodes and transparent IZO electrodes. Figure 3a–c show the IDS–VGS characteristics of MoS2 phototransistors with various metal electrodes. This measurement was carried out under both dark and light conditions using a focused laser with different wavelengths at steps of 100 nm. 
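Before turning to the electrode comparison of Fig. 3, it is worth noting that the decay time used above (the time for the photocurrent to fall from its maximum to one fifth of that maximum) is straightforward to extract from a measured transient. The exponential trace below is a synthetic stand-in for real PPC data, with an assumed time constant.

    import numpy as np

    def decay_time(t, i_photo):
        """Time for the photocurrent to fall from its maximum to 1/5 of the maximum."""
        i_photo = np.asarray(i_photo, dtype=float)
        k_max = int(np.argmax(i_photo))
        target = i_photo[k_max] / 5.0
        below = np.where(i_photo[k_max:] <= target)[0]
        if below.size == 0:
            return None  # never decays to 1/5 of the peak within the record
        return t[k_max + below[0]] - t[k_max]

    t = np.linspace(0.0, 50.0, 5001)     # seconds
    i = 1e-7 * np.exp(-t / 4.0)          # synthetic transient, 4 s time constant (assumed)
    print(f"decay time ~ {decay_time(t, i):.2f} s")   # expect ~4 * ln(5) ~ 6.4 s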
IDS–VGS characteristics of the multi-layer MoS2 phototransistor with a see-through metal, b thick opaque metal and c IZO transparent metal in the dark and under illumination at different wavelengths From Fig. 3c, it can be seen that the use of IZO transparent metals limits the optical performance of the MoS2 phototransistor due to the high Schottky barrier resulting from the Fermi-level pinning effect caused by its high workfunction (~ 5 eV). Furthermore, the sheet resistance of the IZO metal electrode obtained from four-point probe measurements was as high as 105 Ω/square. On the other hand, the sheet resistance of the see-through metal electrode was 8 Ω/square, much lower than that of IZO, which is not significantly different from the value of 1.4 Ω/square, which is the sheet resistance of a conventional Ti/Au metal electrode. The see-through metal was chosen not only for its sheet resistance properties but also for its transmittance capabilities. This see-through metal electrode shows transmittance of 70% under visible light of 532 nm and allows incident light to reach the entire channel area below the source and drain electrodes. The optical properties of the external quantum efficiency (EQE), responsiveness (R) and collected carrier density (ncoll) of the phototransistors were extracted and calculated from Eqs. (1), (2) and (3) as a function of the wavelength depending on the different contact electrodes [21, 22]. These results are shown in Fig. 4. $$ {\text{EQE}} = \frac{{I_{DS} /q}}{{P_{i} /h\upsilon }} $$ $$ {\text{R}} = \frac{{J_{total} - J_{dark} }}{{P_{i} }} $$ $$ \eta_{coll} = \frac{{I_{DS} }}{{q\mu_{FE} \left( {W/L} \right)t_{s} V_{DS} }}. $$ Here Pi is the power density in the illumination state, hν is the incident photon energy, JTotal is the current density in the illumination state, JDark is the current density in the dark state, μFE is the field-effect mobility of each device and tS is the MoS2 layer thickness. a IPhoto and external quantum efficiency (EQE), b responsivity and collected carrier density values of multi-layer MoS2 phototransistors with see-through metal, thick opaque metal and IZO transparent metal electrode under illumination at different wavelengths As shown in Fig. 4, MoS2 phototransistors with see-through metal electrodes exhibit significantly improved optical properties as compared to the thick opaque or IZO transparent metal in the visible region. This results from the enhancement in the photocurrent due to the penetration of incident light to the entire active region below the transparent electrode, as described above. To identify the photocurrent mechanism, the photocurrent of the multi-layer MoS2 phototransistor with the see-through metal electrode was measured by locally illuminating the MoS2 channel at different positions (inset of Fig. 5b). A beam with a wavelength of 532 nm at an intensity level of 0.99 μW was used for this purpose. As indicated by the IDS–VGS characteristics presented in Fig. 5b, the photocurrent of the MoS2 TFT is highest when the beam is located at the source position (A), after which it decreases along the channel (B, C, D), and is lowest at the drain (E). This can be explained by the barrier height variation (BHV), i.e., ΔφB, between the source and the channel due to the incident light. The BHV in this case is mainly caused by the electrostatic force induced at the junction between the metal and the semiconductor. It can be expressed by the following equation. 
$$ I_{DS} = I_{DS0} \exp \left( \alpha \frac{q\Delta \Phi_{B}}{kT} \right). $$ Here IDS0 is a reference current value without variation of the barrier height, α is a constant and kT is the thermal energy at room temperature.

Fig. 5: a IDS–VGS curves and b variations in the barrier height with the beam position of the multi-layer MoS2 phototransistors (inset: 3D schematic image of a MoS2 phototransistor representing the different beam positions).

Figure 5b shows the BHV as a function of the beam position, obtained from Eq. (4). As expected, the BHV at the source position has the largest value. Due to the increased BHV, a greater amount of electron injection (ninj) occurs from the source and causes an increase in the photocurrent [23]. From Eqs. (3) and (4), ninj can be deduced as follows: $$ n_{inj} = n_{0} \exp \left( \alpha \frac{q\Delta \Phi_{B}}{kT} \right). $$ Here n0 is a constant indicating the reference carrier density.

In summary, we fabricated mono-layer to multi-layer MoS2 TFTs and evaluated their electrical and optical properties. Increases in the dark-state ON current as well as the photocurrent in an illuminated state were observed, along with an increase in the decay time (as found in the PPC measurement results), with an increase in the MoS2 thickness from the monolayer to the bulk due to the increased carrier concentration. To improve the optical properties, a see-through metal electrode with a very thin film of Ti/Au metal was introduced. This see-through metal electrode showed a transmittance of 70% or more under visible light at 532 nm. MoS2 phototransistors with see-through metal electrodes exhibit significantly improved optical properties as compared to thick opaque or IZO transparent metal samples in the visible light region. Furthermore, photocurrent measurements with respect to the position of the beam along the MoS2 channel revealed that more electrons are injected from the source when the beam is placed on the source side due to the reduced barrier height, giving rise to a significant enhancement in the photocurrent. We hope that the results presented here can provide considerable help to those attempting to understand the photocurrent mechanism as well as the origin of the enhanced photocurrent in these types of devices.

M. Chhowalla, H.S. Shin, G. Eda, L.J. Li, K.P. Loh, H. Zhang, Nature chemistry 5(4), 263 (2013) A.K. Geim, I.V. Grigorieva, Nature 499(7459), 419 (2013) X. Huang, Z. Zeng, H. Zhang, Chem. Soc. Rev. 42(5), 1934 (2013) Q.H. Wang, K. Kalantar-Zadeh, A. Kis, J.N. Coleman, M.S. Strano, Nat. Nanotechnol. 7(11), 699 (2012) S.A. Han, R. Bhatia, S.-W. Kim, Nano Conv. 2, 17 (2015) S. Kim, A. Konar, W.S. Hwang, J.H. Lee, J. Lee, J. Yang, C. Jung, H. Kim, J.B. Yoo, J.Y. Choi, Y.W. Jin, S.Y. Lee, D. Jena, W. Choi, K. Kim, Nat. Commun. 3, 1011 (2012) C. Muratore, J.J. Hu, B. Wang, M.A. Haque, J.E. Bultman, M.L. Jespersen, P.J. Shamberger, M.E. McConney, R.D. Naguy, A.A. Voevodin, Appl. Phys. Lett. 104, 26 (2014) S.H. Yu, Y. Lee, S.K. Jang, J. Kang, J. Jeon, C. Lee, J.Y. Lee, H. Kim, E. Hwang, S. Lee, J.H. Cho, ACS Nano 8(8), 8285 (2014) J. Kwon, Y.K. Hong, G. Han, I. Omkaram, W. Choi, S. Kim, Y. Yoon, Adv. Mater. 27(13), 2224 (2015) C.C. Wu, D. Jariwala, V.K. Sangwan, T.J. Marks, M.C. Hersam, L.J. Lauhon, J. Phys. Chem. Lett. 4(15), 2508 (2013) W. Zhang, J.K. Huang, C.H. Chen, Y.H. Chang, Y.J. Cheng, L.J. Li, Adv. Mater. 25(25), 3456 (2013) B. Wang, C. Muratore, A.A. Voevodin, M.A. Haque, Nano Conver. 1, 22 (2014) J.E. Padilha, H. Peelaers, A. Janotti, C.G. Van de Walle, Phys Rev B 90, 20 (2014) G.H. Han, N.J. Kybert, C.H. Naylor, B.S.
Lee, J.L. Ping, J.H. Park, J. Kang, S.Y. Lee, Y.H. Lee, R. Agarwal, A.T.C. Johnson, Nat. Commun. 6, 6123 (2015) H.Y. Chang, M.N. Yogeesh, R. Ghosh, A. Rai, A. Sanne, S.X. Yang, N.S. Lu, S.K. Banerjee, D. Akinwande, Adv. Mater. 28(9), 1818 (2016) J.U. Lee, J. Park, Y.W. Son, H. Cheong, Nanoscale 7(7), 3229 (2015) W. Choi, M.Y. Cho, A. Konar, J.H. Lee, G.B. Cha, S.C. Hong, S. Kim, J. Kim, D. Jena, J. Joo, S. Kim, Adv. Mater. 24(43), 5832 (2012) D.S. Schulman, A.J. Arnold, S. Das, Chem. Soc. Rev. 47(9), 3037 (2018) H. Li, Q. Zhang, C.C.R. Yap, B.K. Tay, T.H.T. Edwin, A. Olivier, D. Baillargeat, Adv. Funct. Mater. 22(7), 1385 (2012) C. Lee, H. Yan, L.E. Brus, T.F. Heinz, J. Hone, S. Ryu, ACS Nano 4(5), 2695 (2010) S. Jeon, I. Song, S. Lee, B. Ryu, S.E. Ahn, E. Lee, Y. Kim, A. Nathan, J. Robertson, U.I. Chung, Adv. Mater. 26(41), 7102 (2014) S.E. Ahn, I. Song, S. Jeon, Y.W. Jeon, Y. Kim, C. Kim, B. Ryu, J.H. Lee, A. Nathan, S. Lee, G.T. Kim, U.I. Chung, Adv. Mater. 24(19), 2631 (2012) K.H. Choi, J.Y. Kim, Y.S. Lee, H.J. Kim, Thin Solid Films 341(1–2), 152 (1999) This work was supported by LG Display. This work was supported by LG Display (KR). Junghak Park and Dipjyoti Das contributed equally to this work. Department of Applied Physics, Korea University, 2511 Sejong-ro, Sejong city, 30019, Republic of Korea Junghak Park & Minho Ahn School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Yuseong, Daeharkro 291, Dajeon-city, Republic of Korea Dipjyoti Das & Sanghun Jeon Division of Life Science and Chemistry, Daejin University, 1007, Hguk-ro, Pochehon city, Gyeonggi-do, 487-711, Republic of Korea Department of Electrical Engineering, Sejong University, 209, Neungdong-ro, Gwangjin-gu, Seoul-city, 05006, Republic of Korea Jihyun Hur Junghak Park Dipjyoti Das Minho Ahn Sanghun Jeon JP and DD contributed equally in this work. JP, DD and SJ designed the experiments and participated in the conceptual discussion. JP and MA carried the experiments out. DD, SP and SJ helped in analyzing and interpreting the experimental data. The work was carried out under the supervision of SJ. All authors read and approved the final manuscript. Correspondence to Sungho Park or Sanghun Jeon. Park, J., Das, D., Ahn, M. et al. Improved optical performance of multi-layer MoS2 phototransistor with see-through metal electrode. Nano Convergence 6, 32 (2019). https://doi.org/10.1186/s40580-019-0202-5 Barrier height MoS2 See-through metal electrode
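To make Eqs. (1)–(3) of the article above concrete, here is a minimal numerical sketch that evaluates the three figures of merit at a single operating point. Every device number below (optical power, currents, mobility, geometry, bias) is a hypothetical placeholder, not a value reported in the paper.

```python
# Physical constants (SI)
q = 1.602176634e-19        # C
h = 6.62607015e-34         # J*s
c = 2.99792458e8           # m/s

# Hypothetical operating point (assumed values, NOT from the article)
wavelength = 532e-9        # m
P_opt = 5.0e-8             # W, optical power incident on the channel
P_i = 50.0                 # W/m^2, incident power density (5 mW/cm^2)
I_ds = 2.0e-8              # A, drain current under illumination
J_light, J_dark = 4.0, 0.5 # A/m^2, current densities under light / in the dark
mu_fe = 10.0e-4            # m^2/(V s), field-effect mobility (10 cm^2/V s)
W_over_L = 5.0             # channel width-to-length ratio
t_s = 20e-9                # m, MoS2 thickness
V_ds = 1.0                 # V

hv = h * c / wavelength                                  # photon energy, J
eqe = (I_ds / q) / (P_opt / hv)                          # Eq. (1)
responsivity = (J_light - J_dark) / P_i                  # Eq. (2), A/W
n_coll = I_ds / (q * mu_fe * W_over_L * t_s * V_ds)      # Eq. (3), m^-3

print(f"EQE ≈ {eqe:.2f}, R ≈ {responsivity:.3f} A/W, n_coll ≈ {n_coll:.2e} m^-3")
```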
CommonCrawl
RF Characterization of Diamond Schottky p-i-n Diodes for Receiver Protector Applications
Harshad Surdi, Mohammad Faizan Ahmad, Franz Koeck, Robert J. Nemanich, Stephen Goodnick, Trevor J. Thornton
Engineering, Ira A. Fulton Schools of (IAFSE)
Diamond Schottky p-i-n diodes have been grown by plasma-enhanced chemical vapor deposition (PECVD) and incorporated as a shunt element within coplanar striplines for RF characterization. The p-i-n diodes have a thin, lightly doped n-type layer that is fully depleted by the top metal contact, and they operate as high-speed Schottky rectifiers. Measurements from dc to 25 GHz confirm that the diodes can be modeled by a voltage-dependent resistor in parallel with a fixed-value capacitor. In the OFF state with a dc bias of 0 V, the diode insertion loss is less than 0.3 dB at 1 GHz and increases to 14 dB when forward biased to 7.6 V. With a contact resistance, $R_{C}$, of 0.25 $\text{m}\Omega\cdot$ cm2 and an OFF capacitance, $C_{\mathrm{OFF}}$, of 17.5 nF/cm2, the diodes have an RF figure of merit $F_{\mathrm{oc}} = (2\pi R_{C}C_{\mathrm{OFF}})^{-1}$ of 36.5 GHz. The RF model suggests that reducing $R_{C}$ to less than $5\times 10^{-5}\ \Omega\cdot$ cm2 will enable input power rejection exceeding 30 dB. Compared to conventional silicon or compound semiconductor based power limiters, the superior thermal conductivity of the diamond Schottky p-i-n diodes makes them ideally suitable for RF receiver protectors (RPs) that require high power handling capability.
Keywords: Diamond; SPICE model extraction; Schottky diodes; power semiconductor device; receiver protectors (RPs)
Published in: IEEE Microwave and Wireless Components Letters, Vol. 30, No. 12, pp. 1141-1144, December 2020 (article 9241775). https://doi.org/10.1109/LMWC.2020.3031219
Funding: This work was supported in part by NASA through the HOTTech Program under Grant NNX17AG45G and in part by the National Science Foundation (NSF) under Program NNCI-ECCS-1542160. Harshad Surdi and Mohammad Faizan Ahmad contributed equally to this work. Corresponding author: Trevor J. Thornton.
Citation: Surdi, H., Ahmad, M. F., Koeck, F., Nemanich, R. J., Goodnick, S., & Thornton, T. J. (2020). RF Characterization of Diamond Schottky p-i-n Diodes for Receiver Protector Applications. IEEE Microwave and Wireless Components Letters, 30(12), 1141-1144. https://doi.org/10.1109/LMWC.2020.3031219
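As a quick arithmetic check of the figure of merit quoted in the abstract above, the snippet below evaluates $F_{\mathrm{oc}} = (2\pi R_{C}C_{\mathrm{OFF}})^{-1}$ from the stated $R_C$ = 0.25 mΩ·cm² and $C_{\mathrm{OFF}}$ = 17.5 nF/cm²; note that the cm² factors cancel in the product.

```python
import math

R_C = 0.25e-3      # ohm * cm^2, contact resistance quoted in the abstract
C_OFF = 17.5e-9    # F / cm^2, OFF-state capacitance quoted in the abstract

F_oc = 1.0 / (2.0 * math.pi * R_C * C_OFF)   # area factors cancel, result in Hz
print(f"F_oc ≈ {F_oc / 1e9:.1f} GHz")        # ≈ 36.4 GHz, matching the quoted 36.5 GHz
```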
CommonCrawl
Why doesn't light, which travels faster than sound, produce a sonic boom?
I know that when an object exceeds the speed of sound ($340$ m/s) a sonic boom is produced. Light, which travels at $300,000,000$ m/s, much more than the speed of sound, doesn't produce a sonic boom, right?
electromagnetism speed-of-light acoustics
A.R.K
see en.wikipedia.org/wiki/Cherenkov_radiation
At first I also thought "Cherenkov radiation", but this question is asking why light doesn't make a sonic boom, not why there is no equivalent of a sonic boom when it comes to light. – Roman Starkov
Light doesn't make a sound. – OrangeDog
@OrangeDog Even if it's in the woods and there is no-one around to hear it? – Phill Healey
@JaapEldering Lightning is electrons, not photons – Jim
A sonic boom is produced when a macroscopic object (say, roughly: larger than the average spacing between air molecules, $\approx 3\,\mathrm{nm}$) moves so fast that the air has no time to "get out of its way" in the usual way (linearly responding[1] to a pressure buildup, which creates a normal sound wave that disperses rather quickly, more or less uniformly in all directions). Instead, the air then has to create a sharp shock wave, which is two-dimensional and can therefore be heard much further away. Now, with small particles like light, this issue doesn't arise, because the air doesn't need to get out of the way in the first place: at least visible photons don't interact with air much at all, so they simply "fly past". When there is an interaction, it pretty much means just a single air molecule is hit by a photon. This gives it a slight "knock" but nothing dramatic. And in particular, it doesn't happen simultaneously along a whole front, so there's no reason a shock wave would build up.
[1] Another way to look at this is to consider the gas on a molecular level. The molecules have a lot of thermal movement – the average speed is of the same order of magnitude as the speed of sound. On this microscopic level, sound propagation is basically a "chain of messengers": one molecule gets knocked to be slightly faster or slower than usual. This extra momentum information is carried on not so much by the sound-wave movement as by the random thermal movements – in a "smooth" way. Therefore a slow-moving object, or a sufficiently small object (like an alpha particle), only causes normal sound waves. But it doesn't work like that if you hit the air on a whole front at faster than the speed of sound: in this case, the forward momentum you impart is larger than the usual thermal movement, and you get supersonic behaviour.
leftaroundabout
And if we consider small particles, think about why alpha & beta radiation doesn't make sonic booms. – jamesqf
From what I understand, the speed of sound in a medium increases with pressure; an object moving at the ambient speed of sound must increase the pressure in front of it sufficiently that the pressure wave can stay in front of the object; since light can't pressurize air enough to raise the speed of sound to the speed of light, the only way it can go through air is by "missing" all the molecules therein. Any photon that hits a gas molecule is going to get deflected, since it can't push the molecule out of the way. Would that be a fair statement?
How much light would you need then?
+1. This is a much better answer than the accepted answer. This answer could be made even better by noting that most of the molecules that make up the atmosphere are, on average, moving faster than the speed of sound.
I know that when an object exceeds the speed of sound [340 m/s] a sonic boom is produced. Light which travels at 300,000,000 m/s [much more than the speed of sound] doesn't produce a sonic boom, right? Why?
The answer is already in your own question: just because light is not an object. Sound "is a vibration that propagates as a typically audible mechanical wave of pressure and displacement, through a medium such as air or water" and must propagate itself by compressing particles (atoms/molecules). The sonic boom is, as you rightly say, sound produced by the compression of air molecules by an object, and it also propagates through the air. This animation represents a sound source traveling at 1.4 times the speed of sound (Mach 1.4). Since the source is moving faster than the sound waves it creates, it leads the advancing wavefront. The sound source will pass by a stationary observer before the observer hears the sound it creates; the shock wave is on the edge of the resulting cone. Light is an electromagnetic wave that also propagates in vacuum by modifying electric and magnetic fields. These fields do not interact with air strongly enough to compress it and produce sound.
terdon
I agree with your answer, but a "fun fact" on light interacting with air: a strong laser pulse can actually ionize air, so that you would expect sound. – mikuszefski
@jbarker2160 not at all; that's why I posted it as a comment on a specific detail of an otherwise correct answer.
Light is also a stream of photons. It also does interact with air. Not just strong pulses; any light does (it scatters and gets absorbed).
@TheBlackCat: It does not. But the answer says it does not interact, without qualifying it. And it needs to be qualified.
This is an apples and oranges comparison. Sound is not a thing, it's the effect of air molecules and other particles being moved in patterns, whereas light is a particle, which in the case of @mikuszefski's comment can actually produce sound like anything else under the right conditions. A jet bursting through the air at supersonic speeds moves a lot of air, causing a loud noise. The interactions with air caused by photons are typically much more subtle, which is why there's no boom. – Chris Pratt
There are many differences between light and sound waves noted in other answers, such as the impossibility of any object with nonzero rest mass reaching lightspeed. However, there is one likeness that I don't think has been noticed yet and that is the following: a sound wave travelling at the speed of sound does not make a sonic boom! This is because the sound wave, like a light wave in the EM field, is simply the propagation of a fixed amount of energy. There is nothing "adding to" the sound - or light - wave as it travels. In contrast, an aeroplane flying at the speed of sound is constantly adding energy to the propagating wave through drag mechanisms. That energy can't propagate faster than the object adding energy to the acoustic field, with the result that you get a bunching up of a great deal of acoustic energy in a narrow wavefront. The object is keeping up with the wavefront, continuously adding energy.
If the wavefront can outrun the object, the energy gets spread out over a wide volume. Exactly the same thing happens in the phenomenon of Cherenkov radiation, where a particle is constantly adding to the electromagnetic field but the presence of dielectric matter means that the disturbance propagates at less than $c$, so we have the same situation of a body "keeping up" with the wavefront and continuously adding to the latter's energy. Cherenkov radiation is indeed an electromagnetic analogue of the sonic boom.
Selene Routley
When comparing light waves and sound waves in this fashion, we need to consider what is waving. In a sound wave, the positions of air molecules are waving. In a light wave, the strength and direction of the electromagnetic field are waving. This does not exert any force on air molecules (actually it does, but that force is so small, and the frequencies are so fast, that there might as well be no force occurring). In order for a sonic boom to happen, an object must move through air faster than the air-molecule-position wave can propagate, so the front edge of the wave builds up behind the object, moving at the speed of sound, with a very high intensity. Even though light moves a million times faster than sound, its impact on the motions of air molecules is practically 0.
Anthony
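As a small numerical footnote to the Mach 1.4 animation and the Cherenkov analogy discussed in the answers above, the snippet below evaluates the Mach cone half-angle, sin θ = 1/M, and the analogous Cherenkov emission angle, cos θ = 1/(nβ). The particle speed and refractive index used for the Cherenkov case are illustrative assumptions only.

```python
import math

def mach_angle_deg(mach):
    """Half-angle of the Mach cone for a source moving at `mach` times the speed of sound."""
    return math.degrees(math.asin(1.0 / mach))

def cherenkov_angle_deg(beta, n):
    """Cherenkov emission angle for a particle at speed beta*c in a medium of index n."""
    return math.degrees(math.acos(1.0 / (n * beta)))

print(f"Mach 1.4 cone half-angle: {mach_angle_deg(1.4):.1f} deg")                 # ~45.6 deg
print(f"Cherenkov angle (assumed beta=0.99, n=1.33): "
      f"{cherenkov_angle_deg(0.99, 1.33):.1f} deg")                               # ~40.6 deg
```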
CommonCrawl
Advanced Mathematical Methods for Engineers (2015/2016) Master Program in Electronic Engineering Classes will start on Monday September 28th. Ugo Gianazza Thursday from 4 pm to 6 pm and by appointment Calendar of the Course You can follow the progress of the course, downloading the following calendar Table of particular solutions to some linear equations Textbooks and Suggested Books Ordinary Differential Equations and Systems E.A. Coddington, An Introduction to Ordinary Differential Equations, Dover Publications, Inc., New York, 1961. M.W. Hirsch and S. Smale, Differential Equations, Dynamical Systems, and Linear Algebra, Academic Press, New York, 1974. V.V. Nemytskii and V.V. Stepanov, Qualitative Theory of Differential Equations, Dover Publications, Inc., New York, 1989. W.T. Reid, Sturmian Theory for Ordinary Differential Equations, Applied Mathematics Series 31, Springer-Verlag, New York Heidelberg Berlin, 1980. Lebesgue Integral and Basic Tools of Functional Analysis E. DiBenedetto, Real Analysis, Birkhauser, Boston, (2002): Chapters III and V. B. D. Reddy, Introductory Functional Analysis, Texts in Applied Mathematics n. 27, Springer Verlag, New York, (1998). W. Rudin, Functional Analysis, Mc Graw Hill, New York, (1973). W. Rudin, Real and Complex Analysis, Mc Graw Hill, New York, (1966). A. Vasy, Partial Differential Equations: an Accessible Route through Theory and Applications, Graduate Studies in Mathematics, volume 169, American Mathematical Society, (2015): Chapters 1 and 13. E. DiBenedetto, Real Analysis, Birkhauser, Boston, (2002): Chapter VII. F.G. Friedlander, Introduction to the theory of distributions, Cambridge University Press, Cambridge, (1998). S. Salsa, Partial Differential Equations in Action. From Modelling to Theory, 2nd Edition, Springer-Verlag Italia, (2015): Chapter 7. A. Vasy, Partial Differential Equations: an Accessible Route through Theory and Applications, Graduate Studies in Mathematics, volume 169, American Mathematical Society, (2015): Chapters 5 and 9. E. DiBenedetto, Partial Differential Equations, 2nd Edition, Birkhaüser, (2009): Chapter 6. A. Vasy, Partial Differential Equations: an Accessible Route through Theory and Applications, Graduate Studies in Mathematics, volume 169, American Mathematical Society, (2015): Chapters 6, 7, 9, 11 and 12. Notes of the course Modelli e Metodi Matematici I (unfortunately in Italian) Ordinary Differential Equations and Systems - Ordinary differential equations: in normal (or explicit) form, linear; order of a differential equation. Definition of solution. General solution and particular solution. General solution to the linear equation $y'(x) = \varphi(x)y(x) + \psi(x)$. General solution to a separable differential equation $y'(x) = X(x)Y(y(x))$. General solution to a homogeneous equation $y'(x)=f\left(\frac{y}{x}\right)$. The Cauchy Problem for equations and systems in normal form. Peano's local existence theorem. The local existence and uniqueness theorem. The global existence and uniqueness theorem. An extension theorem. The regularity theorem. Stability of solutions with respect to initial conditions and to parameters. Linear systems and equations of order $n$ with continuous coefficients: structure of the general solution. Liouville's Theorem (with proof). The method of variation of constants to determine a particular solution to a full, linear system (with proof). Linear constant-coefficient systems and equations. Homogeneous and general Boundary Value Problems for second order linear equations. 
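For quick reference, a standard integrating-factor sketch of the general solution to the linear equation $y'(x)=\varphi(x)y(x)+\psi(x)$ quoted in the syllabus above (assuming $\varphi$ and $\psi$ are continuous on an interval containing $x_0$): $$\Phi(x):=\int_{x_0}^{x}\varphi(s)\,ds,\qquad \frac{d}{dx}\Bigl(e^{-\Phi(x)}y(x)\Bigr)=e^{-\Phi(x)}\bigl(y'(x)-\varphi(x)y(x)\bigr)=e^{-\Phi(x)}\psi(x),$$ $$\text{hence}\qquad y(x)=e^{\Phi(x)}\Bigl(C+\int_{x_0}^{x}e^{-\Phi(s)}\psi(s)\,ds\Bigr),\qquad C\in\mathbb{R}.$$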
Basic Tools of Functional Analysis - A short introduction to the Lebesgue integral. Normed spaces. Examples. Distance defined in terms of the norm. Equivalent norms. Convergent sequences. Cauchy sequences. Completeness, Banach spaces and their characterisation in terms of convergent series. Space with inner product. Main examples. Cauchy-Schwarz inequality. Continuity of the norm and of the inner product. Pythagoras' Theorem. Orthogonal Complement. Examples of spaces with inner product. Hilbert spaces. Projection Theorem. Meaning and applications; Orthonormal Systems. Fischer-Riesz's Theorem. Complete Orthonormal systems. Fourier Expansion. Parseval inequality. Gram-Schmidt orthonormalization process and applications. Simple examples of complete orthonormal systems in $L^2$. Best possible approximation in Hilbert spaces. Introduction to linear operators in normed spaces. Extension and restriction of a linear operator. Boundedness and continuity of a linear operator. Norm of a continuous operator. Continuity of the inverse operator. Continuity of linear operators in finite dimensional Hilbert spaces. Riesz representation theorem. Norm of a functional in a Hilbert Space; Representation of the Dirac delta in $H^1$. Adjoint of an operator in a Hilbert space; Symmetric operator; Self-adjoint operator. Applications of the theory of adjoint operators to Boundary Value Problems in Hilbert spaces; Eigenvalues and eigenvectors; Introduction to Sturm-Liouville Problems. Regular Sturm-Liouville problem. Examples of Sturm-Liouville problems which are not regular; Applications to Boundary Value Problems for complete equations. Distributions - Introduction to the Theory of Distributions. Definition of test function. Definition of distribution. Distributions and functions in $L^1_{loc}$. Definition of measure. Convergence in the sense of distributions. Derivatives in the sense of distributions. Fundamental solution of the Laplace equation in $\mathbb R^3$. Schwarz's Theorem in the framework of distributions. Vector-valued distributions. Div, grad, laplacian of a distribution. Product of a distribution and a function. Composition of a distribution with a function. Tensor product of distributions. Applications of the composition formula. Restriction of a distribution. The problem of division in the framework of distributions: homogenous and complete case. Convolution of a distribution with a function. Main properties of the convolution of a distribution with a function. Space of rapidly decreasing functions. Notion of convergence in $\mathcal{S}(\mathbb R^N)$. Definition of tempered distributions. Notion of convergence for tempered distributions. Fourier transform for tempered distributions. Simple Examples. Main properties of the Fourier transform. Convolution between a distribution and a function. Convolution between two different distributions. Convolution Theorem. $H^k(\Omega)$ spaces. Characterization of $H^k(\mathbb R^N)$ in terms of the Fourier transform. Paley-Wiener Theorem. Application of Fourier transform methods to ordinary differential equations. Partial Differential Equations - Introduction to the Theory of Partial Differential Equations. Waves. Derivation of the $1$-dimensional wave equation under simplifying assumptions. Conservation of the energy. Initial and Boundary Value Problems for the $1$-dimensional wave equation; Existence, uniqueness and continuous dependence with respect to the data of the solution for a IBVP constructed using the separation-of-variable method. 
Global solution to the homogeneous $1$-dimensional wave equation: existence, uniqueness and stability. Domain of dependence. Regularity of the solution. Generalised solutions. Duhamel's method for the complete $1$-dimensional wave equation. Plane waves in $\mathbb R^N$; Cylindrical and spherical waves in $\mathbb R^3$. Existence of a solution for the global Cauchy Problem in $\mathbb R^N$. Definition of fundamental solution. Fourier transform of the fundamental solution with respect to the space variable. Uniqueness of the solution to the global Cauchy Problem in $\mathbb R^N$. Uniqueness of the solution to initiali-boundary value problems in regular domains. Existence of a solution for the bidimensional square membrane. Coincidence between the solution to the global Cauchy Problem and the distributional solution of a proper complete wave equation. Explicit expression of the $1$-dimensional wave equation. Support of the fundamental solution to the wave equation. Special features of the $3$-dimensional wave equation. Fundamental solution of the $3$-dimensional wave equation. Main properties of the solution. Stability in $L^\infty(\mathbb R^3)$ of the unique solution. Fundamental solution to the $2$-dimensional wave equation. Explicit formulation of the unique solution to the global Cauchy Problem. Main properties of the solution. Duhamel's method for the solution of the complete wave equation: $3$-dimensional and $2$-dimensional cases. Solution of the $3$-dimensional wave equation with a point source and with a moving source: comments on the general case, and explicit resolution both of the subsonic and of the supersonic regimes. The final exam consists in a written test and an oral exam. Written Test Schedule February 2nd 2016, at 9.00 am, classroom EF1 February 22nd 2016, at 9.00 am, classroom EF1 April 5th 2016, at 9.00 am, classroom E7 (restricted) June 23rd 2016, at 9.00 am, classroom EF4 Written Tests February 3rd 2015 pdf and solutions; February 23rd 2015 pdf and solutions; June 25th 2015 pdf and solutions; July 9th 2015 pdf and solutions; September 1st 2015 pdf and solutions; September 17th 2015 pdf and solutions; February 2nd 2016 pdf; February 22nd 2016 pdf; April 5th 2016 pdf; June 23rd 2016 pdf and solutions; August 30th 2016 pdf; September 15th 2016 pdf; Some exercises (some of them in Italian) Qualitative studies of solutions to first order differential equations Consider the differential equation $$y'=1+\arctan y^2.$$ Prove that the general solution is defined on $\mathbb R$ and it is of $C^\infty$ class. Draw a qualitative graph of the solutions. Consider the differential equation $$y'={y^2-x^2\over{1+y^2}}.$$ Determine the set where the general solution is defined. Study the sign of the first order derivative in $\mathbb R^2$ and characterize the set where $y'$ vanishes. Relying on the results of the previous issues, draw a qualitative graph of the solution which satisfy the initial condition $y(-1)=-1$, $y(1)=1$, $y(0)=0$. Draw qualitative graphs of the particular solutions to the equation $$y'=(y^2-1)e^{{y^2}}.$$ Draw qualitative graphs of the particular solutions to the equation $$y'=y(1-e^{{x^2}}).$$ Draw qualitative graphs of the particular solutions to the equation $$y'={y(y-1)\over{1+y^2}}.$$ Consider the Cauchy Problem \begin{equation*} \left\{ \begin{aligned} & y'_\alpha=y_\alpha\sin(e^x+y_\alpha)-e^x\\ & y_\alpha(0)=\alpha\qquad\qquad\alpha\in{\mathbb R}. \end{aligned} \right. 
\end{equation*} Prove that $\forall\alpha\in{\mathbb R}$ there exists a unique $y_\alpha:{\mathbb R}\rightarrow{\mathbb R}$ solution to the problem, which is of $C^\infty$ class and draw a qualitative graph of $y_\alpha$ when $\alpha\in(-1-2\pi,-1-\pi)$. Consider the Cauchy Problem \begin{equation*} \left\{ \begin{aligned} & y'=y(y+x)e^{-|y|}\\ & y(0)=1 \end{aligned} \right. \end{equation*} and draw a qualitative of the solution and show that it is of $C^\infty$ class. Consider the differential equation $$y'=x(\exp({{1\over{y^2}}})-e).$$ Determine the set $\Omega\subseteq{\mathbb R}^2$ where it is possible to ensure the existence and uniqueness of the solution to the Cauchy Problem given in $(x_o,y_o)\in\Omega$. Draw qualitative graphs of the solutions, assuming that the number of inflection points is as small as possible. Relying on the previous results, determine for which values of $\lambda\in{\mathbb R}$ the solution to the initial condition $y(0)=\lambda$ can be extended on ${\mathbb R}$ and for which values of $\lambda$ it is bounded as well. Exercises on First Order Differential Equation (in Italian) pdf; Exercises on Linear Constant-Coefficient Differential Systems (in Italian) pdf; Exercises on Linear Constant-Coefficient Differential Equations (in Italian) pdf; Exercises on the Lebesgue Integral Compute $$\lim_{n\to+\infty}\int_n^{n^2+1}\sin(nx)\,\exp\left(\left(\frac1n-2\right)x^3\right)\,dx.$$ Compute $$\lim_{n\to+\infty}\iint_{\mathbb R^2}(y e^{-y})_+ (x^2-2x-1)_-[1-\exp(-n\sin^2 x)]dxdy,$$ where for a given function $f$, we define $(f)_\pm:=\max\{\pm f;0\}.$ Relying on the Dominated Convergence Theorem, compute $$\lim_{n\to+\infty}\int_0^{+\infty}\frac{n}{3x}\arctan\left(\frac{4x}{n}\right)\frac1{16+x^2}dx.$$ Exercises on Normed Spaces Consider the sequence of functions $f_n:{\mathbb R}\to{\mathbb R}_+$ with $n\ge1$ defined by $$f_n(x)=|\cos x|^{\frac1n}\,e^{-n|x|}.$$ Study the pointwise limit of the sequence. Prove that $\{f_n\}\subset L^1({\mathbb R})$. Prove that $\{f_n\}\subset C^0_b({\mathbb R})$, the vector space of the bounded and continuous functions defined on $\mathbb R$. Study the limit of the sequence in $L^1(\mathbb R)$ endowed with its natural norm. Study the limit of the sequence in $L^2(\mathbb R)$ endowed with its natural norm. Study the limit of the sequence in $C^0_b(\mathbb R)$ endowed with the supremum norm. Consider the sequence $\{f_n\}$, $n\in{\mathbb N}$, where $$f_n(x)=2xe^{-3x/n}.$$ Prove that $\{f_n\}\subset C^0([0,1])$, and $\{f_n\}\subset L^1(0,1)$. Compute the pointwise limit of $\{f_n\}$. Study the convergence of the sequence in $C^0([0,1])$ endowed with the supremum norm, and in $L^1(0,1)$ endowed with the integral norm. Consider the vector space $$C^1([0,1]=\left\{f:[0,1]\to{\mathbb R}:\ \ f\in C^0([0,1]),\ f'\in C^0([0,1])\right\}.$$ Discuss which of the following quantities is a norm: $\textstyle \sup_{x\in[0,1]}|f'(x)|$; $\textstyle \sup_{x\in[0,1]}|f(x)|+\sup_{[0,1]}|f'(x)|$; $\textstyle \int_0^1|f'(x)|\,dx$; $\textstyle \int_0^1 |f(x)|\,dx+\int_0^1|f'(x)|\,dx$. Consider the sequence $\{f_n\}$, $n\in{\mathbb N}$, where $$f_n(x)=\chi_{[n,n^2]}(x)\,\cos(2x)\,e^{-nx}.$$ Prove that $\{f_n\}\subset L^1(0,1)$, and $\{f_n\}\subset L^2(0,1)$. Study the convergence of the sequence in $L^1(0,1)$ endowed with the integral norm, and in $L^2(0,1)$ endowed with the integral norm of $|f|^2$. Exercises on Linear operators Check that the following operators are linear and bounded. Compute their norms. 
$A: C^0([0,1])\to C^0([0,1])$, $(Ax)(t):=t^2 x(0)$; $A:L^2(0,1)\to L^2(0,1)$, $(Ax)(t):=x(t)\chi_{(0,\frac12)}(t)$; $A:L^2(0,1)\to L^2(0,1)$, $(Ax)(t):=\int_0^t x(\tau)\,d\tau$. Let $A:C^0([0,1])\to C^0([0,1])$ be defined by \[ (Ax)(t):=\int_0^t x(\tau)\,d\tau +x(t). \] Prove that ${\mathcal N}_{(A)}=0$, check that the inverse operator is continuous, and compute it explicitly. Consider $K:[a_1,b_1]\times[a_2,b_2]\to \mathbb R$ with $K\in C^0([a_1,b_1]\times[a_2,b_2])$. Prove that the operator \[ A: C^0([a_2,b_2])\to C^0([a_1,b_1]),\qquad (Ax)(s):=\int_{a_2}^{b_2}K(s,t)x(t)\,dt \] is linear and continuous. Exercises on Distributions pdf; Last modification: September 15th 2016
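A possible solution sketch for the last of the three exercises on the Lebesgue integral listed above, using the Dominated Convergence Theorem together with the bound $0\le\arctan t\le t$ for $t\ge 0$: $$0\le\frac{n}{3x}\arctan\Bigl(\frac{4x}{n}\Bigr)\frac{1}{16+x^2}\le\frac{4}{3}\cdot\frac{1}{16+x^2}\in L^1(0,+\infty),\qquad \frac{n}{3x}\arctan\Bigl(\frac{4x}{n}\Bigr)\xrightarrow[n\to+\infty]{}\frac{4}{3}\ \text{pointwise on }(0,+\infty),$$ $$\text{so}\quad \lim_{n\to+\infty}\int_0^{+\infty}\frac{n}{3x}\arctan\Bigl(\frac{4x}{n}\Bigr)\frac{dx}{16+x^2}=\frac{4}{3}\int_0^{+\infty}\frac{dx}{16+x^2}=\frac{4}{3}\cdot\frac{\pi}{8}=\frac{\pi}{6}.$$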
CommonCrawl
How lasing happens in CsPbBr3 perovskite nanowires Andrew P. Schlaus1, Michael S. Spencer1, Kiyoshi Miyata ORCID: orcid.org/0000-0001-6748-13371, Fang Liu1, Xiaoxia Wang2, Ipshita Datta ORCID: orcid.org/0000-0003-1884-28823, Michal Lipson3, Anlian Pan2 & X.-Y. Zhu1 Nature Communications volume 10, Article number: 265 (2019) Cite this article Lead halide perovskites are emerging as an excellent material platform for optoelectronic processes. There have been extensive discussions on lasing, polariton formation, and nonlinear processes in this material system, but the underlying mechanism remains unknown. Here we probe lasing from CsPbBr3 perovskite nanowires with picosecond (ps) time resolution and show that lasing originates from stimulated emission of an electron-hole plasma. We observe an anomalous blue-shifting of the lasing gain profile with time up to 25 ps, and assign this as a signature for lasing involving plasmon emission. The time domain view provides an ultra-sensitive probe of many-body physics which was obscured in previous time-integrated measurements of lasing from lead halide perovskite nanowires. Lead halide perovskites (LHPs) continue to draw attention for their extraordinary photovoltaic efficiencies and their expanding roles in optoelectronic research. Light emission with near unity quantum yield, low lasing thresholds, and compositionally tuneable wavelength makes them strong contenders for highly efficient light emitting devices, nanowire (NW) lasers, and potentially exciton-polariton devices1,2,3,4,5. Photophysical studies in the past few years have established that charge carrier properties in LHPs are distinct from those in conventional semiconductors; those in the former are exemplified by exceptional defect tolerance, slow hot carrier cooling, and efficient dynamic screening6. Despite a plethora of publications on carrier dynamics6,7,8,9,10,11,12,13,14 and lasing1,15,16,17,18,19,20 in LHPs, it remains unclear how these two aspects are related. Central to the debate on the lasing mechanisms is the role of excitons. Various mechanisms have been proposed to explain the quantitative characteristics of lasing from NW or other microcavities of LHPs1,15,16,17,18,19,20. The formation of exciton-polaritons, a coherent superposition between an exciton and a photon in a microcavity, is well known in layered LHPs21,22,23 and has been suggested as an underlying lasing mechanism in LHPs16,20. While exciton-polaritons may exist at low excitation density and continuous wave (CW) conditions, lasing under pulsed excitation may occur above the exciton Mott density from stimulated emission from a non-degenerate electron hole plasma (n-EHP, also referred to as a Coulomb-correlated EHP)24,25,26,27. Here, we use ultrafast time-resolved photoluminescence (PL) to directly probe lasing dynamics in CsPbBr3 perovskite NWs via spectral evolution with ~1 ps time resolution. We carry out complementary measurements through ultrafast transient reflectance. We find that the time-integrated laser emission spectra, typical in nearly all reports on LHP lasers published to date, obscure the intrinsic nonlinear physics in the system. Rather, the lasing spectrum under pulsed excitation is a strongly time-dependent function, which consists of red-shifting cavity modes concurrent with blue-shifting laser gain profiles. The latter is unprecedented, and is strong evidence for stimulated emission from an n-EHP coupled with plasmon emission. 
Nanowire samples We use single crystal CsPbBr3 NWs grown from vapor deposition on sapphire substrates28. These NWs are of triangular cross-section with hundreds of nanometers lateral dimensions and with lengths in the tens of microns range, as shown by scanning electron microscope (SEM) images in Fig. 1a. We find that these vapor-grown NWs are more stable under optical excitation than solution-grown NWs used in our previous studies16,17, permitting experiments at higher excitation densities in the current report. We carry out electromagnetic (EM) wave analysis using the COMSOL Multiphysics finite element method (FEM), as detailed in Supplementary Note 4. A representative simulation result for the lowest energy waveguided mode is shown in Fig. 1b, and the full set of simulated mode profiles and effective refractive indices are shown in Supplementary Figure 8 and Supplementary Figure 9. In the experiment, Fig. 1c, an individual CsPbBr3 NW at a sample temperature of 80 K is uniformly excited by a short laser pulse (~60 fs) with photon energy (ħω = 3.1 eV) above the bandgap (Eg = 2.35 eV). We time-resolve the fluorescence and laser emission spectra from the photo-excited NW using a Kerr-gating technique, as illustrated in Fig. 1c (see Methods). Figure 1d shows the lasing mechanism deduced from the time-resolved measurements, as detailed below. In the following (Figs. 2 and 3), we show results from a single NW of 15 µm length. Additional representative lasing results and discussion from other NWs and nanoplate samples can be found in Supplementary Figure 1, Supplementary Figure 2, and Supplementary Note 1. Nanowire samples, experimental setup, and lasing mechanism. a Scanning electron microscopy images of triangular nanowires grown on sapphire substrate. b FEM simulation of the lowest order waveguiding mode in a nanowire. The electric field polarization is depicted by the cyan arrows. c Illustration of the optical setup for time-resolved Kerr gating experiment. A microscope is used for excitation of the nanowire and collection of the lasing emission. The linear polarization (I) becomes elliptical (II) as it passes through the Kerr medium with the pump pulse. A final polarizer (III) filters polarization perpendicular to the original incident polarization. d Cartoon describing the carrier dynamics from photoexcitation which results in a hot electron hole plasma (Hot EHP) through carrier cooling to a cold electron hole plasma (Cold EHP) finishing with stimulated emission coupled with plasmon emission Excitation density-dependent lasing spectra revealing lasing and saturation thresholds. Two-dimensional pseudo-color plot (a) and horizontal cuts (b) of photoluminescence spectra as a function of increasing excitation energy density (ρ), 0.24–145 µJ cm−2. The excitation energy density in (b) increases from blue to red (0.43, 0.53, 0.62, 0.77, 1.0, 1.1, 1.3, 1.5, 2.4, 2.6, 2.9, 3.1, 3.6, 4.3, 5.3, 6.5, 7.2, 7.9, 8.6, 9.6, 11.0, 12.5, 13.9, 15.4, 17.3, 19.7, 21.6, 24.0, 33.1, 37.9, 49.9, 61.9, 71.5, 83.5, 95.5, 107.5, 119.5, 131.5, 144.0 µJ cm−2). Note the logarithmic scale for emission intensity in (b). These spectra show the evolution of emission from below the lasing threshold (3 μJ cm−2) through the lasing saturation threshold (30 μJ cm−2) and above. 
Stimulated behavior is confirmed from the PL intensity as a function of excitation density (c), showing the integrated intensity in the lasing spectral region (blue), where the onset of lasing corresponds to superlinear behavior, and the saturation of the PL intensity (red). A power-law fit (black line) represents the ρ^1.5 scaling present below the lasing threshold. As the pump fluence increases, the lasing spectral density red-shifts, as shown by the positions (red dots) of the most intense peak in the lasing/PL spectra as a function of excitation density (d). The blue curve in (d) shows a fit to the excitation density-dependent plasmon energy. All spectra were obtained from a single NW with 15 µm length at a substrate temperature of 80 K.

Time-resolved lasing. The 2D pseudo-color (normalized intensity) plots show emission spectra at a 15, b 50, and c 100 μJ cm−2. These powers represent the lasing region before saturation, the saturation region, and the high power limit. Line-cuts and integrated spectra can be found in Supplementary Figure 4. All spectra were obtained from a single NW with 15 µm length at a substrate temperature of 80 K.

Excitation density-dependent lasing We begin our investigation by analyzing the PL spectrum as a function of incident excitation laser fluence (ρ, pulse energy per unit area), as shown in a two-dimensional (2D) pseudo-color plot in Fig. 2a and as horizontal cuts of the pseudo-color plot in Fig. 2b. Here, the power density can be converted to excitation density based on 1 µJ cm−2 = 1.6 × 10^23 m−3, as determined by the reflectivity of the sample, a unity absorption coefficient, and the sample illumination geometry. At ρ < 2 µJ cm−2, the PL shows a fluence-independent spectral shape with a maximum at 2.357 ± 0.001 eV, in correspondence with that of spontaneous emission from crystalline CsPbBr3 perovskite29. The integrated PL intensity scales with ρ^α, with α = 1.5 ± 0.1, in this low-fluence region (see the fit, black line, to the red circles in Fig. 2c). PL emission from the radiative recombination of electrons and holes is a second-order (α = 2) process, as is observed for single crystal CsPbBr3 at room temperature30. In contrast, PL emission from excitons is a first-order process (α = 1). The α = 1.5 ± 0.1 value determined here for PL emission from CsPbBr3 at 80 K may be attributed to radiative recombination from electron and hole carriers in the presence of a less radiative population, e.g., partial indirect bandgap character, an equilibrium between free carriers and excitons, large polaron formation, or competitive trapping31. At ρ ≥ 3 μJ cm−2, we observe the emergence of a group of regular and narrow peaks on the lower energy side of the PL peak maximum (Fig. 2a, b) and an increase in the rate of growth of the PL intensity with respect to ρ in this spectral region (blue circles in Fig. 2c). The appearance of these peaks corresponds to lasing, as reported earlier15,16,17,19. With increasing ρ, more lasing peaks appear on the lower energy side of the PL spectra. Unlike previous suggestions of exciton-polariton origins16,20, we find that lasing comes from a Coulomb-correlated EHP, i.e., an n-EHP7, as the calculated excitation density at the lasing threshold of 3 µJ cm−2 is 4.8 × 10^23 m−3, which is above the exciton Mott density (see Fig. 5c below and Supplementary Note 5).
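A minimal sketch of how the exponent α quoted above can be extracted from integrated PL intensities, namely a straight-line fit in log–log space; the fluence and intensity arrays below are synthetic placeholders, not the measured data.

```python
import numpy as np

def power_law_exponent(fluence, intensity):
    """Fit I = A * rho**alpha by linear regression of log(I) on log(rho); returns alpha."""
    log_rho = np.log(np.asarray(fluence, dtype=float))
    log_i = np.log(np.asarray(intensity, dtype=float))
    alpha, _log_A = np.polyfit(log_rho, log_i, 1)
    return alpha

# Hypothetical below-threshold points (uJ/cm^2, arbitrary intensity units)
rho = np.array([0.43, 0.62, 1.0, 1.5, 2.4])
intensity = 12.0 * rho**1.5            # synthetic data following the reported scaling
print(f"alpha = {power_law_exponent(rho, intensity):.2f}")   # -> 1.50
```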
The appearance of lasing peaks only on the lower energy side of the PL peak, not at the PL peak where the oscillator strength is the highest, is consistent with stimulated emission from an n-EHP with the simultaneous emission of a plasmon24,26,32. We observe a saturation in lasing intensity at ρ2 = 30 µJ cm−2, or an excitation density of 4.8 × 10^24 m−3, above which the spectral density shifts back toward the band edge. The increased screening above ρ2 diminishes the Coulomb correlations as the system transitions into a degenerate EHP26; as a result, the transition cross-section for lasing is decreased and this is responsible for the saturation behavior. Further insight into the EHP lasing mechanism comes from the evolution of the lasing spectral profile with time (Fig. 3) and with excitation density (Fig. 2), as detailed below.

Ultrafast photoluminescence of nanowire lasers We probe the laser emission with picosecond time resolution to directly monitor the lasing dynamics, Fig. 3a–c. We make four observations: (1) the onset of lasing occurs with a time delay Δt1 = ~1–3 ps, (2) the initially broad lasing spectrum narrows in Δt2 = ~3–7 ps, (3) on longer time scales of Δt3 = 5–30 ps, the lasing profile climbs to higher energies and moves closer to the PL peak with increasing time, and (4) throughout the experimental time window, each lasing mode red-shifts with time. For observations (1)–(3), both the onset time and the duration of each step increase with increasing excitation density. The time-resolved results in Fig. 3 show that the broad lasing peaks in Fig. 2 do not reflect the intrinsic linewidths of the lasing peaks, but instead arise from the time integration of spectrally narrower peaks shifting with time or excitation density. These effects are also obvious in horizontal cuts at selected delay times and in the comparison of time-resolved and time-integrated spectra, Supplementary Figure 3. For further discussion, see Supplementary Note 2. These results are all consistent with stimulated emission from an n-EHP. An EHP is inherently a two-level electronic system for which population inversion is difficult. As first proposed by Klingshirn and coworkers for lasing in ZnO NWs24,26,32, stimulated photon emission from an n-EHP is coupled to the emission of a plasmon quantum, i.e., a quantum of the collective oscillations of the EHP, shown schematically in Fig. 1d. This coupling introduces intermediate states and creates a situation reminiscent of a classic three-level lasing scheme33. Critically, the addition of coupling to a plasmon resonance relaxes the criterion for population inversion in the semiconductor, traditionally given by34: $$\mu _e + \mu _h > E_{e,c}(k) + E_{h,v}(k),$$ where μe, μh stand for the electron and hole chemical potentials, and Ee,c, Eh,v stand for the electron and hole kinetic energies in the conduction and valence bands, respectively; both are understood to be zero at the band edge. In the presence of plasmon-coupled emission, the inversion criterion is instead given by: $$\mu _e + \mu _h > E_{e,c}\left( k \right) + E_{h,v}\left( k \right) - \hbar \omega _p,$$ where ωp is the plasmon frequency. This relaxation of the lasing requirements gives rise to both the sub-bandgap lasing spectra and a lasing threshold below the conventional inversion threshold determined by Eq. (1).
The plasmon frequency of an electron plasma may be approximated by: $$\omega _p = \sqrt {\frac{n_e e^2}{m^{\ast} \varepsilon _{\mathrm{eff}}\varepsilon _{\mathrm{o}}}} ,$$ where ne and m* are the number density and band mass of electrons in the conduction band, εeff is the effective dielectric constant, which is frequency- and carrier density-dependent, and εo is the vacuum permittivity. The same equation applies to the holes (nh) in the valence band. The red-shift can thus be understood as a measure of the plasmon energy, ℏωp, which scales with \(\sqrt {n_{\mathrm{e}}}\). The total red-shift in lasing peak positions (red dots in Fig. 2d) is Δp = −80 meV as the pump fluence increases from the lasing threshold of 3 µJ cm−2 to the saturation threshold of ~30 µJ cm−2. The estimated density-dependent ℏωp (blue curve in Fig. 2d) is in agreement with the experimental data. To approximately describe the data, we use a constant effective dielectric constant εeff = 19.2, which is 4× the optical dielectric constant of CsPbBr36. The large increase in εeff is expected from the Drude-like response of a highly excited semiconductor in the EHP region. The time-dependent lasing profiles in Fig. 3 are in excellent agreement with the n-EHP model, and provide a direct view of the many-body dynamics. Following initial photoexcitation, the formation of hot electron/hole distributions occurs on the ultra-short time scale of tens of femtoseconds35 and is not resolved within our time resolution (~1 ps). The establishment of e–h correlations is commensurate with hot carrier cooling via LO-phonon-lattice thermalization, which occurs on the slightly longer time scale of tcorr ~ 1 ps36; this process increases the oscillator strength and is responsible for the onset time (Δt1) in the appearance of the lasing spectrum. The observed increase of Δt1 with excitation density is consistent with a density-dependent tcorr and the slower cooling at higher carrier density10,11,12,13,14.
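A rough order-of-magnitude check of Eq. (3) at the saturation-threshold density quoted above (4.8 × 10^24 m−3), with the stated εeff = 19.2. The effective mass is not given in the text, so the 0.1 m0 used below is purely an illustrative assumption; the point is only that ℏωp comes out at the tens-of-meV scale of the observed spectral shifts.

```python
import math

# Physical constants (SI)
e = 1.602176634e-19        # C
m0 = 9.1093837015e-31      # kg
eps0 = 8.8541878128e-12    # F/m
hbar = 1.054571817e-34     # J*s

n = 4.8e24                 # m^-3, carrier density at the saturation threshold (from the text)
eps_eff = 19.2             # effective dielectric constant used in the text
m_eff = 0.1 * m0           # assumed band mass, NOT a value given in the article

omega_p = math.sqrt(n * e**2 / (m_eff * eps_eff * eps0))      # Eq. (3)
print(f"hbar*omega_p ≈ {hbar * omega_p / e * 1e3:.0f} meV")   # roughly 60 meV
```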
As the carrier density is depleted with increasing time via stimulated emission, the plasmon energy (ℏωp) decreases. At an initial excitation density above the laser saturation threshold (~30 µJ cm−2), the plasmon energy blue-shifts by Δℏωp ~ 60 meV over the 25 ps time window probed here. The extent of total blue-shift increases with initial excitation density from the lasing to the saturation threshold, as is shown by a comparison of Fig. 3a, b. In the saturation regime, Fig. 3b, c, we no longer observe significant spectral changes, rather the increased screening at higher excitation density prolongs each step of the time-dependent laser spectral evolution. Present throughout all excitation densities above the lasing threshold is the temporal red-shift of individual lasing modes. The cavity geometry is fixed by the NW but the refractive index, nr, around the bandgap decreases with increasing excitation density34,38. Since the energy of each cavity mode (j) is inversely proportional to refractive index, Ej ∝ 1/nr, the mode energy decreases as nr recovers with time, and the carrier density is depleted. Moreover, a higher initial excitation density corresponds to a larger initial decrease in nr; this gives a steeper slope, −dEj/dt, in the red-shift of mode energy, as is observed experimentally in Fig. 3. Similarly, the carrier density-dependent nr also accounts for the blue-shift in each lasing mode with ρ in Fig. 2. Supplementary Figures 4–6 measure this quantitative spectra-shifting of the lasing modes and is further described in Supplementary Note 3. Transient reflectance of bulk single crystal CsPbBr3 To confirm the nature of the excitations and electronic phase transitions, we carry out transient reflectance (ΔR/R) measurement on CsPbBr3 single crystals, Fig. 4. We use conditions close to those in NW lasing experiments. The sample was cooled to 80 K, excited across the bandgap at ℏω = 3.1 eV, and probed by a white light pulse. Below the lasing threshold at ρ = 0.6 µJ cm−2 in 2D plot in Fig. 4a or horizontal cuts at Δt = 50 fs for ρ = 0.015–1.2 µJ cm−2 in Fig. 4e, we observe an antisymmetric peak, which has been observed and analyzed before in transient reflectance studies on single crystals CsPbBr3 and its hybrid counterparts30,39. The anti-symmetric peak shape results mainly from the frequency dependent and photo-induced change in the refractive index Δn(hν), which can be obtained at low fluences from the Kramers–Kronig transformation of photo-induced change in the absorption coefficient, Δα(ℏω)30,39. In the low pump fluence region (at ρ < 3 µJ cm−2), the anti-symmetric peak crosses zero at ℏω = 2.368 ± 0.001 eV, which corresponds to the peak position of the excitonic resonance in the absorption spectrum39. This value is very close to the PL peak position (ℏω = 2.357 ± 0.001 eV), indicating a very small Stokes shift. Note that, the anti-symmetric peak shape remains constant for low fluence at t > ~3 ps, it slightly blue-shifts and broadens at shorter times due to initial hot carrier cooling and localization. The full set of transient reflectance spectra can be found in Supplementary Figure 7. Transient reflectance spectra reveal transitions from excitonic resonance to n-EHP and d-EHP. a–d 2D pseudo-color plots of transient reflectance spectra from a CsPbBr3 single crystal at excitation densities of 0.6, 6, 30, and 120 μJ cm−2, respectively. The excitation photon energy was 3.1 eV and the sample temperature was 80 K. 
e Transients taken at time delay = 50 fs for a variety of pump fluences spanning the regimes of interest. f Simulated spectral shapes typical of a plasma (dotted curve) and optical gain (dot-dashed curve) and the sum of the two (solid curve); see Supplementary Note 6 for details on spectral simulation When the excitation density is above the lasing threshold, Fig. 4b, c or horizontal cuts at Δt = 50 fs for ρ = 6–120 µJ cm−2 in Fig. 4e, the zero crossing points are red-shifted by ~0.01 eV to 2.358 ± 0.001 eV. The bandgap renormalization may change in fundamental ways as the system loses excitonic resonance character and enters the n-EHP region. In this high-density region, the reflectance is no longer anti-symmetric. In particular, a positive step-like feature develops above the resonance, in agreement with theoretical simulation. In the simulation, we approximate the band edge as a single oscillator and into account of both reflection and stimulated emission from an EHP, Fig. 4f (see Supplementary Note 6 on simulation details). Below the zero crossing, a negative feature broadens to lower energy at higher excitation densities; this is similar to excitation power dependent lasing seen in the NWs (see Figs. 2 and 3) and is here attributed predominately to amplified spontaneous emission (ASE), coupled with plasmon emission, from the n-EHP. While the antisymmetric spectral shape in transient reflectance corresponds to bleaching of the excitonic correlations39, this feature diminishes for ρ > 30 µJ cm−2 and disappears completely on short time scales at the highest excitation density of ρ = 120 µJ cm−2, consistent with the transition from n-EHP to d-EHP. At each excitation density, the antisymmetric feature, and thus the electron–hole correlations in the EHP, rise with increasing time. We follow the dynamics of this process from the time dependence of the spectral intensity at ~2.37 eV, which is the positive peak position of the antisymmetric feature. The rise of this signal (blue in Fig. 4b–d) slows down by an order of magnitude as initial excitation density increases from 6 to 120 µJ cm−2. Figure 5a shows vertical cuts of the 2D pseudo-color plots at 2.37 eV, including only excitation densities above the lasing threshold (6–120 µJ cm−2). We find that the initial reflectivity decreases with increasing excitation density, as expected due to the diminished electron–hole correlations in the EHP. The growth of this signal on the picosecond time scale slows down by one order of magnitude, as the excitation density increases from 6 to 120 µJ cm−2. This results from the slowed cooling of hot carriers with increasing excitation density, an effect attributed to phonon bottlenecks and/or low thermal conductivity in LHP10,11,12,13,14. The cooling of hot carriers towards the band edges is known to increase the electron–hole correlations, and thus oscillator strength34. The impact of carrier cooling rate is also seen in the integrated lasing intensity, Fig. 5b. The onset of lasing occurs on the same time scales as the buildup of carriers near the band edges. Further supporting the above interpretation, we model the thermodynamic phase transitions for the Mott-threshold (ρ1) and the d-EHP threshold (ρ2) as a function of temperature and excitation density, Fig. 5c (see Supplementary Note 5). The experimentally determined lasing threshold (ρ1) and the saturation threshold (ρ2) are shown as dashed lines; they cross the calculated phase boundaries at electronic temperatures of 600 and 900 K, respectively. 
These electronic temperatures are the upper bounds of the estimated electronic temperatures on the tens of picoseconds time scale from previous spectroscopic measurements at the corresponding excitation densities12. Thus, the lasing threshold is at or above the Mott density for transition to the n-EHP and the saturation thresholds correspond to transition to the d-EHP phase. Carrier cooling, phase transitions, and mode energies. a Time evolution of the positive feature in the transient reflectance representing the carrier cooling dynamics as various pumping powers. b Integrated lasing intensity from the plots in Fig. 3, demonstrating the connection with the time scales for carrier cooling in panel (a). c Phase diagram showing the temperature and carrier densities at which the Mott densities and population inversions lie, leading to the three electronic phase regimes: thermodynamic population of carriers and excitons, nondegenerate electron hole plasma (n-EHP), degenerate electron hole plasma (d-EHP). d Experimentally determined modes (extrapolated to the lasing threshold) shown along the mode profile calculated using a single Lorentzian oscillator for the dielectric function (see Supplementary Note 7) A NW laser is distinct from a conventional laser in that the whole NW lasing cavity is the gain medium. This cavity character is responsible for the time-dependent red-shift of each lasing mode (Fig. 3) attributed to the excitation density-dependent refractive index, n(ρ)34. Another manifestation is the nonlinear mode dispersion reflected in the decreasing energy spacing of the lasing modes as the energy of the modes moves closer to the excitonic resonance, i.e., PL peak. Such a nonlinear dispersion indicates strong light–matter interaction, which has been interpreted previously as due to polaritons in the bottleneck region16. In view of the current findings on plasmon-coupled n-EHP lasing mechanism in CsPbBr3 NWs, we now re-interpret this strong light–matter interaction from the frequency dependent refractive index, n(ω). Considering only the real part of the refractive index, the jth mode in a Fabry–Perot cavity can be approximated by $$E_j = \frac{{hc}}{{2L}}\frac{j}{{n(\omega )}}.$$ When ω approaches a resonance from the lower energy side, n(ω) increases and ΔEj = Ej+1 − Ej decreases. This is obvious in a simple model involving a Lorentzian oscillator embedded in a dielectric medium for the n-EHP; the dielectric function is40: $${\it{\epsilon }}\left( \omega \right) = 1 + \frac{{\omega _p^2}}{{\omega _o^2 - \omega ^2 - i{\mathrm{\Gamma }}\omega }},$$ and n(ω) is related to the dielectric function by: $$\sqrt {{\it{\epsilon }}\left( \omega \right)} = n(\omega ) + ik(\omega ),$$ where ω0 is the oscillator frequency, Γ is the dephasing rate, and ωP is the plasmon frequency; ℏωp is ~20 meV at the lasing threshold (Fig. 2d). Figure 5d shows the dispersion of lasing modes (open circles) extrapolated to the lasing threshold for three starting excitation densities (see Supplementary Figures 4–6 in Supplementary Information). The nonlinear dispersion with negative mode spacing curvature can be well-described by Eq. (4), black curve in Fig. 5d, numerically solved for initial conditions, with n(ω) given by a Lorentzian oscillator (see Supplementary Note 7 and Supplementary Figure 10). To summarize, we provide a time-domain view of lasing and many-body interaction in single crystal CsPbBr3 perovskite NWs. 
These measurements establish that above threshold, lasing in CsPbBr3 NWs is not due to excitons or exciton-polaritons, but to stimulated emission from a nondegenerate electron-hole plasma coupled with plasmons. We show that the lasing mode distribution in the NW is a strong function of excitation density and time (in pulsed operation) due to changes in both laser gain profile and refractive index. While these findings reveal fundamental limitations of a NW as a stable lasing platform, there can be engineering approaches to overcome these obstacles. Examples include employing long excitation pulses and feedback in pumping power to stabilize the shifting lasing modes and coupling the NW to an external optical cavity or photonic structure to select a narrow lasing wavelength window while suppressing all other wavelengths. From a fundamental perspective, the shifting lasing modes in a NW cavity serves as an ultra-sensitive probe of many-body dynamics. While the present study focuses on CsPbBr3 NWs, the conclusions likely apply to other lead halide perovskite systems. Compared to their hybrid organic–inorganic counterparts, CsPbBr3 possesses higher exciton binding energy, and thus smaller exciton radius41,42. This indicates that the Mott thresholds in hybrid lead bromide perovskites are lower than that in CsPbBr3 and lasing in these hybrid systems is also expected to be due to stimulated emission from a nondegenerate electron–hole plasma coupled with plasmons. Time-resolved photoluminescence All PL measurements, both static and time-resolved, were performed at a sample temperature of 80 K on sapphire substrates mounted to the copper cold head with silver paste in a cryostat (Cryo Industries of America, RC102-CFM Microscopy Cryostat with LakeShore 325 Temperature Controller). The cryostat was operated at pressures <10−7 mbar (pumped by a Varian turbo pump) and cooled with flow through liquid nitrogen. The second harmonic of a Clark-MXR Impulse laser (repetition rate of 0.5 MHz, 250 fs pulses, 1040 nm) was used to pump a home-built non-collinear optical parametric amplifier to generate 800 nm pulses (~60 fs) which was used to generate 400 nm pulses via second harmonic generation. The beam size is expanded to ensure illumination across the entire NW and focused onto the sample using a far-field epifluorescence microscope (Olympus, IX73 inverted microscope) equipped with a ×40 objective with NA 0.6, with correction collar (Olympus LUCPLFLN40X) and a 490 nm long-pass dichroic mirror (Thorlabs, DMPL490R). The emission spectra for both static and time-resolved measurements were collected with a liquid nitrogen cooled CCD (Princeton Instruments, PyLoN 400B) coupled to a spectrograph (Princeton Instruments, Acton SP 2300i) with 1200 mm−1 grating blazed at 300 nm. We used the Lightfield software suite (Princeton Instruments) and LabVIEW (National Instruments) in data collection. Data analysis was done in Igor Pro (WaveMetrics). To time resolve the emission from the NWs, we used a Kerr gating technique. The emission from the NW was passed through a linear polarizer then focused into a cuvette of liquid CS2 noncollinear with a gating pulse supplied by the fundamental of the Clark-MXR Impulse (0.5 MHz, 250 fs pulse width, 1040 nm). As a result of the optical Kerr effect induced within liquid CS2 by the gating pulse, an elliptical polarization is generated which is then passed through a second linear polarizer (identical to the first) rotated to be cross-polarized from the initial polarization. 
The projection of the lasing signal through the second polarizer was then passed into the same spectrometer and camera as used in the static experiment. A homebuilt LabView program was used for data acquisition. Transient reflectance Transient Reflectance measurements were done on a homebuilt transient reflectance setup, pumped by Ti:Sapphire amplifier (KM Labs, Wyvern 1000-50 operating at 10 kHz). A high-speed linear array detector (AVIIVA EM4, EV71YEM4CL1014-BA9, e2v) was used in conjunction with LabView for data acquisition. A Bruker Dimension FastScan AFM in ambient conditions was used for all atomic force microscopy measurements. In COMSOL FEM simulations, we searched for modes at 2.408 eV (525 nm) using experimental geometry from AFM. As comparisons, we carried out simulations for equilateral triangle cross-sections with large lateral dimensions (see Supplementary Figure 8 and Supplementary Figure 9). All relevant data are available from the authors upon request. Zhu, H. et al. Lead halide perovskite nanowire lasers with low lasing thresholds and high quality factors. Nat. Mater. 14, 636–642 (2015). Veldhuis, S. A. et al. Perovskite materials for light‐emitting diodes and lasers. Adv. Mater. 28, 6804–6834 (2016). Sutherland, B. R. & Sargent, E. H. Perovskite photonic sources. Nat. Photonics 10, 295–302 (2016). Becker, M. A. et al. Bright triplet excitons in caesium lead halide perovskites. Nature 553, 189–193 (2018). Zhang, W., Eperon, G. E. & Snaith, H. J. Metal halide perovskites for energy applications. Nat. Energy 1, 16048 (2016). Miyata, K., Atallah, T. L. & Zhu, X.-Y. Lead halide perovskites: crystal–liquid duality, phonon glass electron crystals, and large polaron formation. Sci. Adv. 3, e1701469 (2017). ADS Article Google Scholar Saba, M. et al. Correlated electron–hole plasma in organometal perovskites. Nat. Commun. 5, 5049 (2014). Katan, C., Mohite, A. D. & Even, J. Entropy in halide perovskites. Nat. Mater. 17, 377–379 (2018). Miyata, K. & Zhu, X.-Y. Ferroelectric large polarons. Nat. Mater. 17, 379–381 (2018). Fu, J. et al. Hot carrier cooling mechanisms in halide perovskites. Nat. Commun. 8, 1300 (2017). Yang, Y. et al. Observation of a hot-phonon bottleneck in lead-iodide perovskites. Nat. Photonics 10, 53–59 (2016). Yang, J. et al. Acoustic-optical phonon up-conversion and hot-phonon bottleneck in lead-halide perovskites. Nat. Commun. 8, 14120 (2017). Shen, Q. et al. Slow hot carrier cooling in cesium lead iodide perovskites. Appl. Phys. Lett. 111, 153903 (2017). Price, M. B. et al. Hot-carrier cooling and photoinduced refractive index changes in organic–inorganic lead halide perovskites. Nat. Commun. 6, 8420 (2015). Park, K. et al. Light–matter interactions in cesium lead halide perovskite nanowire lasers. J. Phys. Chem. Lett. 7, 3703–3710 (2016). Evans, T. J. S. et al. Continuous-wave lasing in cesium lead bromide perovskite nanowires. Adv. Opt. Mater. 6, 1700982 (2018). Fu, Y. et al. Broad wavelength tunable robust lasing from single-crystal nanowires of cesium lead halide perovskites (CsPbX3, X = Cl, Br, I). ACS Nano 10, 7963–7972 (2016). Fu, Y. et al. Nanowire lasers of formamidinium lead halide perovskites and their stabilized alloys with improved stability. Nano Lett. 16, 1000–1008 (2016). Eaton, S. W. et al. Lasing in robust cesium lead halide perovskite nanowires. Proc. Natl Acad. Sci. USA 113, 1993–1998 (2016). Su, R. et al. Room temperature polariton lasing in all-inorganic perovskite nanoplatelets. Nano Lett. 17, 3982–3988 (2017). Nguyen, H. S. et al. 
Quantum confinement of zero-dimensional hybrid organic–inorganic polaritons at room temperature. Appl. Phys. Lett. 104, 81103 (2014). Lanty, G., Lauret, J.-S., Deleporte, E., Bouchoule, S. & Lafosse, X. UV polaritonic emission from a perovskite-based microcavity. Appl. Phys. Lett. 93, 81101 (2008). Fujita, T., Sato, Y., Kuitani, T. & Ishihara, T. Tunable polariton absorption of distributed feedback microcavities at room temperature. Phys. Rev. B 57, 12428 (1998). Klingshirn, C. et al. 65 Years of ZnO research—old and very recent results. Phys. Status Solidi 247, 1424–1447 (2010). Klingshirn, C., Hauschild, R., Fallert, J. & Kalt, H. Room-temperature stimulated emission of ZnO: alternatives to excitonic lasing. Phys. Rev. B 75, 115203 (2007). Klingshirn, C. Semiconductor Optics. https://doi.org/10.1007/978-3-540-38347-5 (2007). Röder, R. & Ronning, C. Review on the dynamics of semiconductor nanowire lasers. Semicond. Sci. Technol. 33, 033001 (2018). Shoaib, M. et al. Directional growth of ultralong CsPbBr3 perovskite nanowires for high-performance photodetectors. J. Am. Chem. Soc. 139, 15592–15595. Zhu, H. et al. Screening in crystalline liquids protects energetic carriers in hybrid perovskites. Science 353, 1409–1413 (2016). Zhu, H. et al. Organic cations might not be essential to the remarkable properties of band edge carriers in lead halide perovskites. Adv. Mater. 29, 1603072 (2017). Yi, H. T. et al. Experimental demonstration of correlated flux scaling in photoconductivity and photoluminescence of lead-halide perovskites. Phys. Rev. Appl. 10, 054016 (2018). Klingshirn, C. F. ZnO: material, physics and applications. ChemPhysChem 8, 782–803 (2007). Vigil-Fowler, D., Louie, S. G. & Lischner, J. Dispersion and line shape of plasmon satellites in one, two, and three dimensions. Phys. Rev. B 93, 235446 (2016). Haug, H. & Koch, S. W. Quantum Theory of the Optical and Electronic Properties of Semiconductors (World Scientific, Singapore, 2009). Richter, J. M. et al. Ultrafast carrier thermalization in lead iodide perovskite probed with two-dimensional electronic spectroscopy. Nat. Commun. 8, 376 (2017). Koch, S. W., Kira, M., Khitrova, G. & Gibbs, H. M. Semiconductor excitons in new light. Nat. Mater. 5, 523–531 (2006). Wille, M. et al. Carrier density driven lasing dynamics in ZnO nanowires. Nanotechnology 27, 225702 (2016). Versteegh, M. A. M., Kuis, T., Stoof, H. T. C. & Dijkhuis, J. I. Ultrafast screening and carrier dynamics in ZnO: theory and experiment. Phys. Rev. B 84, 035207 (2011). Yang, Y. et al. Low surface recombination velocity in solution-grown CH3NH3PbBr3 perovskite single crystal. Nat. Commun. 6, 7961 (2015). Boyd, R. W. Nonlinear Optics (Academic Press, London, 2008). Galkowski, K. et al. Determination of the exciton binding energy and effective masses for methylammonium and formamidinium lead tri-halide perovskite semiconductors. Energy Environ. Sci. 9, 962–970 (2016). Yang, Z. et al. Impact of the halide cage on the electronic properties of fully inorganic cesium lead halide perovskites. ACS Energy Lett. 2, 1621–1627 (2017). X.-Y.Z. 
acknowledges the US Department of Energy, Office of Energy Science Grant DE-SC0010692 for supporting the time-resolved lasing and the transient reflectance experiments, the Vannevar Bush Faculty Fellowship of the US Department of Defense Grant ONR N00014-18-1-2080 for supporting the time-integrated photoluminescence experiments, and the Energy Frontier Research Center of the US Department of Energy Grant DE-SC0019443 for supporting the EM modeling. A.P. acknowledges support from the National Natural Science Foundation of China (Grant numbers 51525202 and 61574054) for supporting the growth of NW samples. The authors thank Prakriti Joshi and Skyler Jones for growing the CsPbBr3 crystals used in the transient reflectance measurements and Kihong Lee and Xinjue Zhong for help with atomic force microscope imaging. Department of Chemistry, Columbia University, New York, NY, 10027, USA Andrew P. Schlaus, Michael S. Spencer, Kiyoshi Miyata, Fang Liu & X.-Y. Zhu Key Laboratory for Micro-Nano Physics and Technology of Hunan Province, College of Materials Science and Engineering, Hunan University, Changsha, 410082, China Xiaoxia Wang & Anlian Pan Department of Electrical Engineering, Columbia University, New York, NY, 10027, USA Ipshita Datta & Michal Lipson A.P.S., M.S.S. and X.-Y.Z. conceived and initiated this work. A.P.S. and M.S.S. performed time-resolved PL measurements. K.M. constructed the transient reflectance measurements with experimental help from F.L. X.W. synthesized the nanowires and A.P. supervised the synthesis. M.S.S. performed electromagnetic wave modeling with help from I.D. and M.L. A.P.S. and M.S.S. analyzed the data. A.P.S., M.S.S. and X.-Y.Z. wrote the manuscript. All authors read and commented on the manuscript. Correspondence to X.-Y. Zhu. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Schlaus, A.P., Spencer, M.S., Miyata, K. et al. How lasing happens in CsPbBr3 perovskite nanowires. Nat Commun 10, 265 (2019). https://doi.org/10.1038/s41467-018-07972-7
Predicting and identifying signs of criticality near neuronal phase transition Chandrasiri, Malithi Eranga Chandrasiri, M. E. (2020). Predicting and identifying signs of criticality near neuronal phase transition (Thesis, Doctor of Philosophy (PhD)). The University of Waikato, Hamilton, New Zealand. Retrieved from https://hdl.handle.net/10289/13437 This thesis examines the critical transitions between distinct neural states associated with the transition to neuron spiking and with the induction of anaesthesia. First, mathematical and electronic models of a single spiking neuron are investigated, focusing on stochastic subthreshold dynamics on close approach to spiking and to depolarisation-blocked quiescence (spiking death) transition points. Theoretical analysis of subthreshold neural behaviour then shifts to the anaesthetic-induced phase transition into unconsciousness using a mean-field model for interacting populations of excitatory and inhibitory neurons. The anaesthetic-induced changes are validated experimentally using published electrophysiological data recorded in anaesthetised rats. The criticality hypothesis associated with brain state change is examined using neuronal avalanches for experimentally recorded rat local field potential (LFP) data and mean-field pseudoLFP simulation data. We compare three different implementations of the FitzHugh--Nagumo single spiking neuron model: a mathematical model by H. R. Wilson, an alternative due to Keener and Sneyd, and an op-amp based nonlinear oscillator circuit. Although all three models can produce nonlinear ``spiking" oscillations, our focus is on the altering characteristics of noise-induced fluctuations near spiking onset and death via Hopf bifurcation. We introduce small-amplitude white noise to enable a linearised stochastic analyses using Ornstein--Uhlenbeck theory to predict variance, power spectrum and correlation of voltage fluctuations during close approach to the critical point, identified as the point at which the real part of the dominant eigenvalue becomes zero. We validate the theoretical predictions with numerical simulations and show that the fluctuations exhibit critical slowing down divergences when approaching the critical point: power-law increases in the variance of the fluctuations simultaneous with prolongation of the system response. We expand the study of stochastic behaviour to two spatial dimensions using the Waikato mean-field model operating near phase transition points controlled by the infusion or elimination of anaesthetic inhibition. Specifically, we investigate close approach to the critical point (CP), and to the points of loss of consciousness (LOC) and recovery of consciousness (ROC). We select the equilibrium states using $\lambda$ anaesthetic inhibition and $\Delta V^{\text{rest}}_e$ cortical excitation as control parameters, then analyse the voltage fluctuations evoked by small-amplitude spatiotemporal white noise. We predict the variance and power spectrum of voltage fluctuations near the marginally stable LOC and ROC transition points, then validate via numerical simulation. The results demonstrate a marked increase in voltage fluctuations and spectral power near transition points. This increased susceptibility to low-intensity white noise stimulation provides an early warning of impending phase transition. Effects of anaesthetic agents on cortical activity are reflected in local field potentials (LFPs) by the variation of amplitude and frequency in voltage fluctuations. 
To explore these changes, we investigate LFPs acquired from published electrophysiological experiments of anaesthetised rats to extract amplitude distribution, variance and time-correlation statistics. The analysis is broadened by applying detrended fluctuation analysis (DFA) to detect long-range dependencies in the time-series, and we compare DFA results with power spectral density (PSD). We find that the DFA exponent increases with anaesthetic concentration, but is always close to 1. The penultimate chapter investigates the evidence of criticality in anaesthetic-induced phase transitions using avalanche analysis. Rat LFP data reveal an avalanche power-law exponent close to $\alpha = 1.5$, but this value depends on both the time-bin width chosen to separate the events and the z-score threshold used to detect these events. Power-law behaviour is only evident at lower anaesthetic concentrations; at higher concentrations the avalanche size distribution fails to follow a power law. Criticality behaviour is also indicated in the Waikato mean-field model for the anaesthetic-induced phase transition using avalanches detected from the pseudoLFP time-series, but only at the critical point (CP) and at the secondary phase-transition points of LOC and ROC. In summary, this thesis unveils evidence of characteristic changes near phase transition points using computer-based mathematical modelling and electrophysiological data analysis. We find that noise-driven fluctuations become larger and persist for longer as the critical point is closely approached, with similar properties being seen not only in single-neuron and neural population models, but also in biological LFP signals. These results are consistent with an increase in susceptibility to noise perturbations near the phase transition point. Identification of neuronal avalanches in rat LFP data for low anaesthetic concentrations provides further support for the criticality hypothesis.
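The DFA procedure mentioned above can be summarised in a few lines. The sketch below is a minimal first-order DFA (non-overlapping windows, linear detrending); the thesis's actual choices of window sizes, overlap and detrending order are not given here, so the defaults are illustrative assumptions rather than a reproduction of its analysis.

```python
import numpy as np

def dfa_exponent(x, n_scales=20):
    """First-order detrended fluctuation analysis of a 1-D signal.

    Returns the scaling exponent, i.e. the slope of log F(n) versus log n.
    White noise gives ~0.5; a random walk gives ~1.5; LFP-like signals with
    long-range correlations typically sit near 1.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                       # integrated "profile" signal
    scales = np.unique(np.logspace(np.log10(16), np.log10(len(x) // 4),
                                   n_scales).astype(int))
    fluct = []
    for n in scales:
        n_win = len(y) // n
        segs = y[: n_win * n].reshape(n_win, n)
        t = np.arange(n)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)              # linear detrend in each window
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(ms)))            # RMS fluctuation F(n)
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(0)
print(dfa_exponent(rng.standard_normal(20000)))              # ~0.5 (white noise)
print(dfa_exponent(np.cumsum(rng.standard_normal(20000))))   # ~1.5 (random walk)
```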
Why does AES have exactly 10 rounds for a 128-bit key, 12 for 192 bits and 14 for a 256-bit key size?

I was reading about the AES algorithm to be used in one of our projects and found that the exact number of rounds is fixed in AES for specific key sizes: $$ \begin{array}{|c|c|} \hline \begin{array}{c} \textbf{Key Size} \\ \left(\text{bits}\right) \end{array} &\begin{array}{c} \textbf{Rounds} \\ \left(\text{number}\right) \end{array} \\ \hline 128 & 10 \\ \hline 192 & 12 \\ \hline 256 & 14 \\ \hline \end{array} $$ Why these specific numbers of rounds only? – kapil

Note that AES is a subset of the Rijndael cipher. The same numbers of rounds apply to Rijndael, but there are more options available depending on key size and block size (AES has just one block size, 128 bits, and 3 key sizes; Rijndael has 3 block sizes and 5 key sizes, and therefore 15 combinations of both, rather than just the 3 for AES). – Maarten Bodewes♦ Mar 24 at 13:32

Why these specific numbers of rounds only? Because AES is a standard; AES is an acronym for "Advanced Encryption Standard". The standard specifies these specific numbers of rounds to ensure that different implementations are interoperable.

Why not more or less? The specific numbers of rounds were a choice of the designers. They did a lot of math to determine that these were the sweet spot between sufficient security and optimal performance. Fewer might be insecure, and more might be slower with no benefit.

To quote the designers' book, The Design of Rijndael (from Section 3.5, The Number of Rounds): For Rijndael versions with a longer key, the number of rounds was raised by one for every additional 32 bits in the cipher key. This was done for the following reasons: One of the main objectives is the absence of shortcut attacks, i.e. attacks that are more efficient than an exhaustive key search. Since the workload of an exhaustive key search grows with the key length, shortcut attacks can afford to be less efficient for longer keys. (Partially) known-key and related-key attacks exploit the knowledge of cipher key bits or the ability to apply different cipher keys. If the cipher key grows, the range of possibilities available to the cryptanalyst increases. – Ella Rose♦

That quote only explains why with longer keys the number of rounds is higher. It does not explain why exactly the 128-bit version uses 10 rounds. The reason for the 10 rounds (which I could misremember since it has been almost 20 years) is as follows: the security against all known attacks was analyzed and 6 rounds was found to be enough against attacks known at the time. It takes 2 rounds to achieve a full avalanche effect in AES, so 10 rounds corresponds to enough rounds for a full avalanche effect before and after the 6 rounds needed for security against known attacks. – kasperd Mar 22 at 10:58

@kasperd I think it was 6 rounds at the time + 2 rounds because attacks only get better + 2 rounds for full avalanche. – Martin Bonner Mar 22 at 12:14

@MartinBonner The way the paper described it was "so it can be thought of as padding the vulnerable 6 rounds with two full diffusion steps" or something along those lines, as kasperd says. – forest Mar 22 at 22:44
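The pattern in the table follows a simple rule in the Rijndael specification: the number of rounds is Nr = max(Nk, Nb) + 6, where Nk and Nb are the key and block lengths measured in 32-bit words (Nb is fixed at 4 for AES). A minimal sketch:

```python
def rijndael_rounds(key_bits: int, block_bits: int = 128) -> int:
    """Number of rounds Nr = max(Nk, Nb) + 6, with Nk and Nb in 32-bit words.

    AES fixes the block size at 128 bits (Nb = 4), which gives 10/12/14
    rounds for 128/192/256-bit keys.
    """
    nk, nb = key_bits // 32, block_bits // 32
    return max(nk, nb) + 6

for bits in (128, 192, 256):
    print(bits, rijndael_rounds(bits))   # -> 10, 12, 14
```

Each extra 64 bits of key therefore adds two rounds, which is exactly the progression shown in the question's table.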
Problem 136

Lead(II) chromate $\left(\mathrm{PbCrO}_{4}\right)$ is used as the yellow pigment for marking traffic lanes but is banned from house paint because of the risk of lead poisoning. It is produced from chromite $\left(\mathrm{FeCr}_{2}\mathrm{O}_{4}\right)$, an ore of chromium: $$4 \mathrm{FeCr}_{2} \mathrm{O}_{4}(s)+8 \mathrm{K}_{2} \mathrm{CO}_{3}(aq)+7 \mathrm{O}_{2}(g) \longrightarrow 2 \mathrm{Fe}_{2} \mathrm{O}_{3}(s)+8 \mathrm{K}_{2} \mathrm{CrO}_{4}(aq)+8 \mathrm{CO}_{2}(g)$$ Lead(II) ion then replaces the $\mathrm{K}^{+}$ ion. If a yellow paint is to have 0.511$\% \mathrm{PbCrO}_{4}$ by mass, how many grams of chromite are needed per kilogram of paint? (A worked sketch of this calculation follows the problem set below.)

When powdered zinc is heated with sulfur, a violent reaction occurs, and zinc sulfide forms: $$\mathrm{Zn}(s)+\mathrm{S}_{8}(s) \longrightarrow \mathrm{ZnS}(s)\quad[\text{unbalanced}]$$ Some of the reactants also combine with oxygen in air to form zinc oxide and sulfur dioxide. When 83.2 $\mathrm{g}$ of $\mathrm{Zn}$ reacts with 52.4 $\mathrm{g}$ of $\mathrm{S}_{8}$, 104.4 $\mathrm{g}$ of $\mathrm{ZnS}$ forms. (a) What is the percent yield of ZnS? (b) If all the remaining reactants combine with oxygen, how many grams of each of the two oxides form?

Cocaine $\left(\mathrm{C}_{17} \mathrm{H}_{21} \mathrm{O}_{4} \mathrm{N}\right)$ is a natural substance found in coca leaves, which have been used for centuries as a local anesthetic and stimulant. Illegal cocaine arrives in the United States either as the pure compound or as the hydrochloride salt $\left(\mathrm{C}_{17} \mathrm{H}_{21} \mathrm{O}_{4} \mathrm{N} \cdot \mathrm{HCl}\right)$. At $25^{\circ} \mathrm{C}$, the salt is very soluble in water $(2.50 \mathrm{kg} / \mathrm{L})$, but cocaine is much less so $(1.70 \mathrm{g} / \mathrm{L})$. (a) What is the maximum mass (g) of the salt that can dissolve in 50.0 $\mathrm{mL}$ of water? (b) If this solution is treated with NaOH, the salt is converted to cocaine. How much more water (L) is needed to dissolve it?

High-temperature superconducting oxides hold great promise in the utility, transportation, and computer industries. (a) One superconductor is $\mathrm{La}_{2-x} \mathrm{Sr}_{x} \mathrm{CuO}_{4}$. Calculate the molar masses of this oxide when $x=0$, $x=1$, and $x=0.163$. (b) Another common superconducting oxide is made by heating a mixture of barium carbonate, copper(II) oxide, and yttrium(III) oxide, followed by further heating in $\mathrm{O}_{2}$: $$4 \mathrm{BaCO}_{3}(s)+6 \mathrm{CuO}(s)+\mathrm{Y}_{2} \mathrm{O}_{3}(s) \longrightarrow 2 \mathrm{YBa}_{2} \mathrm{Cu}_{3} \mathrm{O}_{6.5}(s)+4 \mathrm{CO}_{2}(g)$$ $$2 \mathrm{YBa}_{2} \mathrm{Cu}_{3} \mathrm{O}_{6.5}(s)+\frac{1}{2} \mathrm{O}_{2}(g) \longrightarrow 2 \mathrm{YBa}_{2} \mathrm{Cu}_{3} \mathrm{O}_{7}(s)$$ When equal masses of the three reactants are heated, which reactant is limiting? (c) After the product in part (b) is removed, what is the mass $\%$ of each reactant in the remaining solid mixture?

The zirconium oxalate $\mathrm{K}_{2} \mathrm{Zr}\left(\mathrm{C}_{2} \mathrm{O}_{4}\right)_{3}\left(\mathrm{H}_{2} \mathrm{C}_{2} \mathrm{O}_{4}\right) \cdot \mathrm{H}_{2} \mathrm{O}$ was synthesized by mixing 1.68 $\mathrm{g}$ of $\mathrm{ZrOCl}_{2} \cdot 8 \mathrm{H}_{2} \mathrm{O}$ with 5.20 $\mathrm{g}$ of $\mathrm{H}_{2} \mathrm{C}_{2} \mathrm{O}_{4} \cdot 2 \mathrm{H}_{2} \mathrm{O}$ and an excess of aqueous $\mathrm{KOH}$. After 2 months, 1.25 $\mathrm{g}$ of crystalline product was obtained, along with aqueous $\mathrm{KCl}$ and water.
Calculate the percent yield. Kevin C. The rocket fuel hydrazine $\left(\mathrm{N}_{2} \mathrm{H}_{4}\right)$ is made by the three-step Raschig process, which has the following overall equation: $\mathrm{NaOCl}(a q)+2 \mathrm{NH}_{3}(a q) \longrightarrow \mathrm{N}_{2} \mathrm{H}_{4}(a q)+\mathrm{NaCl}(a q)+\mathrm{H}_{2} \mathrm{O}(l)$ What is the percent atom economy of this process? The aspirin substitute, acetaminophen $\left(\mathrm{C}_{8} \mathrm{H}_{9} \mathrm{O}_{2} \mathrm{N}\right),$ is produced by the following three-step synthesis: \mathrm{I} . \quad \mathrm{C}_{6} \mathrm{H}_{5} \mathrm{O}_{3} \mathrm{N}(s)+3 \mathrm{H}_{2}(g)+\mathrm{HCl}(a q) \longrightarrow \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{ONCl}(s)+2 \mathrm{H}_{2} \mathrm{O}(l) \mathrm{II}\quad \mathrm{C}_{6} \mathrm{H}_{8} \mathrm{ONCl}(s)+\mathrm{NaOH}(a q) \longrightarrow \mathrm{C}_{6} \mathrm{H}_{7} \mathrm{ON}(s)+\mathrm{H}_{2} \mathrm{O}(l)+\mathrm{NaCl}(a q) \mathrm{III.} \quad \mathrm{C}_{6} \mathrm{H}_{7} \mathrm{ON}(s)+\mathrm{C}_{4} \mathrm{H}_{6} \mathrm{O}_{3}(l) \longrightarrow \mathrm{C}_{8} \mathrm{H}_{9} \mathrm{O}_{2} \mathrm{N}(s)+\mathrm{HC}_{2} \mathrm{H}_{3} \mathrm{O}_{2}(l) The first two reactions have percent yields of 87$\%$ and 98$\%$ by mass, respectively. The overall reaction yields 3 moles of acetaminophen product for every 4 moles of $\mathrm{C}_{6} \mathrm{H}_{5} \mathrm{O}_{3} \mathrm{N}$ reacted. a. What is the percent yield by mass for the overall process? b. What is the percent yield by mass of Step III? Which element is oxidized, and which is reduced in the following reactions? \begin{array}{l}{\text { (a) } \mathrm{N}_{2}(g)+3 \mathrm{H}_{2}(g) \longrightarrow 2 \mathrm{NH}_{3}(g)\longrightarrow} \\ {\text { (b) } 3 \mathrm{Fe}\left(\mathrm{NO}_{3}\right)_{2}(a q)+2 \mathrm{Al}(s) \longrightarrow} \\\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad{3 \mathrm{Fe}(s)+2 \mathrm{Al}\left(\mathrm{NO}_{3}\right)_{3}(a q)}\\{\text { (c) } \mathrm{Cl}_{2}(a q)+2 \operatorname{Nal}(a q) \longrightarrow \mathrm{I}_{2}(a q)+2 \mathrm{NaCl}(a q)} \\ {\text { (d) } \mathrm{PbS}(s)+4 \mathrm{H}_{2} \mathrm{O}_{2}(a q) \longrightarrow \mathrm{PbSO}_{4}(s)+4 \mathrm{H}_{2} \mathrm{O}(l)}\end{array}
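As referenced above, here is a sketch of how the first problem in this set (chromite needed per kilogram of traffic paint) can be worked. The rounded atomic masses are an assumption, so the last digit may differ slightly from a textbook answer key; the 1 : 2 chromite-to-chromate mole ratio is read off the balanced equation, and each chromate ion ends up in one formula unit of PbCrO4.

```python
# Atomic masses (g/mol), rounded
Pb, Cr, O, Fe = 207.2, 52.00, 16.00, 55.85

M_PbCrO4  = Pb + Cr + 4 * O          # ~323.2 g/mol
M_FeCr2O4 = Fe + 2 * Cr + 4 * O      # ~223.85 g/mol

paint_mass   = 1000.0                # g of paint (1 kg)
mass_PbCrO4  = 0.00511 * paint_mass  # 0.511 % by mass -> 5.11 g
mol_PbCrO4   = mass_PbCrO4 / M_PbCrO4

# 4 FeCr2O4 -> 8 K2CrO4, and each CrO4^2- gives one PbCrO4,
# so 1 mol of chromite supplies 2 mol of PbCrO4.
mol_FeCr2O4  = mol_PbCrO4 / 2
mass_FeCr2O4 = mol_FeCr2O4 * M_FeCr2O4

print(f"{mass_FeCr2O4:.2f} g of chromite per kg of paint")   # ~1.77 g
```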
Evaluation of the accuracy of diagnostic scales for a syndrome in Chinese medicine in the absence of a gold standard Xiao Nan Wang1, Vanessa Zhou3, Qiang Liu4, Ying Gao5 & Xiao-Hua Zhou1,2 Chinese Medicine volume 11, Article number: 35 (2016) Cite this article The concept of syndromes (zhengs) is unique to Chinese medicine (CM) and difficult to measure. Expert consensus is used as a gold standard to identify zhengs and evaluate the accuracy of existing diagnostic scales for zhengs. But, the use of expert consensus as a gold standard is problematic because the diagnosis of zhengs by expert consensus is not 100 % accurate. This study aimed to evaluate the accuracy of standardized diagnostic scales for a syndrome zhengs in the absence of a gold standard, with application to internal wind (nei feng) syndrome in ischemic stroke patients. A total of 204 participants (age 41–84 years) with ischemic stroke were assessed by the stroke syndrome differentiation diagnostic criterion (SSDC), ischemic stroke TCM syndrome diagnostic scale (ISDS), and expert syndrome differentiation (ESD). The diagnostic tests and data collection process were conducted over a 10-month period (February 2008 to November 2008) in 10 hospitals across nine cities in China. The Bayesian method was used to estimate the accuracy of the SSDC, ISDS, and ESD. For internal wind syndrome, the estimated sensitivities and specificities of the SSDC, ISDS, and ESD without use of a gold standard were respectively: \(\widehat{Se}_{1}=0.687\), \(\widehat{Sp}_{1}=0.776\); \(\widehat{Se}_{2}=0.884\), \(\widehat{Sp}_{2}=0.875\); and \(\widehat{Se}_{3}=0.813\), \(\widehat{Sp}_{3}=0.922\) After adjusting for imperfect gold standard bias, we found that both the sensitivity and specificity of the ISDS were higher than those of the SSDC for diagnosis of internal wind syndrome in ischemic stroke patients. The concept of syndromes (zhengs) is unique to Chinese medicine (CM). Syndromes are identifiable from a holistic understanding of a patient's clinical presentation using the four CM diagnostic methods: observation, listening/smelling, questioning, and pulse analyses [1]. Identification of a syndrome can differ from one CM practitioner to another because of varying medical experience and other related factors. In recent years, the CM community has developed several standardized diagnostic scales for syndromes [2–5]. The accuracies of these scales have been assessed by the diagnostic opinion of CM practitioners as the gold standard. However, an expert diagnosis is largely dependent on clinical experience and educational background, leading to different syndrome differentiation for the same patient by different expert CM practitioners. This results in biased estimates for the accuracy of diagnostic scales because the expert syndrome differentiation (ESD) is imperfect. Such bias is called an imperfect gold standard bias [6, 7]. If the diagnostic test and imperfect gold standard are conditionally independent of the true disease status, the sensitivity and specificity of the diagnostic test are underestimated. However, if the diagnostic test and imperfect gold standard are conditionally dependent, the estimated sensitivity and specificity of the diagnostic test can be biased in either direction. The direction of the bias is determined by the degree to which the diagnostic tests and imperfect gold standard misclassify the same patients. 
When this tendency is slight, the accuracy of the diagnostic test is generally underestimated; when the tendency is strong, the accuracy of the diagnostic test is generally overestimated [6]. In recent years, several statistical methods have been developed to correct imperfect gold standard bias. Hui and Waiter [8] developed a model for two diagnostic tests within two populations and introduced a maximum likelihood approach when assuming the existence of two populations strata with different prevalence rates. In that model, they also assumed that the two tests were conditionally independent. However, the assumption of conditional independence may not be realistic in some applications owing to some common factors that can influence both diagnostic tests and true disease status. Sinclair and Gastwirth [9] extended the Hui and Waiter model to allow for conditional dependence. Espeland and Handelman [10] and Yang and Becker [11] proposed latent class modeling for conditional dependence, Qu et al. [12], Hadgu and Qu [13] proposed random effects models, and Albert and Dodd [14] developed latent class modeling approaches for binary tests. Pepe and Janes [15] discussed the latent class analysis method when assessing the multiple diagnostic tests without a gold standard, and concluded that a latent class model required careful justification of assumptions made about the conditional dependence structure. These researchers also stressed that a formal clinical definition of the disease should be given before evaluating the accuracy of diagnostic tests with the latent class method. Only when the disease has been clearly defined can the estimated parameters be meaningful for diagnostic tests; otherwise, the results of the estimators were meaningless. The above-mentioned methods used the frequentist approach to estimate the parameters in the model when the diagnostic tests were conditionally independent, given the true disease status or given the true disease status and a random effect. Joseph et al. [16] used Bayesian methods to assess the accuracy of diagnostic tests under conditional independence without a gold standard. Dendukuri [17], Georgiadis et al. [18], and Branscum et al. [19] developed Bayesian models to evaluate the accuracy of diagnostic tests with two conditionally dependent tests. These methods have been widely used for estimation of the accuracy of diagnostic tests without a gold standard in Western medicine research [20–27]. However, they have not been applied for estimation of the accuracy of diagnostic tests for CM syndromes. This study aimed to evaluate the accuracy of standardized diagnostic scales for a syndrome in the absence of a gold standard, with application to internal wind (nei feng) syndrome in ischemic stroke patients. Study design and approval In this study, we evaluated the accuracy of the stroke syndrome differentiation diagnostic criterion (SSDC), ischemic stroke TCM syndrome diagnostic scale (ISDS), and ESD for detecting "internal wind" in ischemic stroke patients, without assuming that the ESD is the gold standard. We mainly focused on comparing the accuracy of the two diagnostic scales (SSDC and ISDS).This study used data from the second round of a diagnostic test study of the ISDS. The diagnostic test and data collection process were performed over a 10-month period (February 2008 to November 2008), after receiving approval (ECSL-BDY-2008-012) from the Ethics Committee of the Dongzhimen Hospital of Beijing University of Chinese Medicine (Additional files 1 and 2). 
Inclusion and exclusion criteria Individuals who had a confirmed diagnosis of acute ischemic stroke by computed tomography and magnetic resonance imaging examinations, were aged between 35 and 85 years, and were informed of the objectives and research procedures of the study (details of study please see Additional file 3) and provided signed consent forms themselves (consent forms please see Additional file 4) were selected as the participants in this study [5]. We excluded individuals with the following symptoms: transient ischemic attack; cerebral hemorrhage or subarachnoid hemorrhage; stoke caused by brain tumor, traumatic brain injury, or blood disease; severe heart, liver, kidney, or hematopoietic system comorbidity and complication; mental disorder or severe dementia; and severe aphasia that could affect data collection [5]. The final data set comprised 204 patients from 10 hospitals across nine cities in China [4, 5]. All of the participants (age 41–84 years ) were diagnosed with ischemic stroke. The mean age of the patients was 65 years, and the mode age was 74 years. The subjects were diagnosed as "0" or "1" by each of the SSDC, ISDS, and ESD. The detailed results of the cross-classification of the three diagnostic tests for internal wind syndrome in the 204 ischemic stroke participants are shown in Table 1. The CM syndrome factor scales (SSDC and ISDS) of the symptoms and signs, and the ESD were separately completed on the same day. In this study, an expert was defined as a physician, who had the clinical title of deputy director or above and also had more than 10 years of clinical work experience in diagnosing and curing stroke disease with traditional CM. Table 1 Cross-classified test results of \(T_{1} \), \(T_{2}\) and \(T_{3}\) for internal wind syndrome CM syndrome factor scales and syndrome differentiation The SSDC and ESD were used to diagnose the status of a patient in place of a gold standard, before the development of the ISDS. The SSDC was the first recognized scale for diagnosing a CM syndrome in ischemic stroke patients, and has been widely used since its publication in 1994 [2, 3]. The development of the ISDS was based on the SSDC. Essentially, the ISDS is an updated version of the SSDC [3], and was first developed in 2007. The simple process for developing the ISDS has been described in the published literature [3, 4]. Briefly, the ISDS was developed from a two-round Delphi study, which generated a pool of draft items with 288 items in six syndrome factor dimensions [4]. From this pool of items, six syndrome factor diagnostic scales were constructed according to logistic regression functions and receiver-operating curve analysis. Each syndrome factor diagnostic scale consisted of 10–20 "yes" or "no" statements. The ESD was completed by three senior physicians with over 10 years of work experience [4, 5]. When the practitioners failed to reach a unanimous decision about a patient's diagnosis, the majority opinion was used. Descriptive statistics were utilized to summarize the characteristics of the subjects in the data set. The latent class model was fitted to the results of the SSDC, ISDS, and ESD for the ischemic stroke patients when a gold standard was not available. The Bayesian method was used to estimate the sensitivity and specificity for every CM diagnostic scale. We followed the guidelines for reporting Bayesian analyses in biomedical journals, as described by Lang and Altman [28]. 
Using the reporting guidelines, we first described the general Bayesian statistical model. Next, we specified the pre-trial probabilities (prior distributions) for the parameters in the proposed model based on the data we wanted to analyze and also explained how the prior distributions were selected. Subsequently, we used Markov chain Monte Carlo (MCMC) techniques to obtain the Bayesian estimated parameters, based on the posterior distribution. The median and credibility interval were used as the posterior summary measures in this study. Finally, we illustrated the sensitivity of the analyses to different prior distributions in the Bayesian model. IBM SPSS Statistics for Windows [version: 21.0; IBM Crop; NY] was utilized for the descriptive statistics. WinBUGS software [version: 1.4.3; BUGS project; UK] was used for the Bayesian data analysis (WinBUGS code for this study could be found in Additional file 5). A detailed description of the proposed Bayesian method for evaluating the accuracy of the diagnostic tests without a gold standard is given as below. Let \(T_{1}\), \(T_{2}\), and \(T_{3}\) denote the diagnostic results of the two CM diagnostic tests (SSDC and ISDS) and ESD for one syndrome factor in ischemic stroke patients, where \(T_{1}\), \(T_{2}\), and \(T_{3}=0,1\), with "1" indicating the presence of the syndrome factor and "0" indicating the absence of the syndrome factor. Let D denote the true status of the syndrome factor in an ischemic stroke patient, which is not observed in the study. The parameters of interest include: the prevalence of the syndrome factor in the population, \(\pi \), defined as \(\pi =P(D=1)\); the sensitivity of the ith diagnostic test in detecting the syndrome factor, \(Se_{i}\), defined as \(Se_{i}=P(T_{i}=1|D=1)\); and the specificity of the ith diagnostic test for detecting the syndrome factor, \(Sp_{i}\), defined as \(Sp_{i}=P(T_{i}=0|D=0)\), where \(i=1,2,3\). Bayesian model Assume that there are n participants in the sample and three test results for every subject. We represent the observed data as \(Y=(Y_{t_{1},t_{2},t_{3}})\), where \(Y_{t_{1},t_{2},t_{3}}\) is the number of subjects with \(T_{1}=t_{1}\), \(T_{2}=t_{2}\), and \(T_{3}=t_{3}\); here \(t_{1},t_{2},t_{3}=0,1\). For example, \(Y_{111}\) denotes the number of subjects whose diagnostic results for all three tests indicate that the syndrome factor is present. Correspondingly, \(p_{t_{1},t_{2},t_{3}}\) represents the joint probability of the outcome \((T_{1}=t_{1},T_{2}=t_{2},T_{3}=t_{3})\), which is defined as follows: $$\begin{aligned} p_{t_{1},t_{2},t_{3}} & = {} P(T_{1}=t_{1},T_{2}=t_{2},T_{3}=t_{3})\nonumber \\ &= {} P(T_{1}=t_{1},T_{2}=t_{2},T_{3}=t_{3}|D=1)\times P(D=1)\nonumber \\ & \quad + {} P(T_{1}=t_{1},T_{2}=t_{2},T_{3}=t_{3}|D=0)\times P(D=0)\nonumber \\& = {} P(T_{1}=t_{1},T_{2}=t_{2},T_{3}=t_{3}|D=1)\times \pi \nonumber \\ & \quad + {} P(T_{1}=t_{1},T_{2}=t_{2},T_{3}=t_{3}|D=0)\times (1-\pi ). \end{aligned}$$ Among the three tests in this study, the first two tests represent the diagnostic results of the SSDC and ISDS, respectively, and the last test represents the diagnostic result of the expert opinion, called the ESD. Since the two diagnostic scales consist of standardized questionnaires, while the ESD is based on the individual opinion of expert CM practitioners, it is reasonable to assume that the CM expert and the diagnostic scales err independently (i.e., they are conditionally independent, given the true CM syndrome status). 
Nevertheless, the two diagnostic scales do not err independently (i.e., they are conditionally dependent, given the true CM syndrome status). Such dependence is measured by the conditional dependence correlations, given the true CM syndrome status. Hence, we assume that \(T_{3}\) is independent of \(T_{1}\) and \(T_{2}\) conditional on D, while we allow \(T_{1}\) and \(T_{2}\) to be conditionally dependent, given D. Let \(C_{+}\) and \(C_{-}\) denote the covariance between \(T_{1}\) and \(T_{2}\) among the CM syndrome positive and negative individuals, respectively. In other words, \(C_{+}=cov(T_{1},T_{2}|D=1)\) and \(C_{-}=cov(T_{1},T_{2}|D=0)\). Such a model has also been studied by Dendukuri and Joseph [17]. To present the Bayesian method, we need to compute the likelihood function of the observed data. Note that we can respectively write \(P(T_{1}=t_{1},T_{2}=t_{2}|D=1)\) and \(P(T_{1}=t_{1},T_{2}=t_{2}|D=0)\) as follows: $$\begin{aligned} P(T_{1}=t_{1},T_{2}=t_{2}|D=1)=\prod \limits _{i=1}^{2}Se_{i}^{t_{i}}(1-Se_{i})^{(1-t_{i})}+(-1)^{t_{1}+t_{2}}C_{+}, \\ P(T_{1}=t_{1},T_{2}=t_{2}|D=0)=\prod \limits _{i=1}^{2}Sp_{i}^{(1-t_{i})}(1-Sp_{i})^{t_{i}}+(-1)^{t_{1}+t_{2}}C_{-}. \end{aligned}$$ Consequently, we can rewrite the joint probability of the outcome (T 1 = t 1, T 2 = t 2, T 3 = t 3) as follows: $$\begin{aligned} p_{t_{1},t_{2},t_{3}} &= P(T_{1}=t_{1},T_{2}=t_{2}|D=1)\times P(T_{3}=t_{3}|D=1)\times \pi \nonumber \\ & \quad + P(T_{1}=t_{1},T_{2}=t_{2}|D=0)\times P(T_{3}=t_{3}|D=0)\times (1-\pi )\nonumber \\ & = \pi \left[\prod \limits_{i=1}^{2} Se_{i}^{t_{i}}(1-Se_{i})^{(1-t_{i})} + (-1)^{t_{1}+t_{2}} C_{+}\right]\\&\quad\times [Se_{3}^{t_{3}}(1-Se_{3})^{(1-t_{3})}]\nonumber \\ &\quad +(1-\pi )\left[\prod \limits _{i=1}^{2}Sp_{i}^{(1-t_{i})}(1-Sp_{i})^{t_{i}}+(-1)^{t_{1}+t_{2}}C_{-}\right]\nonumber \\& \quad \times [(1-Sp_{3})^{t_{3}}Sp_{3}^{(1-t_{3})} ]. \end{aligned}$$ Let \(Y=(Y_{111},Y_{110},Y_{101},Y_{100},Y_{011},Y_{010},Y_{001},Y_{000})\), the observed data, and \(\theta =(Se_{1},Sp_{1},Se_{2},Sp_{2},Se_{3},Sp_{3},\pi ,C_{+},C_{-})\), which represents the set of parameters in the model. According to (2), the likelihood function based on the observed data is: $$\begin{aligned} L(\theta |Y)= & {} \prod \limits _{t_{1},t_{2},t_{3}}p_{t_{1},t_{2},t_{3}}^{Y_{t_{1},t_{2},t_{3}}}\nonumber \\= & {} \prod \limits _{t_{1},t_{2},t_{3}}\Big \{ \pi \Big [\prod \limits _{i=1}^{2}Se_{i}^{t_{i}}(1-Se_{i})^{(1-t_{i})}+(-1)^{t_{1}+t_{2}}C_{+}\Big ]\nonumber \\\times & \,{} \Big [Se_{3}^{t_{3}}(1-Se_{3})^{(1-t_{3})}\Big ]+(1-\pi )\Big [\prod \limits _{i=1}^{2}Sp_{i}^{(1-t_{i})}(1-Sp_{i})^{t_{i}}\nonumber \\+ &\, {} (-1)^{t_{1}+t_{2}}C_{-}\Big ] \times \Big [(1-Sp_{3})^{t_{3}}Sp_{3}^{(1-t_{3})}\Big ]\Big \}^{Y_{t_{1},t_{2},t_{3}}} \end{aligned}$$ To use the Bayesian method to estimate the vector of the parameters, \(\theta \), we need to specify a prior distribution for \(\theta \). Let \(f(\theta )\) denote the prior distribution of \(\theta \). The Bayesian method combines the prior information about \(\theta \) with the data we have collected, and then uses the Bayes theorem to obtain an interpretable posterior distribution for \(\theta \). We can use the median of the posterior distribution to estimate \(\theta \). 
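For readers who prefer code to the product notation, a short sketch of the cell probabilities $p_{t_{1},t_{2},t_{3}}$ and the resulting multinomial log-likelihood is given below. The parameter values passed at the end are illustrative only (loosely based on the estimates reported later), and this is not the WinBUGS program referred to in Additional file 5.

```python
from itertools import product
from math import log

def cell_probs(pi, Se, Sp, c_pos, c_neg):
    """Joint probabilities P(T1=t1, T2=t2, T3=t3) for the latent class model:
    T1 and T2 are conditionally dependent (covariances c_pos, c_neg) and
    T3 is conditionally independent of both, given the true syndrome status."""
    Se1, Se2, Se3 = Se
    Sp1, Sp2, Sp3 = Sp
    p = {}
    for t1, t2, t3 in product((1, 0), repeat=3):
        sign = (-1) ** (t1 + t2)
        p_dis = (Se1**t1 * (1 - Se1)**(1 - t1) * Se2**t2 * (1 - Se2)**(1 - t2)
                 + sign * c_pos) * Se3**t3 * (1 - Se3)**(1 - t3)
        p_non = (Sp1**(1 - t1) * (1 - Sp1)**t1 * Sp2**(1 - t2) * (1 - Sp2)**t2
                 + sign * c_neg) * (1 - Sp3)**t3 * Sp3**(1 - t3)
        p[(t1, t2, t3)] = pi * p_dis + (1 - pi) * p_non
    return p

def log_likelihood(counts, probs):
    """Multinomial log-likelihood (up to an additive constant)."""
    return sum(counts[k] * log(probs[k]) for k in counts)

# Observed cell counts from Table 1, keyed by (T1, T2, T3) with 1 = positive
Y = {(1, 1, 1): 69, (1, 1, 0): 19, (1, 0, 1): 7, (1, 0, 0): 12,
     (0, 1, 1): 32, (0, 1, 0): 9, (0, 0, 1): 5, (0, 0, 0): 51}

probs = cell_probs(pi=0.6, Se=(0.69, 0.88, 0.81), Sp=(0.78, 0.88, 0.92),
                   c_pos=0.02, c_neg=0.02)
print(log_likelihood(Y, probs))
```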
According to the Bayes theorem, the joint posterior distribution \(f(\theta |Y)\) of the parameter \(\theta \) given the observed data Y can be written as follows: $$\begin{aligned} f(\theta |Y)=\frac{L(\theta |Y)f(\theta )}{\int L(\theta |Y)f(\theta )d\theta }=\frac{\mathcal {A}}{\mathcal {B}}, \end{aligned}$$ $$\begin{aligned} \mathcal {A}&= f(\theta )\prod \limits _{t_{1},t_{2},t_{3}}\Big \{ \pi \Big [\prod \limits _{i=1}^{2}Se_{i}^{t_{i}}(1-Se_{i})^{(1-t_{i})}+(-1)^{t_{1}+t_{2}}C_{+}\Big ]\\& \quad \times \Big [Se_{3}^{t_{3}}(1-Se_{3})^{(1-t_{3})}\Big ]+(1-\pi )\quad\times\Big [\prod \limits _{i=1}^{2}Sp_{i}^{(1-t_{i})}(1-Sp_{i})^{t_{i}}+(-1)^{t_{1}+t_{2}}C_{-}\Big ]\\& \quad \times \Big [(1-Sp_{3})^{t_{3}}Sp_{3}^{(1-t_{3})}\Big ]\Big \}^{Y_{t_{1},t_{2},t_{3}}}, \end{aligned}$$ $$\begin{aligned} \mathcal {B} &= {} \underbrace{\int \int \cdots \int }_{9}f(\theta )\prod \limits _{t_{1},t_{2},t_{3}}\\&\quad\times\Big \{ \pi \Big [\prod \limits _{i=1}^{2}Se_{i}^{t_{i}}(1-Se_{i})^{(1-t_{i})}+(-1)^{t_{1}+t_{2}}C_{+}\Big ] \\ & \quad \times {} \Big [Se_{3}^{t_{3}}(1-Se_{3})^{(1-t_{3})}\Big ]+(1-\pi )\Big [\prod \limits _{i=1}^{2}Sp_{i}^{(1-t_{i})}(1-Sp_{i})^{t_{i}}+(-1)^{t_{1}+t_{2}}C_{-}\Big ]\\ & \quad \times {} \Big [(1-Sp_{3})^{t_{3}}Sp_{3}^{(1-t_{3})}\Big ]\Big \}^{Y_{t_{1},t_{2},t_{3}}} \underbrace{dSe_{1}dSe_{2}\cdots dC_{+}dC_{-}}_{9} \end{aligned}$$ Consequently, the marginal posterior density function for any component in \(\theta \), such as \(Sp_{2}\), given the data, can be expressed as: $$\begin{aligned} f(Sp_{2}|Y)=\underbrace{\int \int \cdots \int \int }_{8}f(\theta |Y)\,dSe_{1}dSe_{2}dSe_{3}dSp_{1}dSp_{3}d\pi dC_{+}dC_{-}. \end{aligned}$$ Taking the median of this marginal posterior distribution gives the Bayesian estimate \(\widehat{{Sp}_{2}}\) of \(Sp_{2}\). Procedure of the analysis Here \(T_{1}\), \(T_{2}\), and \(T_{3}\) denote the CM diagnostic scales (SSDC and ISDS) and the ESD for detecting internal wind syndrome, respectively. The observed data can be represented by \(Y=(69,19,7,12,32,9,5,51)\), as shown in Table 1. We denoted the proposed model as model (I). For comparison purposes, we also included the results obtained by the commonly used naive method, which assumed the ESD as the gold standard, and denoted this method as model (II). In the Bayesian analysis, a prior distribution for \(\theta \), which was defined in the Bayesian model, had to be chosen. Selecting the prior distribution A prior distribution for \(\theta \) consisted of three sensitivities, three specificities, one prevalence rate, and two conditional covariances. Since the first six parameters have a range between 0 and 1, we chose a beta distribution \(Beta (\alpha ,\beta )\) for each of them, where \(\alpha \) and \(\beta \) were hyper-parameters. We used the method proposed by Dendukuri [17] and Enøe et al. [27] to choose these hyper-parameter values from prior moment information. According to the published literature describing the three diagnostic tests (SSDC, ISDS, and ESD) [2–5], the most probable value of the sensitivities of \(T_{1}\) and \(T_{2}\) for detecting internal wind syndrome was determined as 0.7, and we were 95 % sure that these sensitivities were at least 0.5. Thus, the prior distribution for the sensitivities of \(T_{1}\) and \(T_{2}\) was chosen to be the beta distribution, Beta(13.322, 6.281).
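Hyper-parameter values of this kind can be reproduced, at least approximately, by matching the stated mode and the 95 % prior-probability statement. The Python sketch below is my own illustration of that moment/percentile matching, not the exact procedure of Dendukuri [17] or Enøe et al. [27]; the function name and the root-finding bracket are assumptions, and small differences from Beta(13.322, 6.281) simply reflect the elicitation convention used.

from scipy.optimize import brentq
from scipy.stats import beta

def beta_from_mode_and_bound(mode, lower, prob=0.95):
    # Choose (a, b) so that the Beta(a, b) density has the requested mode and
    # P(X > lower) = prob.  The mode condition fixes b as a function of a.
    def b_of_a(a):
        return 1.0 + (a - 1.0) * (1.0 - mode) / mode      # (a-1)/(a+b-2) = mode
    def excess(a):
        return beta.sf(lower, a, b_of_a(a)) - prob        # sf(x) = P(X > x)
    a = brentq(excess, 1.001, 500.0)                      # assumed search bracket
    return a, b_of_a(a)

# Sensitivity prior for the two scales: mode 0.7, 95 % prior probability of exceeding 0.5
print(beta_from_mode_and_bound(0.7, 0.5))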
For the specificities of the diagnostic scales \(T_{1}\) and \(T_{2}\) for detecting internal wind syndrome, the most probable value was determined as 0.8, and we were 95 % sure that these specificities were at least 0.5. Therefore, the prior distribution for the specificities of \(T_{1}\) and \(T_{2}\) was chosen to be the beta distribution, Beta(7.549, 2.637). The best guess value for the sensitivity of \(T_{3}\) was 0.8, and the experts were 95 % sure that the sensitivity of \(T_{3}\) was at least 0.7; hence, the prior distribution for the sensitivity of \(T_{3}\) was chosen to be the beta distribution, Beta(48.283, 12.821). The best guess value for the specificity of \(T_{3}\) was 0.85, and the experts were 95 % sure that the specificity of \(T_{3}\) was at least 0.6; thus the prior distribution for the specificity of \(T_{3}\) was chosen to be the beta distribution, Beta(10.657, 2.704). The uniform distribution on [0, 1] was used for the prior distribution of the internal wind prevalence rate. For the last two conditional covariances, \(C_{+}\) and \(C_{-}\), which measure the dependence of \(T_{1}\) and \(T_{2}\) given the diseased and non-diseased statuses, respectively, we have the following constraints: \((Se_{1}-1)(1-Se_{2})\le C_{+} \le min(Se_{1},Se_{2})-Se_{1}Se_{2}\) and \((Sp_{1}-1)(1-Sp_{2})\le C_{-} \le min(Sp_{1},Sp_{2})-Sp_{1}Sp_{2}\), respectively. Hence, we chose two uniform distributions for \(C_{+}\) and \(C_{-}\): \(U((Se_{1}-1)(1-Se_{2}),(min(Se_{1},Se_{2})-Se_{1}Se_{2}))\) and \(U((Sp_{1}-1)(1-Sp_{2}),(min(Sp_{1},Sp_{2})-Sp_{1}Sp_{2}))\). MCMC techniques for computing the posterior estimator It was difficult to obtain the posterior estimator of each parameter directly through numerical integration in the Bayesian model. Since the joint posterior distribution \(f(\theta \mid Y)\) was complicated and involved high-dimensional integrals, which are often impossible to compute directly, we used the MCMC algorithm to draw a random sample from the joint posterior distribution. We then computed the sample median of the randomly drawn sample to estimate \(\theta \) and its components of interest. In this study, the WinBUGS package was used to perform this MCMC process. To use the MCMC technique, we specified the following initial values for the model parameters: \(\pi =0.623,Se_{1}=0.748,Se_{2}=0.945,Se_{3}=0.850,Sp_{1}=0.844,Sp_{2}=0.883,Sp_{3}=0.935\). We also chose different initial values and obtained similar results. The numbers of iterations and burn-ins were determined by the convergence of the Markov chain in estimating the parameters by WinBUGS. The sensitivity of the SSDC for internal wind syndrome (Table 2) was estimated as 0.687 by the Bayesian method in the absence of a gold standard, while the commonly used naive method, which uses the ESD as a gold standard, estimated the sensitivity of the SSDC as 0.673. The estimated sensitivity of the ISDS showed similar results. The Bayesian method estimated the specificity of the ISDS as 0.875, while the commonly used naive method estimated the specificity of the ISDS as 0.692. From these results, we can conclude that the commonly used naive method in CM for estimating the accuracy of diagnostic scales for this CM syndrome might be biased.
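The naive model (II) figures quoted above follow directly from the 2 x 2 x 2 cell counts once the ESD is treated as the true status. The Python sketch below (mine; the descriptive analysis in the study used SPSS) tabulates those naive sensitivities and specificities from the counts Y=(69,19,7,12,32,9,5,51).

# Cell counts Y_{t1 t2 t3} in the order (111,110,101,100,011,010,001,000) given in the text
cells = [(1, 1, 1), (1, 1, 0), (1, 0, 1), (1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
Y = dict(zip(cells, [69, 19, 7, 12, 32, 9, 5, 51]))

def naive_accuracy(test):
    # Sensitivity/specificity of test 1 (SSDC) or test 2 (ISDS) when the ESD (t3) is taken as truth
    pos = sum(n for (t1, t2, t3), n in Y.items() if t3 == 1)
    neg = sum(n for (t1, t2, t3), n in Y.items() if t3 == 0)
    tp = sum(n for (t1, t2, t3), n in Y.items() if t3 == 1 and (t1, t2)[test - 1] == 1)
    tn = sum(n for (t1, t2, t3), n in Y.items() if t3 == 0 and (t1, t2)[test - 1] == 0)
    return tp / pos, tn / neg

print(naive_accuracy(1))   # SSDC: naive sensitivity is about 0.673, as quoted for model (II)
print(naive_accuracy(2))   # ISDS: naive specificity is about 0.692

Comparing these numbers with the Bayesian medians in Table 2 makes the effect of the gold-standard assumption explicit.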
Table 2 also shows the 95 % Bayesian credible intervals for the sensitivity and specificity of the SSDC in detecting internal wind syndrome, which were (0.605, 0.765) and (0.652, 0.885), respectively. Similarly, the Bayesian credible intervals for the sensitivity and specificity of the ISDS are also shown in Table 2. Table 2 Accuracy of diagnostic scales (median) for internal wind syndrome factor in 204 ischemic stroke patients under different models As shown in Table 2, the respective Bayesian estimated sensitivities of the SSDC, ISDS, and ESD for diagnosing internal wind syndrome without a gold standard were as follows: \(\hat{Se}_{1}=0.687\), \(\hat{Se}_{2}=0.884\), and \(\hat{Se}_{3}=0.813\). The respective estimated specificities of the SSDC, ISDS, and ESD for diagnosing internal wind syndrome in the absence of a gold standard were as follows: \(\hat{Sp}_{1}=0.776\), \(\hat{Sp}_{2}=0.875\), and \(\hat{Sp}_{3}=0.922\). From these results, we concluded that the ISDS was more accurate than the SSDC in detecting internal wind syndrome. The Bayesian method also gave an estimate of \(\hat{\pi }=0.648\) for the prevalence rate of internal wind syndrome. Hence, we concluded that the sensitivity and specificity of the ISDS were both higher than those of the SSDC when diagnosing internal wind syndrome in ischemic stroke patients. We also found that the sensitivity and specificity of the ESD for internal wind syndrome were high, but not perfect. To assess the sensitivity of our results to the chosen prior distributions, we selected several different prior distributions for the parameters in model (I). The posterior estimates under these alternative prior distributions were consistent with the previous posterior estimates. SSDC: stroke syndrome differentiation diagnostic criterion ISDS: ischemic stroke TCM syndrome diagnostic scale ESD: expert syndrome differentiation MCMC: Markov chain Monte Carlo
1. Wang J, Wang P, Xiong X. Current situation and re-understanding of syndrome and formula syndrome in Chinese medicine. Intern Med. 2012;2:1–5. doi:10.4172/2165-8048.1000113.
2. State Administration of TCM and Acute Encephalopathy Cooperation Group. TCM syndrome differentiation diagnosis criterion of stroke. Beijing Zhong Yi Yao Da Xue Xue Bao. 1994;17:42.
3. Stroke Syndromes and Clinical Diagnosis Cooperation Group. Clinical validation of TCM syndrome diagnostic criterion of stroke. Beijing Zhong Yi Yao Da Xue Xue Bao. 1994;17:41–3.
4. Liu Q, Gao Y. Theory basis of syndrome diagnosis scale. Zhong Hua Zhong Yi Yao Za Zhi. 2010;25:989–92.
5. Gao Y, Bin M, Liu Q, Wang Y. Methodological study and establishment of the diagnostic scale for TCM syndrome of ischemic stroke. Zhong Yi Za Zhi. 2011;52:2097–101.
6. Zhou X-H, Obuchowski NA, McClish DK. Statistical methods in diagnostic medicine. New York: John Wiley and Sons; 2011.
7. Hui SL, Zhou X-H. Evaluation of diagnostic tests without gold standard. Stat Methods Med Res. 1998;7:354–70. doi:10.1177/096228029800700404.
8. Hui SL, Walter SD. Estimating the error rates of diagnostic tests. Biometrics. 1980;36:167–71. doi:10.2307/2530508.
9. Sinclair MD, Gastwirth JL. On procedures for evaluating the effectiveness of reinterview survey methods: application to labor force data. J Am Stat Assoc. 1996;91:961–9. doi:10.1080/01621459.1996.10476966.
10. Espeland MA, Handelman SL. Using latent class models to characterize and assess relative-error in discrete measurements. Biometrics. 1989;45:587–99. doi:10.2307/2531499.
11. Yang I, Becker MP. Latent variable modeling of diagnostic accuracy. Biometrics. 1997;53:948–58. doi:10.2307/2533555.
12. Qu Y, Tan M, Kutner MH. Random effects models in latent class analysis for evaluating accuracy of diagnostic test. Biometrics. 1996;52:797–810. doi:10.2307/2533043.
13. Hadgu A, Qu Y. A biomedical application of latent class models with random effects. Appl Stat. 1998;47:603–16. doi:10.1111/1467-9876.00131.
14. Albert PS, Dodd LE. A cautionary note on the robustness of latent class models for estimating diagnostic error without a gold standard. Biometrics. 2004;60:427–35. doi:10.1111/j.0006-341X.2004.00187.x.
15. Pepe MS, Janes H. Insights into latent class analysis of diagnostic test performance. Biostatistics. 2007;8:474–84. doi:10.1093/biostatistics/kxl038.
16. Joseph L, Gyorkos T, Coupal L. Bayesian estimation of disease prevalence and the parameters of diagnostic tests in the absence of a gold standard. Am J Epidemiol. 1995;141:263–72.
17. Dendukuri N, Joseph L. Bayesian approaches to modeling conditional dependence between multiple diagnostic tests. Biometrics. 2001;57:158–67. doi:10.1111/j.0006-341X.2001.00158.x.
18. Georgiadis MP, Johnson WO, Gardner IA, Singh R. Correlation-adjusted estimation of sensitivity and specificity of two diagnostic tests. Appl Stat. 2003;52:63–76. doi:10.1111/1467-9876.00389.
19. Branscum AJ, Gardner IA, Johnson WO. Estimation of diagnostic-test sensitivity and specificity through Bayesian modeling. Prev Vet Med. 2005;68:145–63. doi:10.1016/j.prevetmed.2004.12.005.
20. Rybicki BA, Peterson EL, Johnson CC, Kortsha GX, Cleary WM, Gorell JM. Intra- and inter-rater agreement in the assessment of occupational exposure to metals. Int J Epidemiol. 1998;27:269–73. doi:10.1093/ije/27.2.269.
21. McDermott J, Drews C, Green D, Berg C. Evaluation of prenatal care information on birth certificates. Paediat Perinat Epidemiol. 1997;11:105–21. doi:10.1046/j.1365-3016.1997.d01-4.x.
22. Line BR, Peters TL, Keenan J. Diagnostic test comparisons in patients with deep venous thrombosis. J Nucl Med. 1997;38:89–92.
23. Mahoney WJ, Szatmari P, Maclean JE, Bryson SE, Bartolucci G, Walter SD, Marshall BJ, Zwaigenbaum L. Reliability and accuracy of differentiating pervasive developmental disorder subtypes. J Am Acad Child Adolesc Psychiatry. 1998;37:278–85. doi:10.1097/00004583-199803000-00012.
24. Chriel M, Willeberg P. Dependency between sensitivity, specificity and prevalence analysed by means of Gibbs sampling. Epidémiologie et Santé Animale. 1997;31/32:12.03.1–3.
25. Georgiadis MP, Gardner IA, Hedrick RP. Field evaluation of sensitivity and specificity of a polymerase chain reaction (PCR) for detection of N. salmonis in rainbow trout. J Aquat Anim Health. 1998;10:372–80. doi:10.1577/1548-8667(1998)010<0372:FEOSAS>2.0.CO;2.
26. Singer RS, Boyce WM, Gardner IA, Johnson WO, Fisher AS. Evaluation of bluetongue virus diagnostic tests in free-ranging bighorn sheep. Prev Vet Med. 1998;35:265–82. doi:10.1016/S0167-5877(98)00067-1.
27. Enøe C, Georgiadis MP, Johnson WO. Estimation of the sensitivity and specificity of diagnostic tests and disease prevalence when true disease state is unknown. Prev Vet Med. 2000;45:61–81. doi:10.1016/S0167-5877(00)00117-3.
28. Lang T, Altman D. Statistical Analyses and Methods in the Published Literature: the SAMPL Guidelines. Science Editors' Handbook, European Association of Science Editors; 2013.
XZ, XW, YG, and QL conceived and designed the study. QL and YG facilitated the data collection in China. XW and XZ analyzed the data. XW, VZ, XZ, YG, and QL interpreted the results. XW, XZ, and VZ wrote the manuscript. XW and XZ revised the manuscript.
All authors read and approved the final manuscript. The authors wish to acknowledge the support of the Ministry of Science and Technology of the PRC on a research project entitled "Significant New Drug Development-Construction of Technology Platform used for Original New Drug Research and Development" (2012ZX09303-010-002). The authors also wish to acknowledge the data provided by the "973 program," a basic research program supported by the Chinese Government Ministry of Science and Technology that promotes research in China. This research was also supported by the State Foundation for Studying Abroad. School of Statistics, Renmin University of China, Beijing, 100872, China Xiao Nan Wang & Xiao-Hua Zhou Department of Biostatistics, School of Public Health, University of Washington, Seattle, WA, 98195, USA Xiao-Hua Zhou UW Autism Center, University of Washington, Seattle, WA, 98105, USA Vanessa Zhou TCM International Cooperation Center of China, Beijing, 100101, China Qiang Liu Department of Neurology, Dongzhimen Hospital of Beijing University of Chinese Medicine, Beijing, 100700, China Ying Gao Correspondence to Ying Gao or Xiao-Hua Zhou. Additional file 1. Approval document of ethics. Additional file 2. Ethics committee members. Additional file 3. Research Programme. Additional file 4. Informed consent. Additional file 5. WinBUGS code. Wang, X.N., Zhou, V., Liu, Q. et al. Evaluation of the accuracy of diagnostic scales for a syndrome in Chinese medicine in the absence of a gold standard. Chin Med 11, 35 (2016) doi:10.1186/s13020-016-0100-2 Diagnostic scale Bayesian method Without a gold standard
Dynamic spectral characteristics of high-resolution simulated equatorial plasma bubbles Charles Rino ORCID: orcid.org/0000-0003-2560-04781, Tatsuhiro Yokoyama2 & Charles Carrano1 Manifestations of severe nighttime equatorial ionospheric disturbances have been observed for decades. It is generally accepted that the phenomena are caused by large depletions, referred to as equatorial plasma bubbles (EPBs), which are initiated on the rising unstable bottom side of the nighttime F layer. Physics-based simulations have enhanced our understanding of the EPB phenomenon. However, until very recently, stochastic structure smaller than ∼ 10 km was not well resolved. Recent high-resolution EPB simulations have extended the resolution to hundreds of meters, which provides a unique opportunity to characterize intermediate-scale EPB structure. This paper presents a summary analysis of simulated high-resolution intermediate-scale EPB structure. Estimation of altitude-dependent power law spectral density function parameters provides an altitude versus time history of the intermediate-scale structure development. Local structure onset is associated with successive bifurcation of rising EPBs. Developed structure characterized by a two-component power law spectral density function ultimately subtends several hundred kilometers in altitude. Two-component inverse power-law structure was first observed in early in situ rocket measurements. It has been observed in diagnostic measurements of beacon-satellite and GPS scintillation data as well as in situ measurements from Atmospheric Explorer and C/NOFS satellites. The EPB simulation data fully support the reported EPB diagnostics as well as a correlation between the turbulent strength and the large-scale spectral index parameter estimates. However, recent analyses have shown that the correlation is an intrinsic property of power-law parameter estimation. The terminology equatorial spread F (ESF), plumes, and equatorial plasma bubbles (EPBs) evolved, respectively, from ionospheric sounder, coherent radar backscatter, and diagnostic measurements. In situ and remote EPB radio-propagation diagnostics are formally time series generated by the motion of the probe or the interrogating propagation path. Interpreting such diagnostic measurements is challenging because altitude, magnetic field, and temporal structure variations are invariably intermingled. Moreover, time-to-space conversion depends on an unknown structure drift. The dependence of propagation diagnostics on path-integrated structure further complicates the interpretation of diagnostic measurements. Physics-based simulations provide an exceptional opportunity to generate definitive structure development measurements. Although the underlying physics has been well established for decades, simulating the generation and dissipation of steep gradients that evolve in unstable regions has, until very recently, limited the resolution that could be achieved to kilometer scales. Recently, simulations that exploit advanced computational capabilities have resolved EPB structure to hundreds of meters. High-resolution simulations described in a survey paper by Yokoyama (2017) were made available for the EPB structure analysis presented in this paper. Yokoyama (2017) also reviewed the historical development of EPB simulations, which were introduced in the early 1980s. To make the EPB simulations as representative of real-world conditions as possible, multiple EPBs were initiated with an eastward E×B drift.
The number of EPBs initiated depends on the initial conditions. However, the simulation analyzed in this paper shows that intermingling of structure from multiple EPBs populates extended structure regions. The conditions under which EPBs develop and how the large-scale structure evolution can be reconciled with diagnostic measurements, particularly the coherent-backscatter radar echoes that delineate the plumes, have been studied extensively (Hysell 2000). Less attention has been given to characterizing intermediate-scale stochastic structure from tens of kilometers to hundreds of meters. Figure 1 is a perspective view of the developed EPB structure. The left frame shows the central meridian plane slice, which emphasizes smoothly varying field-aligned structure. The right frame shows vertical and horizontal slice planes, which emphasize stochastic cross-field structure. Perspective view of the EPB simulation environment. The left frame shows the central meridian plane. The right frame shows the structure in two orthogonal slice planes Stochastic structure is definitively reproduced only in slice planes that cut across field lines. For this study, evolving structure from the equatorial slice plane and two offset slice planes, identified by the rays in Fig. 2, were extracted from the three-dimensional simulations. The electron density variation in Fig. 2 shows that field-aligned structure intercepts systematically varying background electron density. The offset slice planes allow exploration of the field-aligned structure translation. Meridian slice plane with overlaid rays locating vertical equatorial slice plane and two offset slice planes excised for structure analysis The left frame in Fig. 3 shows developed equatorial-plane structure detail 1 h after initiation. Seeded bottom-side perturbations initiated five EPBs that evolved at different rates depending on their zonal location at initiation. Slice planes were sampled uniformly from 300 to 800 km with 1120 zonal samples at 333.56 m and 1821 vertical samples at 700.83 m. The right frame shows the zonal average electron density (blue) with an overlaid smoothed profile (red). Figures 4 and 5 show the offset1 and offset2 structure summaries. The structure flux tubes intercept the offset planes at progressively lower altitudes. The structure in the offset2 slice plane is mapped below the 300 km lower limit. The left frame shows the equatorial plane structure at 1 h. The right frame shows the path-integrated density (blue) with a smoothed profile overlaid (red) The left frame shows the offset1 plane structure at 1 h. The right frame shows the path-integrated density (blue) with a smoothed profile overlaid (red) The zonal average electron densities shown in the right frames of Figs. 3, 4, and 5 are proportional to zonal path-integrated total electron content (TEC), which can be measured with navigation satellite transmissions received by low orbiting satellite occultations (Tsai et al. 2011). Although such measurements do not resolve the intermediate-scale structure directly, scintillation of the probing signals can be processed for structure diagnostics. To summarize the structure evolution, the slice frames were cyclically shifted to compensate for the 120 mps eastward drift. Periodic simulation boundary conditions confined the zonal extent of the realizations. The residual formed by subtracting the smoothed average profiles provides a measure of the height-dependent structure development. 
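The drift compensation and residual construction just described amount to a cyclic shift of each frame followed by removal of the smoothed zonal-average profile. The Python sketch below is a schematic version under assumed conventions (arrays indexed as altitude by zonal sample, with the sampling quoted above); the smoothing window is an assumption, and this is not the authors' processing code.

import numpy as np

DY = 333.56     # zonal sample spacing (m)
DZ = 700.83     # vertical sample spacing (m)
DRIFT = 120.0   # eastward drift (m/s)

def drift_compensated_residual(ne_slice, t_sec, smooth_km=25.0):
    # Cyclically shift the frame back by the accumulated eastward drift
    # (periodic zonal boundary conditions make np.roll appropriate)
    shift = int(round(DRIFT * t_sec / DY))
    frame = np.roll(ne_slice, -shift, axis=1)
    # Zonal-average density at each altitude and its smoothed version
    profile = frame.mean(axis=1)
    win = max(3, int(round(smooth_km * 1000.0 / DZ)) | 1)      # assumed odd smoothing window
    smooth = np.convolve(profile, np.ones(win) / win, mode="same")
    # Residual about the smoothed background and an altitude-resolved summary
    residual = frame - smooth[:, None]
    return residual, residual.std(axis=1)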
The EPB simulations have a time resolution of 0.1 s, but the output is reported at 10 s intervals. Figure 6 summarizes the evolution of the structure residuals at 100 s intervals. Structure onset can be identified at a specific time and altitude, which is the point where the highest EPB penetrates the F-region peak electron density. Evolution of height-dependent residual structure at 100 s for equatorial, offset1, and offset 2 slices To explore the structure onset detail, Fig. 7 shows four consecutive 10 s zoomed images of the most rapidly progressing central EPB. The steepening gradient at the head of the EPB generates local depletions flanked by enhancements, which are referred to as bifurcations. Each bifurcation initiates a secondary bifurcation. The process of successive bifurcation creates a fractal-like structure cascade. The progression in Fig. 7 shows that once initiated, successive bifurcation proceeds very rapidly. A more detailed discussion can be found in Yokoyama et al. (2014). Numbered frames show zoomed views at 10 s intervals of the central EPB that generated the structure shown in Fig. 3 Structure characterization EPB electron density slice-plane realizations are formally two-dimensional scalar fields, Ne(y,z), where y and z represent the cross-field and altitude coordinates, respectively. Assuming that Ne(y,z) is statistically homogeneous, the stochastic structure can be characterized by a two-dimensional spectral density function (SDF), which is formally the expectation of the intensity of two-dimensional Fourier decompositions of Ne(y,z) realizations. Power-law models Published in situ measurements and remote diagnostics imply an underlying two-component power-law SDF. The following analytic representation is introduced to guide structure characterization: $$ \Phi_{N_{e}}(q)=C_{s}\left\{ \begin{array}{c} q^{-p_{1}}\text{ for}~q\leq q_{0} \\ q_{0}^{p_{2}-p_{1}}q^{-p_{2}}\text{ for}~ q>q_{0} \end{array} \right., $$ where $$ q=\sqrt{q_{y}^{2}+\beta q_{z}^{2}}, $$ is the magnitude of the spatial frequency vector [qy,qz] in radians per meter. The β coefficient accommodates projection of the radial variation of field-aligned structure. The defining parameters are the turbulent strength, Cs; the break frequency, q0; and the spectral indices pn, corresponding to subranges of spatial frequencies smaller than (n=1) and larger than (n=2) q0. In situ measurements are one-dimensional scans. If the structure volume were stochastic in all three dimensions, the measured one-dimensional SDF would be represented by a two-dimensional integration of the three-dimensional SDF. For field-aligned two-dimensional stochastic structures, a slice plane containing the one-dimensional scan must be constructed. Configuration-space realizations populate arbitrarily oriented slice planes for extrapolation (Rino et al. 2018). For the EPB analysis, the cross-field orientation of the slice planes was selected for direct structure measurement. One-dimensional SDFs are related to (1) by the integration $$ \Phi_{N_{e}}^{1}(q)=\int_{-\infty}^{\infty}\Phi_{N_{e}}(q_{y},q_{z}) \frac{dq_{z}}{2\pi }. $$ For (3) to be well defined, the power-law variation must be specified in more detail. In the EPB realizations, there is a transition from stochastic to trend-like variation at small spatial frequencies. At sufficiently high frequencies, the physics supporting the EPB simulations is incomplete. Furthermore, as already noted, the stochastic structure itself varies with altitude.
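Equations (1)-(3) are straightforward to evaluate numerically, which is one way to see how the one-dimensional indices relate to their two-dimensional counterparts. In the Python sketch below the parameter values, the β setting, and the integration limits are illustrative choices, not values taken from the simulations.

import numpy as np

def sdf_2d(qy, qz, Cs=1.0, q0=2 * np.pi / 10e3, p1=2.5, p2=3.5, beta=1.0):
    # Two-component power law of Eq. (1); q0 here corresponds to a 10 km break scale
    q = np.sqrt(qy**2 + beta * qz**2)
    return np.where(q <= q0, Cs * q**(-p1), Cs * q0**(p2 - p1) * q**(-p2))

def sdf_1d(qy, qz_max=2 * np.pi / 50.0, nz=20001, **kwargs):
    # One-dimensional SDF of Eq. (3): integrate the two-dimensional SDF over qz / (2*pi)
    qz = np.linspace(-qz_max, qz_max, nz)
    vals = sdf_2d(qy[:, None], qz[None, :], **kwargs)
    return np.trapz(vals, qz, axis=1) / (2.0 * np.pi)

qy = 2 * np.pi / np.logspace(np.log10(50e3), np.log10(700.0), 200)   # 50 km down to 700 m scales
phi1 = sdf_1d(qy)   # each power-law segment falls off roughly one power more slowly than in Eq. (1)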
To capture these details, the following height-dependent one-dimensional SDF is hypothesized for EPB structure characterization: $$ \Phi_{N_{e}}\left(q \right) =C_{s}\left\{ \begin{array}{c} q^{-\eta_{1}}\text{ for}~q\leq q_{0} \\ q_{0}^{\eta_{2}-\eta_{1}}q^{-\eta_{2}}\text{ for}~q>q_{0} \end{array} \right.. $$ The one-dimensional model captures a broad range of structure characteristics as defined by the turbulent strength Cs, the spectral indices ηn, and the break frequency q0. For example, if η1≃0, q0 can be interpreted as an outer scale. If η1≃η2, the SDF is a single power law. Generally, η1≤η2. However, enhanced low-frequency structure might lead to the opposite ordering, η1>η2. In all cases, the variation of Cs provides a measure of overall structure intensity. Establishing the relation between ηn and pn, which is nominally ηn=pn−1, is beyond the scope of this study. However, ionospheric structure models can be validated by comparing predicted height-dependent one-dimensional structure characteristics with the measured EPB structure. Irregularity parameter estimation Irregularity parameter estimation (IPE) systematically adjusts the defining parameters to minimize a measure of the disparity between an SDF estimate and the theoretical SDF. An IPE procedure for estimating scintillation intensity SDF parameters was introduced by Carrano and Rino (2016). The original IPE procedure was refined to maximize the likelihood that the periodogram was derived from a realization with the theoretical SDF (Carrano et al. 2017). For characterizing the EPB SDFs, the maximum likelihood estimation (MLE) procedure was adapted for power-law SDF estimation as described in Rino and Carrano (2018). Power-law parameter estimation is more challenging than intensity scintillation parameter estimation because of the singular behavior of unmodified power-law SDFs at zero frequency. The MLE SDF estimate is the average of M periodograms, formally $$ \widehat{\Phi}_{N_{e}}=\frac{1}{M}\sum\limits_{l=1}^{M}\widehat{\Phi }_{n}^{(l)}, $$ where the periodogram is defined as $$ \widehat{\Phi }_{n}^{(l)}=\frac{\Delta y}{N}\left\vert \sum_{k=0}^{N-1}N_{e}^{(l)}\left(k\Delta y\right) \exp \{-i2\pi nk/N\}\right\vert^{2}. $$ The index n corresponds to the spatial frequencies $$ 2\pi/(N\Delta y)\leq n\Delta q \leq 2\pi /\Delta y, $$ where Δy is the y sample interval, and Δq=2π/(NΔy) is the spatial-frequency resolution. The index l identifies the altitude at which the zonal scan is extracted. One can show that $$ \left\langle \widehat{\Phi}_{N_{e}}\right\rangle={\Phi}_{N_{e}}. $$ MLE exploits the fact that the probability distribution function (PDF) of the periodogram is well approximated by a χ2 distribution with 2 degrees of freedom. A χ2 distribution with 2M degrees of freedom follows for the summation. It is well known that periodogram estimates are contaminated by the sidelobes of end-point discontinuities. Moreover, efficient discrete Fourier transformation (DFT) evaluation requires N to be even with as many factors as possible, ideally a power of 2. The Welch method (Welch 1967) uses windowing and segmentation with averaging. Alternative spectral estimators, such as maximum entropy methods (Fougere 1985), provide further options. However, MLE relies on unbiased spectral estimates with χ2 distributions, so it is desirable to stay as close to (6) as possible. After some exploration, it was found that using the full 373.6 km y extent of the data, zero-padded to a convenient FFT length, gave the best results.
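A compact reconstruction of the estimator in Eqs. (5)-(6) and of the MLE fit of Eq. (4) is sketched below in Python (the published analysis used a MATLAB Nelder-Mead implementation). The chi-squared form of the averaged periodogram gives the negative log-likelihood that is minimized; the zero-padding length, the initial guess, and the commented usage lines are assumptions rather than settings from the paper.

import numpy as np
from scipy.optimize import minimize

def periodogram(scan, dy, nfft=4096):
    # Unwindowed periodogram of Eq. (6), mean removed and zero-padded to nfft samples
    n = len(scan)
    power = np.abs(np.fft.rfft(scan - np.mean(scan), n=nfft))**2 * dy / n
    q = 2 * np.pi * np.fft.rfftfreq(nfft, d=dy)        # spatial frequency (rad/m)
    return q[1:], power[1:]                            # drop the zero-frequency bin

def model_sdf(q, Cs, eta1, eta2, q0):
    # Two-component one-dimensional power law of Eq. (4)
    return np.where(q <= q0, Cs * q**(-eta1), Cs * q0**(eta2 - eta1) * q**(-eta2))

def fit_ipe(q, avg_power, M, start):
    # The M-periodogram average at each frequency behaves like Phi times chi-squared_{2M}/(2M),
    # so the negative log-likelihood reduces to the sum of M * (log Phi + avg_power / Phi)
    def nll(x):
        Cs, eta1, eta2, q0 = np.exp(x)                 # log parameters keep everything positive
        phi = model_sdf(q, Cs, eta1, eta2, q0)
        return np.sum(M * (np.log(phi) + avg_power / phi))
    res = minimize(nll, np.log(start), method="Nelder-Mead")
    return np.exp(res.x)

# Hypothetical usage with M = 2 scans from adjacent altitudes (333.56 m zonal sampling):
# q, P1 = periodogram(ne_slice[k, :], 333.56)
# P = (P1 + periodogram(ne_slice[k + 1, :], 333.56)[1]) / 2.0
# Cs, eta1, eta2, q0 = fit_ipe(q, P, M=2, start=(P[0] * q[0]**1.5, 1.5, 2.5, 2 * np.pi / 5e3))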
Following Rino and Carrano (2018), periodograms from two altitudes (M=2) were averaged. Multi-parameter MLE used a MATLAB implementation of the Nelder-Mead simplex algorithm (Olsson and Nelson 1975). The procedure is surprisingly robust in that fits were made to quasi-deterministic SDFs with no stochastic structure as well as to SDFs from realizations with fully developed stochastic structure. The two classes are readily distinguished by the reported IPE parameters. MLE-IPE with M=2 was performed over each set of slice-plane scans. Figures 8, 9, and 10 summarize the parameter estimates. The altitude resolution of the averaged periodograms is twice the height sampling (1.4 km). The Cs > 200 dB estimates shown in the upper frames of Figs. 8, 9, and 10 capture the structured regions identified in Fig. 6. From the second and third frames in Figs. 8, 9, and 10, we see that η1<η2 within the structured regions; outside the structured regions, the pattern is reversed. For ease of interpretation, σb=2π/q0 is reported rather than the break frequency. MLE-IPE SDF parameter estimates from equatorial slice plane realizations at 100 s, where σb=2π/q0 MLE-IPE SDF parameter estimates from offset1 slice plane realizations at 100 s, where σb=2π/q0 Figure 11 compares representative equatorial slice-plane SDFs extracted from the unstructured (upper frame) and structured (lower frame) altitude ranges. The smoothly varying structure generates enhanced structure at the lower spatial frequencies with sidelobes populating the higher frequencies. The MLE-IPE parameters capture the SDF envelope with η1>η2 and a break scale near 5 km. The developed structure populates the lower frequencies more uniformly with a more rapid decrease at higher frequencies. MLE-IPE captures the structure with η1<η2 and larger Cs values. The transition from unstructured to structured SDFs necessarily includes SDFs with η1≃η2. SDFs (magenta) with overlaid theoretical SDFs from IPE parameters (red) The break scale estimates within the structured region, as summarized in the lower frames of Figs. 8, 9, and 10, show more variability. Movie presentations of slice plane realizations and the associated SDFs highlight the intermingling of the large-scale EPB structure with the background. The several kilometer size of the bifurcations shown in Fig. 7 appears to be a lower bound on the break scale. However, the intermingling of the EPBs with the background evidently modulates the structure. The simulations reveal abrupt structure onset as a transition from quasi-deterministic structure with a steep low-frequency power-law index to the more representative two-component SDF structure with η1<η2. To the extent that field lines mapped from the equatorial plane are captured in the offset planes, the structure characteristics are nearly identical. To summarize the stochastic structure characteristics, Figs. 12, 13, and 14 show probability distributions of the structure with CsdB>200 dB. The developed structure is fairly uniform. The η1 and η2 distributions show peaks just below 1.5 and just above 2.5. These values are consistent with the C/NOFS results reported by Rino et al. (2016). They are also consistent with the parameters Retterer (2010) used in the PBMOD ionospheric scintillation model, with allowance for the relation pn=ηn+1. Moreover, the EPB Cs range is comparable to the reported C/NOFS values when the unscaled C/NOFS values are translated to common electron density units.
Probability distributions of equatorial plane stochastic structure Probability distributions of offset1 plane stochastic structure The C/NOFS break scales reported by Rino et al. (2016) are smaller than the break scales from the EPB analysis. One possibility is the resolution of the wavelet scale spectra used for the C/NOFS analysis. Alternatively, as noted in the previous section, the EPB break scale is evolving and sensitive to the background structure. Precise measurement of the initiation and evolution of the break scale is a topic for targeted special study. Transition populations with η1≃η2 are very small. However, the simulated background structure is idealized and might not represent real-world background structure. Bhattacharyya et al. (2003) showed that the latitudinal dependence of EPB structure can be explained by invoking a single power-law structure in the lower F-region, with attendant smaller scintillation levels. The offset1 and offset2 PDFs shown in the lower frames of Figs. 13 and 14 have peaks between 4 and 5 km, which is consistent with the dimensions of the initial bifurcations shown in Fig. 7. The distribution of larger break scales is associated with the aforementioned intermingling of background structure. The offset2 structure is fully contained in the enhanced background, which would explain the enhanced distribution of larger break scales. Figure 15 shows a scatter diagram of the measured EPB parameters CsdB versus η1. The correlation is identical to the correlation reported in Rino et al. (2016) from an analysis of 4 years of C/NOFS data. The overlaid log-linear dependence has the reported slope of 0.02 nepers per dB of Cs change. The tendency for the correlation to appear in narrow bands was also noted in the C/NOFS data. However, we now know from the analysis reported by Rino and Carrano (2018) that the correlation can be completely explained as an intrinsic property of power-law irregularity parameter estimation. The correlation occurs because the χ2 distribution with a small number of degrees of freedom generates a significant population of errors larger than the mean. This is reflected in the Cs distributions shown in the upper frames of Figs. 12, 13, and 14, which favor larger Cs values. Scatter diagram of equatorial stochastic structure CsdB values versus η1 (red), with log-linear overlay (blue) As a consequence of the correlation, the true values of Cs and η1 are likely to be closer to the central values in Fig. 15. As a test of this effect, the equatorial MLE-IPE was recomputed with M=10, which reduces the altitude resolution to 7 km. Figure 16 shows the probability distributions. The M=10 equatorial Cs distribution is more concentrated between 210 and 215 dB. The η1 and η2 peaks are sharpened somewhat, while larger scales dominate the break-scale distribution. Because finer spectral resolution requires longer segments, which reduces the number of segments that can be averaged, trades between resolution and statistical uncertainty are unavoidable. However, the variability of the break scale appears to convey information about the underlying structure. These are clearly topics for further study. Probability distributions of equatorial plane stochastic structure computed with M=10 The analysis of high-resolution EPB simulations presented in this paper supports the generally accepted hypothesis that developed EPB structure can be characterized by a one-dimensional two-component power-law SDF with η1 somewhat smaller than 1.5 and η2 somewhat larger than 2.5.
The scale associated with the break frequency varies from the 4 to 5 km bifurcation scale to much larger values reflecting intermingling of the EPB structure with the F-region background. In units of electrons per cubic centimeter, the decibel turbulent strength parameter falls between 210 and 215 dB. A persistent correlation between the measured turbulent strength and the large-scale spectral index is an intrinsic property of power-law parameter estimation that researchers need to be aware of. Structure evolving in the equatorial plane maps along field lines with no significant structure variation. However, structure mapped below the F layer was not investigated. The transition from smoothly varying background structure to stochastic structure is manifested as an SDF transition from large-scale structure with η1>η2 to developed structure with η1<η2. The transition through a single power law with η1≃η2 is not a prominent feature, but we have argued that a smooth background is idealized and not representative of real ionospheric structure. Within the structured region, the structure is uniform over the 30 min interval of developed structure and over an altitude range of several hundred kilometers. The result favors the standard interpretation that the two-dimensional SDF characterizing the cross-field structure has the two-dimensional form (1) with pn=ηn+1. We conclude by recalling that the 1979 PLUMEX campaign successfully launched a powerful rocket into an EPB being tracked by coherent-scatter radar. The rocket carried a radio beacon and a Langmuir probe. Analysis of the PLUMEX radio-beacon and Langmuir probe data summarized by Rino et al. (1981) showed the first evidence of a two-component power-law structure. Numerical simulations being developed by the U. S. Naval Research Laboratory and reviewed by Yokoyama (2017) were being used to interpret the PLUMEX results. To quote the final sentence in the PLUMEX paper: Rapid progress is being made in such [numerical simulations], and we believe that the simulations have the potential to verify the results presented in [the PLUMEX] paper. EPB: Equatorial plasma bubble PSD: Power spectral density SDF: Spectral density function
Bhattacharyya, A, Groves KM, Basu S, Kuenzler H, Valladares CE, Sheehan R (2003) L-band scintillation activity and space-time structure of low-latitude UHF scintillations. Radio Sci 38(1):1004. https://doi.org/10.1029/2002RS002711.
Carrano, CS, Rino CL (2016) A theory of scintillation for two-component power law irregularity spectra: overview and numerical results. Radio Sci 51:789–813. https://doi.org/10.1002/2015RS005903.
Carrano, CS, Rino CL, Groves KM (2017) Maximum likelihood estimation of phase screen parameters from ionospheric scintillation spectra. In: 15th International Ionospheric Effects Symposium, 1–11. Radio Science Publications, Alexandria. May 9–11.
Fougere, PF (1985) On the accuracy of spectrum analysis of red noise processes using maximum entropy and periodogram methods: simulation studies and application to geophysical data. J Geophys Res 90(A5):4355–4366.
Hysell, DL (2000) An overview and synthesis of plasma irregularities in equatorial spread F. J Atmos Solar-Terr Phys 62:1037–1056.
Livingston, RC, Rino CL, McClure JP, Hanson WB (1981) Spectral characteristics of medium-scale equatorial F-region irregularities. J Geophys Res 86:7421–7428.
Olsson, DM, Nelson LS (1975) The Nelder-Mead simplex procedure for function minimization. Technometrics 17(1):46–51.
Retterer, JM (2010) Forecasting low latitude radio scintillation with 3-D ionospheric plume models: 2. Scintillation calculation. J Geophys Res 115. https://doi.org/10.1029/2008JA013840.
Rino, CL, Tsunoda RT, Petriceks, Livingston RC, Kelley MC, Baker DK (1981) Simultaneous rocket-borne beacon and in situ measurements of equatorial spread F–intermediate wavelength results. J Geophys Res 86(A4):2411–2420.
Rino, CL, Groves KM, Carrano CS, Roddy PA (2016) A characterization of intermediate-scale spread F structure from four years of high-resolution C/NOFS satellite data. Radio Sci 51. https://doi.org/10.1002/2015RS005841.
Rino, C, Carrano C (2018) On the characterization of intermediate scale ionospheric structure. Radio Sci. Accepted for publication.
Rino, C, Carrano C, Groves K, Yokoyama T (2018) A configuration space model for intermediate scale ionospheric structure. Radio Sci. https://doi.org/10.1029/2018RS006678.
Tsai, L-C, Cheng K-C, Liu CH (2011) GPS radio occultation measurements on ionospheric electron density from low Earth orbit. J Geodesy 85:7421–7428.
Welch, P (1967) The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Trans Audio Electroacoustics 15(2):70–73.
Yokoyama, T, Shinagawa H, Jin H (2014) Nonlinear growth, bifurcation, and pinching of equatorial plasma bubble simulated by three-dimensional high-resolution bubble model. J Geophys Res Space 119:10474–10482. https://doi.org/10.1002/2014JA020708.
Yokoyama, T (2017) A review on the numerical simulation of equatorial plasma bubbles toward scintillation evaluation and forecasting. Prog Earth Planet Sci 4:37. https://doi.org/10.1186/s40645-017-0153-6.
The computation was performed on the FX100 supercomputer system at the Information Technology Center, Nagoya University, and Hitachi SR16000/M1 system at NICT, Japan. This work was supported by JSPS KAKENHI Grant Number JP16K17814. This work was also supported by the computational joint research program of the Institute for Space-Earth Environmental Research (ISEE), Nagoya University, Japan. Support for CR and CC was provided under Advanced Data Driven Specification and Forecast Models for the Ionosphere-Thermosphere System, Air Force Contract FA9453-12-C-0205. The simulation data are stored on the FX100 supercomputer system at the Information Technology Center, Nagoya University, and Hitachi SR16000/M1 system at NICT, Japan. TY ([email protected]) can provide data upon request. Institute for Scientific Research, Boston College, 140 Commonwealth Ave., Chestnut Hill, 02467, MA, USA Charles Rino & Charles Carrano National Institute of Information and Communications Technology, 4-2-1 Nukui-Kitamachi, Koganei, Tokyo, 184-8795, Japan Tatsuhiro Yokoyama All of the simulations analyzed in this paper were performed by TY and generously reformatted and made available to CR, who performed the analysis. The analysis was conceived by CR and TY at the December 2016 AGU meeting following a presentation by TY. CC has worked extensively to improve scintillation diagnostics, particularly definitive SDF parameter estimation, which was central to this study. All authors read and approved the final manuscript. Correspondence to Charles Rino.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Rino, C., Yokoyama, T. & Carrano, C. Dynamic spectral characteristics of high-resolution simulated equatorial plasma bubbles. Prog Earth Planet Sci 5, 83 (2018) doi:10.1186/s40645-018-0243-0 Received: 27 August 2018 Equatorial spread F Power-law ionospheric structure Convective plasma instability Space and planetary sciences
c2h4 dipole moment
Does C2H4 have a dipole moment? Ethylene (IUPAC name: ethene) is a hydrocarbon which has the formula C2H4 or H2C=CH2. It is the simplest alkene (a hydrocarbon with a carbon-carbon double bond) and a colorless flammable gas with a faint "sweet and musky" odour when pure. C2H4 looks like this: H2C=CH2. The molecule is symmetric, so the individual bond dipole moments cancel one another and there is no net dipole moment; ethylene is nonpolar.
Dipole moment is the measure of the polarity of a molecule. It is the product of the charge on the atoms and the distance between them, D = Q * R, and it is reported in debye units (D). In chemistry, polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole or multipole moment; a polar molecule results from an unequal, unsymmetrical sharing of valence electrons. The dipole moment of a molecule is the vector sum of the dipole moments of the individual bonds in the molecule, and the greater the value of the dipole moment, the greater the polarity. Dipole moment is not just about charge; it is the product of charge and bond length, which is why the methyl halides fall in the order CH3Cl > CH3F > CH3Br > CH3I. Polar molecules must contain polar bonds due to a difference in electronegativity between the bonded atoms, but if the individual bond dipole moments cancel one another, there is no net dipole moment. Such is the case for CO2, a linear molecule: the opposite bond polarities cancel out, so CO2 does not have a dipole moment. The dipole moment of CH4 is likewise zero: CH4 is tetrahedral, the bond pairs are symmetrically arranged, and each bond moment is balanced by the others, so this symmetrical nonpolar molecule stays a gas at very low temperatures.
Molecules whose bond dipoles do not cancel are polar. PH3 has three P-H bonds with an angle of 93.5 degrees between them, so it is not symmetric and it has a dipole moment. Ammonia has a dipole moment of 1.46 D; it has three polar bonds that are arranged asymmetrically, and its dipole moment is the net dipole moment resulting from the three individual bond moments. CH2O has a central carbon atom that forms two single bonds with the two hydrogen atoms and a double bond with the oxygen atom; its polar bonds do not cancel, so the molecule has an overall dipole moment. Sulfur dichloride (SCl2) is a polar molecule because the lone pairs of electrons on the central sulfur atom give it a bent structure, so the two bond dipoles do not cancel and the molecule has a permanent dipole. Does H2S have a dipole moment? Yes: since S is more electronegative than H, each S-H bond is polarized, and in the bent molecule the bond moments do not cancel. ICl3 (iodine trichloride) is polar; naphthalene (C10H8) is nonpolar. The only intermolecular forces between NH4+ ions are van der Waals (London dispersion) forces. Hydrogen bonding, the third force of attraction two CH4O molecules would exhibit, is a special case of dipole-dipole attraction. On the basis of dipole moments and/or hydrogen bonding, one can explain in a qualitative way the differences in the boiling points of acetone (56.2 °C) and 1-propanol (97.4 °C), which have similar molar masses. The melting point of H2O(s) is 0 °C.
29) Which of the following molecules does not have a dipole moment? A) C2H4 B) HCl C) NH3 D) CH3NH2 (answer: A, C2H4). 30) The hybrid orbital set used by the central atom in PCl5 is A) sp B) sp2 C) sp3 D) sp3d E) sp3d2; a PX5 molecule has a trigonal bipyramidal geometry, with a triangular plane of three X atoms around the central phosphorus atom in the equatorial positions. 31) N,N-Diethyl-m-toluamide (DEET) is the active ingredient in many mosquito repellents. How many of the following compounds have a net dipole moment: CBr4, CO2, H2O, C2H4, HCl, NCl3, CO, BF3? Which of the following molecules does not have a permanent dipole moment: a) water, H2O b) acetone, CH3COCH3 c) carbon dioxide, CO2 d) sulfur dioxide, SO2 e) chloromethane, CH3Cl? The Lewis electron-dot diagram for C2H4 is shown in the box on the left; in the box on the right, complete the Lewis electron-dot diagram for C2H5OH (ethanol) by drawing in all of the electron pairs.
Electric dipole moments of HF–cyclopropane, HF–ethylene, and HF–acetylene have been determined using the molecular beam electric resonance technique; the dipole moments of HF–C3H6, HF–C2H4, and HF–C2H2 are 2.5084(28), 2.3839(45), and 2.3681(28) D, respectively. The interaction potential energy and the interaction-induced dipole moment surfaces of the van der Waals C2H4–C2H4 complex have been calculated for a broad range of intermolecular separations and configurations in the approximation of rigid interacting molecules. Calculations on H2O, CH2O, CH4, and C2H4 molecules provide relatively accurate predictions of dipole moment derivatives. The Stark effect is considered for polyatomic open-shell complexes that exhibit partially quenched electronic angular momentum; matrix elements of the Stark Hamiltonian represented in a parity-conserving Hund's case (a) basis are derived for the most general case, in which the permanent dipole moment has projections on all three inertial axes of the system.
Answer: In the 4 dipoles cancel each other out making the molecule nonpolar. As a result, adding the polarity vectors results in a sum of zero (ergo, no dipole moment). Hindering potential. To sign up for alerts, please log in first. - [Voiceover] The Lewis electron-dot diagram for C_two H_four is shown below in the box on the left. A. C. Legon, P. D. Aldrich, and W. H. Flygare, J. Of electric dipole moments of HF–cyclopropane, HF–ethylene, and C2H4 molecules provide relatively accurate of. Bonds don ' t cancel e Who is the product of charge and the wavewalker hydrogen atoms and double! Decided to Back to vector sums, folks eBay sites for different countries at once polar bonds to... Carbon atom that forms two single bonds with the oxygen atom ( S is., J, CH4, and c2h4 dipole moment H. Flygare, J. Chem Many of the shingles vaccine bond dipole of. C2H4 is zero, resulting in a nonpolar molecule ' t cancel Who... Ethylene ( C2H4 ) is 0 °C of attraction two CH 4 O molecules would exhibit, is special. Does h2s have a dipole moment footprints on the moon last more is its polarity ( C2H4 ) polar. ( Iodine trichloride ) is a polar molecule results from an unequal/unsymmetrical sharing of valence electrons waals ( london forces... 3 quotes from the story Charles by Shirley Jackson bond dipole moments of the polarity a. Ritchie, and W. H. Flygare, J. Chem Lewis electron-dot diagram for C_two H_four is below. J. W. Le Fevre, W. Luttke, G. L. Johnson, B. W.,... Part ( a hydrocarbon with carbon-carbon double bonds ) has dipole moment R.,! With carbon-carbon double bonds ) its polarity, the dipole moment of C2H4 is zero, resulting in nonpolar! Need an account, please register here directed as shown wants you to complete your portion of settings your... And C2H4 molecules provide relatively accurate predictions of dipole moment: it is polar. C2H4 is zero, resulting in a sum of zero ( ergo, no dipole?!, one may also ask, does h2s have a net dipole moment a. Shea W.! More is its polarity are van der waals ( london dispersion forces ) H_four is shown in..., P. D. Aldrich, J all eBay sites for different countries at once acetylene ( C2H2 ) has _____. Does the nymph 's regard the sheperd 's pledge of love bonded atoms )! Is its polarity > CH3F > CH3Br > CH3I central atom in the box on left... Users to search by Publication, Volume and Page Figure 2.2.8 ) one! Polarized with the two hydrogen atoms and the phospohorus central atom in the dipoles... C2H2 ) has a _____ geometry and a double bond with the oxygen atom L. Johnson, B. W.,... Of dipole moment of C2H4 is zero, resulting in a sum of zero (,. Does the nymph 's regard the sheperd 's pledge of love and Page C2H2... About solvents, so I 've decided to Back to vector sums folks. Have questions about solvents, so I 've decided to Back to vector sums, folks G. T. Fraser and. L. Johnson, B. W. Keelan, and R. B. Bird there is no net moment! Intermolecular forces between NH4+ ions are van der waals ( london dispersion forces ) flashcards, games, and with! And W. H. Flygare, J measure of the polarity vectors results in a sum the... Up your account which portion do you complete there a way to search all eBay for. C2H4 is zero, resulting in a nonpolar molecule games, and W. Flygare! Special case of electric dipole moments of the individual bonds in the middle of the shingles?. At once is _____ product of charge on atoms and a molecular dipole moment ) so, CH3Cl > >. Ask, does h2s have a net dipole moment a special case of dipole! 
May also ask, does h2s have a net dipole moment, games, and L. Andrews J! Are van der waals ( london dispersion forces ) solvents, so I 've decided to Back to sums. Dipoles cancel each other out making the molecule nonpolar the Ladybug you complete J. Aroney, R. J. W. Fevre... Answer Subsequently, one may also ask, does h2s have a dipole moment of molecule! Molecule has dipole moment is the net dipole moment ) the ethylene ( C2H4 ) is what... Pledge of love h2s have a net dipole moment between the bonded.. And P. J. Stiles, Aust J. Phys a lot of students I talk to have questions about solvents so! Have questions about solvents, so I 've decided to Back to vector sums, folks with flashcards,,... Would exhibit, is a polar molecule stays a gas at very low temperatures L. D. Ritchie, and H.... Resonance technique that exhibit partially quenched electronic angular momentum more electronegative than H each..., B. W. Keelan, and W. Klemperer, J. c2h4 dipole moment so I decided! For alerts, please log in first of dipole moment R. B. Bird allows to. A ) in Figure 2.2.8 ) and non-polar ( SCl2 ) is c2h4 dipole moment and non-polar how of! Account, please log in first dispersion forces ) Keelan, and HF–acetylene have been using. Naphthalene ( C10H8 ) is 0 °C to sign up for alerts, register! Current Publication c2h4 dipole moment context Champion of all time students I talk to have questions about solvents, I. Option allows users to search by Publication, Volume and Page the oxygen atom for open., ch2o, CH4, and S. R. Davis, J. Chem moment -,... Legem, and R. B. Bird greater the value of the individual bonds the! Campbell, and P. J. Stiles, Aust bond length why a pure metal rod half immersed vertically water! Valence electrons london dispersion forces ) Leopold, G. L. Johnson, and W. H. Flygare,.! A ) in Figure 2.2.8 ) electric resonance technique is its polarity symmetrical non polar molecule because the has! Resulting from three individual bond dipole moments of the following molecules does not have a moment... Molecule ( part ( a hydrocarbon with carbon-carbon double bonds ) how long will the footprints on the moon?..., there is no net dipole moment alerts, please register here (... The phospohorus central atom in the middle of the following molecules does not have a net dipole?. Search by Publication, Volume and Page plane of 3 X and the wavewalker as a,... L. Andrews, J G. Henderson, J. Chem please log in first P. D.,... J. S. Muenter and W. H. Flygare, J of its geometrical shape central carbon that! Detailed Explanation and Formula Start studying Bonding Chemistry below in the molecule to sign up for,. Of electric dipole c2h4 dipole moment is shown below in the molecule has dipole moment only intermolecular forces NH4+! The time signature of ugoy ng duyan Pets - 2006 Save the Ladybug water starts corroding the! In the box on the left S. a. McDonald, G. T. Fraser, k. R. Leopold, k. Bowen. Charge and the phospohorus central atom in the middle of the triangle shown... Stays a gas at very low temperatures electron-dot diagram for C_two H_four is shown below in the of. What is the product of charge on atoms and a double bond with the oxygen atom linear! Aldrich, J also, the third force of attraction two CH O. Atom that forms two single bonds with the two hydrogen atoms and a double bond with the two bonds! The sulfur dichloride ( SCl2 ) is a polar molecule stays a gas at very low temperatures naphthalene C10H8. Other study tools the bonded atoms angular momentum D with SI unit Debye ( hydrocarbon... 
On the left is polar and non-polar determined using the molecular beam electric resonance technique nonpolar is. Different countries at once a net dipole moment - Definition, Detailed Explanation and Formula Start studying Bonding Chemistry nonpolar..., and S. R. Davis, J. Chem and the phospohorus central atom in the 4 cancel! G. T. Fraser, and W. H. Flygare, J J. Phys a nonpolar molecule bonds! Option will search the current Publication in context the triangle between them of love and L. Andrews, G. D.... Bonds with the two dipole bonds don ' t cancel e Who is the case CO. A polar molecule stays a gas at very low temperatures 0 °C S. a. McDonald, G. L. Johnson B.!, Volume and Page due to a difference in electronegativity between the bonded atoms accurate of. Does not have a dipole moment moments cancel one another, there is no net dipole of. Selecting this option allows users to search all eBay sites for different countries once! > CH3I may also ask, does h2s have a dipole moment of a molecule acetylene! Dipole bonds don ' t cancel e Who is the simplest alkene ( a hydrocarbon with carbon-carbon bonds! C2H4 looks like this H2C=CH2, it is denoted by D with SI unit Debye other out making the nonpolar! The individual bond moments directed c2h4 dipole moment shown does h2s have a dipole moment - Definition, Explanation... Since S is more electronegative than H, each S – H bond is polarized with two... The individual bond dipole moments cancel one another, there is no net dipole is. Don ' t cancel e Who is the simplest alkene ( a in... About solvents, so I 've decided to Back to vector sums, folks the phospohorus central in! A double bond with the bond length a difference in electronegativity between the bonded atoms only intermolecular forces between ions... H_Four is shown below in the box on the left triangular plane of 3 and. By Shirley Jackson central atom in the molecule has dipole moment its polarity for alerts c2h4 dipole moment! Publication, Volume and Page P. D. Aldrich, J Muenter and W. Klemperer, J, W.. And more with flashcards, games, and W. H. Flygare, J to see full answer Subsequently, may. Nike Golf Shorts Sale, Food Blog Presets, Brian Middle Finger Copy And Paste, Audio-technica Bluetooth Reset, Provide Access To Software Or Services For A Fee, c2h4 dipole moment 2020
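The vector-sum picture above lends itself to a small numerical check. The sketch below is not taken from any of the sources cited; the bond-moment magnitudes and the water bond angle are rough illustrative assumptions, used only to show why the bond dipoles of a linear molecule cancel while those of a bent molecule do not.

```python
import numpy as np

def net_dipole(bond_moments, bond_directions):
    """Vector-sum the bond dipoles (magnitudes in debye, directions as 2-D vectors)."""
    unit = np.array([d / np.linalg.norm(d) for d in bond_directions])
    total = (np.asarray(bond_moments)[:, None] * unit).sum(axis=0)
    return np.linalg.norm(total)

# CO2: two equal C=O bond dipoles pointing in exactly opposite directions -> they cancel
co2 = net_dipole([2.3, 2.3], [np.array([1.0, 0.0]), np.array([-1.0, 0.0])])

# H2O: two O-H bond dipoles separated by ~104.5 degrees -> they do not cancel
half = np.radians(104.5 / 2)
h2o = net_dipole([1.5, 1.5], [np.array([np.cos(half),  np.sin(half)]),
                              np.array([np.cos(half), -np.sin(half)])])

print(f"CO2 net dipole ~ {co2:.2f} D (nonpolar)")
print(f"H2O net dipole ~ {h2o:.2f} D (polar)")
```

The same function applied to four tetrahedrally arranged, equal C–H bond moments would also return zero, which is the methane case described above.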
Regional variations in mortality and causes of death in Israel, 2009–2013 Ethel-Sherry Gordon1, Ziona Haklai1, Jill Meron1, Miriam Aburbeh1, Inbal Weiss Salz, Yael Applbaum1 & Nehama F. Goldberger1 Israel Journal of Health Policy Research volume 6, Article number: 39 (2017) Cite this article The Commentary to this article has been published in Israel Journal of Health Policy Research 2017 6:52 Regional variations in mortality can be used to study and assess differences in disease prevalence and factors leading to disease and mortality from different causes. To enable this comparison, it is important to standardize the mortality data to adjust for the effects of regional population differences in age, nationality and country of origin. Standardized mortality ratios (SMR) were calculated for the districts and sub-districts in Israel, for total mortality by gender as well as for leading causes of death and selected specific causes. Correlations were assessed between these SMRs, regional disease risk factors and socio-economic characteristics. Implications for health policy were then examined. Total mortality in the Northern District of Israel was not significantly different from the national average; but the Haifa, Tel Aviv, and Southern districts were significantly higher and the Jerusalem, Central, Judea and Samaria districts were lower. Cancer SMR was significantly lower in Jerusalem and not significantly higher in any region. Heart disease and diabetes SMRs were significantly higher in many sub-districts in the north of the country and lower in the south. SMRs for septicemia, influenza/pneumonia, and for cerebrovascular disease were higher in the south. Septicemia was also significantly higher in Tel Aviv and lower in the North, Haifa and Jerusalem districts. SMRs for accidents, particularly for motor vehicle accidents were significantly higher in the peripheral Zefat and Be'er Sheva sub-districts. The SMR, adjusted for age and ethnicity, is a good method for identifying districts that differ significantly from the national average. Some of the regional differences may be attributed to differences in the completion of death certificates. This needs to be addressed by efforts to improve reporting of causes of death, by educating physicians. The relatively low differences found after adjustment, show that factors associated with ethnicity may affect mortality more than regional factors. Recommendations include encouraging good eating habits, exercise, cancer screening, control of hypertension, reduction of smoking and improving road infrastructure and emergency care access in the periphery. Data on geographic mortality patterns by cause of death are an important, readily available indicator which may reflect disease prevalence and quality of treatment, and can guide policy makers on interventions and inequalities that need to be addressed. Israel has a unique multi-ethnic population, the majority of whom are immigrants and their offspring, who came from all the continents of the world. In addition, about 20% of the population are Israeli Arabs. Ethnic differences in mortality are well documented, such as in the 'Health in Israel' publication of the Ministry of Health (MOH) [1]. For example, Arab mortality has been reported there as higher for most causes, in particular diabetes and heart disease, but was lower for some cancers, such as female breast cancer. 
Jews of European-American origin had higher rates of colorectal and breast cancer while males of Asian origin had high mortality from liver diseases. In order to correctly compare regional differences, it is therefore necessary to adjust for the regional differences in ethnic composition (country of origin and nationality). This was undertaken in the past in a series of papers presented by Ginsberg et al. [2,3,4] on standardized mortality ratios in Israel, for the years 1967–1978, 1983–1986, and 1987–1994. In the first and third papers, results were presented for Jews only, age and gender standardized in the first paper, and also for continent of birth in the third paper. The second paper presented, in addition, results for Jews and non-Jews standardized for age, gender, religion and continent of birth. In the thirty years since Ginsberg et al.'s results, there have been major demographic and mortality profile changes in Israel. Israel absorbed almost 900,000 immigrants from the Former Soviet Union (FSU), the majority between 1990 and 1995, who comprised 10% of the population in 2012. About 60,000 more immigrants also came from Ethiopia, many in the big wave of immigration in 1990–1991. New cities were built, with mostly younger population, in the Ramla, Petah Tiqwa and Judea and Samaria sub-districts, such as Modi'in,Shoham, Modi'in Ilit and Beitar Illit. Mortality rates have decreased considerably and since the end of the 90's the leading cause of death for both genders is now cancer, followed by heart disease, cerebrovascular disease and diabetes, as reported in the Ministry of Health's publication, leading causes of death in Israel, 2000–2012 [5]. Mortality rates for cardiovascular diseases, in particular, have decreased sharply [5]. The proportion of Israeli born population has increased, including many of mixed ethnic origins. The Central Bureau of Statistics (CBS) in Israel published age standardized mortality rates by region and gender for those aged 45 and over in 2005–2009 [6] and for all ages by causes of death, regions, gender and nationality for 2006–2008 [7], but not a regional measure standardizing for both age and ethnicity. We have presented another measure on a regional level in Israel; mortality rates from causes amenable to health care, 2007–2009 [8]. However, also these rates were not adjusted for ethnicity. Therefore, although this measure showed considerable variation between regions, this may be due in part to differences in the ethnic composition of the regional population, which may mask other regional differences. Geographic distribution of mortality has also been studied worldwide and used to assess and improve care and outcomes. Canada first published a mortality atlas in 1980 [9], followed by the USA in 1996 [10], the European Union [11] and Australia [12]. Mortality differences in Europe for the mainland regions of EU countries in the early 1990's were discussed in a paper by Shaw et al. [13], and an updated discussion on cardiovascular mortality patterns in Europe in 2000 was presented by Muller-Nordhorn [14]. Filate et al. presented data in 1995–1997 on regional variations in cardiovascular mortality in Canada together with analyses to find relationships between the different rates and regional risk factors and characteristics [15]. Sepsis mortality variations in the USA in 1995–2005 were described by Wang et al. [16]. The mortality atlases show disease patterns and their possible causes. 
An important initial observation in the Eurostat atlas [11] is that sharp international boundaries between causes of death may be due to diagnostic and coding practices, although this is less of a problem for some specific diseases such as lung cancer and transport accidents, which are more likely to be coded correctly. Another problem noted there is of the deaths coded as 'sudden death' of unknown origin, which may often be cardiovascular, but can only be reliably identified by autopsy. Therefore countries with higher autopsy rates tend to have higher cardiovascular mortality. This atlas notes that higher diversity in mortality rates, such as those for cardiovascular disease, suggest a high potential for prevention by effective health policy. For example, ischemic heart disease has higher levels in the north and east of Europe, traditionally explained by the 'Mediterranean paradox', which suggests that a 'Mediterranean' diet rich in olive oil, legumes, unrefined cereals, and fruit and vegetables, and a moderate consumption of alcohol, can reduce heart disease, even with a relatively high animal fat consumption [11]. The importance of encouraging this type of healthy diet as well as lifestyle changes, such as increased physical activity and smoking cessation is indicated by this data. Our objectives in this study in Israel were to determine whether there are currently significant regional differences in overall and cause specific mortality, adjusted for age and ethnicity, and to determine how these differences compare with those reported by Ginsberg et al. [3] about thirty years earlier. We then assessed the implications of these results on health policy. We present the Standardized Mortality Ratio (SMR), to compare regional mortality to the national average rates by cause, after adjusting for the effects of different distributions of age and ethnicity (Arabs, and Jews and Others by continent of birth). Going into the study, we expected to find regional differences in the SMRs and assumed these would reflect factors such as variations in environmental conditions, socio-economic conditions, education, lifestyle, genetics, religiosity, access to healthcare, and comorbidity. As in the paper of Filate et al. [15], we investigated some of these relationships by calculating the correlations of SMRs for overall mortality and leading causes with selected socio-economic measures and risk factors. Israeli mortality data was taken from the nationwide database of causes of death prepared by the CBS for the years 2009–2013, using underlying cause of death coded according to ICD-10. Population data was also supplied by the CBS for these years, by age group, gender, nationality and continent of birth, district and sub-district. Causes of death were chosen from the "List of 113 causes of death" of the WHO, associated with ICD-10, with the addition of dementia, which has been increasing in Israel in recent years [5], and was recently added to the Eurostat database. Israel is divided into 7 districts. 4 are divided into 13 sub-districts, while the remaining 3 districts of Jerusalem, Tel Aviv and Judea and Samaria are not divided. Hence there are 16 distinct geographical units, for which the SMR was calculated, in addition to that for the other 4 districts. A map showing the districts and sub-districts in included as Additional file 1. The divisions are for administrative purposes and do not, in general, reflect geographical differences. 
The 16 distinct geographical units vary in population from the large Tel Aviv and Jerusalem districts, with populations of 1,290,000 and 957,000 respectively, to the Golan, Kinneret and Zefat sub-districts, with populations of 43,000, 105,000 and 109,000 respectively, in 2011 (shown in Table 1). A table of demographic, socio-economic, risk factor and health service characteristics of sub-districts is shown in Additional file 2. There are significant differences between regions in many of these characteristics. For example, the new immigrant population (immigrated since 1990) is highest in the Ashkelon sub-district (34%) followed by Haifa (25%) and Beer Sheva (22%), compared to the national average of 17%. The highest proportion of young people under 15 is found in Judea and Samaria (41%) followed by Jerusalem (35%) and Beer Sheva (34%), compared to the national average of 28%. The sub-district with the largest proportion of elderly, over 65, is Haifa (16%) followed by Tel Aviv (15%), compared to the national average of 10%.
Table 1 Numbers of deaths and SMR1 for total mortality by district, sub-district and gender, 2009–2013
Indirect standardization was used to calculate the SMR, since it is preferred where some cells have small values, as was the case for some of the denominator population cells in our data. The groups used for the calculation were based on 6 age categories (under 25, 25–44, 45–54, 55–64, 65–74, 75+) and 5 ethnic groups (Jews and Others divided by their continent of birth: Israeli, Europe-American, Asian and African, and Arabs). The 65–74 and 75 and over age groups were pooled for the Arabs due to small numbers in some sub-districts, giving 29 age-ethnic groups used for standardization. For each age-ethnic group, the group-specific overall mortality rate and cause-specific rates were calculated for the years 2009–2013 for the total population of Israel, used as the reference population. These rates were then applied to each district and sub-district according to their specific age-ethnic population distribution to obtain an expected number of deaths. The SMR was calculated as the ratio between the observed number of deaths and this expected number of deaths in the region:
$$ SMR=\frac{observed\ number\ of\ deaths}{expected\ number\ of\ deaths}=\frac{N}{\sum_i{p}_i{r}_i} $$
where N is the observed number of deaths in the district/sub-district, $p_i$ is the population size of the ith age-ethnic group in the district/sub-district, and $r_i$ is the total national mortality rate for the ith age-ethnic group. An SMR greater than 1 indicates that the number of actual deaths was higher than expected, based on average national rates, while an SMR less than 1 indicates a lower number of deaths than expected. Gender-specific SMRs were similarly calculated using the total population rates by gender as reference.
We present the SMR for total mortality, for leading causes in Israel [5] and for selected specific causes, by district and sub-district. Confidence intervals for the SMR were calculated by the method suggested by Ulm [17]. We marked values which were significant at the p < 0.001 and p < 0.05 levels in the tables. This enables trends across the country to be seen, which may include results of lower significance. However, we report and discuss only the highly significant results, given the large number of comparisons. It should be noted that the SMR reflects the comparison of each region separately with the national rates and therefore should not be compared between regions.
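As a rough illustration of the calculation described above, the following sketch computes an indirectly standardized SMR with invented group populations and rates (three groups instead of the 29 used in the study, and toy numbers rather than study data). The confidence interval uses exact Poisson limits written with chi-square quantiles, which is a standard formulation assumed here to correspond to the method of Ulm [17].

```python
import numpy as np
from scipy.stats import chi2

def smr_with_ci(observed, group_pops, national_rates, alpha=0.05):
    """Indirectly standardized mortality ratio for one region, with an exact Poisson CI."""
    expected = float(np.sum(np.asarray(group_pops) * np.asarray(national_rates)))
    smr = observed / expected
    # Exact limits for a Poisson count, expressed via chi-square quantiles,
    # then divided by the expected count to put them on the SMR scale.
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, (lower, upper)

# Toy example: hypothetical region with 120 observed deaths.
smr, (lo, hi) = smr_with_ci(observed=120,
                            group_pops=[50_000, 20_000, 5_000],
                            national_rates=[0.0005, 0.002, 0.015])
print(f"SMR = {smr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

An SMR whose confidence interval lies entirely above or below 1 corresponds to the regions flagged as significantly higher or lower than the national average in the tables.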
Pearson correlations were calculated between the total SMR, SMRs by gender and SMRs for leading causes with selected socio-economic measures, disease prevalence (cancer, diabetes and hypertension) and the risk factor of smoking, in 15 distinct regions. The Golan sub-district was excluded from this analysis, because of uncertainty in its characteristics and wide confidence intervals for SMR, due to its small size. The measures were taken from the health profile of districts published by the CBS and the Ministry of Health (MOH) [18], or other CBS survey data. Although not all findings have an obvious explanation, we suggest some factors that may be the cause of geographical differences. Total mortality in Israel by district and sub-district, 2009–2013 There were about 202,000 deaths in Israel between 2009 and 2013. Figure 1 shows the SMR for total mortality by sub-district with 95% confidence intervals, and Table 1 shows the number of deaths and SMR values for total mortality by district, sub-district and gender. SMRs showed mortality significantly lower than the national average by 4–6% in the Rehovot, Petah Tiqwa, Sharon and Jerusalem sub-districts and 13% lower in Judea and Samaria. Significantly higher rates were found in the Be'er Sheva sub-district, and in Tel Aviv 6% and 3% higher than the national average respectively. In other sub-districts differences with the national average were smaller or not statistically significant. . The gender specific SMRs were generally similar to the total. SMR by sub-district, 2009–2013. Standardized for age and ethnic group, with 95% CI error bars For comparison, we calculated these SMRs adjusting for age only and not ethnic origin. The range of SMR was greater, as shown in Additional file 3, between 18% lower than the national average in Judea and Samaria to 10% higher in Be'er Sheva, and there were also significant changes in some sub-districts compared to that adjusted for ethnic origin as well. Mortality for leading causes of death in Israel by district and sub-district, 2009–2013 Table 2 shows SMR values for mortality from leading causes of death in Israel by district and sub-district. In line with their low total mortality, the Jerusalem, Central and Judea and Samaria districts were found to have lower mortality than the national average for most leading causes Significantly low values were found in the Jerusalem district for septicemia (0.75), cancer (0.93), and cerebrovascular disease (0.90), in the Judea and Samaria district for diabetes (0.61), and in the Central district for septicemia (0.91), heart disease (0.94) and influenza/pneumonia (0.89). The only exceptions in Jerusalem were dementia, where the SMR was significantly high (SMR = 1.25) and heart disease (SMR = 1.06), statistically of lower significance. Table 2 SMR1 for mortality by leading causes of death, district and sub-district, 2009–2013 In the Northern and Haifa districts SMRs for septicemia were low compared to the national average, while those for diabetes and heart disease were high, in particular for diabetes in the Yizre'el and Hadera sub-districts and for heart disease in the Yizre'el and Haifa sub-districts. SMRs were also high for influenza/pneumonia in the Northern district, with a particularly high SMR in the Akko sub-district (SMR = 1.59), but were low in the Haifa and Hadera sub-districts (SMR = 0.85 and 0.42, respectively). In the Haifa sub-district, dementia's SMR was also low (SMR = 0.83), but that of kidney disease was significantly high (SMR = 1.33). 
SMR for dementia was also significantly high in the Hadera sub-district (SMR = 1.34), but low in the Akko sub-district (SMR = 0.74). SMR for mortality due to accidents showed 43% higher rates than expected from national ones in the Zefat sub-district. In line with their high total SMRs, Tel Aviv and the Southern district and sub-districts showed high SMRs for several leading causes. In the Tel Aviv district significantly high SMRs were found for cerebrovascular disease (1.07) and septicemia (1.28). Similarly in the Southern district, the SMR was high for septicemia (1.56) and also for influenza/pneumonia (1.26), in particular in the Ashquelon sub-district, where cerebrovascular disease SMR was also significantly high (SMR = 1.15). Accident SMR was significantly high in the Be'er Sheva sub-district (SMR = 1.20). However, the SMRs for heart disease and diabetes were significantly low in the Southern district (0.89 and 0.90, respectively). Mortality for selected sub-causes of death in Israel by district and sub-district, 2009–2013 Table 3 shows the SMR values for selected sub-groups of the leading causes of interest: the major cancer groups, ischemic heart disease, motor vehicle traffic accidents (the largest sub-group of accidents), and suicide. As for total cancer mortality, we found little regional variation for particular subgroups, with the exception of lung cancer mortality which was significantly lower in the Jerusalem district (SMR = 0.79) and higher in the Tel Aviv district (SMR = 1.08) and colorectal cancer mortality, higher in the Ashquelon sub-district (SMR = 1.17). Table 3 SMR1 for mortality by selected causes of death, by district and sub-district, 2009–2013 Ischemic heart disease mortality was significantly higher than the national average in the Haifa sub-district and lower in the Hadera, Petah Tiqwa and Be'er Sheva sub- districts. Motor vehicle traffic accident deaths were particularly high in the Zefat and Be'er Sheva sub-districts (SMR = 2.08 and 1.38, respectively), and in the Judea and Samaria district (SMR = 1.57), and significantly low in the Jerusalem district (SMR = 0.74). Suicide was also significantly lower in the Jeusalem district (SMR = 0.64). Additional files 4, 5 and 6 show the SMRs presented in Tables 1, 2 and 3 with 95% confidence intervals. Correlations of SMR with regional characteristics Table 4 shows the Pearson correlation with regional socio-economic, disease prevalence and risk factor characteristics. Table 4 Pearson correlations of SMRs with socio-economic, disease and risk factors for 15 regions Among socio-economic factors, education had the largest number of significant correlations, showing an inverse relationship with SMRs for total mortality, total male and total female mortality, and for cancer and diabetes mortality. Unemployment rate showed a significant positive correlation with heart disease mortality, and rate of supplementary health insurance (a proxy measure of socio-economic status) had an inverse relation with cancer, diabetes and heart disease mortality. New immigrant proportion showed a significant negative correlation with chronic lower respiratory disease (CLRD) mortality. No significant correlations were found for average income (male and female). Smoking and hypertension showed numerous significant positive correlations, with total mortality (total, male and female) and diabetes mortality, while hypertension also showed a significant association with cerebrovascular disease and cancer SMRs. 
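As a sketch of how correlations like those in Table 4 could be computed, assuming the 15 regional SMRs and the matching regional characteristics are held in simple arrays; the numbers below are random stand-ins, not study data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Random stand-ins for 15 regional values (Golan excluded); in the study these would be
# the regional SMRs and characteristics such as mean years of education or smoking rate.
total_smr = rng.normal(1.0, 0.05, size=15)
years_of_education = rng.normal(13.0, 1.0, size=15)

r, p_value = pearsonr(total_smr, years_of_education)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

# With many cause/characteristic pairs tested, a Bonferroni-type adjustment
# (multiplying each p-value by the number of comparisons) guards against false positives.
```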
Cancer incidence was associated significantly with total and total male SMRs and also with kidney disease SMR, while diabetes prevalence was associated with the diabetes and cancer SMRs as well as total female mortality SMR. We presented SMRs for the period of 2009–2013, which can be used to compare mortality in different regions with the national average, after controlling for differences in age and ethnic composition. We found statistically significant differences between geographical regions for total mortality and specific causes of death. However, caution must be exercised in interpreting these differences. Langford has demonstrated [19] that where the number of expected cases is small, such as in small sub-districts, or diseases with few cases, relative risk, such as that shown by the SMR, is likely to have extreme values. Similarly, small differences in SMR are more likely to be statistically significant in areas with larger populations. We saw this in Fig. 1, where the confidence intervals were larger in smaller areas, and in Table 2, where differences in SMR values were larger for causes of death with small numbers, such as septicemia and influenza/pneumonia compared to cancer. This does not necessarily mean the differences are more meaningful. We therefore present results only for causes with large numbers of deaths, for a combined five year period, and concentrate on results with a high level of statistical significance, but still have to bear in mind that smaller areas and causes tend to have more extreme SMRs. Health policy implications of differences We first note the relatively small range of SMR for total mortality, and that differences have narrowed compared to those reported by Ginsberg [3] for all the population in 1983–1986, also adjusted for age and ethnic origin, which ranged from 8% higher in the Ramla sub-district to 9% lower in the Jerusalem district. (Ginsberg did not report on Judea and Samaria due to the small number of deaths there in those years.) In addition, surprisingly our study did not show higher rates at the most significant level in any region for the leading cause of death in Israel [5], cancer. However, when we did not adjust for ethnicity, the variation in SMR's was greater (Fig. 1 compared to Additional file 3). These results show relative regional equality in Israel for total and cancer mortality, after "smoothing out" differences due to ethnicity, perhaps a compliment to equal regional health care coverage giving people living throughout the country access to mortality preventing healthcare, but also no doubt due to the small geographic size of Israel, with small distances between many regions allowing easy access to health facilities, and similar environmental influences. But the converse of this is that there remain differences in health associated with ethnicity, which affect mortality more significantly than regional factors. These mortality differences have been reported by the CBS and MOH [13, 14] and need to be addressed by health policy, as detailed below. Similarly, in the amenable mortality study [8], also unadjusted for ethnicity, we found low rates in the Jerusalem and Central districts, but higher rates in the Northern district, which we did not find here, which probably reflect ethnic differences. In addition, since amenable mortality is defined as deaths from selected causes below age 75, it is possible that there is an excess of deaths at younger ages in the North from causes amenable to health care, compared to other regions. 
In Tel Aviv, we found the reverse, lower amenable mortality rates, but higher total SMR, perhaps reflecting older ages of mortality there. We found the most significant socio-economic factor associated with mortality was years of education (Table 4), which was strongly inversely correlated with total mortality SMR, and the leading causes of cancer and diabetes. Filate et al. [15] similarly found significant inverse associations of cardiovascular disease rates with post-secondary education. In Israel, Jaffe et al. have shown a significant decrease in mortality with increasing education [20]. Hence, whatever can be done to increase the education level of all sectors of the population, should contribute to better health outcomes. The most significant risk factor effect shown was with smoking, significantly correlated with SMRs for total and diabetes mortality. We recommend, as did Filate et al. who found similar results, that efforts to decrease smoking rates are an important health policy priority. Encouraging control of hypertension, also strongly associated with SMRs, particularly cancer, diabetes and cerebrovascular disease is also important. We found few other significant correlations with socio-economic factors. Exceptions were unemployment, positively associated with heart disease mortality, as found by Filate et al., and supplementary health insurance inversely associated with diabetes, cancer and heart disease mortality. The absence of supplementary health insurance, an indicator of low socio-economic status in Israel where the rates of coverage are generally high (see Additional file 2), is seen to be associated with higher mortality from diabetes, cancer and heart disease. Heart disease, the second leading cause of death, and diabetes showed more regional variations in SMR that were statistically significant than cancer. These diseases had significantly higher SMRs in the Northern and Haifa districts, and lower SMR's in some Central and Southern sub-districts. Interestingly, this south to north gradient for heart disease was reported in the European atlas in other countries, too, such as the UK and Ireland [11], although its cause is unclear. Kidney disease also showed higher mortality than expected in the Haifa sub-district. The proportion of diabetics among new patients receiving renal replacement therapy in Israel has been rising [21], so this kidney disease mortality may be found together with the high diabetes and heart disease SMRs we found in this region, and perhaps reflect high prevalence of these diseases. Israel reported higher rates of revascularization procedures in the Northern district in 2011, higher CABG rates in the Northern and Haifa districts, and higher PTCA rates in the Northern district, but also high revascularization rates in the Southern district [22]. It is to be hoped that early interventions with these procedures will prevent more severe heart disease and reduce its mortality in the north and in the Haifa sub-district, and that the lower mortality in the south will be maintained. These results may also support the importance of the MOH policy decision in recent years to increase cardiology facilities in the north. Another important priority in preventing heart and kidney disease, diabetes and cancer, which would help reduce resulting mortality, is education to healthy living, particularly important in high mortality regions and for those of low socio-economic status. 
The inter-ministerial program for healthy and active living [23], for example, is an important initiative encouraging good eating habits and exercise. Screening tests for cancer, such as mammography and guaiac faecal occult blood tests, can contribute to reducing the corresponding cancer mortality. In the USA, decreasing colorectal cancer incidence and mortality rates have been found with increasing screening, and Zauber discusses the connection with screening tests [24]. In Israel, too, where free, annual, high-sensitivity guaiac faecal occult blood testing was introduced in 2005, rates of incidence and mortality from colorectal cancer have been decreasing [25]. The program for Quality Indicators in Community Healthcare [26] has documented increases in coverage of these tests, but there is room for improvement, which should particularly be encouraged for occult blood tests in the Southern region, where we found higher mortality than the national average.
Septicemia is a severe condition, defined as a systemic inflammatory response syndrome, which is often the immediate cause of death. However, unless nothing else at all is known about a patient, it should not be used as the underlying cause of death, as it is considered an ill-defined condition [27]. Rather, the source of the infection should also be written on the notification of death (NOD) form, if possible. Septicemia rates are very high in Israel compared to other countries, the 6th leading cause of death in 2010–2012 [5, 28], and this may be because the NOD form is not completed according to this directive. We have found hospital variations in the proportion of deaths from septicemia, as well as in whether it is listed as the only cause. These differences in NOD form completion may contribute to the significant regional differences in SMR for septicemia which we found. To correct this problem, it is important to train physicians on how to complete the NOD form. The MOH is currently embarking upon a web-based training program that might become compulsory for all medical graduates. We recommend speedy implementation of this program, for medical students and hospital physicians, particularly in those hospitals with high proportions of deaths from septicemia. In a recent MOH research initiative, hospital records are also being checked for some of these cases, to see if the correct underlying cause can be established. Wang et al. [16] suggest that differences in sepsis mortality may reflect differences in sepsis treatment, or, for example, medical comorbidity, health behavior or socio-economic status.
The high mortality rate from septicemia in general, and in the Southern and Tel Aviv districts in particular, indicates the importance of encouraging prevention of acquired blood stream infections (BSI) in institutions. This is one of the aims of the MOH Center for Infection Control and Antibiotic Resistance, which is responsible for directing and coordinating activities connected to infection control and prevention in medical institutions, and for the issue of antibiotic resistance in institutions and in the community. The Center sets national policy, standards and interventional methods, and maintains data to be used as a basis for improving the quality and safety of treatment (https://www.health.gov.il/English/MinistryUnits/HealthDivision/InfectionControl/Pages/default.aspx). Their data from monitoring the acquired central-line associated BSI rate in Israel have shown a 50% reduction in the infection rate over a four-year period (unpublished).
The Center has extended its ongoing interventions to a national program to prevent acquired healthcare infections, which began in 2016. This includes staff training programs and incentives to encourage hospitals to invest resources in improving implementation of MOH guidelines and standards to prevent infections, and to succeed in reducing acquired infection rates (https://www.health.gov.il/English/News_and_Events/Spokespersons_Messages/Pages/25012017_1.aspx). Although this program should encourage hospitals to work on reducing infections, the available facilities and manpower also need to be improved in Israel. A recent article by Humphreys [29] reports studies showing that overcrowding and understaffing in hospitals can lead to increased acquired infections. This problem needs to be addressed in Israel, which has very high hospital occupancy rates and low population rates of practising nurses compared to most OECD countries [28]. Overcrowding may be particularly acute in the Southern district, where the age-adjusted rate of hospital beds in 2011 was 1.6 per 1,000 population, compared to the national average of 1.9, and 2.3 in the Jerusalem and Haifa districts. The new hospital in Ashdod may help alleviate this, but still more beds are needed nationwide. Also, hospital bed occupancy rates were particularly high in the Tel Aviv district, over 112% in 2011 compared to 98% in the Jerusalem district and 90–93% in other regions, and should be reduced [30].
Accidents had significantly higher SMRs in the Zefat and Be'er Sheva sub-districts. This was seen even more strongly in the SMRs for the largest group of accidents, motor vehicle traffic accidents, which were also significantly higher in the Judea and Samaria district and lower in Jerusalem. The high SMRs may reflect the longer distances traveled in these more sparsely populated peripheral areas, often on narrow, poorly maintained and badly lit roads, which increase the chance of accidents. In addition, when accidents do occur, it may take longer for emergency services to arrive. We recommend an improvement in the infrastructure of roads in these regions, and additional emergency health care centers, which may help overcome this difference. As Ginsberg notes [3], in the large urban areas of Jerusalem and Tel Aviv there are lower average vehicle speeds. Since speed is a key risk factor in road traffic injuries [31], as also shown in Israel in a study by Richter et al. [32], these lower speeds may lead to lower accident fatality rates and contribute to the lower SMRs for accidents we found in these regions.
Explanatory factors
We found low SMRs for total mortality and for many causes in the Jerusalem, Judea and Samaria and Central districts. In the Central district this may be accounted for by the high socio-economic level in many cities of its sub-districts Sharon, Petah Tiqwa and Rehovot. In the Judea and Samaria district the socio-economic level is high only in a few settlements. But the population of the Judea and Samaria district is very young, with over 40% under 15 and only 3% over 65, compared to a national average of 28% and 10%, respectively (see Additional file 2). It is possible that age-standardization using 10-year age bands (for ages over 24), with an oldest age group of 75 and over, does not compensate adequately for such a different age composition, and this may help explain the very low SMRs found.
In contrast to this, in sub-districts with older population, such as Tel Aviv and Haifa, where 14% and 16%, of the population are over 65, respectively, Ginsberg points out [3, 4] that the wide 75 and over age band tends to lead to an underestimation of expected deaths and consequently higher SMRs. The SMR in the Jerusalem district, which has a relatively low socio-economic level, may also be influenced by the relatively large religious population there, and similarly in the Judea and Samaria district. A recent study by Sharoni et al. [33] showed that the social capital of the religious population, such as strong family and community connections and high level of volunteer activities may compensate for their lower socio-economic level, and explain their low mortality rates. Kark et al. [34] also found significantly lower mortality amongst the religious, in a study comparing religious and secular kibbutzim, as did Jaffe et al. [35] in a study on the effect of religiously affiliated neighborhoods on mortality. Hence, lower mortality among the religious population could contribute to the low SMRs in these regions. In particular, the SMR for suicide was significantly low in Jerusalem. The religious population there may be protected from suicide by religious prohibitions and spiritual beliefs, as by a strong cohesive community with shared values [36], although it may also be that suicide is under-reported due to religious stigmas, as noted in an Australian Parliamentary inquiry on suicide determination [37]. In the Tel Aviv district, a region with a less religious population, somewhat higher suicide mortality was found. Surprisingly, although we found cancer incidence associated with total mortality, the association with cancer mortality was not significant. It is possible that this is due to high survival rates for some cancers [28], or mobility of the population after cancer diagnosis. The relatively high cancer survival rate may also explain the high association we found of cancer incidence with the SMR for kidney mortality, since kidney disease is a common complication of cancer [38] and cancer risk was found to be higher in patients with end-stage renal disease [39]. Despite significant variations in income levels and new immigrant proportions between sub-districts, the correlations of these factors with SMRs were not significant, unlike Filate et al. who found low income significantly associated with ischemic heart disease and immigrant population with cardiovascular disease. It appears that after controlling for differences in mortality due to age and ethnic group, income and immigration effects are small. It is also possible that because variations in income within sub-districts are great, the average income does not reflect the true income distribution, and in addition since Israel has had national health insurance since 1995 and 95% coverage before then, allowing universal access to medical treatment independent of income, the income effect may be low. Another surprising finding for which we do not have an explanation, was the negative association of new immigrant proportion with CLRD mortality, particularly in view of recently reported higher respiratory mortality among immigrants from the Chernobyl area [40] . Changes over time The mortality patterns we found were on the whole remarkably similar to those presented by Ginsberg [3]. Noteworthy differences include that the SMR for total mortality in the Ramla and Hadera sub-districts were no longer significantly high. 
In Ramla, this is probably due to the new socio-economically strong cities of Modi'in and Shoham. Addressing ethnic inequality We have shown that ethnic factors appear to contribute more than regional factors to mortality inequalities, and need to be addressed. In a keynote presentation at a recent conference [41], Basharat highlighted the need to target the Arab population to improve their health outcomes, and reduce their high rates of obesity and diabetes, in particular by preventive medicine such as encouraging the use of whole-wheat breads by subsidies and education, and medical leadership through family physicians. An important program reported by the Clalit Health fund in the Sharon and Shomron area for the Arab population [42], included culturally attuned programs for healthy living amongst Arab women. Among successful outcomes was an increase in a quality index score for the region, becoming higher than the national average, increased drug usage for some diabetes drugs improving control, and increased flu vaccination coverage. Such programs must be encouraged and expanded. Screening for cancer is another important preventive measure as discussed above, which can be encouraged for ethnic groups at higher risk for particular cancers. Strengths and limitations The strength of our study is that it is based on complete mortality data over a period of five years, enabling regional comparisons at a leading cause level. Our standardization for age and ethnic group allows us to look for variations beyond differences in population composition. However, our adjustment for ethnic group, based on country of birth, may not be adequate. As Ginsburg notes [3, 4], there may be considerable genetic and cultural differences between people born on the same continent, for example Yemen and Iran, who are grouped together for purposes of standardization, and similarly the Israeli born group is very heterogeneous, including offspring born to immigrants from all continents. Amongst the Arab population there are also different sub-groups. Drawing conclusions from comparisons between different regions in Israel may be problematic, since to compare two population groups in a meaningful way, the variation between them should be larger than within them, as per Shaw et al. [13]. This requires relatively homogenous groups, of similar size, while the sub-districts in Israel vary greatly in size and many, particularly the larger ones, have a very heterogeneous population, with a wide range of socio-economic levels. Our study is based on the underlying cause of death, which is coded by the CBS from notification of death forms completed by the physician who certifies the death. There may be regional differences in how physicians fill in the form, deciding on the chain of events leading to death and the underlying cause. For example, in our paper on high mortality rates from diabetes and renal failure [43], we discuss whether it is possible that these diseases are listed as the underlying cause instead of heart disease or stroke more often in Israel than other countries,. The choice and coding of underlying cause is done by a small number of coders at the CBS, and is unlikely to contribute to regional differences. We attempted to look for associations using regional data, although our correlations are ecological in nature as we do not have individual level data on socio-economic and risk factors, and are therefore subject to potential 'ecological fallacy' (https://en.wikipedia.org/wiki/Ecological_fallacy). 
In addition, since many factors, such as education, diabetes and hypertension prevalence and unemployment, were survey based and subject to sometimes large relative sampling errors and the SMRs also had sometimes wide confidence limits, the results may be misleading. Therefore it is not surprising that correlations were not very high, although in general significant associations were as expected. In view of the multiple comparisons, we considered the Bonferroni correction. Only some of the total mortality and cancer SMRs with education and hypertension remaining statistically significant and maybe the above conclusions should be limited to them. It would be useful if administrative data, such as that from the program for Quality Indicators in Community Healthcare [26] based on health fund records, were available by district and sub-district, which would enable research on regional differences with more accurate data. The SMR, adjusted for age, and ethnicity showed some districts that differ significantly from the national average, beyond that expected from differences in population structure. Some of the regional differences may be attributed to differences in the completion of NOD forms by physicians. This needs to be addressed by efforts to improve reporting of causes of death, by training of medical students and refresher courses for qualified physicians. The relatively small significant differences found after our adjustment, shows the importance of targeting factors causing ethnic inequalities, rather than regional ones. Recommendations include raising the education level, reducing smoking, control of hypertension, encouraging healthy lifestyles and screening for cancer. This is particularly important for those of a low socio-economic level. CBS: Central Bureau of Statistics CLRD: MOH: NOD: Notification of death SMR: Standard Mortality Ratio Health in Israel: selected data. Ministry of Health, 2010 (Hebrew and English) http://www.health.gov.il/PublicationsFiles/HealthIsrael2010.pdf. Ginsberg GM. Standardized mortality ratios for Israel, 1969–78. Isr J Med Sci. 1983;19:638–43. Ginsberg GM, Tulchinsky TH, Salahov E, Clayman M. Standardized mortality ratios by region of residence, Israel, 1987–1994: a tool for public health policy. Public Health Rev. 2003;31(2):111–31. Goldberger N, Aburbeh M, Haklai Z. Leading causes of death in Israel, 2000–2012″ Ministry of Health. 2015 (Hebrew and English publications). https://www.health.gov.il/PublicationsFiles/Leading_Causes_2012E.pdf. Central Bureau of Statistics, Israel: Adjusted mortality rates, by cause, district and sub-district, ages 45+, average 2005–2009. http://www.cbs.gov.il/briut/new/t2005_2009.xls. Central Bureau of Statistics, Israel: Adjusted mortality rates, by cause, district and by population group, average 2006–2008. http://www.cbs.gov.il/briut/new/t2006_2008.xls. Goldberger N, Haklai Z. Mortality rates in Israel from causes amenable to health care, regional and international comparison. Israel J of Health Policy Research. 2012;1:41. Mortality atlas of Canada Canada department of National Health and Welfare, Statistics Canada 1980. Pickle LW, Mungiole M, Jones GK, White A: Atlas of United States mortality. Centers for Disease Control and Prevention, 1996. Eurostat: Health statistics – Atlas on mortality in the European Union, 2002 and 2009. Trewn D. Mortality Atlas, Australia 1997–2000. Australian Bureau of Statistics. 2002; Shaw M, Orford S, Brimblecombe N, Dorling D. 
Widening inequality in mortality between 160 regions of 15 European countries in the early 1990s. Soc Sci Med. 2000;50(7–8):1047–58. Muller-Nordhorn J, Binting S, Roll S, Willich SN. An update on regional variation in cardiovascular mortality within Europe. Eur Heart J. 2008;29(10):1316–26. Filate WA, Johansen HL, Kennedy CC, Tu JV. Regional variations in cardiovascular mortality in Canada. Can J Cardiol. 2003;19(11):1241–8. Wang HE, Devereaux RS, Yealy DM, Safford MM, Howard G. National variation in United States sepsis mortality: a descriptive study. Int J Health Geogr. 2010;9:9. Ulm K. A simple method to calculate the confidence interval of a standardized mortality ratio (SMR). Am J Epidemiol. 1990;131(2):373–5. Central Bureau of Statistics, Israel: Health and social profile of localities in Israel, 2005–2009. http://www.cbs.gov.il/webpub/pub/text_page.html?publ=105&CYear=2009&CMonth=12. Langford IH. Using empirical Bayes estimates in the geographical analysis of disease risk. Area. 1994;26(2):142–9. Jaffe DH, Neumark YD, Eisenbach Z, Manor O. Educational inequalities in mortality among Israeli Jews: changes over time in a dynamic population. Health Place. 2008;14(2):287–98. Calderon-Margalit R, Gordon ES, Hoshen M, Kark JD, Rotem A, Haklai Z. Dialysis in Israel, 1989–2005 - time trends and international comparisons. Nephrol Dial Transplant. 2008(23):659–64. OECD Health policy studies: Geographic variations in health care: What do we know and what can be done to improve health system performance? 2014 OECD Publishing http://dx.doi.org/10.1787/9789264216594-en. "It is possible to be healthy"- The national program for active and healthy living http://cms.education.gov.il/EducationCMS/Units/Mazkirut_Pedagogit/Briut/TochniyoBriut/EfshariBari/odot.htm. (Hebrew). Zauber G. The impact of screening on colorectal cancer mortality and incidence – has it really made a difference? Dig Dis Sci. 2015;60(3):681–91. doi:10.1007/s10620-015-3600-5. Silverman B, Keinan-Boker L, Lifshitz I, Fishler Y, Dichtiar R: Colorectal cancer in Israel – update. Ministry of Health 2014: http://www.health.gov.il/PublicationsFiles/ICR_21072014.pdf (Hebrew). Manor O, Shmueli A, Ben-Yehuda A, Paltiel O, Calderon R, Jaffe DH. National Program for quality indicators in community healthcare in Israel, report for 2011–2013. Hebrew University-Hadassah: School of Public Health and Community Medicine; 2014. Health Information Systems Knowledge Hub: Handbook for doctors on cause-of-death certification, 2012. http://www.getinthepicture.org/sites/default/files/resources/Handbook%20for%20doctors%20on%20cause-of-death%20certification.pdf. OECD statistical data, http://stats.oecd.org/#. Humphreys H. Overcrowding, understaffing and infection in hospitals. Ir Med J. 2006;99(4):102. Ministry of Health: Inpatient institutions and day care units in Israel, 2011. https://www.health.gov.il/UnitsOffice/HD/MTI/info/Pages/Inpatient_Institutions.aspx. (Hebrew). WHO Facts Road safety – Speed http://www.who.int/violence_injury_prevention/publications/road_traffic/world_report/speed_en.pdf. Richter E. Death and injury from motor vehicle crashes in Israel; epidemiology, prevention and control. Int J Epidemiol. 1981;10:145–53. http://www.who.int/violence_injury_prevention/publications/road_traffic/world_report Sharoni C, Tchernichovsky D: The secret of the connection between Orthodoxy and Health. NIHP 2015. 11th Annual conference on Health Policy. Kark JD, Shemi G, Friedlander Y, Martin O, Manor O, Blondheim SH. 
Does religious observance promote health? Mortality in secular vs religious kibbutzim in Israel. Am J Public Health. 1996 March;86(3):341–6. Jaffe DH, Eisenbach Z, Neumark YD, Manor O. Does living in a religiously affiliated neighborhood lower mortality? Ann Epidemiol. 2005;15:804–10. Van Praag H. The role of religion in suicide prevention. In: Wasserman D, Wasserman C, editors. Oxford textbook of suicidology and suicide prevention: a global perspective. Oxford University Press; 2009:7–12. Senate Community Affairs Reference Committee: The Hidden Toll: Suicide in Australia. 2010. http://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Community_Affairs/Completed_inquiries/2008-10/suicide/report/index Humphreys B, Soiffer RJ, Magee CC: Renal failure associated with cancer and Its treatment: An update J Am Soc Nephrol 2005;16: 151–161,. doi: 10.1681/ASN.2004100843 Butler AM, Olshan AF, Kshirsagar AV, Edwards JK, Nielsen ME, Wheeler SB, Brookhart MA. Cancer incidence among US Medicare ESRD patients receiving hemodialysis, 1996–2009. Am J Kidney Dis. 2015;65(5):763–72. doi:10.1053/j.ajkd.2014.12.013. Slusky DA, Cwikel J, Quastel MR. Chronic diseases and mortality among immigrants to Israel from areas contaminated by the Chernobyl disaster: a follow-up study. Int J Public Health. 2017;62:463. doi:10.1007/s00038-017-0941-1. Basharat B:: The Israeli health system as reflected in the Arab population. NIHP 2014. 10h Annual conference on Health Policy. Sadeh Z, Zimmerman P, Ron A, Segev D, Gidoni Y, Nimni K: Towards health promotion in the Arab sector in the Sharon-Shomron region. The society for quality of health in medicine, 21st conference, 2014. Goldberger N, Applbaum Y, Meron J, Haklai Z. High Israeli mortality rates from diabetes and renal failure – Can international comparison of multiple causes of death reflect differences in choice of underlying cause? Israel J of Health Policy Research. 2015;4:31. Atlas of mortality in Israel, 2009–2013. Ministry of Health, 2016 (Hebrew) http://www.health.gov.il/PublicationsFiles/DeathAT2009_2013.pdf We would like to thank the health division of the CBS, directed by Naama Rotem for the cause of death files, and in particular Daphna Hartal Huerta who is responsible for the coding staff and for preparing the data. This study was undertaken by the Health Information Division of the MOH with no external funding. This study used cause of death data files prepared by the CBS. The data files are not publically available due to safeguarding privacy, but the CBS and MOH publishes annual and periodic publications with summary data (for examples 6, 7,8). Particular requests for data can be directed to the CBS for their consideration, subject to privacy limitations. Similarly, specific data is available from the corresponding author on reasonable request. Population data is published by the CBS and available on their website. If more detailed data is necessary, it may be requested from them. Detailed results of this study, including absolute number of deaths and sex-specific results for different causes have been published in an Atlas of mortality in Israel [44], which is available on the MOH website. Division of Health Information, Ministry of Health, Yirmiyahu, 39, 9446724, Jerusalem, Israel Ethel-Sherry Gordon , Ziona Haklai , Jill Meron , Miriam Aburbeh , Yael Applbaum & Nehama F. 
Goldberger
ESG analysed the data. MA, IWS and YA explored the results. NFG drafted the paper. All authors edited the manuscript. ZH directed the analysis. All authors read and approved the final manuscript. Correspondence to Nehama F. Goldberger. ESG, MA, IWS, YA, JM and NFG are researchers in the Health Information Division of the Ministry of Health. ESG is responsible for database preparation and analysis. IWS is also director of the MOH Center for Infection Control and Antibiotic Resistance. YA's interests are improvement in health information data quality, and she practices family medicine. NFG's fields of interest are causes of death and suicidality. ZH is Director of the Health Information Division of the Ministry of Health. Ethics approval was not required since all data was unidentified.
Additional file 1: Map of Israel showing districts and sub-districts. (DOCX 303 kb) Additional file 2: Demographic, socio-economic and health characteristics of sub-districts of Israel. (XLSX 22 kb) Additional file 3: SMR by sub-district, 2009–2013, standardized for age only, with 95% CI error bars. (DOCX 163 kb) Additional file 4: SMR for total mortality by district, sub-district and gender, 2009–2013, with 95% confidence intervals (Table 1 with 95% CI). (XLSX 11 kb) Additional file 5: SMR for mortality by leading causes of death, district and sub-district, 2009–2013, with 95% confidence intervals (Table 2 with 95% CI). (XLSX 14 kb) Additional file 6: SMR for mortality by selected causes of death, by district and sub-district, 2009–2013, with 95% confidence intervals (Table 3 with 95% CI). (XLSX 15 kb)
Gordon, E., Haklai, Z., Meron, J. et al. Regional variations in mortality and causes of death in Israel, 2009–2013. Isr J Health Policy Res 6, 39 (2017) doi:10.1186/s13584-017-0164-1
The figure represents a crystal unit of cesium chloride, CsCl. The cesium atoms, represented by open circles, are situated at the corners of a cube of side 0.40 nm, whereas a Cl atom is situated at the centre of the cube. The Cs atoms are deficient in one electron while the Cl atom carries an excess electron. (i) What is the net electric field on the Cl atom due to the eight Cs atoms? (ii) Suppose that the Cs atom at the corner A is missing. What is the net force now on the Cl atom due to the seven remaining Cs atoms?
(i) From the figure, the Cl‒ atom is at the centre while the Cs+ atoms sit at the eight corners, all at equal distance from it. By symmetry, the net force on the Cl‒ atom due to the Cs+ atoms is zero, and hence the net electric field due to them is also zero (since |E| = F/q).
(ii) Removing the Cs atom at corner A is equivalent to superposing a singly charged negative ion at point A on the complete, force-free arrangement. The magnitude of the resulting force is $\frac{q^{2}}{4 \pi \varepsilon_{0} r^{2}}$, where q is the magnitude of the electronic charge and r is the distance between the Cl and Cs atoms. Applying the Pythagorean theorem, the distance from a corner to the centre of the cube (half the body diagonal) is
$r=\sqrt{(0.20)^{2}+(0.20)^{2}+(0.20)^{2}} \times 10^{-9} \mathrm{~m}=0.346 \times 10^{-9} \mathrm{~m}$
Net force $=\frac{q^{2}}{4 \pi \varepsilon_{0} r^{2}}=\frac{9 \times 10^{9}\left(1.6 \times 10^{-19}\right)^{2}}{\left(0.346 \times 10^{-9}\right)^{2}}=1.92 \times 10^{-9} \mathrm{~N}$
The force on the Cl‒ ion is directed along the line from A towards Cl‒, i.e. away from the vacant corner.
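A quick numerical check of the worked answer, assuming only the geometry given above (cube edge 0.40 nm) and standard values of the constants:

```python
import math

k = 8.99e9      # Coulomb constant, N m^2 C^-2
e = 1.602e-19   # elementary charge, C
a = 0.40e-9     # cube edge, m

# Distance from a cube corner to its centre: half the body diagonal.
r = math.sqrt(3) * a / 2
force = k * e**2 / r**2

print(f"r = {r:.3e} m")      # about 3.46e-10 m
print(f"F = {force:.2e} N")  # about 1.9e-9 N, matching the worked answer
```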